How does the code '-input[range(target.shape[0]),target]' work?
I'm learning PyTorch. Reading the official tutorial, I met this perplexing code, where input is a tensor and so is target:

def nll(input, target):
    return -input[range(target.shape[0]), target].mean()

The tutorial then shows pred, target, and the value of '-input[range(target.shape[0]),target]'. The output shows this is neither subtracting target from input nor merging the two tensors.
The code input[range(target.shape[0]), target] simply picks, from each row i of input, the element at the column indicated by the corresponding element of target, that is target[i]. In other words, if

out = input[range(target.shape[0]), target]

then out[i] = input[i, target[i]]. This is very similar to torch.gather.
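To make this concrete, here is a minimal sketch (the tensor values are made up for illustration):

import torch

input = torch.tensor([[0.1, 0.9, 0.0],
                      [0.8, 0.1, 0.1]])
target = torch.tensor([1, 0])

out = input[range(target.shape[0]), target]
print(out)  # tensor([0.9000, 0.8000]), i.e. out[i] == input[i, target[i]]

# the same selection expressed with torch.gather
out_gather = torch.gather(input, 1, target.unsqueeze(1)).squeeze(1)
print(torch.equal(out, out_gather))  # True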
https://stackoverflow.com/questions/73711299/
Solving Sylvester equations in PyTorch
I'm trying to solve a Sylvester matrix equation of the form

AX + XB = C

From what I've seen, these equations are usually solved with the Bartels-Stewart algorithm, taking successive Schur decompositions. I'm aware scipy.linalg already has a solve_sylvester function, but I'm integrating the solution to the Sylvester equation into a neural network, so I need a way to calculate gradients to make A, B, and C learnable. Currently, I'm just solving a linear system with torch.linalg.solve using the Kronecker product and vectorization trick, but this has terrible runtime complexity. I haven't found any PyTorch support for Sylvester equations, let alone Schur decompositions, but before I try to implement Bartels-Stewart on the GPU, is there a simpler way to find the gradients?
Initially I wrote a solution that would give a complex X based on the Bartels-Stewart algorithm for the m=n case. I had some problems because the eigenvector matrix is not accurate enough. Also, the real part gives the real solution, and the imaginary part must be a solution for AX - XB = 0.

import torch

def sylvester(A, B, C, X=None):
    m = B.shape[-1]
    n = A.shape[-1]
    R, U = torch.linalg.eig(A)
    S, V = torch.linalg.eig(B)
    F = torch.linalg.solve(U, (C + 0j) @ V)
    W = R[..., :, None] - S[..., None, :]
    Y = F / W
    X = U[..., :n, :n] @ Y[..., :n, :m] @ torch.linalg.inv(V)[..., :m, :m]
    return X.real if all(torch.isreal(x.flatten()[0]) for x in [A, B, C]) else X

As can be verified on the GPU with device='cuda':

# Try different dimensions
for batch_size, M, N in [(1, 4, 4), (20, 16, 16), (6, 13, 17), (11, 29, 23)]:
    print(batch_size, (M, N))
    A = torch.randn((batch_size, N, N), dtype=torch.float64, device=device, requires_grad=True)
    B = torch.randn((batch_size, M, M), dtype=torch.float64, device=device, requires_grad=True)
    X = torch.randn((batch_size, N, M), dtype=torch.float64, device=device, requires_grad=True)
    C = A @ X - X @ B
    X_ = sylvester(A, B, C)
    C_ = A @ X_ - X_ @ B
    print(torch.max(abs(C - C_)))
    X.sum().backward()

A faster algorithm, but inaccurate in the current PyTorch version, is

def sylvester_of_the_future(A, B, C):
    def h(V):
        return V.transpose(-1, -2).conj()
    m = B.shape[-1]
    n = A.shape[-1]
    R, U = torch.linalg.eig(A)
    S, V = torch.linalg.eig(B)
    F = h(U) @ (C + 0j) @ V
    W = R[..., :, None] - S[..., None, :]
    Y = F / W
    X = U[..., :n, :n] @ Y[..., :n, :m] @ h(V)[..., :m, :m]
    return X.real if all(torch.isreal(x.flatten()[0]) for x in [A, B, C]) else X

I will leave it here; maybe in the future it will work properly.
https://stackoverflow.com/questions/73713072/
How to give TemporalFusionTransformer model a name?
I'm working on creating .py scripts that preprocess some data and then train a TemporalFusionTransformer model. After the training, I have a function that logs the evaluation metrics in a .txt file, whose name should be [email protected]. Everywhere that I have looked and searched (in the docs, on forums, in articles), I cannot find a way to give my models a custom name. Any idea how to do this? Edit: Can someone with >1500 reputation please add the tag temporalfusiontransformer in the tags section? Users below 1500 reputation (like me) cannot add new tags to the site.
You could create a custom class inheriting the original that requires and stores a name property on top of what other functionality the model provides, e.g.

class NamedTFT(TemporalFusionTransformer):
    def __init__(self, name: str, *args, **kwargs):
        super(NamedTFT, self).__init__(*args, **kwargs)
        self.name = name

then you could grab the model's name afterwards.
https://stackoverflow.com/questions/73714526/
Create a skip connection in a neural network - non Resnet
I want to add a skip connection to my neural network; I'm not trying to implement a ResNet, just a regular MLP. I can't find a resource that doesn't point to ResNet or DenseNet. I tried naively adding layers, but it's throwing an error; I'd appreciate the help. Thank you.

input_size = 615
output_size = 40

model = torch.nn.Sequential()
layer_0 = model.add_module("linear_0", torch.nn.Linear(input_size, 2048))
activ_0 = model.add_module("activation_0", ReLU())
layer_1 = model.add_module("linear_1", torch.nn.Linear(2048, 2048))
activ_1 = model.add_module("activation_1", ReLU())
layer_2 = model.add_module("linear_2", torch.nn.Linear(2048, 2048))
# skip connection
skip_0 = model.add_module(layer_1 + layer_2)
activ_2 = model.add_module("activation_2", ReLU())
layer_3 = model.add_module("linear_3", torch.nn.Linear(2048, output_size))
The way skip connections are usually implemented is in the forward function. For example:

from torch import nn
import torch.nn.functional as nnf

class MLPWithSkip(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()  # required before registering submodules
        self.linear_modules = nn.ModuleList([
            nn.Linear(input_size, 2048),
            nn.Linear(2048, 2048),
            nn.Linear(2048, 2048),
            nn.Linear(2048, output_size),
        ])

    def forward(self, x):
        h = []
        for layer in self.linear_modules[:-1]:
            x = layer(x)
            h.append(x)  # store the features
            x = nnf.relu(x)
        # implement the skip
        x = nnf.relu(h[-1] + h[-2])
        y = self.linear_modules[-1](x)
        return y
https://stackoverflow.com/questions/73716090/
What is the correct implementation for rounding a 2D tensor given the rounding values in a 1D tensor?
This is what I have done so far:

def round_values(predictions):
    # Rounding values
    rounded = torch.tensor([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
    # Find nearest rounding value
    dif = predictions.unsqueeze(2) - rounded.view(1, 1, -1)
    indexes = torch.argmin(abs(dif), dim=2)
    # Fill tensor with nearest round value
    # TO IMPROVE
    rounded_predictions = torch.zeros(*predictions.shape)
    for i in range(rounded_predictions.shape[0]):
        for j in range(rounded_predictions.shape[1]):
            index = indexes[i, j]
            rounded_predictions[i, j] = rounded[index]
    return rounded_predictions

a = 4 * torch.rand((6, 6)) + 1
print(a)
print(round_values(a))

The double loop is terrible. I would like something like rounded[indexes], so that it returns a tensor with the same shape as indexes filled with the values of rounded, but I have not found a way to do it at the tensor level instead of element-wise. Any idea?
A thousand times faster for a 100 x 100 array (on CPU):

def round_values_bob(predictions):
    return torch.clamp(torch.round(predictions * 2) / 2, 1, 5)
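For reference, the rounded[indexes] lookup the question asks about does also work directly in PyTorch: indexing a 1-D tensor with an integer tensor returns a tensor with the shape of the index tensor. A minimal sketch replacing the double loop:

import torch

rounded = torch.tensor([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
predictions = 4 * torch.rand((6, 6)) + 1

dif = predictions.unsqueeze(2) - rounded.view(1, 1, -1)
indexes = torch.argmin(dif.abs(), dim=2)

rounded_predictions = rounded[indexes]  # same shape as indexes

This also generalizes to rounding grids that are not uniformly spaced, where the clamp-and-round trick above does not apply.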
https://stackoverflow.com/questions/73716250/
Problem with a Deep SARSA algorithm which works with PyTorch (Adam optimizer) but not with Keras/TensorFlow (Adam optimizer)
I have a deep SARSA algorithm which works great with PyTorch on lunar-lander-v2 and I would like to use it with Keras/TensorFlow. It uses mini-batches of size 64 which are used 128 times to train at each episode. These are the results I get. As you can see, it works great with PyTorch but not with Keras/TensorFlow... So I think I do not correctly implement the training function in Keras/TensorFlow (code is below). It seems that the loss is oscillating in Keras because epsilon decays too early to a small value, but it works very well in PyTorch... Do you see something that could explain why it does not work in Keras/TensorFlow? Thanks a lot for your help and any idea that could help me...

Network information: It uses the Adam optimizer, and a network with two layers of 256 and 128 units, with ReLU on each:

class Q_Network(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(Q_Network, self).__init__()
        self.x_layer = nn.Linear(state_dim, 256)
        self.h_layer = nn.Linear(256, 128)
        self.y_layer = nn.Linear(128, action_dim)
        print(self.x_layer)

    def forward(self, state):
        xh = F.relu(self.x_layer(state))
        hh = F.relu(self.h_layer(xh))
        state_action_values = self.y_layer(hh)
        return state_action_values

For Keras/TensorFlow I use this one:

def CreationModele(dimension):
    entree_etat = keras.layers.Input(shape=(dimension))
    sortie = keras.layers.Dense(units=256, activation='relu')(entree_etat)
    sortie = keras.layers.Dense(units=128, activation='relu')(sortie)
    sortie = keras.layers.Dense(units=4)(sortie)
    modele = keras.Model(inputs=entree_etat, outputs=sortie)
    return modele

Training code

In PyTorch, the training is done by:

def update_Sarsa_Network(self, state, next_state, action, next_action, reward, ends):
    actions_values = torch.gather(self.qnet(state), dim=1, index=action.long())
    next_actions_values = torch.gather(self.qnet(next_state), dim=1, index=next_action.long())
    next_actions_values = reward + (1.0 - ends) * (self.discount_factor * next_actions_values)
    q_network_loss = self.MSELoss_function(actions_values, next_actions_values.detach())
    self.qnet_optim.zero_grad()
    q_network_loss.backward()
    self.qnet_optim.step()
    return q_network_loss

And in Keras/TensorFlow by:

mse = keras.losses.MeanSquaredError(reduction=keras.losses.Reduction.SUM)

@tf.function
def train(model, batch_next_states_tensor, batch_next_actions_tensor, batch_reward_tensor,
          batch_end_tensor, batch_states_tensor, batch_actions_tensor, optimizer, gamma):
    with tf.GradientTape() as tape:
        # Estimate the values of the current actions
        actions_values = model(batch_states_tensor)  # (mini_batch_size, 4)
        actions_values = tf.linalg.diag_part(tf.gather(actions_values, batch_actions_tensor, axis=1))  # (mini_batch_size,)
        actions_values = tf.expand_dims(actions_values, -1)  # (mini_batch_size, 1)

        # Estimate the values of the next actions
        next_actions_values = model(batch_next_states_tensor)  # (mini_batch_size, 4)
        next_actions_values = tf.linalg.diag_part(tf.gather(next_actions_values, batch_next_actions_tensor, axis=1))  # (mini_batch_size,)

        cibles = batch_reward_tensor + (1.0 - batch_end_tensor) * gamma * tf.expand_dims(next_actions_values, -1)  # (mini_batch_size, 1)
        error = mse(cibles, actions_values)

    grads = tape.gradient(error, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return error

Error function and Optimizer code

The optimizer is Adam in both PyTorch and TensorFlow, with lr=0.001.
In PyTorch:

def __init__(self, state_dim, action_dim):
    self.qnet = Q_Network(state_dim, action_dim)
    self.qnet_optim = torch.optim.Adam(self.qnet.parameters(), lr=0.001)
    self.discount_factor = 0.99
    self.MSELoss_function = nn.MSELoss(reduction='sum')
    self.replay_buffer = ReplayBuffer()
    pass

In Keras/TensorFlow:

alpha = 1e-3

# Initialize the model
modele_Keras = CreationModele(8)
optimiseur_Keras = keras.optimizers.Adam(learning_rate=alpha)
Ok, I finally found a solution: de-correlate the target and action values by using two models, one of which is updated only periodically and used for the target value calculation. I use one model for estimating the epsilon-greedy actions and computing the Q(s,a) values, and a fixed model (periodically updated with the weights of the first model) to calculate the target r + gamma*Q(s',a').
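A hedged sketch of what "two models, one updated periodically" could look like with the Keras network from the question (the update interval and training-loop details are placeholders, not the answerer's exact code):

from tensorflow import keras

model = CreationModele(8)          # trained every step
target_model = CreationModele(8)   # frozen copy used for the targets
target_model.set_weights(model.get_weights())  # start in sync

# inside the training loop, compute targets with target_model(batch_next_states_tensor)
# and every N episodes re-sync the frozen copy:
# target_model.set_weights(model.get_weights())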
https://stackoverflow.com/questions/73723103/
How to freeze params when I use transfer learning in python-pytorch
I want to train only the newly added layer by transfer learning and fix (freeze) the parameters of the other layers, but I get an error demanding requires_grad = True. How can I solve this problem? The following is a description of what I tried and the errors I encountered.

from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
num_ftrs = model_b0._fc.in_features
model_b0._fc = nn.Linear(num_ftrs, 10)

for param in model_b0.parameters():
    param.requires_grad = False

last_layer = list(model_b0.children())[-1]
print(f'except last layer: {last_layer}')
for param in last_layer.parameters():
    param.requires_grad = True

criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_b0.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_b0 = train_model(model_b0, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=3)

If I change requires_grad = True, the above code can run. The error is:

 4 optimizer_ft = optim.SGD(model_b7.parameters(), lr=0.001, momentum=0.9)
 5 exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
----> 7 model_b0 = train_model(model_b7, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=15)

Cell In [69], line 43, in train_model(model, criterion, optimizer, scheduler, num_epochs)
     41 loss = criterion(outputs, labels)
---> 43 loss.backward()
     44 optimizer.step()

\site-packages\torch\_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    394     create_graph=create_graph,
    395     inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)

\site-packages\torch\autograd\__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    172 # calls in the traceback and some print out the last line
--> 173 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174     tensors, grad_tensors_, retain_graph, create_graph, inputs,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Thank you for reading!
There are several possible causes of this problem.

The input: RuntimeError: element 0 of variables does not require grad and does not have a grad_fn. The tensor you passed in does not have requires_grad=True. Make sure your new Variable is created with requires_grad=True:

var_xs_h = Variable(xs_h.data, requires_grad=True)

The requires_grad: "Freeze last layers of the model". As mentioned by the PyTorch forum moderator ptrblck: if you are setting requires_grad = False for all parameters, the error message is expected, as autograd won't be able to calculate any gradients, since no parameter requires them. I think your case is similar to the latter one; you can read the second post.

Another suggestion by ptrblck for debugging:

# standard use case
x = torch.randn(1, 1)
print(x.requires_grad)
# > False

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08c5610>

out.backward()
print(lin.weight.grad)
# > tensor([[-0.9785]])
print(x.grad)
# > None

# input requires grad
x = torch.randn(1, 1, requires_grad=True)
print(x.requires_grad)
# > True

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08d4640>

out.backward()
print(lin.weight.grad)
# > tensor([[1.6739]])
print(x.grad)
# > tensor([[0.0300]])
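For completeness, a minimal sketch of the usual transfer-learning setup under the question's EfficientNet interface: freeze the backbone first, then replace the head (a fresh nn.Linear has requires_grad=True by default), and hand the optimizer only the trainable parameters:

import torch.nn as nn
import torch.optim as optim
from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')

# freeze everything...
for param in model_b0.parameters():
    param.requires_grad = False

# ...then swap in a new head, which is trainable by default
num_ftrs = model_b0._fc.in_features
model_b0._fc = nn.Linear(num_ftrs, 10)

# optimize only what still requires gradients
optimizer_ft = optim.SGD(
    (p for p in model_b0.parameters() if p.requires_grad),
    lr=0.001, momentum=0.9,
)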
https://stackoverflow.com/questions/73724830/
Iterating over torch tensor
What is the best and fastest way to iterate over a Tensor? It is confusing why I get tensors instead of values. I got this:

[x for x in t]
Out[122]: [tensor(-0.12), tensor(-0.11), tensor(0.68), tensor(0.68), tensor(0.17)]

but expected this behavior:

[x for x in t.numpy()]
Out[123]: [-0.11932722, -0.114598714, 0.67563725, 0.6756373, 0.16548502]

I would prefer not to convert to numpy if possible.
With numpy everything is simpler, because np.arrays are just a collection of numbers always stored on the CPU. Therefore, if you iterate over an np.array you get these float numbers. However, in PyTorch, tensors store not only numbers but also their gradients. Additionally, PyTorch tensors may be stored on the CPU or the GPU. Thus, in order to preserve all this "side information", PyTorch returns single-element tensors when iterating over a tensor. If you insist on getting simple "numbers" from a tensor, you can use tensor.item():

[x.item() for x in t]

Or tensor.tolist():

t.tolist()

For more information on the differences between numpy np.arrays and torch.tensors, see this answer.
https://stackoverflow.com/questions/73725148/
How to freeze part of a selected layer (e.g. nn.Linear()) of a model in PyTorch?
Question: given fc = nn.Linear(n, 3), I want to freeze the parameters of the third output of fc when I train this layer.
According to @PlainRavioli, it's not possible yet, but you can set the gradient to zero so the current weights do not change. You have to do this after calling loss.backward() and before calling optimizer.step(). So with fc = nn.Linear(n, 3), to freeze the parameters of the third output:

loss.backward()
fc.weight.grad[2, :] = torch.zeros_like(fc.weight.grad[2, :])
fc.bias.grad[2] = torch.zeros_like(fc.bias.grad[2])
optimizer.step()

Calling loss.backward() computes dloss/dx and accumulates it via x.grad += dloss/dx; before this operation, gradients are set to None.
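An alternative sketch that avoids repeating the zeroing in every training step is a gradient hook, which edits the gradient each time it is computed (assuming the same fc = nn.Linear(n, 3) setup; n = 5 is a placeholder):

import torch
from torch import nn

n = 5  # placeholder for the question's n
fc = nn.Linear(n, 3)

def zero_third_output(grad):
    grad = grad.clone()
    grad[2] = 0  # row 2 of the weight grad, element 2 of the bias grad
    return grad

fc.weight.register_hook(zero_third_output)
fc.bias.register_hook(zero_third_output)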
https://stackoverflow.com/questions/73727816/
'RuntimeError: Expected object of scalar type Long but got scalar' for torch.nn.CrossEntropyLoss()
I'm using this loss function for xlm-roberta-large-longformer and it gives me this error:

import torch.nn.functional as f
from scipy.special import softmax

loss_func = torch.nn.CrossEntropyLoss()
output = torch.softmax(logits.view(-1, num_labels), dim=0).float()
target = b_labels.type_as(logits).view(-1, num_labels)
loss = loss_func(output, target)
train_loss_set.append(loss.item())

When I try b_labels.type_as(logits).view(-1, num_labels).long(), it tells me:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

What should I do?
Your target tensor should contain integers corresponding to the correct class labels and should not be a one/multi-hot encoding of the class. You can extract the class labels from a one-hot encoding format using argmax:

>>> b_labels.argmax(1)
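A sketch of what the fixed loss call could look like with the names from the question (logits, b_labels and num_labels are the asker's variables; note also that nn.CrossEntropyLoss applies log-softmax internally, so it should be given the raw logits rather than softmax outputs):

import torch

loss_func = torch.nn.CrossEntropyLoss()
output = logits.view(-1, num_labels)                  # raw logits
target = b_labels.view(-1, num_labels).argmax(dim=1)  # integer class labels
loss = loss_func(output, target)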
https://stackoverflow.com/questions/73727922/
Pytorch input tensor is correct data type, but throws RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3
I am trying to write a simple CNN using PyTorch, but am getting an error in my first layer. The following lines of code will produce the error RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3 'mat1' in call to _th_addmm_, despite me setting the input to type Long. The print statement in the 3rd line also gives torch.LongTensor, yet the fourth line still throws the error.

import torch
import torch.nn as nn

data = torch.randint(low=0, high=255, size=[2, 1, 1024, 1024], dtype=torch.int64)
model = nn.Conv2d(1, 3, kernel_size=3, padding=1, bias=False)
print(data.type())
out = model(data)
The type of your data is Long, but the type of the weights of your model is Float. You need to change the type of your data if you are planning to train a model:

import torch
import torch.nn as nn

data = torch.randint(low=0, high=255, size=[2, 1, 1024, 1024], dtype=torch.float32)
model = nn.Conv2d(1, 3, kernel_size=3, padding=1, bias=False)
print(data.type())
out = model(data)
https://stackoverflow.com/questions/73734598/
How to use the command "with" conditionally without duplication of code
I am trying to use with before executing a block of code, but only if a condition is met; (at least the common) usage of with doesn't appear to support that unless I duplicate the block of code. More concretely, I know I can do the following:

if condition:
    with blah_blah():
        my_code_block
else:
    my_code_block

But that's unsatisfying during development, since any change I make to my_code_block must be made twice. What I want to do (conceptually) is:

if condition:
    with blah_blah():
else:
    my_code_block

That doesn't work, though. Is there a way to accomplish what I'm trying to do? For anyone interested in my particular use: I'm trying to write code that runs a batch of examples in PyTorch, with torch.no_grad() if I'm in evaluation mode and without it if I'm in train mode. So what I want to do becomes:

if mode == 'eval':
    with torch.no_grad():
else:
    run_batch(features, labels)
Use the with statement, but with a nullcontext context manager if necessary:

from contextlib import nullcontext

with blah_blah() if condition else nullcontext():
    my_code_block

nullcontext takes an optional argument that it will return if your with statement expects something to be bound with as. For example:

with nullcontext("hello") as f:
    print(f)  # outputs "hello"
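Applied to the use case from the question (mode, run_batch, features and labels being the asker's names), this becomes:

from contextlib import nullcontext
import torch

with torch.no_grad() if mode == 'eval' else nullcontext():
    run_batch(features, labels)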
https://stackoverflow.com/questions/73735168/
stable-baselines3 PPO model loaded but not working
I am trying to make an AI agent for playing the OpenAI Gym CarRacing environment and I am having trouble loading saved models. I train them, they work, I save them and load them, and suddenly the car doesn't even move. I even tried downloading models from other people, and when loaded, the car just doesn't move. I am on Ubuntu 20.04 in VS Code in a Jupyter notebook, using gym==0.21.0, stable-baselines3==1.6.0, python==3.7.0.

import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import os

I make the environment:

environment_name = "CarRacing-v0"
env = gym.make(environment_name)

I create the PPO model and make it learn for a couple thousand timesteps. Now when I evaluate the policy, the car renders as moving.

log_path = os.path.join('Training', 'Logs')
model = PPO("CnnPolicy", env, verbose=1, tensorboard_log=log_path)
model.learn(total_timesteps=4000)
evaluate_policy(model, env, n_eval_episodes=1, render=True)

I save the model:

ppo_path = os.path.join('Training', 'Saved Models', 'PPO_Car_Testing')
model.save(ppo_path)

Now I delete the model and load a saved one, and when I evaluate it the car just doesn't move, as if it always got the "do nothing" action. I tried models trained for 2k timesteps up to a model which had been learning for 2 million timesteps.

del model
model = PPO("CnnPolicy", env, verbose=1, tensorboard_log=log_path)
ppo_path_load = os.path.join('Training', 'Saved Models', 'PPO_2m_Driving_model')
model.load(ppo_path_load, env)
evaluate_policy(model, env, n_eval_episodes=1, render=True)

Any ideas why the models load incorrectly?
It seems like your model didn't load correctly; the loading code is wrong. load() is a class method that returns a new model instance, so calling model.load(...) does not update the existing model in place. Change

model = PPO("CnnPolicy", env, verbose=1, tensorboard_log=log_path)
ppo_path_load = os.path.join('Training', 'Saved Models', 'PPO_2m_Driving_model')
model.load(ppo_path_load, env)

to

ppo_path_load = os.path.join('Training', 'Saved Models', 'PPO_2m_Driving_model')
model = PPO.load(ppo_path_load, env)

More generally, replace PPO with whichever RL algorithm you are using (PPO, A2C, etc.):

model = RLALGORITHM.load(ppo_path_load, env)
https://stackoverflow.com/questions/73737008/
Efficient pseudo-inverse for PyTorch 2D convolution
Background: Thanks for your attention! I am learning the basic knowledge of 2D convolution, linear algebra and PyTorch. I have encountered an implementation problem with the pseudo-inverse of the convolution operator. Specifically, I have no idea how to implement it in an efficient way. Please see the following problem statement for details. Any help/tip/suggestion is welcome.

The Original Problem: I have an image feature x with shape [b,c,h,w] and a 3x3 convolutional kernel K with shape [c,c,3,3]. There is y = K * x. How do I implement the corresponding pseudo-inverse on y in an efficient way? Given y = K * x = Ax, how do I implement x_hat = (A^+)y? I guess that there should be some operations using torch.fft, but I still have no idea how to implement it. I do not know if an implementation exists already.

import torch
import torch.nn.functional as F

c = 32
K = torch.randn(c, c, 3, 3)
x = torch.randn(1, c, 128, 128)
y = F.conv2d(x, K, padding=1)
print(y.shape)

# How to implement pseudo-inverse for y = K * x in an efficient way?

Some of My Efforts: I know that the 2D convolution is a linear operator, equivalent to a "matrix product" operator. We could actually write out the matrix form of the convolution and calculate its pseudo-inverse. However, I think this type of operation would be inefficient, and I have no idea how to implement it efficiently. According to Wikipedia, the pseudo-inverse may satisfy the property A(A_pinv(x)) = x, where A is the convolutional operator, A_pinv is its pseudo-inverse, and x may be any image feature. (Thanks again for reading such a long post!)
This takes the problem to another level. The convolution itself is a linear operation: you can determine the matrix of the operation and solve a least squares problem directly [1], or compute the pseudo-inverse as you mentioned and then apply it to different outputs, predicting a projection of the input. I am changing your code to use padding=0:

import torch
import torch.nn.functional as F

# your code
c = 32
K = torch.randn(c, c, 1, 1)
x = torch.randn(4, c, 128, 128)
y = F.conv2d(x, K, bias=torch.zeros((c,)))

Also, as you probably already suggested, the convolution can be computed as ifft(fft(h)*fft(x)). However, the conv2d function is a cross-correlation, so you have to conjugate the filter, leading to ifft(conj(fft(h))*fft(x)). You also have to apply this along two axes, and you have to make sure the FFTs are calculated using the same representation (size); since the data is real, we can apply a multi-dimensional real FFT. To be complete, conv2d works on multiple channels, so we have to calculate summations of convolutions. Since the FFT is linear, we can simply compute the summations in the frequency domain using einsum:

s = y.shape[-2:]
K_f = torch.fft.rfftn(K, s)
x_f = torch.fft.rfftn(x, s)
y_f = torch.einsum('jkxy,ikxy->ijxy', K_f.conj(), x_f)
y_hat = torch.fft.irfftn(y_f, s)

Except for the borders it should be accurate (remember the FFT computes a cyclic convolution):

torch.max(abs(y_hat[:, :, :-2, :-2] - y[:, :, :, :]))

Now, notice the pattern jk,ik->ij in the einsum. It means y_f[i,j] = sum(K_f[j,k] * x_f[i,k]) = x_f @ K_f.T, where @ is the matrix product on the first two dimensions. So to invert this operation we can interpret the first two dimensions as matrices. The function pinv computes pseudo-inverses on the last two axes, so in order to use it we have to permute the axes. If we right-multiply the output by the pseudo-inverse of the transposed K_f, we should invert this operation:

s = 128, 128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean(x**2))

Notice that I am using the full convolution, but conv2d actually crops the images. Let's apply that (k here is the kernel size):

y_hat[:, :, 128-(k-1):, :] = 0
y_hat[:, :, :, 128-(k-1):] = 0

Repeating the calculation, you will see that the input is not accurate anymore, so you have to be careful about what you do with your convolution. But in the situations where you can get this to work, it will in fact be efficient:

s = 128, 128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean(x**2))
https://stackoverflow.com/questions/73739573/
How to pass `--gpus all` option to Docker with Go SDK?
I have seen how to do some basic commands such as running a container, pulling images, listing images, etc. from the SDK examples. I am working on a project where I need to use the GPU from within the container. My system has a GPU, I have installed the drivers, and I have also installed the nvidia-container-runtime. If we remove the Go SDK from the scene for a moment, I can run the following command to get the nvidia-smi output on my host system:

docker run -it --rm --gpus all nvidia/cuda:10.0-base nvidia-smi

I have to do this via the SDK. Here is the code to start with. This code prints "hello world", but in reality I will be running the nvidia-smi command in its place:

package main

import (
	"context"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	RunContainer(ctx, cli)
}

func RunContainer(ctx context.Context, cli *client.Client) {
	reader, err := cli.ImagePull(ctx, "nvidia/cuda:10.0-base", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	defer reader.Close()
	// io.Copy(os.Stdout, reader)

	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "nvidia/cuda:10.0-base",
		Cmd:   []string{"echo", "hello world"},
		// Tty: false,
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}

	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}

	statusCh, errCh := cli.ContainerWait(ctx, resp.ID, container.WaitConditionNotRunning)
	select {
	case err := <-errCh:
		if err != nil {
			panic(err)
		}
	case <-statusCh:
	}

	out, err := cli.ContainerLogs(ctx, resp.ID, types.ContainerLogsOptions{ShowStdout: true})
	if err != nil {
		panic(err)
	}
	stdcopy.StdCopy(os.Stdout, os.Stderr, out)
}
See: https://github.com/docker/cli/blob/9ac8584acfd501c3f4da0e845e3a40ed15c85041/cli/command/container/opts.go#L594

import "github.com/docker/cli/opts"

// ...

gpuOpts := opts.GpuOpts{}
gpuOpts.Set("all")

resp, err := cli.ContainerCreate(ctx, &container.Config{
	Image: "nvidia/cuda:10.0-base",
	Cmd:   []string{"echo", "hello world"},
	// Tty: false,
}, &container.HostConfig{Resources: container.Resources{DeviceRequests: gpuOpts.Value()}}, nil, nil, "")
https://stackoverflow.com/questions/73742554/
Gradients not populating as expected
Sorry, I know questions of this sort have been asked a lot, but I still don't understand the behavior of autograd. A simple example is below:

ce_loss = torch.nn.BCELoss()
par = torch.randn((1, n), requires_grad=True)
act = torch.nn.Sigmoid()
y_hat = []
for obs in data:
    y_hat.append(act(par @ obs))
loss = ce_loss(torch.tensor(y_hat, requires_grad=True), y)
loss.backward()

After applying backward, the grad of par remains None (although it is a leaf node with requires_grad=True). Any tips?
It is simply because torch.tensor(...) creates a new leaf of the computational graph. By definition, the operations inside torch.tensor(...) are not tracked; in particular, the computations using elements of par are lost (and so its grads are never computed). Note that adding requires_grad=True doesn't change anything, because it still creates a leaf (with grads) that has forgotten the previous operations, by definition of a leaf. I suggest another way to do your computation, without iterating over data, using native parallelization:

import torch

batch_size, n = 8, 10  # or something else

# Random data and labels to reproduce the code
data = torch.randn((batch_size, n))
y = torch.rand((batch_size, 1))  # BCELoss expects targets in [0, 1]

ce_loss = torch.nn.BCELoss()
par = torch.randn((1, n), requires_grad=True)
act = torch.nn.Sigmoid()

y_hat = act(data @ par.T)  # compute all predictions in parallel, size (batch_size, 1)
loss = ce_loss(y_hat, y)   # automatically reduced to scalar (mean)
loss.backward()
print(par.grad)  # no longer None!
https://stackoverflow.com/questions/73744146/
PyTorch: how to use torchvision.transforms.AugMix with torch.float32?
I am trying to apply data augmentation to an image dataset by using torchvision.transforms.AugMix, but I get the following error:

TypeError: Only torch.uint8 image tensors are supported, but found torch.float32.

I tried to convert it to int, but I got another error. My code where I am trying to use the AugMix function:

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),  # resize to 224*224
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),  # normalization
        torchvision.transforms.AugMix()
    ]
)
to_tensor = torchvision.transforms.ToTensor()
Image.MAX_IMAGE_PIXELS = None

class BreastDataset(torch.utils.data.Dataset):
    def __init__(self, json_path, data_dir_path='./dataset', clinical_data_path=None, is_preloading=True):
        self.data_dir_path = data_dir_path
        self.is_preloading = is_preloading
        with open(json_path) as f:
            print(f"load data from {json_path}")
            self.json_data = json.load(f)

    def __len__(self):
        return len(self.json_data)

    def __getitem__(self, index):
        label = int(self.json_data[index]["label"])
        patient_id = self.json_data[index]["id"]
        patch_paths = self.json_data[index]["patch_paths"]

        data = {}
        if self.is_preloading:
            data["bag_tensor"] = self.bag_tensor_list[index]
        else:
            data["bag_tensor"] = self.load_bag_tensor([os.path.join(self.data_dir_path, p_path) for p_path in patch_paths])

        data["label"] = label
        data["patient_id"] = patient_id
        data["patch_paths"] = patch_paths
        return data

    def load_bag_tensor(self, patch_paths):
        """Load a bag data as tensor with shape [N, C, H, W]"""
        patch_tensor_list = []
        for p_path in patch_paths:
            patch = Image.open(p_path).convert("RGB")
            patch_tensor = transform(patch)  # [C, H, W]
            patch_tensor = torch.unsqueeze(patch_tensor, dim=0)  # [1, C, H, W]
            patch_tensor_list.append(patch_tensor)
        bag_tensor = torch.cat(patch_tensor_list, dim=0)  # [N, C, H, W]
        return bag_tensor

Any help is appreciated! Thank you in advance!
For me, applying AugMix first and then ToTensor() worked:

transformation = transforms.Compose([
    transforms.AugMix(severity=6, mixture_width=2),
    transforms.ToTensor(),
    transforms.RandomErasing(),
    transforms.RandomGrayscale(p=0.35)
])
https://stackoverflow.com/questions/73754867/
Pytorch tensor and its transpose have different storage
I was reading the book Deep Learning with PyTorch and was trying out an example which shows that a tensor and its transpose share the same storage. However, when I tried it out on my local machine, I can see that the storage is different for both. I just wanted to understand why this might be the case here. The code I tried and the output are below:

>>> points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
>>> points_t = torch.transpose(points, 0, 1)
>>> points_t
tensor([[4., 5., 2.],
        [1., 3., 1.]])
>>> id(points.storage()) == id(points_t.storage())
False
>>> id(points.storage())
2796700202176
>>> id(points_t.storage())
2796700201888

My Python version is 3.9.7 and my PyTorch version is 1.11.0.
You need to compare the pointers of the storages instead of taking the id of them:

>>> points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
>>> points_t = torch.transpose(points, 0, 1)
>>> points_t
tensor([[4., 5., 2.],
        [1., 3., 1.]])
>>> points.storage().data_ptr() == points_t.storage().data_ptr()
True

The reason you are getting False for the id comparison is that the Python objects (points and points_t) are different objects, but the underlying storages (the memory that you allocate to keep the data) are the same.
https://stackoverflow.com/questions/73761229/
How to perform multiplication along axes in pytorch?
I have 2 tensors, X and Y, where X has shape (20, 4, 300) and Y has shape (20, 300). How do I perform multiplication such that I get a result of shape (20, 4)? The corresponding technique in Keras is

dot_product = Dot(axes=(2, 1))([X, Y])

I would like to know how the same can be done in PyTorch.
Your most versatile function for matrix multiplication is torch.einsum: it allows you to specify the dimensions along which to multiply and the order of the dimensions of the output tensor. In your case it would look like:

dot_product = torch.einsum('bij,bj->bi', X, Y)
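An equivalent formulation without einsum, using batched matrix multiplication, is:

import torch

X = torch.randn(20, 4, 300)
Y = torch.randn(20, 300)

dot_product = torch.einsum('bij,bj->bi', X, Y)             # shape (20, 4)
dot_product_bmm = torch.bmm(X, Y.unsqueeze(2)).squeeze(2)  # same result
print(torch.allclose(dot_product, dot_product_bmm))  # True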
https://stackoverflow.com/questions/73764581/
Problem with nested network in PyTorch: "TypeError: forward() missing 1 required positional argument: 'x'"
I am attempting to create an architecture consisting of one convolutional filter followed by a layer of three convolutional filters. I first build the inner layer with the name MysmallNet(nn.Module), and then I build MybigNet calling the small network. This is my code:

#In[]
class MysmallNet(nn.Module):
    def __init__(self):
        super(MysmallNet, self).__init__()
        # TODO Task 3: Design Your Network
        self.Convlayer_1 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.Convlayer_2 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.Convlayer_3 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # TODO Task 3: Design Your Network
        residual1 = x
        x = self.Convlayer_1(x)
        x = self.Convlayer_2(x)
        x = self.Convlayer_3(x)
        return x

MysmallNetV2 = MysmallNet()

class MybigNet(nn.Module):
    def __init__(self):
        super(MybigNet, self).__init__()
        self.Convlayer_1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.smallNet = MysmallNetV2()

    def forward(self, x):
        x = self.Convlayer_1(x)
        x = self.smallNet(x)
        return x

modelBig = MybigNet()

The issue appears when I create my model as modelBig. The displayed error is:

TypeError: forward() missing 1 required positional argument: 'x'
Your definition of MybigNet is wrong; it should be:

class MybigNet(nn.Module):
    def __init__(self):
        super(MybigNet, self).__init__()
        self.Convlayer_1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.smallNet = MysmallNet()

    def forward(self, x):
        x = self.Convlayer_1(x)
        x = self.smallNet(x)
        return x

MysmallNetV2 is already an instance of MysmallNet, so writing MysmallNetV2() calls its forward method (which expects x) instead of constructing a new module, hence the error. This should solve the issue.
https://stackoverflow.com/questions/73769334/
How to extract Integer from Pytorch Tensor
This is a part of the code (VS Code declares the variable xyxy as 'list'):

for *xyxy, conf, cls in reversed(det):
    if save_txt:  # Write to file
        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
        line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
        with open(txt_path + '.txt', 'a') as f:
            f.write(('%g ' * len(line)).rstrip() % line + '\n')

    if save_img or view_img:  # Add bbox to image
        label = f'{names[int(cls)]} {conf:.2f}'
        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1)
    else:
        pass

# Print time (inference + NMS)
print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS')

Printing the variable xyxy gives this output (a list of PyTorch tensors):

[tensor(513., device='cuda:0'), tensor(308., device='cuda:0'), tensor(661., device='cuda:0'), tensor(394., device='cuda:0')]

I want to extract integers from this list; for example, the output should be [513, 308, 661, 394]. I have tried print(xyxy.list()) and print(xyxy.numpy()), which give an error:

AttributeError: 'list' object has no attribute 'list'
You can convert the elements of the list into integers using .item():

xyxy = [int(e_.item()) for e_ in xyxy]
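An equivalent one-liner that stacks the 0-dim tensors first (this also moves the values off the GPU):

import torch

xyxy_int = torch.stack(xyxy).int().tolist()  # e.g. [513, 308, 661, 394]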
https://stackoverflow.com/questions/73770032/
Pytorch gradient calculation of one Tensor
I'm a beginner in PyTorch and I'm probably stuck on a relatively trivial problem, but it hasn't cleared up for me yet. When calculating the gradient of a tensor, I get a constant gradient of 1. Shouldn't the gradient of a constant be 0? Here is a minimal example:

import torch

x = torch.tensor(50., requires_grad=True)
y = x
y.backward()
print(x.grad)
# Output: tensor(1.)

So why is the output 1 and not 0?
You are not computing the gradient of a constant, but that of the variable x which has a constant value 50. The derivative of x with respect to x is 1.
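To see a non-trivial gradient, make y an actual function of x rather than x itself:

import torch

x = torch.tensor(50., requires_grad=True)
y = x * x
y.backward()
print(x.grad)  # tensor(100.): d(x^2)/dx = 2x = 100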
https://stackoverflow.com/questions/73772484/
Pytorch custom dataset is super slow
During training it takes ages to load one batch of data. What can cause this problem? I am new to PyTorch; I had been working with TensorFlow for a while, and this is my first attempt to create something like this. I wrote a custom dataset which gets its images from folders; it gets stored in a dataframe which is split into train/val sets.

class CustomDataset(torch.utils.data.Dataset):
    def __init__(self, root, split, train_ratio=0.85, val_ratio=0.1, transform=None):
        self.root = root
        self.train_ratio = train_ratio
        self.val_ratio = val_ratio
        self.test_ratio = 1 - (self.train_ratio + self.val_ratio)
        df = self.folder2pandas()
        self.split = split
        self.data = self.splitDf(df)
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[[idx]]
        x = row.values[0][0]
        y = row.values[0][1]
        x = cv2.imread(x)
        if self.transform:
            x = self.transform(x)
        return x, y

    def folder2pandas(self):
        tuples = []
        for folder, subs, files in os.walk(self.root):
            for filename in files:
                path = os.path.abspath(os.path.join(folder, filename))
                tuples.append((path, folder.split('\\')[-1]))
        return pd.DataFrame(tuples, columns=["x", "y"])

    def splitDf(self, df):
        df = df.sort_values(by=['x'], ascending=True).reset_index(drop=True)
        train_idxs = df.loc[range(0, int(self.train_ratio * len(df)))]
        val_idxs = df.loc[range(int(self.train_ratio * len(df)),
                                int(self.train_ratio * len(df)) + int(self.val_ratio * len(df)))]
        test_idxs = df.loc[range(int(self.train_ratio * len(df)) + int(self.val_ratio * len(df)), len(df))]

        if self.split == 'train':
            return train_idxs
        elif self.split == 'val':
            return val_idxs
        elif self.split == 'test':
            return test_idxs

Augmentations:

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomChoice([
        transforms.RandomAutocontrast(),
        transforms.ColorJitter(brightness=0.3, contrast=0.5, saturation=0.1, hue=0.1),
        transforms.GaussianBlur(kernel_size=(5, 5), sigma=(0.1, 2.0)),
        transforms.Grayscale(num_output_channels=3),
        transforms.RandomVerticalFlip(),
    ]),
    transforms.RandomHorizontalFlip(0.5),
    transforms.ToTensor(),
    transforms.Normalize(res[0].numpy(), res[1].numpy()),
])

val_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(res[0].numpy(), res[1].numpy()),
])

Initializing the datasets (in the 'resources' folder there are two folders whose names represent the labels, for binary classification):

train_set = CustomDataset(root="resources/", split='train', transform=train_transforms)
val_set = CustomDataset(root="resources/", split='val', transform=val_transforms)

Giving the datasets to the dataloaders:

trainloader = torch.utils.data.DataLoader(train_set, shuffle=True, batch_size=32, num_workers=4)
testloader = torch.utils.data.DataLoader(val_set, shuffle=True, batch_size=32, num_workers=4)
Putting the solution from the comments here in a cleaner form: the creation of several workers was taking a large amount of time. On Windows, the creation of worker processes can have weird behaviour in terms of time. Since __getitem__() is not even being called yet, the problem is not in the data loading per se; try removing the num_workers parameter:

testloader = torch.utils.data.DataLoader(val_set, shuffle=True, batch_size=32)

Then, if this works, try increasing it and check the behaviour.
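Relatedly, on Windows DataLoader workers are spawned processes, so any script using num_workers > 0 should keep the iteration under an import guard; a minimal sketch, assuming the train_set from the question:

import torch

if __name__ == '__main__':
    trainloader = torch.utils.data.DataLoader(train_set, shuffle=True, batch_size=32, num_workers=4)
    for x, y in trainloader:
        pass  # training step goes here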
https://stackoverflow.com/questions/73777647/
Pytorch random choose an index with condition
I have a tensor that stores whether or not an index is available:

available = torch.Tensor([1, 1, 0, 0, 1, 0])

and I want to return an index of either 0, 1, or 4 (given that available[0], available[1], and available[4] all equal 1), each with the same probability. Can somebody help me with this? Thanks.
Torch makes this easy. You can use multinomial as per this answer:

num_samples = 1
available.multinomial(num_samples, replacement=False)

Here, num_samples indicates how many samples you'd like to draw. Because you have 1s and 0s already, your available tensor naturally gives the correct weights for the multinomial function. If you are going to draw more than 3 samples, this will error unless you change replacement to True.
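A quick sketch with the tensor from the question:

import torch

available = torch.Tensor([1, 1, 0, 0, 1, 0])
idx = available.multinomial(1, replacement=False).item()
print(idx)  # one of 0, 1, 4, each with probability 1/3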
https://stackoverflow.com/questions/73781225/
Why are some nn.Linear layers not quantized by Pytorch?
I'm quantizing the Swin transformer (static PTQ) using the following function:

def static_quantize(m, data_loader):
    backend = 'qnnpack'
    torch.backends.quantized.engine = backend
    m.eval()
    m.qconfig = torch.quantization.get_default_qconfig(backend)
    torch.quantization.prepare(m, inplace=True)

    with torch.no_grad():
        for i, data in enumerate(data_loader):
            if i >= 100:
                break
            result = m(return_loss=False, **data)

    torch.quantization.convert(m, inplace=True)
    return m

Most modules, including linear layers, do get quantized. However, some linear layers of a SwinBlock are skipped, as you can see here:

(3): SwinBlockSequence(
  (blocks): ModuleList(
    (0): SwinBlock(
      (quant): Quantize(scale=tensor([0.3938]), zero_point=tensor([122]), dtype=torch.quint8)
      (dequant): DeQuantize()
      (norm1): QuantizedLayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (attn): ShiftWindowMSA(
        (w_msa): WindowMSA(
          (quant): Quantize(scale=tensor([0.0294]), zero_point=tensor([155]), dtype=torch.quint8)
          (dequant): DeQuantize()
          (qkv): QuantizedLinear(in_features=768, out_features=2304, scale=0.039033032953739166, zero_point=133, qscheme=torch.per_tensor_affine)
          (attn_drop): Dropout(p=0, inplace=False)
          (proj): QuantizedLinear(in_features=768, out_features=768, scale=0.0369536317884922, zero_point=110, qscheme=torch.per_tensor_affine)
          (proj_drop): Dropout(p=0, inplace=False)
          (softmax): Softmax(dim=-1)
        )
        (drop): DropPath()
      )
      (norm2): QuantizedLayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (ffn): FFN(  // <------- HERE (children not quantized)
        (activate): GELU()
        (layers): Sequential(
          (0): Sequential(
            (0): Linear(in_features=768, out_features=3072, bias=True)
            (1): GELU()
            (2): Dropout(p=0, inplace=False)
          )
          (1): Linear(in_features=3072, out_features=768, bias=True)
          (2): Dropout(p=0, inplace=False)
        )
        (dropout_layer): DropPath()
      )
    )

I am referring to the FFN submodule, where nothing is quantized. However, it contains linear layers, which ought to pose no problems for quantization. Here's how FFN is added to the module:

_ffn_cfgs = {
    'embed_dims': embed_dims,
    'feedforward_channels': int(embed_dims * ffn_ratio),
    'num_fcs': 2,
    'ffn_drop': 0,
    'dropout_layer': dict(type='DropPath', drop_prob=drop_path),
    'act_cfg': dict(type='GELU'),
    **ffn_cfgs
}
self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1]
self.ffn = FFN(**_ffn_cfgs)

Here's the source code for FFN:

@FEEDFORWARD_NETWORK.register_module()
class FFN(BaseModule):
    """Implements feed-forward networks (FFNs) with identity connection.

    Args:
        embed_dims (int): The feature dimension. Same as `MultiheadAttention`. Defaults: 256.
        feedforward_channels (int): The hidden dimension of FFNs. Defaults: 1024.
        num_fcs (int, optional): The number of fully-connected layers in FFNs. Default: 2.
        act_cfg (dict, optional): The activation config for FFNs. Default: dict(type='ReLU')
        ffn_drop (float, optional): Probability of an element to be zeroed in FFN. Default 0.0.
        add_identity (bool, optional): Whether to add the identity connection. Default: `True`.
        dropout_layer (obj:`ConfigDict`): The dropout_layer used when adding the shortcut.
        init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. Default: None.
    """

    @deprecated_api_warning(
        {
            'dropout': 'ffn_drop',
            'add_residual': 'add_identity'
        },
        cls_name='FFN')
    def __init__(self,
                 embed_dims=256,
                 feedforward_channels=1024,
                 num_fcs=2,
                 act_cfg=dict(type='ReLU', inplace=True),
                 ffn_drop=0.,
                 dropout_layer=None,
                 add_identity=True,
                 init_cfg=None,
                 **kwargs):
        super().__init__(init_cfg)
        assert num_fcs >= 2, 'num_fcs should be no less ' \
            f'than 2. got {num_fcs}.'
        self.embed_dims = embed_dims
        self.feedforward_channels = feedforward_channels
        self.num_fcs = num_fcs
        self.act_cfg = act_cfg
        self.activate = build_activation_layer(act_cfg)

        layers = []
        in_channels = embed_dims
        for _ in range(num_fcs - 1):
            layers.append(
                Sequential(
                    Linear(in_channels, feedforward_channels),
                    self.activate,
                    nn.Dropout(ffn_drop)))
            in_channels = feedforward_channels
        layers.append(Linear(feedforward_channels, embed_dims))
        layers.append(nn.Dropout(ffn_drop))
        self.layers = Sequential(*layers)
        self.dropout_layer = build_dropout(dropout_layer) if dropout_layer else torch.nn.Identity()
        self.add_identity = add_identity

    @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN')
    def forward(self, x, identity=None):
        """Forward function for `FFN`.

        The function would add x to the output tensor if residue is None.
        """
        out = self.layers(x)
        if not self.add_identity:
            return self.dropout_layer(out)
        if identity is None:
            identity = x
        return identity + self.dropout_layer(out)
The problem is very silly: Linear in this case referred to an mmcv wrapper class for nn.Linear, and quantizing the wrapper class is not supported.

class Linear(torch.nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # empty tensor forward of Linear layer is supported in Pytorch 1.6
        if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)):
            out_shape = [x.shape[0], self.out_features]
            empty = NewEmptyTensorOp.apply(x, out_shape)
            if self.training:
                # produce dummy gradient to avoid DDP warning.
                dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
                return empty + dummy
            else:
                return empty

        return super().forward(x)

By the looks of it (since I'm using PyTorch 1.8.1), this can be easily remedied by modifying the FFN class to use nn.Linear.
https://stackoverflow.com/questions/73784322/
How is it possible that a list was given as a parameter to a function that expects a tuple?
I was watching a tutorial on PyTorch and coding along, and got stuck on the function torch.randint. According to the documentation:

torch.randint(low=0, high, size, \*, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Here, size is:

size (tuple) – a tuple defining the shape of the output tensor.

The YouTuber wrote:

random_idx = torch.randint(0, len(train_data), size=[1]).item()

But [1] is not a tuple, it is a list. How is this possible? I also tested it with a tuple and it worked just fine, and every usage of randint() I found on the internet provides a tuple for size, e.g. size=(1, 2) or size=(1, 1). I searched the source code for torch.randint but could not find it; I searched GitHub, the PyTorch docs, and even tried to find it in a local PyTorch library.
The documentation states that this should be a tuple; however, in practice the definition of randint() is:

def randint(low: _int, high: _int, size: _size, ...)

where _size is defined as:

(type alias) _size: Type[Size] | Type[List[int]] | Type[Tuple[int, ...]]

So in practice the requirement is for the size parameter to be of type Size, a list of int, or a tuple of int, which all behave the same in this case.

EDIT: As stated above, type hints are only indicative in Python, so any type of variable will work as long as the function itself doesn't raise an error. As for why the function acts accordingly and returns what is expected, that is because of the first part of the answer :)
https://stackoverflow.com/questions/73785480/
What am I doing wrong when installing OpenKiwi?
I tried to work with OpenKiwi in Anaconda3, and after installation (pip install openkiwi) I executed the following code (I do this because I want to create an OpenKiwi vocabulary):

import warnings
from collections import defaultdict

import torchtext

from kiwi.constants import PAD, START, STOP, UNALIGNED, UNK, UNK_ID

And then I get an error message:

ImportError                               Traceback (most recent call last)
<ipython-input-6-ea850b280bef> in <module>
      4 import torchtext
      5
----> 6 from kiwi.constants import PAD, START, STOP, UNALIGNED, UNK, UNK_ID

ImportError: cannot import name 'UNK_ID' from 'kiwi.constants' (C:\Users\Mike\anaconda3\lib\site-packages\kiwi\constants.py)

Anaconda3 has the following versions:

pytorch-lightning: 1.7.6
pytorch-nlp: 0.5.0
torch: 1.4.0
torchmetrics: 0.9.3
torchtext: 0.13.1
transformers: 3.5.1
UNK_ID is no longer a constant in the latest version of OpenKiwi. That second link shows some code targeting version 1.4 or so, while OpenKiwi is at version 2.1 now. Just drop UNK_ID from the import line, and replace its use in the code with 0.
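Following that suggestion, the import would become (with UNK_ID = 0 standing in for the old constant's value, per the answer above):

from kiwi.constants import PAD, START, STOP, UNALIGNED, UNK

UNK_ID = 0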
https://stackoverflow.com/questions/73791169/
AssertionError when running U-Net script
This is a continuation of this problem. While I ironed out those problems, I still get another issue; would anyone be able to help me in this regard? It looks like the predicted mask and the actual mask have different sizes? The output is below:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/tmp/ipykernel_18/459131192.py in <module>
     25         with torch.set_grad_enabled(phase == "train"):
     26             y_pred = unet(x)
---> 27             loss = dsc_loss(y_pred, y_true)
     28             running_loss += loss.item()
     29

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/tmp/ipykernel_18/3969884729.py in forward(self, y_pred, y_true)
      6
      7     def forward(self, y_pred, y_true):
----> 8         assert y_pred.size() == y_true.size()
      9         y_pred = y_pred[:, 0].contiguous().view(-1)
     10         y_true = y_true[:, 0].contiguous().view(-1)

AssertionError:

Below is the U-Net model. Please have a look.

unet_network.py:

# U-Net
# https://github.com/mateuszbuda/brain-segmentation-pytorch
from collections import OrderedDict

import torch
import torch.nn as nn

class UNet(nn.Module):
    def __init__(self, in_channels=3, out_channels=1, init_features=8):
        super(UNet, self).__init__()

        features = init_features
        self.encoder1 = UNet._block(in_channels, features, name="enc1")
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.encoder2 = UNet._block(features, features * 2, name="enc2")
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.encoder3 = UNet._block(features * 2, features * 4, name="enc3")
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.encoder4 = UNet._block(features * 4, features * 8, name="enc4")
        self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)

        self.bottleneck = UNet._block(features * 8, features * 16, name="bottleneck")

        self.upconv4 = nn.ConvTranspose2d(features * 16, features * 8, kernel_size=2, stride=2)
        self.decoder4 = UNet._block((features * 8) * 2, features * 8, name="dec4")
        self.upconv3 = nn.ConvTranspose2d(features * 8, features * 4, kernel_size=2, stride=2)
        self.decoder3 = UNet._block((features * 4) * 2, features * 4, name="dec3")
        self.upconv2 = nn.ConvTranspose2d(features * 4, features * 2, kernel_size=2, stride=2)
        self.decoder2 = UNet._block((features * 2) * 2, features * 2, name="dec2")
        self.upconv1 = nn.ConvTranspose2d(features * 2, features, kernel_size=2, stride=2)
        self.decoder1 = UNet._block(features * 2, features, name="dec1")

        self.conv = nn.Conv2d(in_channels=features, out_channels=out_channels, kernel_size=1)

    def forward(self, x):
        enc1 = self.encoder1(x)
        enc2 = self.encoder2(self.pool1(enc1))
        enc3 = self.encoder3(self.pool2(enc2))
        enc4 = self.encoder4(self.pool3(enc3))

        bottleneck = self.bottleneck(self.pool4(enc4))

        dec4 = self.upconv4(bottleneck)
        dec4 = torch.cat((dec4, enc4), dim=1)
        dec4 = self.decoder4(dec4)
        dec3 = self.upconv3(dec4)
        dec3 = torch.cat((dec3, enc3), dim=1)
        dec3 = self.decoder3(dec3)
        dec2 = self.upconv2(dec3)
        dec2 = torch.cat((dec2, enc2), dim=1)
        dec2 = self.decoder2(dec2)
        dec1 = self.upconv1(dec2)
        dec1 = torch.cat((dec1, enc1), dim=1)
        dec1 = self.decoder1(dec1)
        return torch.sigmoid(self.conv(dec1))

    @staticmethod
    def _block(in_channels, features, name):
        return nn.Sequential(
            OrderedDict(
                [
                    (name + "conv1",
                     nn.Conv2d(in_channels=in_channels, out_channels=features,
                               kernel_size=3, padding=1, bias=False)),
                    (name + "norm1", nn.BatchNorm2d(num_features=features)),
                    (name + "relu1", nn.ReLU(inplace=True)),
                    (name + "conv2",
                     nn.Conv2d(in_channels=features, out_channels=features,
                               kernel_size=3, padding=1, bias=False)),
                    (name + "norm2", nn.BatchNorm2d(num_features=features)),
                    (name + "relu2", nn.ReLU(inplace=True)),
                ]
            )
        )

Thanks & Best Regards
Schroter Michael
Your error stems from the difference in the number of channels between the prediction (y_pred of size torch.Size([5, 1, 512, 512])) and the target (y_true of size torch.Size([5, 3, 512, 512])). For a target with 3 channels, your prediction needs to have 3 channels as well. That is, you need to configure your UNet with out_channels=3 instead of the default of 1.
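Concretely, assuming the UNet class from the question:

unet = UNet(in_channels=3, out_channels=3, init_features=8)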
https://stackoverflow.com/questions/73795865/
Fine tuning resnet18 for cifar10
I just want to fine-tune ResNet18 on the CIFAR-10 dataset, so I want to change the last linear layer from 1000 outputs to 10. I tried using the children function to get the previous layers:

ResModel = resnet18(weights=ResNet18_Weights)
model = nn.Sequential(
    *list(ResModel.children())[:-1],
    nn.Linear(512, 10)
)

This raised the error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (32768x1 and 512x10)

And then I tried this way:

ResModel.fc = nn.Linear(512, 10)

It works fine. So why?
The difference between stacking all layers into a single nn.Sequential and overriding only the last layer is the forward function: Your ResModel is of type torchvision.models.ResNet, while your model is a simple nn.Sequential. The forward pass of ResNet has an additional flatten operation before the last linear layer -- you do not have this operation in your nn.Sequential model.
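If you still prefer the nn.Sequential route, a sketch of a working version simply inserts the missing flatten step between the pooling layer and the new head:

import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

ResModel = resnet18(weights=ResNet18_Weights.DEFAULT)
model = nn.Sequential(
    *list(ResModel.children())[:-1],  # conv backbone + adaptive average pool
    nn.Flatten(1),                    # the flatten that ResNet.forward does implicitly
    nn.Linear(512, 10),               # new 10-class head for CIFAR-10
)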
https://stackoverflow.com/questions/73799136/
How to automate layers addition to neural network models in pytorch
I am dealing with a model in PyTorch and I want to automate adding layers and activations to the model. This code is my simple model: import torch from torch import nn import torch.nn.functional as F class NeuralNetwork(nn.Module): def __init__(self, n_inputs, n_hidden_unit, n_output): super().__init__() l1 = nn.Linear(n_inputs, n_hidden_unit) a1 = nn.Sigmoid() l2 = nn.Linear(n_hidden_unit, n_output) l = [l1, a1, l2] self.module_list = nn.ModuleList(l) def forward(self, x): for f in self.module_list: x = f(x) return x model = NeuralNetwork(n_inputs=10, n_hidden_unit=30, n_output=2) model As you see, two layers and one activation are added manually, but I want to, for example, have two lists or NumPy arrays of them and then pass the lists into my model. The lists will look like the following: connections = [(10, 30), (30, 2)] activation = [nn.Sigmoid()] I did a similar thing using a Sequential model: layers = [] layers.append(nn.Linear(10, 30)) layers.append(nn.Sigmoid()) layers.append(nn.Linear(30, 2)) model = nn.Sequential(*layers) model
You can just use a loop: def __init__(self, connections, activation): super().__init__() l = [] for layer_idx, (n_input, n_output) in enumerate(connections): l.append(nn.Linear(n_input, n_output)) if layer_idx < len(activation): l.append(activation[layer_idx]) self.module_list = nn.ModuleList(l)
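A usage sketch with the two lists from the question (this assumes the rest of the NeuralNetwork class, in particular its forward method, stays as it was):

connections = [(10, 30), (30, 2)]
activation = [nn.Sigmoid()]
model = NeuralNetwork(connections, activation)  # __init__ as defined above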
https://stackoverflow.com/questions/73800549/
Efficiently check whether each column in a PyTorch tensor has a corresponding reversed column
I have a collection of tensors of common shape (2,ncol). Example: torch.tensor([[1, 2, 3, 7, 8], [3, 3, 1, 8, 7]], dtype=torch.long) For each tensor, I want to determine if, for each column [[a], [b]], the reversed column [[b], [a]] is also in the tensor. For example, in this case, since ncol is odd, I can immediately say that this is not the case. But in this other example torch.tensor([[1, 2, 3, 7, 8, 4], [3, 3, 1, 8, 7, 2]], dtype=torch.long) I would actually have to perform the check. A naive solution would be test = torch.tensor([[1, 2, 3, 7, 8, 4], [3, 3, 1, 8, 7, 2]], dtype=torch.long) def are_column_paired(matrix: torch_geometric.data.Data) -> bool: ncol = matrix.shape[1] if ncol % 2 != 0: all_paired = False return all_paired column_has_match = torch.zeros(ncol, dtype=torch.bool) for i in range(ncol): if column_has_match[i]: continue column = matrix[:, i] j = i + 1 while not (column_has_match[i]) and (j <= (ncol - 1)): if column_has_match[j]: j = j + 1 continue current_column = matrix[:, j] current_column = current_column.flip(dims=[0]) if torch.equal(column, current_column): column_has_match[i], column_has_match[j] = True, True j = j + 1 all_paired = torch.all(column_has_match).item() return all_paired But of course this is slow and possibly not pythonic. How can I write a more efficient code? PS note that while test here is very small, in the actual use case I expect ncol to be O(10^5).
Here is one possible simple approach. It is likely not the most efficient you can get, but it is much faster than your current solution. The idea is to simply check whether sorting the columns of the original tensor and of the row-flipped tensor yields identical results. I believe the time complexity of this approach is O(n log n), as opposed to O(n^2) in your case. def are_columns_paired(matrix): flipped_matrix = matrix.flip(dims=[0]) matrix_sorted = matrix[:,matrix[1].argsort()] # sort second row matrix_sorted = matrix_sorted[:, matrix_sorted[0].sort(stable=True)[1]] # sort first row, keeping positions in second row fixed when there is a tie flipped_matrix = flipped_matrix[:,flipped_matrix[1].argsort()] flipped_matrix = flipped_matrix[:, flipped_matrix[0].sort(stable=True)[1]] return (matrix_sorted == flipped_matrix).all() Here, for both the original and the flipped matrix, we sort the columns, first based on the first row, and when there is a tie, based on the second row. I tested both approaches on a randomly generated tensor with ncol=2000000 and values ranging from 0 to 999999. The above code ran in about 1 second, while the approach from the question did not provide a solution even after an hour.
https://stackoverflow.com/questions/73803162/
UserWarning: Trying to infer the `batch_size` from an ambiguous collection
I have a pytorch lightning module like this: class GraphLevelGNN(pl.LightningModule): def __init__(self,**model_kwargs): super().__init__() # Saving hyperparameters self.save_hyperparameters() self.model = GraphGNNModel(**model_kwargs) self.loss_module = nn.BCEWithLogitsLoss() def forward(self, data, mode="train"): x, edge_index, batch_idx = data.x, data.edge_index, data.batch x = self.model(x.cpu(), edge_index.cpu(), batch_idx.cpu()) x = x.squeeze(dim=-1) if self.hparams.c_out == 1: preds = (x > 0).float().cpu() data.y = data.y.float().cpu() else: preds = x.argmax(dim=-1).cpu() loss = self.loss_module(x.cpu(), data.y.cpu()) acc = (preds.cpu() == data.y.cpu()).sum().float() / preds.shape[0] f1 = f1_score(preds.cpu(),data.y.cpu()) ##change f1/precision and recall was just testing precision = precision_score(preds.cpu(),data.y.cpu()) recall = recall_score(preds.cpu(),data.y.cpu()) return loss, acc, f1,precision, recall,preds def configure_optimizers(self): optimizer = optim.SGD(self.parameters(),lr=0.1) # High lr because of small dataset and small model return optimizer def training_step(self, batch, batch_idx): loss, acc, _,_,_,_ = self.forward(batch, mode="train") self.log('train_loss', loss,on_epoch=True,logger=True) self.log('train_acc', acc,on_epoch=True,logger=True) return loss def validation_step(self, batch, batch_idx): loss, acc, _,_,_,_ = self.forward(batch, mode="val") self.log('val_acc', acc,on_epoch=True,logger=True) self.log('val_loss', loss,on_epoch=True,logger=True) def test_step(self, batch, batch_idx): loss,acc, f1,precision, recall,preds = self.forward(batch, mode="test") self.log('test_acc', acc,on_epoch=True,logger=True) self.log('test_f1', f1,on_epoch=True,logger=True) self.log('test_precision', precision,on_epoch=True,logger=True) self.log('test_recall', recall,on_epoch=True,logger=True) When I run the code, I get a warning: (train_fn pid=404034) /opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/data.py:99: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 127. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`. I'm not clear, which function am I meant to add the extra self.log to?
This warning means PyTorch Lightning has trouble inferring the batch size of your training data, probably because the batch contains elements of different types with varying numbers of items inside them. To make sure the correct batch_size is used for loss and metric computation, you can specify it yourself as described in the warning message, by setting the batch_size argument on each self.log call (i.e. in training_step, validation_step, and test_step), e.g. self.log('train_acc', acc, on_epoch=True, logger=True, batch_size=batch_size)
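For example, a sketch of the adjusted training_step (assuming, as in torch_geometric batches, that the labels live in batch.y; adapt the attribute to wherever your labels actually are):

def training_step(self, batch, batch_idx):
    loss, acc, _, _, _, _ = self.forward(batch, mode="train")
    batch_size = batch.y.size(0)  # assumption: labels are stored in batch.y
    self.log('train_loss', loss, on_epoch=True, logger=True, batch_size=batch_size)
    self.log('train_acc', acc, on_epoch=True, logger=True, batch_size=batch_size)
    return loss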
https://stackoverflow.com/questions/73803619/
PyTorch Datapipes and how does overwriting the datapipe classes work?
PyTorch DataPipes are new in-place dataset loaders for large data that can be fed into PyTorch models through streaming; for reference these are Official Doc: https://pytorch.org/data/main/tutorial.html A crash-course post explaining the usage https://sebastianraschka.com/blog/2022/datapipes.html Given a myfile.csv file, initialised as the csv_file variable in code, the file looks like this: imagefile,label train/0/16585.png,0 train/0/56789.png,0 ... In the example code that uses datapipes to read a csv_file and then create an iterable dataset using torchdata.datapipes, we see something like: from torchdata import datapipes as dp def build_data_pipe(csv_file, transform, len=1000, batch_size=32): new_dp = dp.iter.FileOpener([csv_file]) new_dp = new_dp.parse_csv(skip_lines=1) # returns tuples like ('train/0/16585.png', '0') new_dp = new_dp.shuffle(buffer_size=len) ... # More code that returns `new_dp` variable that looks like some # lazy-loaded unevaluated/materialized Iterable objects. return new_dp If we look at each step and the return to new_dp, we see: >>> from torchdata import datapipes as dp # The first initialize a FileOpenerIterDataPipe type >>> new_dp = dp.iter.FileOpener(["myfile.csv"]) >>> new_dp FileOpenerIterDataPipe # Then after that the API to the DataPipes allows some overwriting/subclassing # by calling a partial function, e.g. >>> new_dp.parse_csv functools.partial(<function IterDataPipe.register_datapipe_as_function.<locals>.class_function at 0x213123>, <class 'torchdata.datapipes.iter.util.plain_text_reader.CSVParserIterDataPipe'>, False, FileOpenerIterDataPipe) >>> new_dp = new_dp.parse_csv(skip_lines=1) >>> new_dp CSVParserIterDataPipe It looks like new_dp.parse_csv(skip_lines=1) is trying to do a new initialization through a MixIn between CSVParserIterDataPipe and FileOpenerIterDataPipe, but I'm not exactly sure what's happening. To fully get a working datapipe, there's a whole bunch of other new_dp = new_dp.xxx() to call. And my questions are: Q1. Can't the DataPipe be initialized in a non-sequential way? (P/S: This didn't work as expected) from torchdata import datapipes as dp class MyDataPipe(dp.iterGenericDataPipe): def __init__(self, csv_file, skip_lines=1, shuffle_buffer=1000): super().__init__([csv_file]) self.parse_csv(skip_lines=1) self.new_dp.shuffle(buffer_size=shuffle_buffer) But given that we have to overwrite the new_dp, it seems like we might have to do something like: from torchdata import datapipes as dp class MyDataPipe(dp.iterGenericDataPipe): def __init__(self, csv_file, skip_lines=1, shuffle_buffer=1000): super().__init__([csv_file]) self = self.parse_csv(skip_lines=1) self = self.new_dp.shuffle(buffer_size=shuffle_buffer) Q2. Is self = self.xxx() an anti-pattern in Python? Q3. How else to initialize a DataPipe if we don't do self = self.xxx()?
It looks like you're trying to chain together a series of torch DataPipes, namely: FileOpener / open_files CSVParser / parse_csv Shuffler / shuffle The official torchdata tutorial at https://pytorch.org/data/0.4/tutorial.html does so using a function (e.g. def custom_data_pipe()), but you seem to prefer a class-based approach (e.g. class CustomDataPipe). Let's call this a DataPipeLine. An additional complication is that you're trying to apply an inheritance-style torch.utils.data.Dataset to a composition-style torchdata.datapipes.iter.IterDataPipe. Presumably, the reason you're doing this is to create a configurable 'dataset', e.g. one that can skip N lines, has a shuffle buffer of B, etc. Now there's a few things wrong about this, but let's go with it. Bad example (please don't use) from torchdata.datapipes import functional_datapipe from torchdata.datapipes.iter import IterDataPipe, IterableWrapper @functional_datapipe("csv_processor_and_batcher") class MyDataPipeLine(IterDataPipe): def __init__( self, source_datapipe: IterDataPipe[str], skip_lines: int = 1, shuffle_buffer: int = 1000, ): super().__init__() self.source_datapipe: IterDataPipe[str] = source_datapipe self.chained_datapipe = ( self.source_datapipe.open_files() .parse_csv(skip_lines=skip_lines) .shuffle(buffer_size=shuffle_buffer) ) def __iter__(self): for item in self.chained_datapipe: yield item And the way you would use it is: dp = IterableWrapper(iterable=["file1.csv", "file2.csv"]) dp_custom = dp.csv_processor_and_batcher() dataloader = torch.utils.data.DataLoader(dataset=dp_custom) for batch in dataloader: print(batch) Now to be honest, this is really not recommended (and I'm half regretting writing up this answer already) because the reason torchdata exists is to have compositional DataPipes, i.e. each DataPipe should be specialized to do one thing only rather than many things. Also, you won't be streaming data properly, as the iterator will need to run your data through all 3 functions (open_files, parse_csv, shuffle) per file, instead of doing things piecewise (in a parallelizable way), thus defeating the whole purpose of using torchdata! What you probably want is to 1) Read up more on composition and pipe-ing: https://realpython.com/inheritance-composition-python https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.DataFrame.pipe.html Then 2) write something like the below. 
I'm using a LightningDataModule not only because it's cool, but because it's closer to the thing you actually want to subclass: Better example from typing import Optional from torch.utils.data import DataLoader2 from torchdata.datapipes.iter import IterDataPipe, IterableWrapper import pytorch_lightning as pl class MyDataPipeModule(pl.LightningDataModule): def __init__( self, csv_files: list[str], skip_lines: int = 1, shuffle_buffer: int = 1000, ): super().__init__() self.csv_files: list[str] = csv_files self.skip_lines: int = skip_lines self.shuffle_buffer: int = shuffle_buffer # Run the datapipe composition setup() self.setup() def setup(self, stage: Optional[str] = None) -> IterDataPipe: self.dp_chained_datapipe: IterDataPipe = ( IterableWrapper(iterable=self.csv_files) .open_files() .parse_csv(skip_lines=self.skip_lines) .shuffle(buffer_size=self.shuffle_buffer) ) return self.dp_chained_datapipe def train_dataloader(self) -> DataLoader2: return DataLoader2(dataset=self.dp_chained_datapipe) Usage: datamodule = MyDataPipeModule(csv_files=["file1.csv", "file2.csv"]) model: pl.LightningModule = MyModel() trainer = pl.Trainer(accelerator="auto", max_epochs=3) trainer.fit(model=model, datamodule=datamodule) Maybe not quite the answer you expected, but I'd encourage you to experiment a bit more. The key bit is to switch your mindset from inheritance (subclassing) to composition (chaining/pipe-ing). P.S. Gonna throw in a shameless plug on some tutorials I wrote at https://zen3geo.readthedocs.io/en/v0.4.0/walkthrough.html. It's a bit geospatial specific, but might be helpful to get a feel of the DataPipe-way of working. Good luck!
https://stackoverflow.com/questions/73805458/
Why couldn't I feed a 4-tuple to nn.ReplicationPad2d()?
I'm applying yolov5 on kitti raw image [C, H, W] = [3, 375, 1242]. Therefore I need to pad the image so that the H and W being dividable by 32. I'm using nn.ReplicationPad2d to do the padding: [3, 375, 1242] -> [3, 384, 1248]. In the official tutorial of nn.ReplicationPad2d it was said that we give a 4-tuple to indicate padding sizes for left, right, top and bottom. The Problem is: When I give a 4-tuple (0, pad1, 0, pad2), it claims that: 3D tensors expect 2 values for padding When I give a 2-tuple (pad1, pad2), the pad can be implemented but it seems that only W was padded by pad1+pad2, while H stays unchanged. Because I 'll get a tensor of size [3, 375, 1257]. 1257-1242 = 15 = 9+6, where 9 was supposed to pad H and 6 pad W. I could not figure out what is the problem here... thanks in advance Here is my code: def paddingImage(img, divider=32): if img.shape[1]%divider != 0 or img.shape[2]%divider != 0: padding1_mult = int(img.shape[1] / divider) + 1 padding2_mult = int(img.shape[2] / divider) + 1 pad1 = (divider * padding1_mult) - img.shape[1] pad2 = (divider * padding2_mult) - img.shape[2] # pad1 = 32 - (img.shape[1]%32) # pad2 = 32 - (img.shape[2]%32) # pad1 = 384 - 375 # 9 # pad2 = 1248 - 1242 # 6 #################### PROBLEM #################### padding = nn.ReplicationPad2d((pad1, pad2)) #################### PROBLEM #################### return padding(img) else: return img Where img was given as a torch.Tensor in the main function: # ... image_tensor = torch.from_numpy(image_np).type(torch.float32) image_tensor = paddingImage(image_tensor) image_np = image_tensor.numpy() # ...
PyTorch expects the input to ReplicationPad2d to be a batched (4D) image tensor. Therefore, we can unsqueeze to add a 'batch dimension'. def paddingImage(img, divider=32): if img.shape[1]%divider != 0 or img.shape[2]%divider != 0: padding1_mult = int(img.shape[1] / divider) + 1 padding2_mult = int(img.shape[2] / divider) + 1 pad1 = (divider * padding1_mult) - img.shape[1] pad2 = (divider * padding2_mult) - img.shape[2] # pad1 = 32 - (img.shape[1]%32) # pad2 = 32 - (img.shape[2]%32) # pad1 = 384 - 375 # 9 # pad2 = 1248 - 1242 # 6 padding = nn.ReplicationPad2d((0, pad2, 0, pad1)) # Add an extra batch dimension, pad, and then remove the batch dimension return torch.squeeze(padding(torch.unsqueeze(img,0)),0) else: return img Hope this helps! EDIT As GoodDeeds mentions, this is resolved with later versions of PyTorch. Either upgrade PyTorch, or, if that's not an option, use the code above.
https://stackoverflow.com/questions/73821069/
CPU inference in libtorch causes OOM with repeated calls to forward
I have some libtorch code that is doing inference on the cpu using a model trained in pytorch that is then exported to torchscript. The code below is a simplified version of a method that is being repeatedly called. void Backend::perform(std::vector<float *> in_buffer, std::vector<float *> out_buffer) { c10::InferenceMode guard; at::Tensor tensor_out; at::Tensor tensor_in = torch::zeros({ 1, 16, 2 }); std::vector<torch::jit::IValue> inputs = { tensor_in }; // calling forward on the model "decode," this is where // the memory leak happens tensor_out = m_model.get_method("decode")(inputs).toTensor(); auto out_ptr = tensor_out.contiguous().data_ptr<float>(); for (int i(0); i < out_buffer.size(); i++) { memcpy(out_buffer[i], out_ptr + i * n_vec, n_vec * sizeof(float)); } } m_model is the .ts file loaded via: m_model = torch::jit::load(path); m_model.eval(); Every call it seems that more of the torch graph is being allocated, and it isn’t being freed causing the program to eventually OOM and crash. Commenting out the forward call causes the the memory usage to stabilize. My understanding is that InferenceMode guard should turn off autograd memory buildup which seems to be the normal cause of these issues. I tried mimicking this in pytorch (by repeatedly calling forward from a loop), and there’s no memory issues which seems to point to this being a libtorch issue rather than an issue with the model itself. My system: OS: Windows 10/11 pytorch version: 1.11.0 libtorch version: 1.11.0
This ended up being a bug in the Windows implementation of libtorch. Memory leaks can happen when calling forward on a separate thread from the main thread (https://github.com/pytorch/pytorch/issues/24237), and moving the forward call to the main thread fixed the issue. Even though the issue is marked closed, the bug is still present.
https://stackoverflow.com/questions/73821149/
Pytorch DataLoader changes dict return values
Given a Pytorch dataset that reads a JSON file as such: import csv from torch.utils.data import IterableDataset from torch.utils.data import DataLoader2, DataLoader class MyDataset(IterableDataset): def __init__(self, jsonfilename): self.filename = jsonfilename def __iter__(self): with open(self.filename) as fin: reader = csv.reader(fin) headers = next(reader) for line in reader: yield dict(zip(headers, line)) content = """imagefile,label train/0/16585.png,0 train/0/56789.png,0""" with open('myfile.json', 'w') as fout: fout.write(content) ds = MyDataset("myfile.json") When I loop through the dataset, the return values are dict of each line of the json, e.g. ds = MyDataset("myfile.json") for i in ds: print(i) [out]: {'imagefile': 'train/0/16585.png', 'label': '0'} {'imagefile': 'train/0/56789.png', 'label': '0'} But when I read the Dataset into a DataLoader, it returns the values of the dict as lists instead of the values themselves, e.g. ds = MyDataset("myfile.json") x = DataLoader(dataset=ds) for i in x: print(i) [out]: {'imagefile': ['train/0/16585.png'], 'label': ['0']} {'imagefile': ['train/0/56789.png'], 'label': ['0']} Q (part1) : Why does the DataLoader changes the value of the dict to a list? and also Q (part2) : How to make the DataLoader return just the values of the dict instead of the list of value when running __iter__ with the DataLoader? Is there some arguments/options to use in DataLoader to do this?
The reason is the default collate behaviour in torch.utils.data.DataLoader, which determines how data samples in a batch are merged. By default, the torch.utils.data.default_collate collate function is used, which transforms mappings as: Mapping[K, V_i] -> Mapping[K, default_collate([V_1, V_2, …])] and strings as: str -> str (unchanged) Note that if you set batch_size to 2 in your example, you get: {'imagefile': ['train/0/16585.png', 'train/0/56789.png'], 'label': ['0', '0']} as a consequence of these transforms. Assuming you do not need batching, you can get your desired output by disabling it by setting batch_size=None. More information on this here: Loading Batched and Non-Batched Data.
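A minimal sketch of the unbatched loader for your example:

x = DataLoader(dataset=ds, batch_size=None)  # disables automatic batching/collation
for i in x:
    print(i)
# {'imagefile': 'train/0/16585.png', 'label': '0'}
# {'imagefile': 'train/0/56789.png', 'label': '0'}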
https://stackoverflow.com/questions/73824056/
PyTorch: Logging during model.fit()
I'm using pytorch/fastai for training models. Since I'm working with remote machines, I am running the scripts using nohup python $1 >$2 2>&1 & with redirection to a logging file like "log123.txt". My problem is that during the model.fit() phase with a scheduler, I can't see the progress in the file after each epoch like in the console; the results are written to my logging file only after model.fit() finishes training. It works fine when I watch the process in the console. Is there any workaround for this?
This may be due to file buffering. You can disable it for the Python process like this: Run your script with the -u option. Example: python -u my_script.py > log.txt Or use PYTHONUNBUFFERED: PYTHONUNBUFFERED=1 python my_script.py > log.txt or export PYTHONUNBUFFERED=1 python my_script.py > log.txt
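If you cannot change how the script is launched, a purely in-code alternative (a sketch, assuming Python 3.7+) is to flush explicitly:

print("epoch finished", flush=True)  # flush on every print

# or reconfigure stdout once at the top of the script:
import sys
sys.stdout.reconfigure(line_buffering=True)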
https://stackoverflow.com/questions/73826403/
Difference between conv2d and conv2dtranspose with kernel size 1
I understand that conv2d is used for downsampling and conv2dtranspose is the opposite (upsampling). However, assuming we are not using stride or padding here, is there a difference between the two? Downsampling means reducing the size of the input dimensions. For example, if you have an input of (Batch Size = 5, Channel = 3, Height = 8, Width = 8) and you reduce the height and width using maxpooling (stride=2, kernel_size=2), the output becomes (Batch Size = 5, Channel = 3, Height = 4, Width = 4). That's downsampling; the opposite is upsampling (increasing the Height and Width dimensions). For example: classifier1 = torch.nn.Conv2d(in_channels=10, out_channels=5, kernel_size=1) classifier2 = torch.nn.ConvTranspose2d(in_channels=10, out_channels=5, kernel_size=1)
Operation-wise, there is no difference. ConvTranspose2d() inserts stride - 1 zeros in between all rows and columns, adds kernel size - padding - 1 padding zeros, then does exactly the same thing as Conv2d(). With the default arguments this results in no changes. Though, if you actually run them back to back like this on the same input, the results will vary unless you explicitly equalize the initial weights, of course.
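A small sketch to verify this for the kernel-size-1 case: once the initial weights are copied over (note the transposed channel layout of ConvTranspose2d), both layers produce the same output.

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=10, out_channels=5, kernel_size=1)
convt = nn.ConvTranspose2d(in_channels=10, out_channels=5, kernel_size=1)

with torch.no_grad():
    # Conv2d weight: (out, in, kH, kW); ConvTranspose2d weight: (in, out, kH, kW)
    convt.weight.copy_(conv.weight.permute(1, 0, 2, 3))
    convt.bias.copy_(conv.bias)

x = torch.randn(2, 10, 8, 8)
print(torch.allclose(conv(x), convt(x), atol=1e-6))  # True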
https://stackoverflow.com/questions/73827644/
the result from torch.concat() is stored in cpu(memory)?
The code: c = torch.rand((2000, 64, 64)).to('cuda') d = torch.rand((2000, 64, 64)).to('cuda') t3 = time.time() s1 = c+d s2 = torch.concat((a, b), dim=2) t4 = time.time() s1's device is GPU, but s2's device is CPU, and I can't understand why. What is the principle behind this?
Torch performs an operation on the device where all the necessary variables for that operation live. I suppose that a and b were on CPU, so the result of torch.concat((a, b), dim=2) is on CPU too. When you called .to('cuda'), you moved c and d to the GPU, so s1 is on the GPU too.
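A sketch of the fix, assuming a and b are the CPU tensors from your snippet: move both operands to the GPU before concatenating.

a = a.to('cuda')
b = b.to('cuda')
s2 = torch.concat((a, b), dim=2)
print(s2.device)  # cuda:0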
https://stackoverflow.com/questions/73828388/
Python - Torchmetric SSIM depends on batch size
Sorry for the quick question, I just want to know whether I found a bug or if I do not understand something here. I got the following sample, where I print the SSIM (from the torchmetrics library) of two tensors with batch size 8, and the mean of the individually calculated values. Why are they not the same? ssim = StructuralSimilarityIndexMeasure(kernel_size=(5, 5)) A = torch.zeros([8, 1, 500, 500]) B = torch.randn([8, 1, 500, 500]) print(ssim(A, B)) ssim1 = ssim(A[0].unsqueeze(0), B[0].unsqueeze(0)) ssim2 = ssim(A[1].unsqueeze(0), B[1].unsqueeze(0)) ssim3 = ssim(A[2].unsqueeze(0), B[2].unsqueeze(0)) ssim4 = ssim(A[3].unsqueeze(0), B[3].unsqueeze(0)) ssim5 = ssim(A[4].unsqueeze(0), B[4].unsqueeze(0)) ssim6 = ssim(A[5].unsqueeze(0), B[5].unsqueeze(0)) ssim7 = ssim(A[6].unsqueeze(0), B[6].unsqueeze(0)) ssim8 = ssim(A[7].unsqueeze(0), B[7].unsqueeze(0)) print((ssim1 + ssim2 + ssim3 + ssim4 + ssim5 + ssim6 + ssim7 + ssim8) / 8) Console output: tensor(0.0404) tensor(0.0340) Python version 3.8.10 TorchMetrics version 0.9.1 PyTorch version 1.10.1+cu113 Or is this a GitHub issue?
The difference comes from the data_range parameter. Please refer to the documentation: data_range: Range of the image. If ``None``, it is determined from the image (max - min) https://torchmetrics.readthedocs.io/en/stable/image/structural_similarity.html#structural-similarity-index-measure-ssim By default it is None, so the full batch and the individual examples of the batch will have different data_range values derived from the data: if data_range is None: data_range = max(preds.max() - preds.min(), target.max() - target.min()) https://github.com/Lightning-AI/metrics/blob/19355a9d2c51b3b39b311694d7b8e6856a73eae6/src/torchmetrics/functional/image/ssim.py#L129 If you set a specific data_range like this: ssim = StructuralSimilarityIndexMeasure(kernel_size=(5, 5), data_range=255) the results will be the same.
https://stackoverflow.com/questions/73828843/
How to handle Pytorch Dataset with transform function that returns >1 output per row of data?
Given a myfile.csv file that looks like: imagefile,label train/0/16585.png,0 train/0/56789.png,0 The goal is to create a Pytorch DataLoader that when looped return 2x the data points, e.g. >>> dp = MyDataPipe(csvfile) >>> for row in dp.train_dataloader: ... print(row) ... (tensor([1.23, 4.56, 7.89]), 0) (tensor([9.87, 6.54, 3.21]), 1) (tensor([9.99, 8.88, 7.77]), 0) (tensor([1.11, 2.22, 9.87]), 1) I've tried writing the dataloader if we are just expect the same no. of dataloader's row as per the input file, this works: import torch from torch.utils.data import DataLoader2 from torchdata.datapipes.iter import IterDataPipe, IterableWrapper import pytorch_lightning as pl content = """imagefile,label train/0/16585.png,0 train/0/56789.png,0""" with open('myfile.csv', 'w') as fout: fout.write(content) def optimus_prime(row): """This functions returns two data points with some arbitrary vectors. >>> row = {'imagefile': 'train/0/16585.png', label: 0} >>> optimus_prime(row) (tensor([1.23, 4.56, 7.89]), 0) """ # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector1 = torch.rand(3) return vector1, row['label'] class MyDataPipe(pl.LightningDataModule): def __init__( self, csv_files: list[str], skip_lines: int = 0, tranform_func: Callable = None ): super().__init__() self.csv_files: list[str] = csv_files self.skip_lines: int = skip_lines # Initialize a datapipe. self.dp_chained_datapipe: IterDataPipe = ( IterableWrapper(iterable=self.csv_files) .open_files() .parse_csv_as_dict(skip_lines=self.skip_lines) ) if tranform_func: self.dp_chained_datapipe = self.dp_chained_datapipe.map(tranform_func) def train_dataloader(self, batch_size=1) -> DataLoader2: return DataLoader2(dataset=self.dp_chained_datapipe, batch_size=batch_size) dp = MyDataPipe('myfile.csv', tranform_func=optimus_prime) for row in dp.train_dataloader: print(row) If the optimus_prime function returns 2 data points, how do I setup the Dataloader such that it can collate the 2 data points accordingly? How to formulate the collate function or tell the Dataloader that there's 2 inputs in each .map(tranform_func) output? E.g. if I change the function to: def optimus_prime(row): """This functions returns two data points with some arbitrary vectors. >>> row = {'imagefile': 'train/0/16585.png', label: 0} >>> optimus_prime(row) (tensor([1.23, 4.56, 7.89]), 0), (tensor([3.21, 6.54, 9.87]), 1) """ # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector1 = torch.rand(3) yield vector1, row['label'] yield vector2, not row['label'] I've also tried the following and it works but I need to run the optimus_prime function twice, but the 2nd .map(tranform_func) throws a TypeError: tuple indices must be integers or slice not str... def optimus_prime_1(row): # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector1 = torch.rand(3) yield vector1, row['label'] def optimus_prime_2(row): # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector2 = torch.rand(3) yield vector2, not row['label'] class MyDataPipe(pl.LightningDataModule): def __init__( self, csv_files: list[str], skip_lines: int = 0, tranform_funcs: list[Callable] = None ): super().__init__() self.csv_files: list[str] = csv_files self.skip_lines: int = skip_lines # Initialize a datapipe. 
self.dp_chained_datapipe: IterDataPipe = ( IterableWrapper(iterable=self.csv_files) .open_files() .parse_csv_as_dict(skip_lines=self.skip_lines) ) if tranform_funcs: for tranform_func in tranform_funcs: self.dp_chained_datapipe = self.dp_chained_datapipe.map(tranform_func) def train_dataloader(self, batch_size=1) -> DataLoader2: return DataLoader2(dataset=self.dp_chained_datapipe, batch_size=batch_size) dp = MyDataPipe('myfile.csv', tranform_funcs=[optimus_prime_1, optimus_prime_2]) for row in dp.train_dataloader: print(row)
From https://discuss.pytorch.org/t/how-to-handle-pytorch-dataset-with-transform-function-that-returns-1-output-per-row-of-data/162160, there's a reference to use .flatmap() instead of .map(): https://pytorch.org/data/main/generated/torchdata.datapipes.iter.FlatMapper.html https://pytorch.org/data/main/generated/torchdata.datapipes.iter.Mapper.html By changing the transformation function to return N no. of data points per row of data form the csvfile, e.g. def optimus_prime(row): """This functions returns two data points with some arbitrary vectors. >>> row = {'imagefile': 'train/0/16585.png', label: 0} >>> optimus_prime(row) (tensor([1.23, 4.56, 7.89]), 0) """ # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector1 = torch.rand(3) vector2 = torch.rand(3) return [(vector1, row['label']), (vector2, row['label'])] Changing the code to use the .flatmap() as such works: class MyDataPipe(pl.LightningDataModule): def __init__( self, csv_files, skip_lines=0 ): super().__init__() self.csv_files: list[str] = csv_files self.skip_lines: int = skip_lines # Initialize a datapipe. self.dp_chained_datapipe: IterDataPipe = ( IterableWrapper(iterable=self.csv_files) .open_files() .parse_csv_as_dict(skip_lines=self.skip_lines) ) self.dp_chained_datapipe = self.dp_chained_datapipe.flatmap(optimus_prime) def train_dataloader(self, batch_size=1) -> DataLoader2: return DataLoader2(dataset=self.dp_chained_datapipe, batch_size=batch_size) Full working example: import torch from torch.utils.data import DataLoader2 import pytorch_lightning as pl from torchdata.datapipes.iter import IterDataPipe, IterableWrapper content = """imagefile,label train/0/16585.png,0 train/0/56789.png,0""" with open('myfile.csv', 'w') as fout: fout.write(content) def optimus_prime(row): """This functions returns two data points with some arbitrary vectors. >>> row = {'imagefile': 'train/0/16585.png', label: 0} >>> optimus_prime(row) (tensor([1.23, 4.56, 7.89]), 0) """ # We are using torch.rand here but there is an actual function # that converts the png file into a vector. vector1 = torch.rand(3) vector2 = torch.rand(3) return [(vector1, row['label']), (vector2, row['label'])] class MyDataPipe(pl.LightningDataModule): def __init__( self, csv_files, skip_lines=0 ): super().__init__() self.csv_files: list[str] = csv_files self.skip_lines: int = skip_lines # Initialize a datapipe. self.dp_chained_datapipe: IterDataPipe = ( IterableWrapper(iterable=self.csv_files) .open_files() .parse_csv_as_dict(skip_lines=self.skip_lines) ) self.dp_chained_datapipe = self.dp_chained_datapipe.flatmap(optimus_prime) def train_dataloader(self, batch_size=1) -> DataLoader2: return DataLoader2(dataset=self.dp_chained_datapipe, batch_size=batch_size) dp = MyDataPipe(['myfile.csv']) for row in dp.train_dataloader(): print(row) [out]: [tensor([[0.6003, 0.1200, 0.5175]]), ('0',)] [tensor([[0.0628, 0.7004, 0.3169]]), ('0',)] [tensor([[0.0623, 0.4608, 0.7456]]), ('0',)] [tensor([[0.7454, 0.5326, 0.7459]]), ('0',)]
https://stackoverflow.com/questions/73833792/
Building PyTorch from source fails using the provided Dockerfile
I'm trying to build a docker image that I can use as a development environment for modifying Pytorch. There is a Dockerfile provided in the repo, and I'm trying the following: git clone --recursive https://github.com/pytorch/pytorch cd pytorch DOCKER_BUILDKIT=1 docker build -t pytorchtest . But the docker build results in the following error: ... #20 28.80 Performing C++ SOURCE FILE Test HAS_WERROR_CAST_FUNCTION_TYPE failed with the following output: #20 28.80 Change Dir: /opt/pytorch/build/CMakeFiles/CMakeTmp #20 28.80 #20 28.80 Run Build Command(s):/usr/bin/make -f Makefile cmTC_09005/fast && /usr/bin/make -f CMakeFiles/cmTC_09005.dir/build.make CMakeFiles/cmTC_09005.dir/build #20 28.80 make[1]: Entering directory '/opt/pytorch/build/CMakeFiles/CMakeTmp' #20 28.80 Building CXX object CMakeFiles/cmTC_09005.dir/src.cxx.o #20 28.80 /usr/bin/c++ -DHAS_WERROR_CAST_FUNCTION_TYPE -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -fPIE -Werror=cast-function-type -o CMakeFiles/cmTC_09005.dir/src.cxx.o -c /opt/pytorch/build/CMakeFiles/CMakeTmp/src.cxx #20 28.80 cc1plus: error: -Werror=cast-function-type: no option -Wcast-function-type #20 28.80 CMakeFiles/cmTC_09005.dir/build.make:77: recipe for target 'CMakeFiles/cmTC_09005.dir/src.cxx.o' failed #20 28.80 make[1]: *** [CMakeFiles/cmTC_09005.dir/src.cxx.o] Error 1 #20 28.80 make[1]: Leaving directory '/opt/pytorch/build/CMakeFiles/CMakeTmp' #20 28.80 Makefile:127: recipe for target 'cmTC_09005/fast' failed #20 28.80 make: *** [cmTC_09005/fast] Error 2 #20 28.80 #20 28.80 #20 28.80 Source file was: #20 28.80 int main() { return 0; } #20 DONE 29.0s ------ executor failed running [/bin/sh -c TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1 7.0+PTX 8.0" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" python setup.py install]: exit code: 1 I cannot get the error logs because they exist in the temporary filesystem for the image building process. I find it somewhat strange that a building a stable release image is failing. Am I doing something wrong? 
The Dockerfile: # syntax = docker/dockerfile:experimental # # NOTE: To build this you will need a docker version > 18.06 with # experimental enabled and DOCKER_BUILDKIT=1 # # If you do not use buildkit you are not going to have a good time # # For reference: # https://docs.docker.com/develop/develop-images/build_enhancements/ ARG BASE_IMAGE=ubuntu:18.04 ARG PYTHON_VERSION=3.8 FROM ${BASE_IMAGE} as dev-base RUN apt-get update && apt-get install -y --no-install-recommends \ build-essential \ ca-certificates \ ccache \ # cmake=3.10.2-1ubuntu2.18.04.2 \ cmake \ curl \ git \ libjpeg-dev \ libpng-dev && \ rm -rf /var/lib/apt/lists/* RUN /usr/sbin/update-ccache-symlinks RUN mkdir /opt/ccache && ccache --set-config=cache_dir=/opt/ccache ENV PATH /opt/conda/bin:$PATH FROM dev-base as conda ARG PYTHON_VERSION=3.8 # Automatically set by buildx ARG TARGETPLATFORM # translating Docker's TARGETPLATFORM into miniconda arches RUN case ${TARGETPLATFORM} in \ "linux/arm64") MINICONDA_ARCH=aarch64 ;; \ *) MINICONDA_ARCH=x86_64 ;; \ esac && \ curl -fsSL -v -o ~/miniconda.sh -O "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-${MINICONDA_ARCH}.sh" COPY requirements.txt . RUN chmod +x ~/miniconda.sh && \ ~/miniconda.sh -b -p /opt/conda && \ rm ~/miniconda.sh && \ /opt/conda/bin/conda install -y python=${PYTHON_VERSION} cmake conda-build pyyaml numpy ipython && \ /opt/conda/bin/python -mpip install -r requirements.txt && \ /opt/conda/bin/conda clean -ya FROM dev-base as submodule-update WORKDIR /opt/pytorch COPY . . RUN git submodule update --init --recursive --jobs 0 FROM conda as build WORKDIR /opt/pytorch COPY --from=conda /opt/conda /opt/conda COPY --from=submodule-update /opt/pytorch /opt/pytorch RUN --mount=type=cache,target=/opt/ccache \ TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1 7.0+PTX 8.0" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \ CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" \ python setup.py install || cat /opt/pytorch/build/CMakeFiles/CMakeError.log
The issue was with the COPY --from=submodule-update /opt/pytorch /opt/pytorch instruction. Some .bzl files were not getting copied. More precisely they were not getting added to the Docker build context because of a .dockerignore file. I've added the following line to the end of the .dockerignore and now it works: !*.bzl As far as I understand, this is a bug. These files are committed to the repo, so they should get copied.
https://stackoverflow.com/questions/73835506/
In PyTorch, how do I update a neural network via the average gradient from a list of losses?
I have a toy reinforcement learning project based on the REINFORCE algorithm (here's PyTorch's implementation) that I would like to add batch updates to. In RL, the "target" can only be created after a "prediction" has been made, so standard batching techniques do not apply. As such, I accrue losses for each episode and append them to a list l_losses where each item is a zero-dimensional tensor. I hold off on calling .backward() or optimizer.step() until a certain number of episodes have passed in order to create a sort of pseudo batch. Given this list of losses, how do I have PyTorch update the network based on their average gradient? Or would updating based on the average gradient be the same as updating on the average loss (I seem to have read otherwise elsewhere)? My current method is to create a new tensor t_loss from torch.stack(l_losses), and then run t_loss = t_loss.mean(), t_loss.backward(), optimizer.step(), and zero the gradient, but I'm unsure if this is equivalent to my intent. It's also unclear to me if I should have been running .backward() on each individual loss instead of concatenating them in a list (but holding off on the .step() part until the end)?
Gradient is a linear operation, so the gradient of the average is the same as the average of the gradient. Take some example data import torch a = torch.randn(1, 4, requires_grad=True); b = torch.randn(5, 4); You could store all the losses and compute the mean as you are doing, a.grad = None x = (a * b).mean(axis=1) x.mean().backward() # gradient of the mean print(a.grad) Or, at every iteration, compute the backpropagation to get the contribution of that loss to the gradient. a.grad = None for bi in b: (a * bi / len(b)).mean().backward() print(a.grad) Performance I don't know the internal details of the PyTorch backward implementation, but I can tell that (1) the graph is destroyed by default after the backward pass, unless you pass retain_graph=True or create_graph=True to backward(); (2) the gradient is not kept except for leaf tensors, unless you specify retain_grad; (3) if you evaluate a model twice using different inputs, you can perform the backward pass on the individual variables, which means that they have separate graphs. This can be verified with the following code. a.grad = None # compute all the variables in advance r = [ (a * bi / len(b)).mean() for bi in b ] for ri in r: # This depends on the graph of r[i] but the graph of r[i-1] # was already destroyed, it means that r[i] graph is independent # of r[i-1] graph, hence they require separate memory. ri.backward() # this will remove the graph of ri print(a.grad) So if you update the gradient after each episode it will accumulate the gradient of the leaf nodes; that's all the information you need for the next optimization step, so you can discard that loss, freeing up resources for further computations. I would expect a memory usage reduction, potentially even faster execution if the memory allocation can efficiently use the just-deallocated pages for the next allocation.
https://stackoverflow.com/questions/73840143/
How to calculate f1 score during evaluation on test set?
I am trying to calculate the F1 score during evaluation of my own test set, but I'm not able to solve this, as I am very inexperienced. I've tried to use the f1 score both from scikit-learn and from torchmetrics, but they each give me different errors. This is my code: # Function to test the model from sklearn.metrics import f1_score since = time.time() total=0 correct=0 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') y_pred=[] y_true=[] # Iterate over data. with torch.no_grad(): for inputs, labels in dataloadersTest_dict['Test']: inputs = inputs.to(device) labels = labels.to(device) #outputs = model(inputs) predicted_outputs = model(inputs) _, predicted = torch.max(predicted_outputs, 1) total += labels.size(0) print(total) correct += (predicted == labels).sum().item() print(correct) #f1 score temp_true=labels.numpy() temp_pred=predicted.numpy() y_true.append(temp_true.tolist()) y_pred.append(temp_pred.tolist()) time_elapsed = time.time() - since test_acc=100 * correct / total print('Evaluation completed in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60)) print('Accuracy: %d %%' % (test_acc)) print('F1 Score:') f1=f1_score(y_true,y_pred, average = 'macro') print(f1)
The full error trace would help in spotting the problem, but I guess it is due to passing a nested list of lists to f1_score instead of a single flat list. It can be fixed by changing how the final lists are collected: # Iterate over data. y_true, y_pred = [], [] with torch.no_grad(): for inputs, labels in dataloadersTest_dict['Test']: inputs = inputs.to(device) labels = labels.to(device) #outputs = model(inputs) predicted_outputs = model(inputs) _, predicted = torch.max(predicted_outputs, 1) total += labels.size(0) print(total) correct += (predicted == labels).sum().item() print(correct) #f1 score temp_true=labels.numpy() temp_pred=predicted.numpy() y_true+=temp_true.tolist() y_pred+=temp_pred.tolist()
https://stackoverflow.com/questions/73840203/
Save a model on GPU and load it on CPU
I have a model which I train with torch on a GPU. Now, I want to load it on a CPU. I am using this code to save and load the model. Here is my model and the training phase: model = VAE(input_size , lead, hidden_dim,hidden_dim1,hidden_dimd, latent_dim, device, num_hidden= lead).to(device) optimizer = torch.optim.Adam(model.parameters(), lr =learning_rate ) losses, kl_loss, l_loss, validate_loss = trainv(model, device, epochs, train_iterator, optimizer, validate_iterator) model = VAE() torch.save(model.state_dict(), "model.pt") #load device = torch.device('cpu') model = VAE() model.load_state_dict(torch.load(PATH, map_location=device)) Here is the error: TypeError: __init__() missing 8 required positional arguments:
Did you define your model class before calling torch.save()? This code works without errors in Google Colab: import torch import torch.nn as nn class TheModelClass(nn.Module): def __init__(self): super(TheModelClass, self).__init__() self.linear = nn.Linear(125, 1) def forward(self, src): output = self.linear(src) return output model = TheModelClass() torch.save(model.state_dict(), "model.pt") # you need to define your model class beforehand device = torch.device('cpu') model = TheModelClass() # you don't even need to redefine your model model.load_state_dict(torch.load('/content/model.pt', map_location=device))
https://stackoverflow.com/questions/73840652/
How to extract a specific digit from the MNIST dataset with dataloader?
I am feeding the MNIST dataset to train my neural network in the following manner: indices = torch.arange(60000) dataset = datasets.MNIST(root="dataset/", transform=transforms, download=True) datasetsmall = data_utils.Subset(dataset, indices) loader = DataLoader(datasetsmall, batch_size=batch_size, shuffle=True) However, since the training is taking a huge amount of time to complete, I have decided to train the model with only a specific digit from the MNIST dataset, for example the digit 4. How can I extract just the digit 4 and feed it to my neural network in the same way? The loop to train the neural network is like: for batch_idx, (real, _) in enumerate(loader): Now I want only the digit 4 in the loader. How should I proceed in that case?
Does this code solve your problem? import torch from torchvision import datasets from torch.utils.data import TensorDataset, DataLoader from torchvision.transforms import ToTensor cls = 4 # needed class batch_size = 32 dataset = datasets.MNIST(root="dataset/", download=True, transform=ToTensor()) dataset = list(filter(lambda i: i[1] == cls, dataset)) loader = DataLoader(dataset, batch_size=batch_size, shuffle=True) s = 0 for i in loader: s += 1 print(f'We\'ve got {s} batches with batch_size {batch_size} only for class {cls}') # print(i) # uncomment this line if you want to examine last batch by yourself Result: We've got 183 batches with batch_size 32 only for class 4
https://stackoverflow.com/questions/73840759/
Question regarding latent space in Autoencoder
I just started learning about autoencoders (AE) a few days ago. From what I know about AEs, a latent space is created after the encoder, and then the decoder regenerates images based on that latent space. Another project of mine requires me to embed some new features into the AE latent space. Below is the AE code that I have tried. AE module # build autoencoder import torch import matplotlib.pyplot as plt # Creating a PyTorch class # 28*28 ==> 9 ==> 28*28 class AE(torch.nn.Module): def __init__(self): super().__init__() # Building a linear encoder with Linear # layers followed by ReLU activation function # 784 ==> 9 self.encoder = torch.nn.Sequential( torch.nn.Linear(64 * 64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3) ) # Building a linear decoder with Linear # layers followed by ReLU activation function # The Sigmoid activation function # outputs the value between 0 and 1 # 9 ==> 784 self.decoder = torch.nn.Sequential( torch.nn.Linear(3,32), torch.nn.ReLU(), torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 64*64), torch.nn.Sigmoid() ) def forward(self, x): encoded = self.encoder(x) decoded = self.decoder(encoded) return decoded Init # Model Initialization model = AE().to(device) # Validation using MSE Loss function loss_function = torch.nn.MSELoss() # Using an Adam Optimizer with lr = 0.1 optimizer = torch.optim.Adam(model.parameters(), lr = 1e-1, weight_decay = 1e-8) training num_epochs = 100 output = [] for epoch in range(num_epochs): for data in loader: img, _ = data img = img.reshape(-1,64*64) img = img.to(device) recon = model(img) loss = loss_function(recon, img.data) optimizer.zero_grad() loss.backward() optimizer.step() print(f'epoch [{epoch + 1}/{num_epochs}], loss:{loss.item():.4f}') output.append((epoch, img, recon)) My question is: how can I obtain the latent space? From what I know about the code, there are only the encoder and the decoder. How can I retrieve the latent space so that I can embed new features into it? Thank you.
You can simply modify your forward(self, x) function to also return the laten space embedding generated by the encoder: def forward(self, x): encoded = self.encoder(x) decoded = self.decoder(encoded) return encoded, decoded In your train loop you can just add: embedding, recon = model(img)
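Alternatively, since the encoder is a plain submodule, you can call it directly without touching forward at all. A sketch, reusing the reshaping from your training loop:

with torch.no_grad():
    latent = model.encoder(img.reshape(-1, 64 * 64).to(device))  # shape (N, 3)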
https://stackoverflow.com/questions/73849136/
tf.bitcast equivalent in pytorch?
This question is different from tf.cast equivalent in pytorch?. bitcast does bitwise reinterpretation (like reinterpret_cast in C++) instead of "safe" type conversion. This operation is useful when you want to store a bfloat16 tensor with numpy. x = torch.ones(224, 224, 3, dtype=torch.bfloat16) x_np = bitcast(x, torch.uint8).numpy() Currently numpy doesn't natively support bfloat16, so x.numpy() will raise TypeError: Got unsupported ScalarType BFloat16
Use the second overload of torch.Tensor.view (the one that takes a dtype). Its semantics closely mirror those of numpy.ndarray.view.
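A sketch of what that looks like for the bfloat16 example from the question (this assumes a reasonably recent PyTorch, where viewing across dtypes of different element sizes rescales the last dimension):

import torch

x = torch.ones(224, 224, 3, dtype=torch.bfloat16)
x_u8 = x.view(torch.uint8)   # shape (224, 224, 6): each bfloat16 is 2 bytes
x_np = x_u8.numpy()          # now exportable to numpy
# round-trip back to bfloat16
x_back = torch.from_numpy(x_np).view(torch.bfloat16)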
https://stackoverflow.com/questions/73853956/
Are torch.nn.ReLU and torch.nn.Sigmoid trainable?
I built a simple GRU model with PyTorch. It includes 4 sub-modules. I noted that some dictionaries returned by their state_dict() are empty after training, while those of the other sub-modules certainly have some weights and biases. The code: class GruModel(nn.Module): def __init__(self, inputs, nodes, layers=2): super(GruModel, self).__init__() self.gru_m = nn.GRU(input_size=inputs, num_layers=layers, hidden_size=nodes, batch_first=True, dropout=0.5) self.activt_f = nn.ReLU() self.output_f = nn.Linear(nodes, 1) self.probab_f = nn.Sigmoid() def forward(self, x, h): o, h = self.gru_m(x, h) o = self.activt_f(o[:, -1]) out = self.output_f(o) return self.probab_f(out) def trainWith(self, ...): ''' training body ''' criterion = nn.BCEWithLogitsLoss() optimizer = torch.optim.Adadelta(self.parameters(), lr=learn_rat) lr_schdlr = torch.optim.lr_scheduler.ExponentialLR( optimizer, gamma=0.99) t_loader = torch_data.DataLoader(...) for e in range(epochs): for x, p_label, n_label in t_loader: optimizer.zero_grad() out = self(x, self.init_hidden(batch_size)) loss = criterion(out, p_label) loss.backward() optimizer.step() lr_schdlr.step() def save(self, full_path: str): print(self.gru_m.state_dict()) print(self.activt_f.state_dict()) print(self.output_f.state_dict()) print(self.probab_f.state_dict()) When actually running it, the state_dict of the sub-modules self.gru_m and self.output_f have values as expected, but the ones of the sub-modules self.activt_f (nn.ReLU) and self.probab_f (nn.Sigmoid) have nothing. Never mind my training process: I feed it with tons of data and run through hundreds of epochs, and the model can do classification as I expected. I'm interested in whether the latter two modules are trainable, or whether they do NOT need any weights and biases to do their work. If so, can we say that torch.nn.Sigmoid is the same as torch.nn.functional.sigmoid? Because they are all dummy functions, not stateful objects.
The two layer modules you are mentioning are activation functions which are not parametrized. This means they are not "trainable" since they don't hold any parameters. However, nn modules are classes (they can be stateful) while nn.functional utilities are functions (they are not stateful).
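This is easy to check: parametrized modules report their parameters, while activation modules report none.

import torch.nn as nn

print(list(nn.ReLU().parameters()))     # []
print(list(nn.Sigmoid().parameters()))  # []
print([p.shape for p in nn.Linear(4, 1).parameters()])
# [torch.Size([1, 4]), torch.Size([1])]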
https://stackoverflow.com/questions/73857887/
How to evaluate a pyTorch/DGL tensor
From a DGL graph I want to see the adjacency matrix with adjM = g.adjacency_matrix() adjM and I get the following, which is fine: tensor(indices=tensor([[0, 0, 0, 1], [1, 2, 3, 3]]), values=tensor([1., 1., 1., 1.]), size=(4, 4), nnz=4, layout=torch.sparse_coo) Now I want to have the adjacency matrix and the node values each by itself. I imagine something of this kind: adjMatrix = adjM.indices # or adjMatrix = adjM[0] nodeValues = adjM.values # or nodeValues = adjM[1] But this form is not accepted by PyTorch/DGL. My beginner's question: how to do this correctly and successfully? And is there a tutorial for a newbie? (I have searched a lot just for this detail...!)
See the DGL documentation for dgl.DGLGraph.adj(). As the docs say, the return value is an adjacency matrix, and its return type is a sparse tensor. Notice that the output you posted is such a SparseTensor. You can try it as follows to get the entire adjacency matrix. I create a DGL graph g and get its adjacency matrix as adj: g = dgl.graph(([0, 1, 2], [1, 2, 3])) adj = g.adj() adj The output is: tensor(indices=tensor([[0, 1, 2], [1, 2, 3]]), values=tensor([1., 1., 1.]), size=(4, 4), nnz=3, layout=torch.sparse_coo) We can see that adj is sparse, in COO format. We can verify that adj is a SparseTensor with adj.is_sparse output : True Since it is sparse, we can use to_dense() to get the dense adjacency matrix: adj.to_dense() The result is: tensor([[0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.], [0., 0., 0., 0.]]) When you have a problem with DGL you can check the Deep Graph Library Tutorials and Documentation.
https://stackoverflow.com/questions/73859439/
Polynomial features using numpy or torch
Having tensor [a, b] I want to create a tensor of the form [a, b, ab, a^2, b^2] or even of higher order [a, b, ab, a^2, b^2, (a^2)b, a(b^2), (a^2)(b^2), a^3, b^3] I want to solve the issue in a short time. I can solve it with loops, but that's not the way I really would like to do that. However dynamic programming works for me, so using 2nd order to compute 3rd order is fine. The final solution will be implemented in PyTorch, but NumPy implementation would be useful, I can port it to PyTorch on my own. Edit: As you have asked, I'm posting my attempt, which I'm not very proud of: def polynomial(t: torch.Tensor) -> torch.Tensor: r = t.clone() r_c = torch.empty((t.shape[0], math.comb(t.shape[1], 2) + t.shape[1])) i = 0 for idx in range(t.shape[1]): for jdx in range(idx, t.shape[1]): r_c[:, i] = (r[:, idx].unsqueeze(-1) * r[:, jdx].unsqueeze(-1)).squeeze(-1) i += 1 r = torch.hstack([r, r_c]) return r For t = torch.tensor([ [1, 2, 3], [3, 4, 5], [5, 6, 7] ]) polynomial(t) results in tensor([[ 1., 2., 3., 1., 2., 3., 4., 6., 9.], [ 3., 4., 5., 9., 12., 15., 16., 20., 25.], [ 5., 6., 7., 25., 30., 35., 36., 42., 49.]])
For anyone who runs into the same problem (note that the snippet needs the itertools imports to run): import torch from itertools import combinations, combinations_with_replacement def polynomial(t: torch.Tensor, degree: int = 2, interaction_only: bool = False) -> torch.Tensor: cols = t.hsplit(t.shape[1]) if interaction_only: degree = 2 combs = combinations(cols, degree) else: combs = combinations_with_replacement(cols, degree) prods = [torch.prod(torch.hstack(comb), -1).unsqueeze(-1) for comb in combs] r = torch.hstack(prods) return torch.hstack((t, r)) if degree == 2 else torch.hstack((polynomial(t, degree - 1), r))
https://stackoverflow.com/questions/73864186/
nn.LSTM doesn't seem to learn anything or not updating properly
I was trying out a simple LSTM use case from PyTorch, with the following model. class SimpleLSTM(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0) self.lstm = nn.LSTM(batch_first=True, input_size=embedding_dim, num_layers=1, hidden_size=hidden_dim, bidirectional=True) self.linear = nn.Linear(hidden_dim*2, 1) self.sigmoid = nn.Sigmoid() def forward(self, x): # NxD, padded to same length with 0s in N-sized batch x = self.embedding(x) output, (final_hidden_state, final_cell_state) = self.lstm(x) x = self.linear(output[:,-1,:]) x = self.sigmoid(x) return x It is a binary classification, with BCELoss (combined with the Sigmoid output layer). Unfortunately, the loss is stuck at 0.6969 (i.e. it is not learning anything). I've tried feeding final_hidden_state and output[:,0,:] into the linear layer, but so far no dice. Everything else (optimizer, loss criterion, train loop, val loop) already works, because I tried the exact same setup with a basic NN using nn.Embedding, nn.Linear, and nn.Sigmoid only, and could get a good loss decrease and high accuracy. In the SimpleLSTM, the only thing I added is the nn.LSTM.
Typically final_hidden_state is passed to the linear layer, not output. Use it. Add 1-2 more linear layers after the LSTM. Try a lower LR, especially when the embeddings are not pre-trained. Better yet, try loading pre-trained embeddings.
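A sketch of the first suggestion applied to the bidirectional model from the question: concatenate the last forward and backward hidden states and feed those to the classifier head.

def forward(self, x):
    x = self.embedding(x)
    output, (h_n, c_n) = self.lstm(x)
    # h_n has shape (num_layers * 2, N, hidden_dim) for a bidirectional LSTM;
    # h_n[-2] is the final forward state, h_n[-1] the final backward state
    h = torch.cat((h_n[-2], h_n[-1]), dim=1)  # (N, hidden_dim * 2)
    x = self.linear(h)
    return self.sigmoid(x)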
https://stackoverflow.com/questions/73875337/
Construct a 3D tensor from a 2D matrix
Given an n-by-n matrix A, where each row of A is a permutation of [n], e.g., import torch n = 100 AA = torch.rand(n, n) A = torch.argsort(AA, dim=1) Also given another n-by-n matrix P, we want to construct a 3D tensor Q s.t. Q[i, j, k] = P[A[i, j], k] Is there any efficient way in pytorch? I am aware of torch.gather but it seems hard to apply directly here.
You can directly use: Q = P[A]
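A quick verification sketch (toy sizes, mirroring the setup in the question) that this advanced indexing really gives Q[i, j, k] = P[A[i, j], k]:

import torch

n = 4
A = torch.argsort(torch.rand(n, n), dim=1)  # each row is a permutation of range(n)
P = torch.rand(n, n)
Q = P[A]  # shape (n, n, n)

for i in range(n):
    for j in range(n):
        assert torch.equal(Q[i, j], P[A[i, j]])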
https://stackoverflow.com/questions/73876921/
Pytorch submodules output shape
How is the output shape of submodules in pytorch determined? Why is the output shape of a certain sub-module modified in the code below? When I separate the head of a classical classifier from its backbone in the following way: import torch, torchvision from torchsummary import summary effnet = torchvision.models.efficientnet_b0(num_classes = 2) backbone = torch.nn.Sequential(*(list(effnet.children())[0])) adaptive_pool = list(effnet.children())[1] head = list(effnet.children())[2] model = torch.nn.Sequential(*[backbone, adaptive_pool, head]) summary(model, (3,256,256), device = 'cpu') # <== Error I get the following error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2560x1 and 1280x2) This error is due to the modified output shape of the sub-module adaptive_pool. To work around this problem, flatten can be used as follows: class flatten(torch.nn.Module): def forward(self, input): return input.view(input.size(0), -1) model = torch.nn.Sequential(*[backbone, adaptive_pool, flatten(), head]) summary(model, (3,256,256), device = 'cpu') Why is the output shape of the sub-module adaptive_pool modified?
The output of an nn.AdaptiveAvgPool2d is 4D even if the average is computed globally, i.e. output_size=1. In other words, the output shape of your global pooling layer is (N, C, 1, 1). This means you indeed need to flatten it for the fully connected layer. In the referenced original efficient net classification network, the flattening operation is implemented directly in the forward logic without the use of a dedicated layer. See this line. Instead of implementing your own flattening layer, you can use the built-in nn.Flatten. More details about this module can be found here. >>> model = nn.Sequential(backbone, adaptive_pool, nn.Flatten(1), head)
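A short illustration of the shapes involved (a sketch with arbitrary input sizes):

import torch
from torch import nn

pool = nn.AdaptiveAvgPool2d(1)
x = torch.randn(2, 1280, 8, 8)
print(pool(x).shape)                 # torch.Size([2, 1280, 1, 1]) -- still 4D
print(nn.Flatten(1)(pool(x)).shape)  # torch.Size([2, 1280]) -- ready for nn.Linear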
https://stackoverflow.com/questions/73879238/
Map a list of list to TPU tensor
I have this code to deconstruct a list of lists into different tensors. token_a_index, token_b_index, isNext, input_ids, segment_ids, masked_tokens, masked_pos = map(torch.LongTensor, zip(*batch)) If I want to create these tensors on the GPU, I can use the code below: token_a_index, token_b_index, isNext, input_ids, segment_ids, masked_tokens, masked_pos = map(torch.cuda.LongTensor, zip(*batch)) But now I want to create all these on a TPU; what should I do? Is there anything like the below? token_a_index, token_b_index, isNext, input_ids, segment_ids, masked_tokens, masked_pos = map(torch.xla.LongTensor, zip(*batch))
You can use the XLA device following the guide here. You can select the device and pass it to your function like this: import torch_xla.core.xla_model as xm device = xm.xla_device() token_a_index, token_b_index, isNext, input_ids, segment_ids, masked_tokens, masked_pos = map(lambda x: torch.Tensor(x).to(device).long(), zip(*batch)) You can even parametrize the device variable; torch.device("cuda" if torch.cuda.is_available() else "cpu") can be used to select between cuda and cpu.
https://stackoverflow.com/questions/73891022/
How to concatenate feature maps?
I was trying to merge feature maps, which I got from two different encoders and which have different dimensions, in Pytorch: simclr_features: torch.Size([543, 512]) imagenet_features: torch.Size([543, 1024]) So I wanted: torch.Size([543, 1536]). What is a possible solution to do that?
You can use torch.cat: torch.cat((simclr_features, imagenet_features), dim=1)
https://stackoverflow.com/questions/73893825/
Distributed Data Parallel (DDP) Batch size
Suppose I use 2 GPUs in a DDP setting. If I would use a batch size of 16 when running the experiment on a single GPU, should I give 8 as the batch size, or 16, in the case of using 2 GPUs with DDP? Is 16 divided into 8 and 8 automatically? Thank you!
As explained here: "the application of the given module by splitting the input across the specified devices"; "The batch size should be larger than the number of GPUs used"; "locally, each replica handles a portion of the input". So if you use 16 as the batch size, it will be divided automatically between the two GPUs.
https://stackoverflow.com/questions/73899097/
How can I get the maximum values of a tensor along a dimension?
I have a 3D tensor and would like to take the max values along the 0th dimension in Libtorch. I know how to do this in Python (PyTorch) but I'm having trouble doing this in LibTorch. In LibTorch my code is auto target_q_T = torch::rand({5, 10, 1}); auto max_q = torch::max({target_q_T}, 0); std::cout << max_q; It returns this long, repeating error. note: candidate: ‘template<class _Traits> std::basic_ostream<char, _Traits>& std::operator<<(std::basic_ostream<char, _Traits>&, const char*)’ 611 | operator<<(basic_ostream<char, _Traits>& __out, const char* __s) | ^~~~~~~~ /usr/include/c++/11/ostream:611:5: note: template argument deduction/substitution failed: /home/iii/tor/m_gym/multiv_normal.cpp:432:18: note: cannot convert ‘max_q’ (type ‘std::tuple<at::Tensor, at::Tensor>’) to type ‘const char*’ 432 | std::cout << max_q; | ^~~~~ In file included from /usr/include/c++/11/istream:39, from /usr/include/c++/11/sstream:38, from /home/iii/tor/m_gym/libtorch/include/c10/macros/Macros.h:246, from /home/iii/tor/m_gym/libtorch/include/c10/core/DeviceType.h:8, from /home/iii/tor/m_gym/libtorch/include/c10/core/Device.h:3, from /home/iii/tor/m_gym/libtorch/include/ATen/core/TensorBody.h:11, from /home/iii/tor/m_gym/libtorch/include/ATen/core/Tensor.h:3, from /home/iii/tor/m_gym/libtorch/include/ATen/Tensor.h:3, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/function_hook.h:3, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/cpp_hook.h:2, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/variable.h:6, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/autograd.h:3, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/autograd.h:3, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/all.h:7, from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/torch.h:3, from /home/iii/tor/m_gym/multiv_normal.cpp:2: /usr/include/c++/11/ostream:624:5: note: candidate: ‘template<class _Traits> std::basic_ostream<char, _Traits>& std::operator<<(std::basic_ostream<char, _Traits>&, const signed char*)’ 624 | operator<<(basic_ostream<char, _Traits>& __out, const signed char* __s) | ^~~~~~~~ This is how it works in Python. target_q_np = torch.rand(5, 10, 1) max_q = torch.max(target_q_np, 0) max_q torch.return_types.max( values=tensor([[0.8517], [0.7526], [0.6546], [0.9913], [0.8521], [0.9757], [0.9080], [0.9376], [0.9901], [0.7445]]), indices=tensor([[4], [2], [3], [4], [1], [0], [2], [4], [4], [4]]))
If you read the compiler error, it basically tells you that you are trying to print a tuple of two tensors. That's because the C++ code works exactly like the python code and returns the max values and their respective indices (your python code prints exactly that). You need std::get to extract the tensors from the tuple: auto target_q_T = torch::rand({5, 10, 1}); auto max_q = torch::max({target_q_T}, 0); std::cout << "max: " << std::get<0>(max_q) << "indices: " << std::get<1>(max_q) << std::endl; In C++17 you should also be able to write auto [max_t, idx_t] = torch::max({target_q_T}, 0); std::cout << ... ;
https://stackoverflow.com/questions/73902752/
Changing 3D Convolutional Encoder layers to 3D Deconvolutional layers
I have a 3D convolutional network that is composed of an encoder and a decoder. In the decoder part, I want to use the same type of convolutional layers that are used in the encoder part, but make them perform deconvolution. In other words, I want the mentioned convolutional layers to increase the spatial size at the inverse of the rate used in the encoder, while decreasing the number of channels. The convolutional layer that I use inside my encoder contains sequential layers: Conv_layer = nn.Sequential( BasicConv3d(64, 64, kernel_size=1, stride=1), SepConv3d(64, 192, kernel_size=3, stride=1, padding=1), nn.MaxPool3d(kernel_size=(1,3,3), stride=(1,2,2), padding=(0,1,1)), ) The types of convolutional layers that are used: class BasicConv3d(nn.Module): def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0): super(BasicConv3d, self).__init__() self.conv = nn.Conv3d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) self.bn = nn.BatchNorm3d(out_planes, eps=1e-3, momentum=0.001, affine=True) self.relu = nn.ReLU() def forward(self, x): x = self.conv(x) x = self.bn(x) x = self.relu(x) return x class SepConv3d(nn.Module): def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0): super(SepConv3d, self).__init__() self.conv_s = nn.Conv3d(in_planes, out_planes, kernel_size=(1,kernel_size,kernel_size), stride=(1,stride,stride), padding=(0,padding,padding), bias=False) self.bn_s = nn.BatchNorm3d(out_planes, eps=1e-3, momentum=0.001, affine=True) self.relu_s = nn.ReLU() self.conv_t = nn.Conv3d(out_planes, out_planes, kernel_size=(kernel_size,1,1), stride=(stride,1,1), padding=(padding,0,0), bias=False) self.bn_t = nn.BatchNorm3d(out_planes, eps=1e-3, momentum=0.001, affine=True) self.relu_t = nn.ReLU() def forward(self, x): x = self.conv_s(x) x = self.bn_s(x) x = self.relu_s(x) x = self.conv_t(x) x = self.bn_t(x) x = self.relu_t(x) return x My question is how I should change the kernel_size, stride, and padding so that the mentioned layers become deconvolutions that increase the spatial size of the feature maps at the inverse of the rate of the convolutional layers.
Basically, you want a nn.ConvTranspose3d with kernel size (3, 3, 3) and stride (1, 2, 2). You can see the formula relating input size to output size here: D_out = (D_in − 1) × stride − 2 × padding + dilation × (kernel_size − 1) + output_padding + 1 In your case, kernel_size=3, dilation=1 and stride is 1 for the temporal dimension and 2 for the spatial dimensions. Thus the desired layers would be something like: out_planes = 192 in_planes = 64 deconv_layer = nn.Sequential( nn.ConvTranspose3d(out_planes, out_planes, kernel_size=(3, 1, 1), stride=1, padding=0, output_padding=0), nn.BatchNorm3d(out_planes), nn.ReLU(inplace=True), nn.ConvTranspose3d(out_planes, in_planes, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), output_padding=(0, 1, 1)), nn.BatchNorm3d(in_planes), nn.ReLU(inplace=True), BasicConv3d(in_planes, in_planes, kernel_size=1, stride=1), # to be consistent with the Conv_layer structure )
https://stackoverflow.com/questions/73910666/
Why don't the images align when concatenating two data sets in pytorch using torch.utils.data.ConcatDataset?
I wanted to concatenate multiple data sets where the labels are disjoint (so don't share labels). I did: class ConcatDataset(Dataset): """ ref: https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset]): """ """ # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling offset: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for x, y in dataset: y = int(y) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # assert y == _y assert torch.equal(x, _x) new_label = y + offset self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label] = new_idx num_labels_for_current_dataset: int = max([y for _, y in dataset]) offset += num_labels_for_current_dataset new_idx += 1 assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat self.labels = range(offset) self.target_transform = lambda data: torch.tensor(data, dtype=torch.int) def __len__(self): return len(self.concat_datasets) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: x = self.concat_datasets[idx] y = self.indices_to_labels[idx] if self.target_transform is not None: y = self.target_transform(y) return x, y but it doesn't even work to align the x images (so never mind if my relabling works!). Why? def check_xs_align_cifar100(): from pathlib import Path root = Path("~/data/").expanduser() # root = Path(".").expanduser() train = torchvision.datasets.CIFAR100(root=root, train=True, download=True) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True) concat = ConcatDataset([train, test]) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') error Files already downloaded and verified Files already downloaded and verified Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 405, in <module> check_xs_align() File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 391, in check_xs_align concat = ConcatDataset([train, test]) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 71, in __init__ assert torch.equal(x, _x) TypeError: equal(): argument 'input' (position 1) must be Tensor, not Image python-BaseException Bonus: let me know if relabeling is correct please. 
related discussion: https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 Edit 1: PIL comparison fails I did a PIL image comparison according to Compare images Python PIL but it failed: Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 419, in <module> check_xs_align_cifar100() File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 405, in check_xs_align_cifar100 concat = ConcatDataset([train, test]) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 78, in __init__ assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' AssertionError: comparison of imgs failed: diff.getbbox()=None python-BaseException diff PyDev console: starting. <PIL.Image.Image image mode=RGB size=32x32 at 0x7FBE897A21C0> code comparison: diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' this also failed: assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' AssertionError: ...long msg... assert statement was: assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' Edit 2: Tensor comparison Fails I tried to convert images to tensors but it still fails: AssertionError: Error for some reason, got: data_idx=1, x.norm()=tensor(45.9401), _x.norm()=tensor(33.9407), x=tensor([[[1.0000, 0.9922, 0.9922, ..., 0.9922, 0.9922, 1.0000], code: class ConcatDataset(Dataset): """ ref: - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor """ def __init__(self, datasets: list[Dataset]): """ """ # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. 
self.indices_to_labels = defaultdict(None) # - do the relabeling img2tensor: Callable = torchvision.transforms.ToTensor() offset: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for data_idx, (x, y) in enumerate(dataset): y = int(y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets # assert y == _y # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # tensor comparison x, _x = img2tensor(x), img2tensor(_x) print(f'{data_idx=}, {x.norm()=}, {_x.norm()=}') assert torch.equal(x, _x), f'Error for some reason, got: {data_idx=}, {x.norm()=}, {_x.norm()=}, {x=}, {_x=}' # - relabling new_label = y + offset self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label] = new_idx num_labels_for_current_dataset: int = max([y for _, y in dataset]) offset += num_labels_for_current_dataset new_idx += 1 assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat self.labels = range(offset) self.target_transform = lambda data: torch.tensor(data, dtype=torch.int) def __len__(self): return len(self.concat_datasets) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: x = self.concat_datasets[idx] y = self.indices_to_labels[idx] if self.target_transform is not None: y = self.target_transform(y) return x, y Edit 3, clarification request: My vision of the data set I want is a concatenation of a data sets in question -- where relabeling starting the first label commences. The curicial thing (according to me -- might be wrong on this) is that once concatenated we should verify in some way that the data set indeed behaves the way we want it. One check I thought is to index the data point from the list of data sets and also from the concatenation object of the data set. If the data set was correctly conatenated I'd expect the images to be correspond according to this indexing. So if the first image in the first data set had some unique identifier (e.g. the pixels) then the concatenation of the data sets should have the first image be the same as the first image in the list of data sets and so on...if this doesn't hold, if I start creating new labels -- how do I know I am even doing this correctly? reddit link: https://www.reddit.com/r/pytorch/comments/xurnu9/why_dont_the_images_align_when_concatenating_two/ cross posted pytorch discuss: https://discuss.pytorch.org/t/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-torch-utils-data-concatdataset/162801?u=brando_miranda
Corrected code can be found here https://github.com/brando90/ultimate-utils/blob/master/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py you can pip install the library pip install ultimate-utils. Since only links is not a good way to answer I will copy paste the code too with it's test and expected output: """ do checks, loop through all data points, create counts for each label how many data points there are do this for MI only then check union and ur implementation? compare the mappings of one & the other? actually it's easy, just add the cummulative offset and that's it. :D the indices are already -1 indexed. assert every image has a label between 0 --> n1+n2+... and every bin for each class is none empty for it to work with any standard pytorch data set I think the workflow would be: pytorch dataset -> l2l meta data set -> union data set -> .dataset field -> data loader for l2l data sets: l2l meta data set -> union data set -> .dataset field -> data loader but the last one might need to make sure .indices or .labels is created or a get labels function that checking the attribute gets the right .labels or remaps it correctly """ from collections import defaultdict from pathlib import Path from typing import Callable, Optional import torch import torchvision from torch import Tensor from torch.utils.data import Dataset, DataLoader class ConcatDatasetMutuallyExclusiveLabels(Dataset): """ Useful attributes: - self.labels: contains all new USL labels i.e. contains the list of labels from 0 - total num labels after concat. - len(self): gives number of images after all images have been concatenated - self.indices_to_labels: maps the new concat idx to the new label after concat. ref: - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset], transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Concatenates different data sets assuming the labels are mutually exclusive in the data sets. compare_imgs_directly: adds the additional test that imgs compare at the PIL imgage level. """ self.datasets = datasets self.transform = transform self.target_transform = target_transform # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling self._re_label_all_dataset(datasets, compare_imgs_directly, verify_xs_align) def __len__(self): return len(self.concat_datasets) def _re_label_all_dataset(self, datasets: list[Dataset], compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Relabels according to a blind (mutually exclusive) assumption. Relabling Algorithm: The zero index of the label starts at the number of labels collected so far. 
So when relabling we do: y = y + total_number_labels total_number_labels += max label for current data set where total_number_labels always has the + 1 to correct for the zero indexing. :param datasets: :param compare_imgs_directly: :parm verify_xs_align: set to false by default in case your transforms aren't deterministic. :return: """ self.img2tensor: Callable = torchvision.transforms.ToTensor() self.int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.int) total_num_labels_so_far: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for data_idx, (x, y) in enumerate(dataset): y = int(y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets assert y == _y if compare_imgs_directly: # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # doesn't work :/ assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # tensor comparison if not isinstance(x, Tensor): x, _x = self.img2tensor(x), self.img2tensor(_x) if isinstance(y, int): y, _y = self.int2tensor(y), self.int2tensor(_y) if verify_xs_align: # this might fails if there are random ops in the getitem assert torch.equal(x, _x), f'Error for some reason, got: {dataset_idx=},' \ f' {new_idx=}, {data_idx=}, ' \ f'{x.norm()=}, {_x.norm()=}, ' \ f'{x=}, {_x=}' # - relabling new_label = y + total_num_labels_so_far self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label].append(new_idx) new_idx += 1 num_labels_for_current_dataset: int = int(max([y for _, y in dataset])) + 1 # - you'd likely resolve unions if you wanted a proper union, the addition assumes mutual exclusivity total_num_labels_so_far += num_labels_for_current_dataset assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat, assume mutually exclusive self.labels = range(total_num_labels_so_far) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: """ Get's the data point and it's new label according to a mutually exclusive concatenation. For later? to do the relabling on the fly we'd need to figure out which data set idx corresponds to and to compute the total_num_labels_so_far. Something like this: current_data_set_idx = bisect_left(idx) total_num_labels_so_far = sum(max(_, y in dataset)+1 for dataset_idx, dataset in enumerate(self.datasets) if dataset_idx <= current_data_set_idx) new_y = total_num_labels_so_far self.indices_to_labels[idx] = new_y :param idx: :return: """ x, _y = self.concat_datasets[idx] y = self.indices_to_labels[idx] # for the first data set they aren't re-labaled so can't use assert # assert y != _y, f'concat dataset returns x, y so the y is not relabeled, but why are they the same {_y}, {y=}' # idk what this is but could be useful? mnist had this. 
# img = Image.fromarray(img.numpy(), mode="L") if self.transform is not None: x = self.transform(x) if self.target_transform is not None: y = self.target_transform(y) return x, y def assert_dataset_is_pytorch_dataset(datasets: list, verbose: bool = False): """ to do 1 data set wrap it in a list""" for dataset in datasets: if verbose: print(f'{type(dataset)=}') print(f'{type(dataset.dataset)=}') assert isinstance(dataset, Dataset), f'Expect dataset to be of type Dataset but got {type(dataset)=}.' def get_relabling_counts(dataset: Dataset) -> dict: """ counts[new_label] -> counts/number of data points for that new label """ assert isinstance(dataset, Dataset), f'Expect dataset to be of type Dataset but got {type(dataset)=}.' counts: dict = {} iter_dataset = iter(dataset) for datapoint in iter_dataset: x, y = datapoint # assert isinstance(x, torch.Tensor) # assert isinstance(y, int) if y not in counts: counts[y] = 0 else: counts[y] += 1 return counts def assert_relabling_counts(counts: dict, labels: int = 100, counts_per_label: int = 600): """ default values are for MI. - checks each label/class has the right number of expected images per class - checks the relabels start from 0 and increase by 1 - checks the total number of labels after concat is what you expect ref: https://openreview.net/pdf?id=rJY0-Kcll Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-Imagenet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. """ # - check each image has the right number of total images seen_labels: list[int] = [] for label, count in counts.items(): seen_labels.append(label) assert counts[label] == counts_per_label # - check all labels are there and total is correct seen_labels.sort() prev_label = -1 for label in seen_labels: diff = label - prev_label assert diff == 1 assert prev_label < label # - checks the final label is the total number of labels assert label == labels - 1 def check_entire_data_via_the_dataloader(dataloader: DataLoader) -> dict: counts: dict = {} for it, batch in enumerate(dataloader): xs, ys = batch for y in ys: if y not in counts: counts[y] = 0 else: counts[y] += 1 return counts # - tests def check_xs_align_mnist(): root = Path('~/data/').expanduser() import torchvision # - test 1, imgs (not the recommended use) train = torchvision.datasets.MNIST(root=root, train=True, download=True) test = torchvision.datasets.MNIST(root=root, train=False, download=True) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], compare_imgs_directly=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') # - test 2, tensor imgs train = torchvision.datasets.MNIST(root=root, train=True, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) test = torchvision.datasets.MNIST(root=root, train=False, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], verify_xs_align=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') assert len(concat) == 10 * 7000, f'Err, unexpected number of datapoints {len(concat)=} expected {100 * 700}' assert len( concat.labels) == 20, f'Note it should be 20 (since it is not a true union), but got {len(concat.labels)=}' # - test dataloader loader = DataLoader(concat) 
for batch in loader: x, y = batch assert isinstance(x, torch.Tensor) assert isinstance(y, torch.Tensor) def check_xs_align_cifar100(): from pathlib import Path root = Path('~/data/').expanduser() import torchvision # - test 1, imgs (not the recommended use) train = torchvision.datasets.CIFAR100(root=root, train=True, download=True) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], compare_imgs_directly=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') # - test 2, tensor imgs train = torchvision.datasets.CIFAR100(root=root, train=True, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], verify_xs_align=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') assert len(concat) == 100 * 600, f'Err, unexpected number of datapoints {len(concat)=} expected {100 * 600}' assert len( concat.labels) == 200, f'Note it should be 200 (since it is not a true union), but got {len(concat.labels)=}' # details on cifar100: https://www.cs.toronto.edu/~kriz/cifar.html # - test dataloader loader = DataLoader(concat) for batch in loader: x, y = batch assert isinstance(x, torch.Tensor) assert isinstance(y, torch.Tensor) def concat_data_set_mi(): """ note test had to be in MI where train, val, test have disjount/different labels. In cifar100 classic the labels in train, val and test are shared from 0-99 instead of being different/disjoint. :return: """ # - get mi data set from diversity_src.dataloaders.hdb1_mi_omniglot_l2l import get_mi_datasets train_dataset, validation_dataset, test_dataset = get_mi_datasets() assert_dataset_is_pytorch_dataset([train_dataset, validation_dataset, test_dataset]) train_dataset, validation_dataset, test_dataset = train_dataset.dataset, validation_dataset.dataset, test_dataset.dataset # - create usl data set union = ConcatDatasetMutuallyExclusiveLabels([train_dataset, validation_dataset, test_dataset]) # union = ConcatDatasetMutuallyExclusiveLabels([train_dataset, validation_dataset, test_dataset], # compare_imgs_directly=True) assert_dataset_is_pytorch_dataset([union]) assert len(union) == 100 * 600, f'got {len(union)=}' assert len(union.labels) == 100, f'got {len(union.labels)=}' # - create dataloader from uutils.torch_uu.dataloaders.common import get_serial_or_distributed_dataloaders union_loader, _ = get_serial_or_distributed_dataloaders(train_dataset=union, val_dataset=union) for batch in union_loader: x, y = batch assert x is not None assert y is not None if __name__ == '__main__': import time from uutils import report_times start = time.time() # - run experiment check_xs_align_mnist() check_xs_align_cifar100() concat_data_set_mi() # - Done print(f"\nSuccess Done!: {report_times(start)}\a") expected correct output: len(concat)=70000 len(concat.labels)=20 len(concat)=70000 len(concat.labels)=20 Files already downloaded and verified Files already downloaded and verified len(concat)=60000 len(concat.labels)=200 Files already downloaded and verified Files already downloaded and verified len(concat)=60000 len(concat.labels)=200 Success Done!: time passed: hours:0.16719497998555502, minutes=10.0316987991333, seconds=601.901927947998 warning: if you have a 
transform that is random the verification that the data sets align might make it look as if the two data points are not algined. The code is correct so it's not an issue, but perhaps remove the randomness somehow. Note, I actually decided to not force the user to check all the images of their data set and trust my code works from running once my unit tests. Also note that it's slow to construct the data set since I do the re-labling at the beginning. Might be better to relabel on the fly. I outlined the code for it on how to do it but decided against it since we always see all the data set at least once so doing this amortized is the same as doing it on the fly (note the fly pseudo-code saves the labels to avoid recomputations). This is better: # int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.int) int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.long) class ConcatDatasetMutuallyExclusiveLabels(Dataset): """ Useful attributes: - self.labels: contains all new USL labels i.e. contains the list of labels from 0 - total num labels after concat. - len(self): gives number of images after all images have been concatenated - self.indices_to_labels: maps the new concat idx to the new label after concat. ref: - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset], transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Concatenates different data sets assuming the labels are mutually exclusive in the data sets. compare_imgs_directly: adds the additional test that imgs compare at the PIL imgage level. """ self.datasets = datasets self.transform = transform self.target_transform = target_transform # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling self._re_label_all_dataset(datasets, compare_imgs_directly, verify_xs_align) def __len__(self): return len(self.concat_datasets) def _re_label_all_dataset(self, datasets: list[Dataset], compare_imgs_directly: bool = False, verify_xs_align: bool = False, verbose: bool = False, ): """ Relabels according to a blind (mutually exclusive) assumption. Relabling Algorithm: The zero index of the label starts at the number of labels collected so far. So when relabling we do: y = y + total_number_labels total_number_labels += max label for current data set where total_number_labels always has the + 1 to correct for the zero indexing. assumption: it re-lables the data points to have a concatenation of all the labels. If there are rebeated labels they are treated as different. So if dataset1 and dataset2 both have cats (represented as indices), then they will get unique integers representing these. So the cats are treated as entirely different labels. 
""" print() self.img2tensor: Callable = torchvision.transforms.ToTensor() total_num_labels_so_far: int = 0 global_idx: int = 0 # new_idx assert len(self.indices_to_labels.keys()) == 0 assert len(self.labels_to_indices.keys()) == 0 for dataset_idx, dataset in enumerate(datasets): print(f'{dataset_idx=} \n{len(dataset)=}') if hasattr(dataset, 'labels'): print(f'{len(dataset.labels)=}') assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] original_label2global_idx: defaultdict = defaultdict(list) for original_data_idx, (x, original_y) in enumerate(dataset): original_y = int(original_y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[global_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets assert original_y == _y, f'{original_y=}, {_y=}' if compare_imgs_directly: # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # doesn't work :/ assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # - tensor comparison of raw images if not isinstance(x, Tensor): x, _x = self.img2tensor(x), self.img2tensor(_x) # if isinstance(original_y, int): # original_y, _y = int2tensor(original_y), int2tensor(_y) if verify_xs_align: # checks the data points after doing get item make them match. # this might fails if there are random ops in the getitem assert torch.equal(x, _x), f'Error for some reason, got: {dataset_idx=},' \ f' {global_idx=}, {original_data_idx=}, ' \ f'{x.norm()=}, {_x.norm()=}, ' \ f'{x=}, {_x=}' # - collect original labels in dictionary keys original_label2global_idx[int(original_y)].append(global_idx) global_idx += 1 print(f'{global_idx=}') local_num_dps: int = sum(len(global_indices) for global_indices in original_label2global_idx.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' # - do relabeling - original labeling to new global labels print(f'{total_num_labels_so_far=}') assert total_num_labels_so_far != len(dataset), f'Err:\n{total_num_labels_so_far=}\n{len(dataset)=}' new_local_label2global_indices: dict = {} global_label2global_indices: dict = {} # make sure to sort to avoid random looping of unordered data structures e.g. 
keys in a dict for new_local_label, original_label in enumerate(sorted(original_label2global_idx.keys())): global_indices: list[int] = original_label2global_idx[original_label] new_local_label2global_indices[int(new_local_label)] = global_indices new_global_label: int = total_num_labels_so_far + new_local_label global_label2global_indices[int(new_global_label)] = global_indices local_num_dps: int = sum(len(global_indices) for global_indices in original_label2global_idx.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' local_num_dps: int = sum(len(global_indices) for global_indices in new_local_label2global_indices.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' local_num_dps: int = sum(len(global_indices) for global_indices in global_label2global_indices.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' # - this assumes the integers in each data set is different, if there were unions you'd likely need semantic information about the label e.g. the string cat instead of absolute integers, or know the integers are shared between the two data sets print(f'{total_num_labels_so_far=}') # this is the step where classes are concatenated. Note due to the previous loops assuming each label is uning this should never have intersecting keys. print(f'{list(self.labels_to_indices.keys())=}') print(f'{list(global_label2global_indices.keys())=}') dup: list = get_duplicates(list(self.labels_to_indices.keys()) + list(global_label2global_indices.keys())) print(f'{list(self.labels_to_indices.keys())=}') print(f'{list(global_label2global_indices.keys())=}') assert len(dup) == 0, f'Error:\n{self.labels_to_indices.keys()=}\n{global_label2global_indices.keys()=}\n{dup=}' for global_label, global_indices in global_label2global_indices.items(): # note g_idx might different to global_idx! global_indices: list[int] for g_idx in global_indices: self.labels_to_indices[int(global_label)] = g_idx self.indices_to_labels[g_idx] = int(global_label) # - update number of labels seen so far num_labels_for_current_dataset: int = len(original_label2global_idx.keys()) print(f'{num_labels_for_current_dataset=}') total_num_labels_so_far += num_labels_for_current_dataset assert total_num_labels_so_far == len(self.labels_to_indices.keys()), f'Err:\n{total_num_labels_so_far=}' \ f'\n{len(self.labels_to_indices.keys())=}' assert global_idx == len(self.indices_to_labels.keys()), f'Err:\n{global_idx=}\n{len(self.indices_to_labels.keys())=}' if hasattr(dataset, 'labels'): assert len(dataset.labels) == num_labels_for_current_dataset, f'Err:\n{len(dataset.labels)=}' \ f'\n{num_labels_for_current_dataset=}' # - relabling done assert len(self.indices_to_labels.keys()) == len( self.concat_datasets), f'Err: \n{len(self.indices_to_labels.keys())=}' \ f'\n {len(self.concat_datasets)=}' if all(hasattr(dataset, 'labels') for dataset in datasets): assert sum(len(dataset.labels) for dataset in datasets) == total_num_labels_so_far # contains the list of labels from 0 - total num labels after concat, assume mutually exclusive # - set & validate new labels self.labels = range(total_num_labels_so_far) labels = list(sorted(list(self.labels_to_indices.keys()))) assert labels == list(labels), f'labels should match and be consecutive, but got: \n{labels=}, \n{self.labels=}' def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: """ Get's the data point and it's new label according to a mutually exclusive concatenation. 
For later? to do the relabling on the fly we'd need to figure out which data set idx corresponds to and to compute the total_num_labels_so_far. Something like this: current_data_set_idx = bisect_left(idx) total_num_labels_so_far = sum(max(_, y in dataset)+1 for dataset_idx, dataset in enumerate(self.datasets) if dataset_idx <= current_data_set_idx) new_y = total_num_labels_so_far + y self.indices_to_labels[idx] = new_y :param idx: :return: """ x, _y = self.concat_datasets[idx] y = self.indices_to_labels[idx] # for the first data set they aren't re-labaled so can't use assert # assert y != _y, f'concat dataset returns x, y so the y is not relabeled, but why are they the same {_y}, {y=}' # idk what this is but could be useful? mnist had this. # img = Image.fromarray(img.numpy(), mode="L") if self.transform is not None: x = self.transform(x) if self.target_transform is not None: y = self.target_transform(y) return x, y
https://stackoverflow.com/questions/73913522/
Map tensor values to another tensor
Hi everyone, I have two tensors: one tensor holds as its values the indices into another tensor, which contains the values I want mapped. For example: a = tensor([0, 2, 5, 1, 0, 0, 4, 3, 0, 0, 0, 2,2,1]) b = tensor([1.5, 2.7, 1.8, 7.0, 3.9, 10.0]) I would like to produce a new tensor in which the values of b replace the index values in a, as such: new_tensor = tensor([1.5, 1.8, 10.0, 2.7, 1.5, 1.5, ... ]) I'd like to do it without performing some kind of for loop. If there is a pytorch/linear-algebra way to tackle the mapping, that'd be great.
>>> b[a] tensor([ 1.5000, 1.8000, 10.0000, 2.7000, 1.5000, 1.5000, 3.9000, 7.0000, 1.5000, 1.5000, 1.5000, 1.8000, 1.8000, 2.7000])
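As a side note (not part of the original answer): since both tensors here are one-dimensional, the same lookup can also be written with torch.gather:

torch.gather(b, 0, a)  # equivalent to b[a] for 1-D tensors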
https://stackoverflow.com/questions/73920429/
custom classifier pytorch, I want to add softmax
I have this classifier: input_dim = 25088 h1_dim = 4096 h2_dim = 2048 h3_dim = 1024 h4_dim = 512 output_dim = len(cat_to_name) # 102 drop_prob = 0.2 model.classifier = nn.Sequential(nn.Linear(input_dim, h1_dim), nn.ReLU(), nn.Dropout(drop_prob), nn.Linear(h1_dim, h2_dim), nn.ReLU(), nn.Dropout(drop_prob), nn.Linear(h2_dim, h3_dim), nn.ReLU(), nn.Dropout(drop_prob), nn.Linear(h3_dim, h4_dim), nn.ReLU(), nn.Dropout(drop_prob), nn.Linear(h4_dim, output_dim), ) and I went with CrossEntropyLoss as the criterion. In the validation and testing how can I add Softmax? This is the validation loop: model.eval() with torch.no_grad(): for images, labels in valid_loader: images, labels = images.to(device), labels.to(device) images.requires_grad = True logits = model.forward(images) batch_loss = criterion(logits, labels) valid_loss += batch_loss.item() ps = torch.exp(logits) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape)
The CrossEntropyLoss already applies the softmax function. From the Pytorch doc: Note that this case is equivalent to the combination of LogSoftmax and NLLLoss. So if you just want to use cross entropy loss, no need to apply SoftMax beforehand. If you really wanted to use the SoftMax function anyway, you can do: m = nn.Softmax(dim=1) output = m(logits) assuming your logits have a shape of (batch_size, number_classes) You can check: https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
https://stackoverflow.com/questions/73921876/
torch.sub cause cuda memory out
I have two tensors, a and b, with sizes a.shape=(10000,10000,120) and b.shape=(10000,10000,120). I'm trying to get a cost matrix between a and b, cost = torch.sum((a-b)**2, -1), where cost.shape=(10000,10000). The problem is, when I try to do a-b or torch.sub(a, b, alpha=1), a "CUDA MEMORY OUT" error occurs. I don't think it should cost that much. It works when the size of the tensors is small, like 2000. Using a for loop is not efficient. How can I deal with it?
It does cost that much (about 134 GB). Let's do some calculations. Assuming your data is of type torch.float32, a will occupy a memory size of: 32 bits (4 bytes) * 10000 * 10000 * 120 = 4.8E10 bytes ≈ 44.7 GB So does b. When you do a-b, the result also has the same shape as a and thus occupies the same amount of memory, which means you need a total of 44.7 GB * 3 (≈ 134 GB) of memory to do this operation. Is your available memory size greater than 134 GB? Possible solution: If you will no longer use a or b afterwards, you can store the result in one of them to avoid allocating another 44.7 GB, like this: torch.sub(a, b, out=a) # In this case, the result goes to `a`
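If both a and b are still needed afterwards, another common workaround (a sketch, not part of the original answer) is to compute the cost in chunks along the first dimension, so the full (10000, 10000, 120) difference tensor is never materialized at once:

import torch

def chunked_cost(a, b, chunk=500):
    # cost[i, j] = sum_k (a[i, j, k] - b[i, j, k]) ** 2, computed one slice at a time
    cost = torch.empty(a.shape[:2], device=a.device, dtype=a.dtype)
    for start in range(0, a.shape[0], chunk):
        end = min(start + chunk, a.shape[0])
        cost[start:end] = ((a[start:end] - b[start:end]) ** 2).sum(dim=-1)
    return cost

Each iteration then only allocates a (chunk, 10000, 120) temporary, at the price of a short Python loop.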
https://stackoverflow.com/questions/73923513/
Batch of n-dimensional vectors to batch of images with n channels
I have a batch of n-dimensional vectors, i.e. a tensor of size [batch_size, n]. I want this to be transformed into a batch of images of size [batch_size, n, H, W], i.e. each element of each vector in the batch must become a [1, H, W] image, thus each vector becomes an [n, H, W] image. Now I'm doing it in a very ugly way: vectors = torch.zeros((batch_size, n)) # This is the (batch_size, n, H, W) tensor that I will fill channels = torch.empty((batch_size, n, H, W)) for i, vector in enumerate(vectors): for j, val in enumerate(vector): channels[i, j].fill_(val) How can I do it properly, using pytorch functions?
You can add dimensions to the original tensor with vectors[:,:, None, None], then multiply by an (H, W) tensor of ones: channels = vectors[:,:, None, None]*torch.ones((H, W)) This will give you a tensor of size (batch_size, n, H, W), with each channels[i][j] being an (H, W) map with constant values.
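If you'd rather not allocate the ones tensor at all, expand gives the same (batch_size, n, H, W) result as a broadcast view without copying any data (a small variation on the same idea; call .contiguous() if a writable copy is needed downstream):

channels = vectors[:, :, None, None].expand(-1, -1, H, W)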
https://stackoverflow.com/questions/73925017/
Fine tuning freezing weights nnUNet
Good morning, I've followed the instructions in this github issue: https://github.com/MIC-DKFZ/nnUNet/issues/1108 to fine-tune an nnUNet model (pyTorch) on a pre-trained one, but this method retrains all weights, and I would like to freeze all weights and retrain only the last layer's weights, changing the number of segmentation classes from 3 to 1. Do you know a way to do that? Thank you in advance
To freeze the weights you need to set parameter.requires_grad = False. Example: from nnunet.network_architecture.generic_UNet import Generic_UNet model = Generic_UNet(input_channels=3, base_num_features=64, num_classes=4, num_pool=3) for name, parameter in model.named_parameters(): if 'seg_outputs' in name: print(f"parameter '{name}' will not be freezed") parameter.requires_grad = True else: parameter.requires_grad = False To check parameter names you can use print: print(model) which produces: Generic_UNet( (conv_blocks_localization): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (instnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (instnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (conv_blocks_context): ModuleList( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (dropout): Dropout2d(p=0.5, inplace=True) (instnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (dropout): Dropout2d(p=0.5, inplace=True) (instnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (dropout): Dropout2d(p=0.5, inplace=True) (instnorm): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (dropout): Dropout2d(p=0.5, inplace=True) (instnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (td): ModuleList( (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False) ) (tu): ModuleList( (0): Upsample() ) (seg_outputs): ModuleList( (0): Conv2d(64, 4, kernel_size=(1, 1), stride=(1, 1), bias=False) ) ) Or you can visualize your network with netron: https://github.com/lutzroeder/netron
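A related detail (not part of the original answer, but a common companion step): when you create the optimizer for fine-tuning, you can pass only the parameters that still require gradients, so the frozen ones are skipped entirely:

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))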
https://stackoverflow.com/questions/73925477/
Using GPU with Pytorch
I am confused about the usage of .to(device) in pytorch. I know that it loads the variables onto the GPU. But after that, let's say we multiply two GPU tensors together, will our computer know to use the GPU to do that? Or will it cast them back to the CPU and perform the computation? I guess I am just confused about when and how our computer will know to use the GPU, outside of us telling it to hold some variable in its memory.
All operations on tensors that are in GPU memory will be performed on the GPU. Furthermore, if multiple tensors are involved in some operation, all of them need to be on the same .device. The way this is done is that no matter what operation we apply to tensors, we always call methods of the torch.Tensor class. So adding two tensors, a + b, actually calls torch.Tensor.add, and thus whenever some operation is executed, it is always done with knowledge of which device is used.
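A small sketch illustrating both points (assuming a CUDA device is available):

import torch

a = torch.randn(3, 3, device="cuda")
b = torch.randn(3, 3, device="cuda")
c = a @ b          # computed on the GPU
print(c.device)    # cuda:0 -- the result stays on the inputs' device

d = torch.randn(3, 3)  # a CPU tensor
# a @ d  # would raise a RuntimeError: tensors must be on the same device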
https://stackoverflow.com/questions/73928470/
Laplacian positional encoding in a pytorch batch
I am trying to recode the laplacian positional encoding for a graph model in pytorch. A valid encoding in numpy can be found at https://docs.dgl.ai/en/0.9.x/_modules/dgl/transforms/functional.html#laplacian_pe . I think I have managed to make an encoding in pytorch equivalent to the numpy one, but for performance reasons I would like that function to be able to work with batches of data. That is, the following function works with parameters of the form adj[N, N], degrees[N, N] and topk as an integer, where N is the number of nodes in the network. def _laplacian_positional_encoding_th(self, adj, degrees, topk): number_of_nodes = adj.shape[-1] #degrees = th.clip(degrees, 0, 1) # not multigraph assert topk < number_of_nodes # Laplacian D = th.diag(degrees**-0.5) B = D * adj * D L = th.eye(number_of_nodes).to(B.device) * B # Eigenvectors EigVal, EigVec = th.linalg.eig(L) idx = th.argsort(th.real(EigVal)) # increasing order EigVal, EigVec = th.real(EigVal[idx]), th.real(EigVec[:,idx]) # Only select [1,topk+1] EigenVectors as L is symmetric (Spectral decomposition) out = EigVec[:,1:topk+1] return out However, when I try to perform the same operations efficiently in batch form, I cannot code it. That is, the idea is that the parameters can come in the form adj[B, N, N], degrees[B, N, N] and topk as an integer, B being the number of data points in the batch.
How about: def _laplacian_positional_encoding_th(self, adj, degrees, topk): number_of_nodes = adj.shape[-1] assert topk < number_of_nodes D = th.clip(degrees, 0, 1) # not multigraph B = D @ adj @ D L = th.eye(number_of_nodes).to(B.device)[None, ...] - B # Eigenvectors EigVal, EigVec = th.linalg.eig(L) idx = th.argsort(th.real(EigVal)) # increasing order out = th.real(th.gather(EigVec, dim=-1, index=idx[:, None, :].expand(-1, EigVec.shape[1], -1))) return out See th.diag_embed for creating a batch of diagonal matrices, and th.gather for selecting the right columns of EigVec according to the sorted indices. Update: If you want to extract the topk vectors: _, topk = th.topk(EigVal.real, k=5) # get the top 5 out = th.gather(EigVec.real, dim=-1, index=topk[:, None, :].expand(-1, EigVec.shape[1], -1))
https://stackoverflow.com/questions/73929039/
How can I use larger input images when using a pre-trained CNN without resizing?
I have a ResNet18 model trained on the Places365 image dataset, and I'd like to use this pre-trained model to expedite the training needed to identify distressed houses. My dataset is images of size 800x800x3, but the inputs are of size 224x224x3. I do not want to resize the image to 224x224 since I'm worried it will lose house distress indicators (chipped paint and loose shingles) during conversion. My idea was to add extra layers that can handle the larger images before feeding them into ResNet. I have the following pytorch model: import torch from torch import nn from torchvision import models class NewModel(nn.Module): def __init__(self, pretrain_model_path) -> None: super().__init__() # Not sure here self.pre_layers = None # import the trained model model = models.resnet18(num_classes=365) checkpoint = torch.load(pretrain_model_path, map_location=lambda storage, loc: storage) state_dict = {str.replace(k,'module.',''): v for k,v in checkpoint['state_dict'].items()} model.load_state_dict(state_dict) # change prediction class count model.fc = nn.Linear(model.fc.in_features, 4) self.backbone = model def forward(self, x): x = self.pre_layers(x) x = self.backbone(x) return x Is this a common practice or is it better to make one from scratch that is built for this size image specifically? How would I go about implementing it if so?
You can use an ordinary resnet18 model and pass 800x800 images to it. But it may be slow and consumes more memory. import torch from torchvision import models model = models.resnet18(num_classes=4) print(model(torch.zeros((1, 3, 800, 800))).shape) # (1, 4) You can add any lightweight module that reduces image resolution: import torch from torch import nn from torchvision import models class NewModel(nn.Module): def __init__(self, intermediate_features=64) -> None: super().__init__() model = models.resnet18(num_classes=4) self.backbone = model self.pre_model = nn.Sequential( nn.Conv2d(3, intermediate_features, 3, stride=2, padding=1), nn.ReLU(), ) conv1 = self.backbone.conv1 self.backbone.conv1 = nn.Conv2d( intermediate_features, conv1.out_channels, conv1.kernel_size, conv1.stride, conv1.padding) def forward(self, x): # 3x800x800 x = self.pre_model(x) # 3x400x400 x = self.backbone(x) # 4 return x model = NewModel() x = torch.zeros((1, 3, 800, 800)) print(model(x).shape) Depending on your data the different approaches may perform better or worse so you may need to experiment with model architectures.
https://stackoverflow.com/questions/73930084/
How to select top half of the elements in each row of a given tensor? Pytorch code
I have a tensor A with shape (Batch, sequence, dimension). I calculated the importance weights for each element in a sequence. The weights can be represented as a tensor B with shape (Batch, sequence). I need to select the first half of the elements with large weights. That means the result is expected to have a shape (Batch, sequence/2, dimension). Note: Each sequence of tensor A has different length. I pad the sequences to form the tensor A. I have the mask matrix Mask_A with shape (Batch, sequence). Therefore, I can't simply take the top sequence/2 elements because of padding. I know it can be achieved by 'for' loop in python code. However, it will slow down the program. I hope it can be done in Pytorch functions to get GPU acceleration. The result doesn't have to be sorted according to weights. It's also acceptable to receive a result of mask containing top-half elements. Here is an example code: A = torch.randn((2, 4, 3)) # batch size = 2, sequence legnth = 4, dimension = 3 # it looks like '''tensor([[[ 6.0840e-01, 4.5604e-01, -1.3264e+00], [-4.6437e-01, 1.6999e-01, 1.3551e+00], [-1.9888e+00, -2.3047e-01, 1.2347e-03], [ 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[ 7.9035e-01, -5.5752e-01, -1.2477e+00], [-1.7801e-01, 4.6232e-01, 1.3019e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00]]])''' # [ 0.0000e+00, 0.0000e+00, 0.0000e+00] means padding Mask_A = torch.tensor([[1., 1., 1., 0.], [1., 1., 0., 0.]]) # this means 1-st row of A has three elements, 2-nd row has two elements. '0.' means padding B = torch.tensor([[0.3,0.1,0.6,0.0],[0.05, 0.95, 0.0, 0.0]]) # this is the unsorted importance weights matrix # what I want is like: ''' tensor([[[-1.9888e+00, -2.3047e-01, 1.2347e-03], [ 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[-1.7801e-01, 4.6232e-01, 1.3019e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00]]]) ''' # [ 0.0000e+00, 0.0000e+00, 0.0000e+00] means padding # what I want can also be like: Mask_result = torch.tensor([[0., 0., 1., 0.], [0., 1., 0., 0.]])
Making some assumptions here about exactly how many elements you want to keep, but this should work: # Get a tensor marking the rank of each weight within the sequence rank = torch.argsort(torch.argsort(B, axis=1, descending=True), axis=1) # Determine how many elements per sequence we want to keep, based on the mask n_to_keep = torch.floor(torch.sum(Mask_A, axis=1)/2) # Make a mask marking the n elements per sequence with the top weights # using broadcasting mask_result = rank < n_to_keep[:,None]
https://stackoverflow.com/questions/73930566/
Einsum for shapes of different sizes or ranks
I have two PyTorch tensors. One is rank three and the other is rank four. Is there a way to get it so that it produces the rank and shape of the first tensor? For instance in this cross-attention bit: q = torch.linspace(1, 192, steps=192) q = q.reshape(2, 4, 3, 8) k = torch.linspace(2, 193, steps=192) k = k.reshape(2, 4, 3, 8) v = torch.linspace(3, 194, steps=192) v = v.reshape(2, 4, 24) k = k.permute(0, 3, 2, 1) attn = torch.einsum("nchw,nwhu->nchu", q, k) # Below is what doesn't work. I would like to get it such that hidden_states is a tensor of shape (2, 4, 24) hidden_states = torch.einsum("chw,whu->chu", attn, v) Is there a permutation/transpose I could apply to q, k, v, or attn that would allow me to multiply into (2, 4, 24)? I have yet to find one. I currently receive this error: "RuntimeError: einsum(): the number of subscripts in the equation (3) does not match the number of dimensions (4) for operand 0 and no ellipsis was given" so I'm wondering how to use the ellipsis in this case, if that could be a solution. Any explanation as to why this is or isn't possible would also be an accepted answer!
It seems like your q and k are 4D tensors of shape batch-channel-height-width (2x4x3x8). However, when considering the attention mechanism, one disregards the spatial arrangement of the features and only treats them as a "bag of features". That is, instead of q and k of shape 2x4x3x8 you should have 2x4x24: q = torch.linspace(1, 192, steps=192) q = q.reshape(2, 4, 3 * 8) # collapse the spatial dimensions into a single one k = torch.linspace(2, 193, steps=192) k = k.reshape(2, 4, 3 * 8) # collapse the spatial dimensions into a single one v = torch.linspace(3, 194, steps=192) v = v.reshape(2, 4, 24) attn = torch.einsum("bcn,bcN->bnN", q, k) # it is customary to convert the raw attn into probabilities using softmax attn = torch.softmax(attn, dim=-1) hidden_states = torch.einsum("bnN,bcN->bcn", attn, v)
https://stackoverflow.com/questions/73931002/
Resizing The Spatial Dimensions Of a 5D Tensor
For training a deep-learning-based model, I have an input tensor of size [batch_size=32, channels=3, Temporal=16, H=128, W=192] that contains the frames of a video. I need to resize the spatial size of the tensor to (H=224, W=224). In other words, I need a tensor of size [batch_size=32, channels=3, Temporal=16, H=224, W=224]. How can I do that?
You can use torch.nn.functional.interpolate: import torch.nn.functional as nnf y = nnf.interpolate(x, size=(x.shape[2], 224, 224), mode='trilinear')
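For the shapes in the question, a quick check that this keeps the temporal dimension and only rescales the spatial ones:

import torch
import torch.nn.functional as nnf

x = torch.randn(32, 3, 16, 128, 192)
y = nnf.interpolate(x, size=(x.shape[2], 224, 224), mode='trilinear')
print(y.shape)  # torch.Size([32, 3, 16, 224, 224])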
https://stackoverflow.com/questions/73935523/
How does a pytorch function (such as RoIPool) work?
For example, I'm trying to view the implementation of RoI Pooling in pytorch. Here is a code fragment showing how to use RoIPool in pytorch import torch from torchvision.ops.roi_pool import RoIPool device = torch.device('cuda') # create feature layer, proposals and targets num_proposals = 10 feature_map = torch.randn(1, 64, 32, 32) proposals = torch.zeros((num_proposals, 4)) proposals[:, 0] = torch.randint(0, 16, (num_proposals,)) proposals[:, 1] = torch.randint(0, 16, (num_proposals,)) proposals[:, 2] = torch.randint(16, 32, (num_proposals,)) proposals[:, 3] = torch.randint(16, 32, (num_proposals,)) roi_pool_obj = RoIPool(3, 2**-1) roi_pool = roi_pool_obj(feature_map, [proposals]) I'm using pychram, so when I follow RoIPool from the second line, it opens a file located at ~/anaconda3/envs/CV/lib/python3.8/site-package/torchvision/ops/roi_pool.py, which is exactly the same as codes in the documentation. I pasted the code below without documentations. from typing import List, Union import torch from torch import nn, Tensor from torch.jit.annotations import BroadcastingList2 from torch.nn.modules.utils import _pair from torchvision.extension import _assert_has_ops from ..utils import _log_api_usage_once from ._utils import convert_boxes_to_roi_format, check_roi_boxes_shape def roi_pool( input: Tensor, boxes: Union[Tensor, List[Tensor]], output_size: BroadcastingList2[int], spatial_scale: float = 1.0, ) -> Tensor: if not torch.jit.is_scripting() and not torch.jit.is_tracing(): _log_api_usage_once(roi_pool) _assert_has_ops() check_roi_boxes_shape(boxes) rois = boxes output_size = _pair(output_size) if not isinstance(rois, torch.Tensor): rois = convert_boxes_to_roi_format(rois) output, _ = torch.ops.torchvision.roi_pool(input, rois, spatial_scale, output_size[0], output_size[1]) return output class RoIPool(nn.Module): def __init__(self, output_size: BroadcastingList2[int], spatial_scale: float): super().__init__() _log_api_usage_once(self) self.output_size = output_size self.spatial_scale = spatial_scale def forward(self, input: Tensor, rois: Tensor) -> Tensor: return roi_pool(input, rois, self.output_size, self.spatial_scale) def __repr__(self) -> str: s = f"{self.__class__.__name__}(output_size={self.output_size}, spatial_scale={self.spatial_scale})" return s So, in the code example: When running roi_pool_obj = RoIPool(3, 2**-1) it will create an instance of RoIPool by calling its __init__ method, which only initialized two instance variables; When running roi_pool = roi_pool_obj(feature_map, [proposals]), it must have called the forward() method (but I don't know how) which then called the roi_pool() function above; When running the roi_pool() function, it did some checking first and then computed output with the line output, _ = torch.ops.torchvision.roi_pool(input, rois, spatial_scale, output_size[0], output_size[1]). But this doesn't show details of how roi_pool is implemented and pycharm showed Cannot find declaration to go to when I tried to follow torch.ops.torchvision.roi_pool. To summarize, I have two questions: How does the forward() called by running roi_pool = roi_pool_obj(feature_map, [proposals])? How can I view the source code of torch.ops.torchvision.roi_pool or where is the file containing it's implementaion located? Last but not least, I've just started reading source code which is pretty difficult for me. I'd appreciate it if you can also provide some advice or tutorials.
RoIPool is a subclass of torch.nn.Module. Source code: https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L56 nn.Module defines the __call__ method, which in turn calls the forward method. Source code: https://github.com/pytorch/pytorch/blob/b2311192e6c4745aac3fdd774ac9d56a36b396d4/torch/nn/modules/module.py#L1234 When you execute the roi_pool = roi_pool_obj(feature_map, [proposals]) statement, the __call__ method uses the forward() of RoIPool. Source code: https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L67 RoIPool.forward calls torch.ops.torchvision.roi_pool. https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L52 ops is an object which loads native libraries implemented in C++: https://github.com/pytorch/pytorch/blob/b2311192e6c4745aac3fdd774ac9d56a36b396d4/torch/_ops.py#L537 so when you call torch.ops.torchvision it will use the torchvision library. Here the roi_pool function is registered: https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/roi_pool.cpp#L53 Here you can find the actual implementation of roi_pool CPU: https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/cpu/roi_pool_kernel.cpp GPU: https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/cuda/roi_pool_kernel.cu
https://stackoverflow.com/questions/73938616/
Problem installing matplotlib using conda and virtual environment. ModuleNotFoundError
I have tried to install matplotlib in my environment "cs323V2"with the following commands: (cs323V2) conda install matplotlib Collecting package metadata (current_repodata.json): done Solving environment: done # All requested packages already installed. Retrieving notices: ...working... done (cs323V2) python Python 2.7.5 (default, Nov 16 2020, 22:23:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux2 Type "help", "copyright", "credits" or "license" for more information. [3]+ Stopped python >>> import matplotlib Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'matplotlib'
Use the conda-forge channel. conda install -c conda-forge matplotlib I am not really sure why a plain conda install matplotlib does not work here.
https://stackoverflow.com/questions/73939717/
How to combine two tensors of different shapes in pytorch?
I am currently doing this: repeat_vals = [x.shape[0] // pfinal.shape[0]] + [-1] * (len(pfinal.shape) - 1) x = torch.cat((x, pfinal.expand(*repeat_vals)), dim=-1) The shape of x is [91, 6] and the shape of pfinal is [6, 6], but I am getting this error: RuntimeError: The expanded size of the tensor (15) must match the existing size (6) at non-singleton dimension 0. Target sizes: [15, -1]. Tensor sizes: [6, 6]
You cannot expand non-singleton dimensions. Furthermore, you cannot enforce len(x) to be a multiple of len(pfinal), so instead, depending on your needs, you can repeat pfinal more times than needed and then slice away the excess. Something that you can modify to fit your needs: >>> reps = len(x) // len(pfinal) + 1 >>> res = pfinal.repeat(reps, *[1]*(pfinal.ndim - 1))[:len(x)]
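For the shapes in the question, a quick check of how this plays out:

import torch

x = torch.randn(91, 6)
pfinal = torch.randn(6, 6)

reps = len(x) // len(pfinal) + 1                               # 16
res = pfinal.repeat(reps, *[1] * (pfinal.ndim - 1))[:len(x)]   # shape (91, 6)
out = torch.cat((x, res), dim=-1)                              # shape (91, 12)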
https://stackoverflow.com/questions/73946105/
Slicing audio given video frames
I have audio from a video that I've loaded with PyTorch. Given a starting index and ending index corresponding to the video segment of interest, along with the video FPS and audio sampling rate, how would I go about extracting the slice of audio that matches the segment of interest of the video? My intuition is to convert frames to time via: start_time = frame_start / fps end_time = frame_end / fps then convert time to sample position with: start_sample = int(math.floor(start_time * sr)) end_sample = int(math.floor(end_time * sr)) Is this correct? Or is there something I'm missing? I'm worried that there will be loss of information since I'm converting the samples into ints with floor.
Let's say you have fs = 44100 # audio sampling frequency vfr = 24 # video frame rate frame_start = 10 # index of first frame frame_end = 10 # index of last frame audio = np.arange(44100) # audio in the form of an ndarray You can calculate at which points in time you want to slice the audio: time_start = frame_start / vfr time_end = frame_end / vfr # or (frame_end + 1) / vfr for inclusive cut and then to which samples those points in time correspond: sample_start_idx = int(time_start * fs) sample_end_idx = int(time_end * fs) It's up to you if you want to be super-precise and take into account the fact that audio corresponding to a given frame should rather start half a frame before the frame and end half a frame after it. In such a case use: time_start = np.clip((frame_start - 0.5) / vfr, 0, np.inf) time_end = (frame_end + 0.5) / vfr
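With the indices in hand, the slice itself is then simply:

audio_slice = audio[sample_start_idx:sample_end_idx]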
https://stackoverflow.com/questions/73947518/
Multi-target loss recommendations
I'm working on a classification problem. The number of classes is 5. I have a ground truth vector that has the shape (3) instead of 1. The values in this target vector are the possible classes and the predicted vector is of the shape (1x5) which holds the softmax scores for all the classes. For example: predicted_vector = tensor([0.0669, 0.1336, 0.3400, 0.3392, 0.1203] ground_truth = tensor([3,2,5]) For the above illustration, a typical argmax operation would result in declaring class 3 as the predicted class (0.34) but I want the model to reward even if the argmax class is any of 3,2, or 5. Which loss function is recommended for such a use case?
For this problem, a given sample is in exactly one class (say, class 3), but for training purposes, predicting class 2 or 5 is still okay, so the model isn't penalised that heavily. This is a typical single-label, multi-class problem, but with probabilistic (“soft”) labels, and CrossEntropyLoss should be used (without applying softmax() yourself, since CrossEntropyLoss expects raw logits). In this example, the (soft) target might be a probability of 0.7 for class 3, a probability of 0.2 for class 2, and a probability of 0.1 for class 5 (and zero for everything else).
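A minimal sketch (assuming PyTorch 1.10+, where CrossEntropyLoss accepts class probabilities as targets):

import torch
import torch.nn as nn

logits = torch.randn(1, 5)                          # raw network outputs; do not apply softmax
target = torch.tensor([[0.0, 0.2, 0.7, 0.0, 0.1]])  # soft labels: class 2 -> 0.2, class 3 -> 0.7, class 5 -> 0.1
loss = nn.CrossEntropyLoss()(logits, target)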
https://stackoverflow.com/questions/73948538/
Libtorch C++: Efficient/correct way for saving/loading Model and Optimizer State Dict for retraining
I am looking for the correct and most efficient way of saving, loading, and retraining a model in Libtorch (C++) with both the model and optimizer state dict. I believe I have everything correctly set (however this may not be right for saving and loading optimizer state dicts, only the model state dict I am absolutely sure of), my last question is where I set the Optimizer and give it the model parameters. Saving Model and Optimizer: // Save model state torch::serialize::OutputArchive output_model_archive; myModel.to(torch::kCPU); myModel.save(output_model_archive); output_model_archive.save_to(model_state_dict_path); // Save optim state torch::serialize::OutputArchive output_optim_archive; myOptimizer->save(output_optim_archive); output_optim_archive.save_to(optim_state_dict_path); Loading model and optim state for retraining. // Load model state torch::serialize::InputArchive input_archive; input_archive.load_from(state_dict); myModel.load(input_archive); // Load optim state torch::serialize::InputArchive input_archive; input_archive.load_from(state_dict); myOptimizer->load(input_archive); When creating the optimizer object, you need to give it the model parameters: std::shared_ptr<torch::optim::Optimizer> myOptimizer; myOptimizer.reset(new torch::optim::Adam(myModel.parameters(), torch::optim::AdamOptions(LR))); Should this be done before the state dicts are loaded, after, or does it matter? For example, I am doing it like: // Setup model and optimizer object, set model params in optimizer // Load state dictionaries... // Train epoch #n... myOptimizer->step(); // Save state dictionaries
To answer my own question, the model state dict needs to be loaded and then parameters put into the optimizer object. Then load the state dict into the optimizer object. My use case was a little more complicated as I was aggregating gradients from multiple nodes where training was happening and doing an optimizer step on a "master" node. I was trying to simplify the problem above for the question, and I assumed I did not need the previous state dict since I was aggregating gradients. That was an incorrect assumption. The flow looks like: // Load model state dict // Aggregate gradients // Load Optimizer state dict / params into optim // Step
https://stackoverflow.com/questions/73949214/
Repetitive word predictions in RNN
Hello dear community, I am training a Seq2Seq model to generate a question based on a graph. Both train and val loss are converging, but the generated questions (on either train or test set) are nonsense and contain mostly repetition of tokens. I tried various hyper parameters and double checked input and outputs tensors. Something that I do find odd is that the output out (see below) starts containing some values, which I consider as unusually high. This starts happening around half way through the first epoch: Out: tensor([[ 0.2016, 103.7198, 90.4739, ..., 0.9419, 0.4810, -0.2869]] My guess for that is vanishing/exploding gradients, which I thought I had handeled by gradient clipping, but now I am not sure about this: for p in model_params: p.register_hook(lambda grad: torch.clamp( grad, -clip_value, clip_value)) Below are the training curves (10K samples, batch size=128, lr=0.065, lr_decay=0.99, dropout=0.25) Encoder (a GNN, learning node embeddings of the input graph, that consists of around 3-4 nodes and edges. A single graph embedding is obtained by pooling the node embeddings and feeding them as the initial hidden state to the Decoder): class QuestionGraphGNN(torch.nn.Module): def __init__(self, in_channels, hidden_channels, out_channels, dropout, aggr='mean'): super(QuestionGraphGNN, self).__init__() nn1 = torch.nn.Sequential( torch.nn.Linear(in_channels, hidden_channels), torch.nn.ReLU(), torch.nn.Linear(hidden_channels, in_channels * hidden_channels)) self.conv = NNConv(in_channels, hidden_channels, nn1, aggr=aggr) self.lin = nn.Linear(hidden_channels, out_channels) self.dropout = dropout def forward(self, x, edge_index, edge_attr): x = self.conv(x, edge_index, edge_attr) x = F.leaky_relu(x) x = F.dropout(x, p=self.dropout) x = self.lin(x) return x Decoder (The out vector from above is printed in the forward() function): class DecoderRNN(nn.Module): def __init__(self, embedding_size, output_size, dropout): super(DecoderRNN, self).__init__() self.output_size = output_size self.dropout = dropout self.embedding = nn.Embedding(output_size, embedding_size) self.gru1 = nn.GRU(embedding_size, embedding_size) self.gru2 = nn.GRU(embedding_size, embedding_size) self.gru3 = nn.GRU(embedding_size, embedding_size) self.out = nn.Linear(embedding_size, output_size) self.logsoftmax = nn.LogSoftmax(dim=1) def forward(self, inp, hidden): output = self.embedding(inp).view(1, 1, -1) output = F.leaky_relu(output) output = F.dropout(output, p=self.dropout) output, hidden = self.gru1(output, hidden) output = F.dropout(output, p=self.dropout) output, hidden = self.gru2(output, hidden) output, hidden = self.gru3(output, hidden) out = self.out(output[0]) print("Out: ", out) output = self.logsoftmax(out) return output, hidden I am using PyTorchs NLLLoss(). Optimizer is SGD. I call optimizer.zero_grad() right before the backward and optimizer step and I switch the training/evaluation mode for training, evaluation and testing. What are your thoughts on this? Thank you very much! EDIT Dimensions of the Encoder: in_channels=301 (This is the size of the initial node embeddings) hidden_channels=256 out_channels=301 (This will also be the size of the final graph embedding, after mean pooling the node embeddings) Dimensions of the Decoder: embedding_size=301 (the size of the previously pooled graph embedding) output_size=number of words in my vocabulary. 
In the training above around 1.2K I am using top-k sampling and my train loop follows the NMT Tutorial https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#training-the-model). Similarily, my translation function, that takes the data of a single graph, decodes a question as such: def translate(self, data): # Get node embeddings of the input graph h = self.encoder(data.node_embeddings, data.edge_index, data.edge_embeddings) # Pool node embeddings into single graph embedding graph_embedding = self.get_graph_embeddings(h, data.graph_dict) # Pass graph embedding through decoder self.encoder.eval() self.decoder.eval() with torch.no_grad(): # Initialize first input and hidden state decoder_input = decoder_input = torch.tensor( [[self.vocab.SOS['idx']]], device=self.device) decoder_hidden = graph_embedding.view(1, 1, -1) decoder_tokens = [] for di in range(self.dec_max_length): decoder_output, decoder_hidden = self.decoder( decoder_input, decoder_hidden) topv, topi = decoder_output.data.topk(1) if topi.item() == self.vocab.EOS['idx']: break else: word = self.vocab.index2word[topi.item()] word = word.upper( ) if word == self.vocab.UNK['token'].lower() else word decoder_tokens.append(word) decoder_input = topi.squeeze().detach() return decoder_tokens Also: At times, the output-vector of the final gru layer (self.gru3(...)) inside the forward() function (5th line from the bottom) outputs a lot of values being (close to) 1 and -1. I suppose these might otherwise be a lot higher/lower without clipping. This might be alright, but seems unusual to me. An example: tensor([[[-0.9984, -0.9950, 1.0000, -0.9889, -1.0000, -0.9770, -0.0299, -0.9996, 0.9996, 1.0000, -0.0176, -0.5815, -0.9998, -0.0265, -0.1471, 0.9998, -1.0000, -0.2356, 0.9964, 0.9936, -0.9998, 0.0652, -0.9999, 0.9999, -1.0000, -0.9998, -0.9999, 0.9998, -1.0000, -0.9997, 0.9850, 0.9994, -0.9998, -1.0000, -1.0000, 0.9977, 0.9015, -0.9982, 1.0000, 0.9980, -1.0000, 0.9859, 0.6670, 0.9998, 0.3827, 0.9999, 0.9953, -0.9989, 0.1287, 1.0000, 1.0000, -1.0000, 0.9778, 1.0000, 1.0000, -0.9907, ...
Your code looks good, and given the training/validation curves you posted, it looks like it's doing alright. How are you generating text samples? Are you just taking the word the model predicts with the highest probability, appending to the end of your input sequence, and calling forward again? This sampling technique, called greedy sampling, can lead to behavior you described. Maybe another sampling technique could help (see beam search https://medium.com/geekculture/beam-search-decoding-for-text-generation-in-python-9184699f0120)?
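As a minimal sketch, temperature-scaled top-k sampling could replace the greedy decoder_output.data.topk(1) pick in the translate function (this assumes decoder_output holds log-probabilities from the LogSoftmax layer; the k and temperature values are illustrative):

import torch

def sample_topk(log_probs, k=10, temperature=0.8):
    # keep only the k most likely tokens, rescale, then sample one of them
    topv, topi = log_probs.topk(k, dim=-1)
    probs = torch.softmax(topv / temperature, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topi.gather(-1, choice)

log_probs = torch.log_softmax(torch.randn(1, 500), dim=-1)  # stand-in for decoder_output
next_token = sample_topk(log_probs)                         # shape (1, 1)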
https://stackoverflow.com/questions/73949640/
How to record the Batch Normalization layers under the ONNX format?
I need to print the structure of a neural network from a Pytorch code. I resorted to use ONNX format: I used the torch.onnx.export function, but what happens is that the Batch Normalization layers are not recorded, since they are included in the convolutional layers (see here and here, for example). After some search, I found this Q&A on StackOverflow, which seems to provide a solution to the above problem, by adding the option training=TrainingMode.TRAINING. Unfortunately, as noted also by others on the web, this option seems to not work, as such option does not seem to be recognized. --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-208-8820886c111a> in <module> 10 input_names = ['input'], 11 output_names = ['output'], ---> 12 training=TrainingMode.TRAINING) 13 14 NameError: name 'TrainingMode' is not defined Below I provide an example code for showing the problem, I am currently working on Colab. import torch import torchvision from torch import nn device = "cuda" if torch.cuda.is_available() else "cpu" print(f"Using {device} device") class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.Network = nn.Sequential( nn.Conv2d(1,1,3), nn.BatchNorm2d(1,track_running_stats=False), nn.ReLU(), ) def forward(self,x): output = self.Network(x) return output model = NeuralNetwork().to(device) print(model) import onnx net=NeuralNetwork() torch.onnx.export(net, dummy_input, "test.onnx", verbose=True, export_params=True, opset_version=12, do_constant_folding=True, input_names = ['input'], output_names = ['output'], training=TrainingMode.TRAINING)
You need to import the definition of TrainingMode: from torch.onnx import TrainingMode Alternatively, reference it fully qualified without an import: torch.onnx.TrainingMode.TRAINING After that you can successfully export the onnx model (visualized with netron).
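For example, the export call from the question then becomes something like the following (dummy_input is not defined in the question, so a hypothetical input matching the Conv2d(1, 1, 3) model is used here; constant folding is turned off because it is not applied in training-mode export anyway):

import torch
from torch.onnx import TrainingMode

net = NeuralNetwork()
dummy_input = torch.randn(1, 1, 28, 28)  # hypothetical input shape
torch.onnx.export(net, dummy_input, "test.onnx",
                  export_params=True,
                  opset_version=12,
                  do_constant_folding=False,  # constant folding is skipped in training mode
                  input_names=['input'],
                  output_names=['output'],
                  training=TrainingMode.TRAINING)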
https://stackoverflow.com/questions/73950114/
GPU Usage While Training LSTM Model Using Darts
I am currently using the Darts library to do some time series forecasting using an LSTM. In my arguments I set the model to utilize my GPU and the output from the dashboard during training shows that my GPU is indeed being used. Looking at the documentation as well confirms that what I am seeing means the GPU is being used for training. https://unit8co.github.io/darts/userguide/gpu_and_tpu_usage.html. However, looking at my GPU in task manager it says that my GPU usage is at 0%. That does not make any sense to me? Does anyone know what goes on behind the scenes and might be able to explain why the GPU usage would be at 0% during training?
It might be that your GPU is under-used, e.g. because your CPU is not feeding it fast enough. Try playing with the num_loader_workers parameter, increasing the batch size, and reading the recommendations provided here.
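For illustration, a sketch of where those knobs live in darts (assuming a recent darts version; the model type and values are placeholders, and train_series stands for your own TimeSeries):

from darts.models import RNNModel

model = RNNModel(
    model="LSTM",
    input_chunk_length=24,                                   # placeholder value
    batch_size=512,                                          # larger batches keep the GPU busier
    pl_trainer_kwargs={"accelerator": "gpu", "devices": 1},
)
model.fit(train_series, num_loader_workers=4)                # extra workers so the CPU can keep up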
https://stackoverflow.com/questions/73955752/
Simple RNN Error "Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu" How to?
I'm working on a basic RNN-NLP classifier using PyTorch, and trying to use CUDA for acceleration (on Google Colab), but I can't solve this error. The code is written like this. error message Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu RNN class class RNN(nn.Module): def __init__(self, vocab_size, emb_size, hidden_size, output_size): super().__init__() self.hidden_size = hidden_size self.emb = nn.Embedding(vocab_size, emb_size) self.rnn = nn.RNN(emb_size, hidden_size, nonlinearity='tanh', batch_first=True) self.fc = nn.Linear(hidden_size, output_size) def forward(self, x): self.batch_size = x.size()[0] hidden = self.init_hidden() emb = self.emb(x) out, hidden = self.rnn(emb, hidden) out = self.fc(out[:, -1, :]) return out def init_hidden(self): hidden = torch.zeros(1, self.batch_size, self.hidden_size) return hidden device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") Setting var VOCAB_SIZE = len(word_id.keys()) + 1 EMB_SIZE = 300 OUTPUT_SIZE = 4 HIDDEN_SIZE = 50 model = RNN(VOCAB_SIZE, EMB_SIZE, HIDDEN_SIZE, OUTPUT_SIZE) model = model.to(device) Predict for i in range(10): # input element at index i of the dataset list X, y = dataset_train[i] X = X.to(device) print(torch.softmax(model(X.unsqueeze(0)), dim=1)) This code works on CPU, but not on GPU. Following the error, I tried some fixes, e.g. hidden.to(device), but I couldn't solve it. Please, can someone tell me how to solve this? Thank you very much.
Doesn't doing something like the following work? device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class RNN(nn.Module): def __init__(self, vocab_size, emb_size, hidden_size, output_size): super().__init__() self.hidden_size = hidden_size self.emb = nn.Embedding(vocab_size, emb_size) self.rnn = nn.RNN(emb_size, hidden_size, nonlinearity='tanh', batch_first=True) self.fc = nn.Linear(hidden_size, output_size) self.to(device) def forward(self, x): self.batch_size = x.size()[0] hidden = self.init_hidden() emb = self.emb(x) out, hidden = self.rnn(emb, hidden) out = self.fc(out[:, -1, :]) return out def init_hidden(self): hidden = torch.zeros(1, self.batch_size, self.hidden_size).to(device) return hidden
https://stackoverflow.com/questions/73956503/
NotImplementedError when using torch.onnx.export
I am trying to convert a pre-saved PyTorch model into a TensorFlow one via ONNX. For now, the following code is to export the model into .onnx format. The neural network has 2 inputs, one hidden layer with 5 neurons and a scalar output. Here's the code I'm working with: import torch.nn as nn from torch.autograd import Variable import numpy as np class Model(nn.Module): def __init__(self, n_h_layers, n_h_neurons, dim_in, dim_out, in_bound, out_bound): super(Model,self).__init__() self.n_h_layers=n_h_layers self.n_h_neurons=n_h_neurons self.dim_in=dim_in self.dim_out=dim_out self.in_bound=in_bound self.out_bound=out_bound layer_input = [nn.Linear(dim_in, n_h_neurons, bias=True)] layer_output = [nn.ReLU(), nn.Linear(n_h_neurons, dim_out, bias=True), nn.Hardtanh(in_bound, out_bound)] # hidden layer module_hidden = [[nn.ReLU(), nn.Linear(n_h_neurons, n_h_neurons, bias=True)] for _ in range(n_h_layers - 1)] layer_hidden = list(np.array(module_hidden).flatten()) # nn model layers = layer_input + layer_hidden + layer_output self.model = nn.Sequential(*layers) print(self.model) trained_nn=torch.load('path') trained_model=Model(1,5,2,1,-1,1) trained_model.load_state_dict(trained_nn,strict=False) dummy_input=Variable(torch.randn(1,2)) torch.onnx.export(trained_model,dummy_input, 'file.onnx', verbose=True) I have two problems: Running this snippet raises "NonImplementedError" in _forward_unimplemented in module.py as follows: File ".../anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented raise NotImplementedError NotImplementedError I am not aware with Exception handling in python and I do not know what I must change in order to tackle the error. When I print trained_nn, this is what it gives me: OrderedDict([('0.weight', tensor([[ 0.2035, -0.7679], [ 1.6368, -0.4135], [-0.0908, -0.2335], [ 1.3731, -0.3135], [ 0.6361, 0.2521]])), ('0.bias', tensor([-1.6907, 0.7262, 1.4032, 1.2551, 0.8013])), ('2.weight', tensor([[-0.4603, -0.0719, 0.4082, -1.0235, -0.0538]])), ('2.bias', tensor([-1.1568]))]) However, printing trained_model.state_dict() gives me a neural network with a completely different set of weights and biases, although I believe that it should be giving me the exact same model as before as this is what I need to save as onnx file? OrderedDict([('model.0.weight', tensor([[ 0.4817, 0.0928], [-0.4313, 0.1253], [ 0.6681, -0.4029], [ 0.6474, 0.0029], [-0.4663, 0.5029]])), ('model.0.bias', tensor([-0.2292, 0.6674, -0.3755, 0.0778, 0.0527])), ('model.2.weight', tensor([[-0.2097, -0.3029, 0.2792, 0.2596, 0.1362]])), ('model.2.bias', tensor([-0.1835]))]) Not sure what mistakes I'm making. Any help is appreciated.
When you are making a subclass of nn.Module you need to implement the forward method. In your case you need to add: class Model(nn.Module): def __init__(self, n_h_layers, n_h_neurons, dim_in, dim_out, in_bound, out_bound): super(Model, self).__init__() ... def forward(self, x): return self.model(x) The names of the parameters do not match: model.0.weight != 0.weight model.0.bias != 0.bias The model. prefix is missing. So when you call load_state_dict() with strict=False the parameters will not be used. You can rename the parameters to match the model: trained_nn = torch.load('path') trained_nn = {f'model.{name}': w for name, w in trained_nn.items()} trained_model.load_state_dict(trained_nn, strict=True)
https://stackoverflow.com/questions/73959211/
How to get most of the outputs of the sigmoid function in the range (0, 0.5)?
I use sigmoid function in my last layer. we know that sigmoid function, limits the outputs of network in range (0,1). I want most of the outputs to be in range (0, 0.5) and very few of them to be in range [0.5, 1). How can I do this in Pytorch to get the desired output? The following Pytorch code snippet is related to this question: class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() # def block(in_feat, out_feat, normalize=True): layers = [nn.Linear(in_features=in_feat, out_features=out_feat)] if normalize: layers.append(nn.BatchNorm1d(out_feat)) layers.append(nn.LeakyReLU(0.2, inplace=True)) return layers # now we can use this function like below: self.model = nn.Sequential(*block(params.input_dim_generator, 500, normalize=False), *block(500, 350), *block(350, 256), nn.Linear(256, 564), nn.Sigmoid()) # forward def forward(self, old_vector, z): vector_app = torch.cat((old_vector, z), dim=1) new_vector = self.model(vector_app) new_result = torch.max(new_vector, old_vector).float() return new_result z is a random noise vector in range (0,1) and old_vector is a binary vector(the values are 0 or 1). the output of this model is: torch.tensor([0.5167, 0.5281, 0.5804, 0.4372, 1.0000, 1.0000, 1.0000, 0.5501, 1.0000, 0.6154, 1.0000, 1.0000, 0.4699, 0.5536, 0.5005, 0.4318, 0.5302, 0.4830, 0.5404, 0.3597, 0.4639, 0.5885, 0.4997, 0.5881, 0.5046, 0.5670, 0.3977, 0.5186, 0.5859, 0.5398, 0.3954, 0.4839, 0.3310, 0.5208, 0.5420, 0.5056, 0.5022, 0.6316, 0.6185, 0.5142, 0.5536, 0.4988, 0.5250, 0.4813, 0.5150, 0.4080, 1.0000, 1.0000, 1.0000, 0.6054, 0.4766, 0.4423, 0.4520, 0.4816, 0.5159, 0.4582, 1.0000, 0.4550, 0.4956, 1.0000, 0.5934, 1.0000, 0.4809, 0.5512, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 0.4024, 0.4822, 1.0000, 0.5310, 1.0000, 0.5127, 1.0000, 0.5441, 0.5063, 1.0000, 0.5511, 0.5544, 1.0000, 0.4585, 0.5211, 0.5758, 0.4355, 1.0000, 0.5297, 0.4582, 0.4170, 1.0000, 1.0000, 0.5257, 0.4194, 0.3583, 0.5087, 0.5936, 0.4851, 0.5697, 0.4261, 0.4736, 0.4551, 1.0000, 0.5667, 0.5650, 1.0000, 0.5069, 0.5901, 0.4980, 0.5184, 1.0000, 1.0000, 0.5435, 1.0000, 1.0000, 1.0000, 1.0000, 0.4521, 1.0000, 0.4509, 1.0000, 0.5067, 1.0000, 0.4152, 0.5034, 0.5735, 0.4040, 1.0000, 0.4492, 1.0000, 0.4405, 1.0000, 1.0000, 0.5667, 0.5639, 0.4013, 0.4357, 0.4437, 0.4510, 0.4225, 0.5091, 0.5057, 1.0000, 0.5237, 0.5098, 1.0000, 0.4216, 0.5242, 0.5335, 0.3916, 0.4938, 1.0000, 0.4070, 0.5210, 1.0000, 1.0000, 0.4050, 0.3960, 0.5750, 0.4906, 0.4991, 1.0000, 0.3149, 0.2949, 1.0000, 0.4515, 0.3627, 0.4348, 0.3887, 0.5807, 0.5787, 0.5781, 1.0000, 1.0000, 1.0000, 1.0000, 0.4919, 1.0000, 1.0000, 0.5554, 0.5515, 1.0000, 0.5472, 0.3342, 0.5705, 0.5076, 0.6348, 0.4436, 0.4683, 0.4228, 0.6506, 0.4540, 0.5333, 0.4512, 0.6037, 0.5173, 1.0000, 0.4466, 0.5644, 0.5565, 0.5141, 0.4771, 0.5822, 0.4888, 1.0000, 0.6331, 0.6435, 1.0000, 0.5012, 1.0000, 0.4864, 1.0000, 0.4994, 0.4326, 0.4347, 0.3606, 0.5829, 0.5229, 1.0000, 0.5992, 0.5883, 0.4825, 0.6254, 0.4951, 0.4285, 0.4982, 1.0000, 0.5847, 0.4131, 0.5194, 0.5270, 0.4856, 0.6182, 0.5578, 1.0000, 0.5460, 0.5023, 0.6279, 0.5727, 0.5997, 0.4903, 0.5633, 0.5070, 0.5013, 1.0000, 0.4179, 0.5529, 0.6254, 0.5767, 0.3939, 0.5791, 0.4936, 0.4714, 0.5150, 0.5717, 0.4570, 0.4463, 0.5493, 0.5179, 1.0000, 0.5682, 0.5451, 0.5266, 0.5571, 1.0000, 1.0000, 0.5506, 0.4710, 0.5951, 1.0000, 0.5027, 1.0000, 1.0000, 0.4960, 0.6269, 0.4817, 1.0000, 0.4059, 0.4787, 0.4419, 0.5479, 0.4830, 0.4709, 0.6106, 0.6154, 0.3958, 0.6434, 0.4626, 0.5954, 0.5083, 0.5121, 1.0000, 0.5139, 1.0000, 0.5428, 1.0000, 
0.5278, 0.5255, 0.5854, 0.4400, 0.4774, 0.4431, 0.4871, 0.3854, 0.6217, 0.5562, 0.4461, 0.5191, 0.5654, 0.4428, 0.5503, 0.5742, 1.0000, 0.4899, 1.0000, 0.5229, 0.5428, 0.4285, 0.3038, 0.3029, 0.5145, 0.6747, 0.5685, 0.5268, 0.4888, 0.6431, 0.5308, 0.6249, 0.4531, 0.5631, 0.4498, 0.4465, 0.5125, 0.5610, 1.0000, 0.5033, 0.5517, 1.0000, 0.4625, 0.5095, 1.0000, 0.3415, 0.4749, 1.0000, 0.4567, 1.0000, 0.4417, 0.5623, 1.0000, 0.4780, 0.4218, 1.0000, 0.5474, 0.6514, 0.5725, 0.4219, 0.5303, 0.3375, 0.5710, 0.5507, 0.3698, 0.4902, 0.6082, 0.5212, 0.5606, 0.5320, 0.4893, 0.3831, 0.4605, 0.5409, 0.4605, 0.5774, 0.5709, 0.5020, 0.5771, 0.4032, 0.5832, 0.4454, 0.4572, 0.4651, 0.4752, 0.5786, 0.4700, 0.3398, 0.4143, 0.4413, 0.4020, 0.6390, 0.5165, 0.4871, 0.6229, 0.4915, 1.0000, 0.4780, 0.5900, 0.4847, 0.4583, 0.5889, 0.4291, 0.4095, 0.5258, 1.0000, 0.4875, 1.0000, 0.5174, 0.4302, 1.0000, 0.5058, 0.5917, 0.5395, 0.3915, 0.4775, 0.4688, 0.4860, 0.4869, 0.4189, 1.0000, 0.6453, 0.4652, 0.5106, 0.4336, 0.4959, 0.5144, 1.0000, 1.0000, 0.4382, 0.5917, 1.0000, 0.5123, 0.4299, 0.5447, 1.0000, 0.5316, 0.4145, 0.5741, 1.0000, 0.4581, 0.5953, 1.0000, 0.4909, 0.3703, 0.3851, 0.5324, 1.0000, 0.6660, 1.0000, 0.5687, 0.4825, 0.5081, 0.5052, 0.6288, 0.5371, 0.4286, 1.0000, 0.6535, 0.5556, 0.5390, 0.3320, 1.0000, 0.6431, 0.5405, 1.0000, 0.3641, 0.4390, 0.6196, 0.4720, 0.5114, 0.4844, 0.4184, 0.6269, 1.0000, 0.4077, 0.3950, 0.4502, 1.0000, 0.4417, 0.4329, 0.5803, 0.4967, 0.5248, 0.5182, 0.4417, 0.4066, 0.6219, 0.3435, 1.0000, 0.4680, 1.0000, 0.5403, 0.4570, 1.0000, 0.5805, 1.0000, 0.5796, 0.5100, 0.6487, 0.4752, 0.4579, 0.6026, 0.5964, 0.5842, 0.3423, 0.5475, 0.4467, 0.4494, 0.4782, 0.6054, 0.4499, 0.4691, 0.4700, 0.5006, 0.5895, 0.3947, 0.5517, 0.4240, 0.5286, 0.4796, 0.5116, 0.5696, 0.4369, 0.4761, 0.5444, 0.4490, 0.6399, 0.5469, 0.5155, 0.5339, 0.5860, 0.6092, 0.4000, 0.4622, 0.4235, 0.5554, 0.4088, 0.5798, 0.5034, 0.4752, 0.4337, 0.4786, 0.5766, 0.4569, 0.5401, 0.4903, 0.4243, 0.3825, 0.6652, 0.4780, 0.5335, 0.4415, 0.5478, 0.3797, 1.0000, 0.6133, 0.5824, 0.4292, 0.5182, 0.3953, 0.5071, 0.5131, 0.4735, 1.0000, 0.3457, 0.5933, 0.5329])
As I noted, reducing the bias of the last layer is one way to decrease the outputs on average. Here's a script that tries a few different values for the bias and plots the effect on the distribution of outputs. import torch from torch import nn import matplotlib.pyplot as plt class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() # def block(in_feat, out_feat, normalize=True): layers = [nn.Linear(in_features=in_feat, out_features=out_feat)] if normalize: layers.append(nn.BatchNorm1d(out_feat)) layers.append(nn.LeakyReLU(0.2, inplace=True)) return layers # now we can use this function like below: self.model = nn.Sequential(*block(input_dim_generator, 500, normalize=False), *block(500, 350), *block(350, 256), nn.Linear(256, 564), nn.Sigmoid()) # forward def forward(self, old_vector, z): vector_app = torch.cat((old_vector, z), dim=1) new_vector = self.model(vector_app) new_result = torch.max(new_vector, old_vector).float() return new_result z = torch.rand(160 , 70) input_dim_generator = 634 old_vector = torch.randint(2,(160, 564)) gen = Generator() bias_shift = 0 bias_delta = -.5 fig, ax = plt.subplots(2,2, figsize = (8,8)) with torch.no_grad(): for i in range(4): ax_cur = ax[i//2][i%2] ax_cur.hist(gen(old_vector, z).numpy().ravel(),bins=20) ax_cur.set_title(f"{bias_shift} added to bias") bias_shift += bias_delta gen.model[-2].bias += bias_delta The resulting histograms: We can see that reducing the bias impacts the distribution of outputs, but doesn't seem to affect the number of outputs that are equal to exactly 1 (which I suspect come directly from the old vector).
https://stackoverflow.com/questions/73963805/
GPytorch Runtime Error has different input types
I am following the simple regression tutorial on gpytorch and get the following error when trying to use a 2-dimensional input space during a call to the loss function. RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED at "../aten/src/ATen/TensorIterator.cpp":405, please report a bug to PyTorch. I am not quite sure what it means. Everything but the training data is still the same as in: https://github.com/cornellius-gp/gpytorch/blob/master/examples/01_Exact_GPs/Simple_GP_Regression.ipynb
The issue was my conversion to a torch tensor. I used: torch.from_numpy(array) Instead, I should use: torch.tensor(array) This is weird, but there are no issues now.
https://stackoverflow.com/questions/73971653/
How do a put a different classifier on top of BertForSequenceClassification?
I have a huggingface model: model_name = 'bert-base-uncased' model = BertForSequenceClassification.from_pretrained(model_name, num_labels=1).to(device) How can I change the default classifier head? Since it's only a single LinearClassifier. I found this issue in the huggingface github which said: You can also replace self.classifier with your own model. model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased") model.classifier = new_classifier where new_classifier is any pytorch model that you want. However, I can't figure out how the structure of the new_classifier should look like (in particular the inputs and outputs so it can handle batches).
By looking at the source code of BertForSequenceClassification here, you can see that the classifier is simply a linear layer that projects the bert output from hidden_size dimension to num_labels dimension. Suppose you want to change the linear classifier to a two-layer MLP with ReLU activation; you can do the following: new_classifier = nn.Sequential( nn.Linear(config.hidden_size, config.hidden_size *2), nn.ReLU(), nn.Linear(config.hidden_size * 2, config.num_labels) ) model.classifier = new_classifier The requirement on the structure of your new classifier is that its input dimension and output dimension need to be config.hidden_size and config.num_labels accordingly. The structure of the classifier doesn't rely on the batch size, and modules like nn.Linear take (*, H_dimension) dimension as input, so you don't need to specify the batch size when creating the new classifier.
https://stackoverflow.com/questions/73975817/
How to understand this block of python code?
Found a piece of Python code that runs perfectly, but I couldn't understand how it works. Would appreciate it if you could explain the marked part (between the dashed lines) for me. I totally don't know what it does class BoxHead(nn.Module):#pending def __init__(self, lengths, num_classes): super(BoxHead, self).__init__() #------------------------------------------------------- self.cls_score = nn.Sequential(*tuple([ module for i in range(len(lengths) - 1) for module in (nn.Linear(lengths[i], lengths[i + 1]), nn.ReLU())] + [nn.Linear(lengths[-1], num_classes)])) #----------------------------------------------------------------------- self.bbox_pred = nn.Sequential(*tuple([ module for i in range(len(lengths) - 1) for module in (nn.Linear(lengths[i], lengths[i + 1]), nn.ReLU())] + [nn.Linear(lengths[-1], 4)]))
From reading the code, I take lengths to be a list of numbers. This is the input of nn.Sequential for cls_score: # lengths = [num1, num2, num3, ..., numN-1, numN] # numN-1 is just illustrative and not valid Python syntax [ nn.Linear(num1, num2), nn.ReLU(), nn.Linear(num2, num3), nn.ReLU(), nn.Linear(num3, num4), nn.ReLU(), ..., nn.Linear(numN-1, numN), nn.ReLU(), nn.Linear(numN, num_classes) ] The input of nn.Sequential for bbox_pred is similar, but its last item is nn.Linear(numN, 4)
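As a quick check (reusing the BoxHead class from the question, which needs import torch.nn as nn; the lengths values here are illustrative):

head = BoxHead(lengths=[8, 5], num_classes=3)
print(head.cls_score)
# Sequential(
#   (0): Linear(in_features=8, out_features=5, bias=True)
#   (1): ReLU()
#   (2): Linear(in_features=5, out_features=3, bias=True)
# )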
https://stackoverflow.com/questions/73981510/
Deep Learning: How does the training process in PyTorch work?
This picture shows my understanding of the process of running a model and my question about the process of training a model. Can anyone explain what happens then? Thanks to @Michael Hearn's answer, I gathered more information and filled in the picture.
Hopefully someone posts a better answer after mine later... Basically you are asking how a neural network functions. The model is trained on the training data: the data goes in and passes through nodes, which run functions on the information and change it as it passes through. Then an output is generated. The output is compared to the expected answer. Then, through the not-so-magical process of backpropagation, the network is corrected: nodes that were heavily responsible for giving wrong answers on the training set are changed, so that on the next pass through they will be more likely to give the right answer. Backpropagation with PyTorch
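As a minimal, self-contained sketch of one such training step in PyTorch (the toy model, data, and hyperparameters here are purely illustrative):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                  # a toy stand-in for a network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 10)                                    # a batch of training data
y = torch.tensor([0, 1, 0, 1])                            # the expected answers (labels)

output = model(x)                                         # forward pass through the nodes
loss = loss_fn(output, y)                                 # compare output to the expected answers

optimizer.zero_grad()
loss.backward()                                           # backpropagation: compute each weight's share of the error
optimizer.step()                                          # adjust the weights so the next pass is more likely right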
https://stackoverflow.com/questions/73981980/
Is there a torch function to derive union of two tensors?
There is a function that can be used to derive union of two tensors in numpy, as below: import torch import numpy as np a = torch.tensor([0, 1, 2]) b = torch.tensor([2, 3, 4]) c = np.union1d(a, b) # c = array([0, 1, 2, 3, 4]) c = torch.from_numpy(c) # c = torch.tensor([0, 1, 2, 3, 4]) However, I am looking for torch function that can be used directly on two tensors. If I use numpy function as above, I must cast the result from numpy to torch, and also must use cpu although the function is applied to tensors. Is there any union function in torch that can be used directly on two tensors? Or, at least, can it be simply implemented using other torch functions?
You can use: torch.cat((a, b)).unique()
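For the tensors in the question:

a = torch.tensor([0, 1, 2])
b = torch.tensor([2, 3, 4])
c = torch.cat((a, b)).unique()  # tensor([0, 1, 2, 3, 4])

Note that unique() returns sorted values by default, and the whole operation stays on the tensors' device, so it also works on the GPU.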
https://stackoverflow.com/questions/73987088/
Transform DVS128Gesture dataset
I am making a neural network in Python using the DVS128Gesture dataset. I want to transform the default 128x128 trinary frames to 32x32 binary frames, but when I try to use torchvision.transform in the dataset, I am getting this error: img should be PIL Image. Got <class 'numpy.lib.npyio.NpzFile'> My code: import torch import torchvision from spikingjelly.datasets.dvs128_gesture import DVS128Gesture train_data = DVS128Gesture(root_dir, train=True, data_type='event', transform=torchvision.transforms.Compose([ torchvision.transforms.Resize(32), torchvision.transforms.Normalize((0.0,), (0.8,)), torchvision.transforms.ToTensor() ])) test_data = DVS128Gesture(root_dir, train=False, data_type='event', transform=torchvision.transforms.Compose([ torchvision.transforms.Resize(32), torchvision.transforms.Normalize((0.0,), (0.8,)), torchvision.transforms.ToTensor() ])) train_loader = torch.utils.data.DataLoader(train_data, batch_size=bs, shuffle=True) test_loader = torch.utils.data.DataLoader(test_data, batch_size=bs, shuffle=True) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) example_data.shape I have done the same with the MNIST dataset and everything worked as expected. I think the problem is that I use torchvision.transform in DVS128Gesture, but I am not sure what else I can use. The same with MNIST: train_data = torchvision.datasets.MNIST(root_dir, train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.Resize(28), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.0,), (0.8,)) ])) test_data = torchvision.datasets.MNIST(root_dir, train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.Resize(28), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.0,), (0.8,)) ])) What am I doing wrong? 
Stack trace of error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [10], line 2 1 examples = enumerate(test_loader) ----> 2 batch_idx, (example_data, example_targets) = next(examples) 3 example_data.shape File p:\Programs\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py:681, in _BaseDataLoaderIter.__next__(self) 678 if self._sampler_iter is None: 679 # TODO(https://github.com/pytorch/pytorch/issues/76750) 680 self._reset() # type: ignore[call-arg] --> 681 data = self._next_data() 682 self._num_yielded += 1 683 if self._dataset_kind == _DatasetKind.Iterable and \ 684 self._IterableDataset_len_called is not None and \ 685 self._num_yielded > self._IterableDataset_len_called: File p:\Programs\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py:721, in _SingleProcessDataLoaderIter._next_data(self) 719 def _next_data(self): 720 index = self._next_index() # may raise StopIteration --> 721 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 722 if self._pin_memory: 723 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) File p:\Programs\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py:49, in _MapDatasetFetcher.fetch(self, possibly_batched_index) 47 def fetch(self, possibly_batched_index): 48 if self.auto_collation: ---> 49 data = [self.dataset[idx] for idx in possibly_batched_index] 50 else: 51 data = self.dataset[possibly_batched_index] File p:\Programs\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py:49, in <listcomp>(.0) 47 def fetch(self, possibly_batched_index): 48 if self.auto_collation: ---> 49 data = [self.dataset[idx] for idx in possibly_batched_index] 50 else: 51 data = self.dataset[possibly_batched_index] File p:\Programs\Anaconda3\lib\site-packages\torchvision\datasets\folder.py:232, in DatasetFolder.__getitem__(self, index) 230 sample = self.loader(path) 231 if self.transform is not None: --> 232 sample = self.transform(sample) 233 if self.target_transform is not None: 234 target = self.target_transform(target) File p:\Programs\Anaconda3\lib\site-packages\torchvision\transforms\transforms.py:94, in Compose.__call__(self, img) 92 def __call__(self, img): 93 for t in self.transforms: ---> 94 img = t(img) 95 return img File p:\Programs\Anaconda3\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs) 1126 # If we don't have any hooks, we want to skip the rest of the logic in 1127 # this function, and just call forward. 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] File p:\Programs\Anaconda3\lib\site-packages\torchvision\transforms\transforms.py:349, in Resize.forward(self, img) 341 def forward(self, img): 342 """ 343 Args: 344 img (PIL Image or Tensor): Image to be scaled. (...) 347 PIL Image or Tensor: Rescaled image. 348 """ --> 349 return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias) File p:\Programs\Anaconda3\lib\site-packages\torchvision\transforms\functional.py:430, in resize(img, size, interpolation, max_size, antialias) 428 warnings.warn("Anti-alias option is always applied for PIL Image input. 
Argument antialias is ignored.") 429 pil_interpolation = pil_modes_mapping[interpolation] --> 430 return F_pil.resize(img, size=size, interpolation=pil_interpolation, max_size=max_size) 432 return F_t.resize(img, size=size, interpolation=interpolation.value, max_size=max_size, antialias=antialias) File p:\Programs\Anaconda3\lib\site-packages\torchvision\transforms\functional_pil.py:249, in resize(img, size, interpolation, max_size) 240 @torch.jit.unused 241 def resize( 242 img: Image.Image, (...) 245 max_size: Optional[int] = None, 246 ) -> Image.Image: 248 if not _is_pil_image(img): --> 249 raise TypeError(f"img should be PIL Image. Got {type(img)}") 250 if not (isinstance(size, int) or (isinstance(size, Sequence) and len(size) in (1, 2))): 251 raise TypeError(f"Got inappropriate size arg: {size}") TypeError: img should be PIL Image. Got <class 'numpy.lib.npyio.NpzFile'>
The error probably comes from the Resize transform (can you provide more details on the stack trace of the error?). Resize is an image-specific transform, expecting a PIL image (or a torch Tensor, see the transform documentation), while your dataset DVS128Gesture outputs another object type.
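If integrated frames are acceptable for your use case, one possible (untested) sketch is to request frames instead of raw events, convert the numpy frames to a tensor first, and only then apply the tensor-compatible torchvision transforms (the frames_number and split_by values are illustrative):

import torch
import torchvision
from spikingjelly.datasets.dvs128_gesture import DVS128Gesture

transform = torchvision.transforms.Compose([
    torchvision.transforms.Lambda(lambda x: torch.from_numpy(x).float()),  # ndarray -> tensor first
    torchvision.transforms.Resize(32),
    torchvision.transforms.Normalize((0.0,), (0.8,)),
])

train_data = DVS128Gesture(root_dir, train=True, data_type='frame',
                           frames_number=16, split_by='number',
                           transform=transform)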
https://stackoverflow.com/questions/73990524/
PyTorch Lightning: What's the meaning of calling self?
Consider the following function: def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) return loss What's the meaning of calling self in this context? How can self act as a callable anyway? Thanks
Calling an object with object() is equivalent to calling the __call__ method of its class, defined as usual with def __call__(self, ...). So self(x) is equivalent to self.__call__(x); and since the class subclasses torch.nn.Module, this invokes Module._call_impl and eventually the forward pass. In other words, self(x) runs the model's forward method on x (plus any registered hooks).
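A tiny illustration of the mechanism, with a hypothetical class:

class Doubler:
    def __call__(self, x):
        return 2 * x

d = Doubler()
print(d(3))  # 6, identical to d.__call__(3)

In a LightningModule (which subclasses nn.Module), self(x) therefore goes through nn.Module.__call__, which runs any registered hooks and then self.forward(x).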
https://stackoverflow.com/questions/73991158/
Best way to implicitly change the value of nn.Parameter() in Pytorch?
Suppose that I want to optimize a vector v so that its norm is equal to 1. To do that, I defined a network with that vector as follows: class myNetwork(nn.Module): def __init__(self,initial_vector): super(myNetwork, self).__init__() #Define vector according to an initial column vector self.v = nn.Parameter(initial_vector) def forward(self,x): #Normalize vector so that its norm is equal to 1 self.v.data = self.v.data / torch.sqrt(self.v.data.transpose(1,0) @ self.v.data) #Multiply v times a row vector out = x @ self.v return out Is the use of .data the best way to update v? Does it take into account the normalization during backpropagation?
You could simply use def forward(self,x): return x @ self.v / self.v.norm() Depending on your loss, or the downstream layers of your network, you may even be able to skip normalization altogether: def forward(self,x): return x @ self.v This should work as long as your loss is invariant to scale; the norm should vary only slightly every step, but it is not strictly stable. If you are taking many steps, maybe it is worth adding a term tiny_value * ((myNetwork.v**2).sum() - 1)**2 to your loss, to make sure that the norm of v is attracted to 1.
https://stackoverflow.com/questions/73991950/
PyTorch: Running Inference on multiple GPUs
I have a model that accepts two inputs. I want to run inference on multiple GPUs where one of the inputs is fixed, while the other changes. So, let’s say I use n GPUs, each of them has a copy of the model. First gpu processes the input pair (a_1, b), the second processes (a_2, b) and so on. All the outputs are saved as files, so I don’t need to do a join operation on the outputs. How can I do this with DDP or otherwise?
I have figured out how to do this using torch.multiprocessing.Queue: import torch import torch.multiprocessing as mp from absl import app, flags from torchvision.models import AlexNet FLAGS = flags.FLAGS flags.DEFINE_integer("num_processes", 2, "Number of subprocesses to use") def infer(rank, queue): """Each subprocess will run this function on a different GPU which is indicated by the parameter `rank`.""" model = AlexNet() device = torch.device(f"cuda:{rank}") model.to(device) while True: a, b = queue.get() if a is None: # check for sentinel value break x = a + b x = x.to(device) model(x) del a, b # free memory print(f"Inference on process {rank}") def main(argv): queue = mp.Queue() processes = [] for rank in range(FLAGS.num_processes): p = mp.Process(target=infer, args=(rank, queue)) p.start() processes.append(p) for _ in range(10): a_1 = torch.randn(1, 3, 224, 224) a_2 = torch.randn(1, 3, 224, 224) b = torch.randn(1, 3, 224, 224) queue.put((a_1, b)) queue.put((a_2, b)) for _ in range(FLAGS.num_processes): queue.put((None, None)) # sentinel value to signal subprocesses to exit for p in processes: p.join() # wait for all subprocesses to finish if __name__ == "__main__": app.run(main)
https://stackoverflow.com/questions/73999265/
RuntimeError: NCCL Error 2: unhandled system error
I upgraded CUDA from 9.0 to 10.2 recently, but after the upgrade my demo below fails with "RuntimeError: NCCL Error 2: unhandled system error". I don't know why, and I tried to find an answer on GitHub and Stack Overflow but failed. So I hope someone can help me. import torch from torchvision import datasets, transforms import torchvision from tqdm import tqdm device_ids = [0, 1] # GPU BATCH_SIZE = 64 transform = transforms.Compose([transforms.ToTensor()]) data_train = datasets.MNIST(root = "./data/", transform=transform, train=True, download=True) data_test = datasets.MNIST(root="./data/", transform=transform, train=False) data_loader_train = torch.utils.data.DataLoader(dataset=data_train, batch_size=BATCH_SIZE * len(device_ids), shuffle=True, num_workers=2) data_loader_test = torch.utils.data.DataLoader(dataset=data_test, batch_size=BATCH_SIZE * len(device_ids), shuffle=True, num_workers=2) class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = torch.nn.Sequential( torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1), torch.nn.ReLU(), torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(stride=2, kernel_size=2), ) self.dense = torch.nn.Sequential( torch.nn.Linear(14 * 14 * 128, 1024), torch.nn.ReLU(), torch.nn.Dropout(p=0.5), torch.nn.Linear(1024, 10) ) def forward(self, x): x = self.conv1(x) x = x.view(-1, 14 * 14 * 128) x = self.dense(x) return x model = Model() model = torch.nn.DataParallel(model, device_ids=device_ids) model = model.cuda(device=device_ids[0]) cost = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters()) from time import sleep n_epochs = 50 for epoch in range(n_epochs): running_loss = 0.0 running_correct = 0 print("Epoch {}/{}".format(epoch, n_epochs)) print("-"*10) for data in tqdm(data_loader_train): X_train, y_train = data X_train, y_train = X_train.cuda(device=device_ids[0]), y_train.cuda(device=device_ids[0]) outputs = model(X_train) _,pred = torch.max(outputs.data, 1) optimizer.zero_grad() loss = cost(outputs, y_train) loss.backward() optimizer.step() running_loss += loss.data.item() running_correct += torch.sum(pred == y_train.data) testing_correct = 0 for data in data_loader_test: X_test, y_test = data X_test, y_test = X_test.cuda(device=device_ids[0]), y_test.cuda(device=device_ids[0]) outputs = model(X_test) _, pred = torch.max(outputs.data, 1) testing_correct += torch.sum(pred == y_test.data) print("Loss is:{:.4f}, Train Accuracy is:{:.4f}%, Test Accuracy is:{:.4f}".format(torch.true_divide(running_loss, len(data_train)), torch.true_divide(100*running_correct, len(data_train)), torch.true_divide(100*testing_correct, len(data_test)))) torch.save(model.state_dict(), "model_parameter.pkl") The error output follows.
Epoch 0/50 ---------- 0%| | 0/469 [00:00<?, ?it/s]7aea7ed215cf:50693:50693 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.14<0> 7aea7ed215cf:50693:50693 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation 7aea7ed215cf:50693:50693 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] 7aea7ed215cf:50693:50693 [0] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.14<0> 7aea7ed215cf:50693:50693 [0] NCCL INFO Using network Socket NCCL version 2.10.3+cuda10.2 7aea7ed215cf:50693:50809 [1] NCCL INFO Could not enable P2P between dev 1(=3e000) and dev 0(=3d000) 7aea7ed215cf:50693:50809 [1] NCCL INFO Could not enable P2P between dev 0(=3d000) and dev 1(=3e000) 7aea7ed215cf:50693:50809 [1] NCCL INFO Could not enable P2P between dev 1(=3e000) and dev 0(=3d000) 7aea7ed215cf:50693:50809 [1] NCCL INFO Could not enable P2P between dev 0(=3d000) and dev 1(=3e000) 7aea7ed215cf:50693:50808 [0] NCCL INFO Could not enable P2P between dev 1(=3e000) and dev 0(=3d000) 7aea7ed215cf:50693:50808 [0] NCCL INFO Could not enable P2P between dev 0(=3d000) and dev 1(=3e000) 7aea7ed215cf:50693:50808 [0] NCCL INFO Could not enable P2P between dev 1(=3e000) and dev 0(=3d000) 7aea7ed215cf:50693:50808 [0] NCCL INFO Could not enable P2P between dev 0(=3d000) and dev 1(=3e000) 7aea7ed215cf:50693:50809 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 7aea7ed215cf:50693:50808 [0] NCCL INFO Channel 00/02 : 0 1 7aea7ed215cf:50693:50808 [0] NCCL INFO Channel 01/02 : 0 1 7aea7ed215cf:50693:50809 [1] NCCL INFO Setting affinity for GPU 1 to 3ff003ff 7aea7ed215cf:50693:50808 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 7aea7ed215cf:50693:50808 [0] NCCL INFO Setting affinity for GPU 0 to 3ff003ff 7aea7ed215cf:50693:50809 [1] NCCL INFO Could not enable P2P between dev 1(=3e000) and dev 0(=3d000) 7aea7ed215cf:50693:50808 [0] NCCL INFO Could not enable P2P between dev 0(=3d000) and dev 1(=3e000) 7aea7ed215cf:50693:50809 [1] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device 7aea7ed215cf:50693:50809 [1] NCCL INFO include/shm.h:41 -> 2 7aea7ed215cf:50693:50809 [1] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-3bd03c4f9664d387-0-0-1 (size 9637888) 7aea7ed215cf:50693:50809 [1] NCCL INFO transport/shm.cc:100 -> 2 7aea7ed215cf:50693:50809 [1] NCCL INFO transport.cc:34 -> 2 7aea7ed215cf:50693:50809 [1] NCCL INFO transport.cc:84 -> 2 7aea7ed215cf:50693:50809 [1] NCCL INFO init.cc:778 -> 2 7aea7ed215cf:50693:50808 [0] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device 7aea7ed215cf:50693:50808 [0] NCCL INFO include/shm.h:41 -> 2 7aea7ed215cf:50693:50809 [1] NCCL INFO init.cc:904 -> 2 7aea7ed215cf:50693:50808 [0] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-3bd03c4f9664d387-0-1-0 (size 9637888) 7aea7ed215cf:50693:50808 [0] NCCL INFO transport/shm.cc:100 -> 2 7aea7ed215cf:50693:50808 [0] NCCL INFO transport.cc:34 -> 2 7aea7ed215cf:50693:50808 [0] NCCL INFO transport.cc:84 -> 2 7aea7ed215cf:50693:50808 [0] NCCL INFO init.cc:778 -> 2 7aea7ed215cf:50693:50809 [1] NCCL INFO group.cc:72 -> 2 [Async thread] 7aea7ed215cf:50693:50808 [0] NCCL INFO init.cc:904 -> 2 7aea7ed215cf:50693:50808 [0] NCCL INFO group.cc:72 -> 2 [Async thread] 7aea7ed215cf:50693:50693 [0] NCCL INFO init.cc:973 -> 2 0%| | 0/469 [00:03<?, ?it/s] Traceback (most recent call last): File "test.py", line 73, in <module> outputs = model(X_train) File 
"/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 172, in replicate return replicate(module, device_ids, not torch.is_grad_enabled()) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/replicate.py", line 91, in replicate param_copies = _broadcast_coalesced_reshape(params, devices, detach) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/replicate.py", line 71, in _broadcast_coalesced_reshape tensor_copies = Broadcast.apply(devices, *tensors) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 23, in forward outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus) File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/nn/parallel/comm.py", line 58, in broadcast_coalesced return torch._C._broadcast_coalesced(tensors, devices, buffer_size) RuntimeError: NCCL Error 2: unhandled system error
This is apparently caused by newer versions of NCCL including a data pathway which uses Linux shared memory for inter-node communication (see here). If that system is misconfigured or unavailable, then you might see this issue in any codebase which uses NCCL. Your two choices to fix this are: (1) correctly set up the Linux tmpfs system, or (2) use the NCCL_SHM_DISABLE environment variable to prevent NCCL from trying to use this data pathway (see the documentation here). This will force NCCL to fall back to a potentially slower data pathway.
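For reference, NCCL environment variables like this one just need to be set before the first collective call; a minimal sketch of doing it from Python (equivalently, export NCCL_SHM_DISABLE=1 in the shell before launching the script):
import os

# must run before the first NCCL communicator is created
os.environ["NCCL_SHM_DISABLE"] = "1"
# optional: verbose NCCL logging to confirm the chosen transport
os.environ["NCCL_DEBUG"] = "INFO"
With NCCL_DEBUG=INFO you should see NCCL report a socket or P2P transport instead of the failing shared-memory one.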
https://stackoverflow.com/questions/74003230/
How to apply torch.dot() to a matrix consisting of vectors in PyTorch?
I have tensors like this: arr1 = np.array([[ 1.6194, -0.6058, -0.8012], [ 1.1483, 1.6538, -0.8062]]) arr2 = np.array([[-0.3180, -1.8249, 0.0499], [-0.4184, 0.6495, -0.4911]]) X = torch.Tensor(arr1) Y = torch.Tensor(arr2) I want to apply torch.dot to every pair of corresponding 1D rows inside my 2D tensors: torch.dot(X, Y) I want to get a result like tensor([dotResult1, dotResult2]). But I got this error: RuntimeError: 1D tensors expected, but got 2D and 2D tensors My main purpose is to apply some operation to every vector inside my matrix, but I don't want to use looping here. Does anyone know how to do that?
Assuming what you are looking for is the tensor [torch.dot(X[0], Y[0]), torch.dot(X[1], Y[1])], you can do: (X*Y).sum(axis = 1) Test: (X*Y).sum(axis = 1) == torch.tensor([torch.dot(X[0], Y[0]),torch.dot(X[1], Y[1])]) outputs: tensor([True, True])
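For what it's worth, the same row-wise dot product can also be written with einsum, which generalizes to other batched reductions:
import torch

X = torch.randn(2, 3)
Y = torch.randn(2, 3)
# 'ij,ij->i': multiply elementwise, sum over the column index j, keep the row index i
row_dots = torch.einsum('ij,ij->i', X, Y)
assert torch.allclose(row_dots, (X * Y).sum(axis=1))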
https://stackoverflow.com/questions/74003689/
sum distribution of points on 2d graph
I have 10,000 2D points in a PyTorch tensor. The points are between -1 and 1, i.e. x ∈ [-1, 1]^2. I want to output a heat map to show the distribution of the points. In order to print the graph I am using matplotlib like so: torch.meshgrid(torch.linspace(-1, 1, 1000), torch.linspace(-1, 1, 1000)) ax.contourf(x_grid, y_grid, grid_values) However, I don't know how to calculate the grid values for this. How would you calculate the grid_values given the tensor of shape (10000, 2) of points on the graph? (Note that a point can appear multiple times on the graph and we want to sum the distribution of these points.)
You are describing a normal 2D histogram, possibly weighted (it's unclear whether your points have a value associated with them as well). plt.hist2d(x, y, weights=w, range=[[-1,1],[-1,1]], bins=1000) will also return the associated matrix, with values binned accordingly, which you can use to make contours.
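A minimal end-to-end sketch, assuming the points sit in an unweighted tensor of shape (10000, 2):
import torch
import matplotlib.pyplot as plt

points = torch.rand(10000, 2) * 2 - 1  # stand-in for your data, in [-1, 1]^2
x = points[:, 0].numpy()
y = points[:, 1].numpy()

# counts is the bins-by-bins matrix of summed points per cell
counts, xedges, yedges, image = plt.hist2d(x, y, range=[[-1, 1], [-1, 1]], bins=100)
plt.colorbar(image)
plt.show()
If you prefer contours over an image, counts can be passed to ax.contourf against grids built from the returned bin edges.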
https://stackoverflow.com/questions/74003838/
How to fine-tune gpt-j using Huggingface Trainer
I'm attempting to fine-tune gpt-j using the huggingface trainer and failing miserably. I followed the example that references bert, but of course, the gpt-j model isn't exactly like the bert model. The error indicates that the model isn't producing a loss, which is great, except that I have no idea how to make it generate a loss or how to change what the trainer is expecting. I'm using Transformers 4.22.2. I would like to get this working on a CPU before I try to do anything on Paperspace with a GPU. I did make an initial attempt there using a GPU that received the same error, with slightly different code to use cuda. I suspect that my approach is entirely wrong. I found a very old example of fine-tuning gpt-j using 8-bit quantization, but even that repository says it is deprecated. I'm unsure if my mistake is in using the compute_metrics() I found in the bert example or if it is something else. Any advice would be appreciated. Or, maybe it is an issue with the labels I provide the config, but I've tried different permutations. I understand what a loss function is, but I don't know how it is supposed to be configured in this case. My Code: from transformers import Trainer, TrainingArguments, AutoModelForCausalLM from transformers import GPTJForCausalLM, AutoTokenizer from datasets import load_dataset import time import torch import os import numpy as np import evaluate import sklearn start = time.time() GPTJ_FINE_TUNED_FILE = "./fine_tuned_models/gpt-j-6B" print("Loading model") model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", low_cpu_mem_usage=True) model.config.pad_token_id = model.config.eos_token_id print("Loading tokenizer") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") tokenizer.pad_token = tokenizer.eos_token print("Loading dataset") current_dataset = load_dataset("wikitext", 'wikitext-103-v1') current_dataset['train'] = current_dataset['train'].select(range(1200)) def tokenize_function(examples): current_tokenizer_result = tokenizer(examples["text"], padding="max_length", truncation=True) return current_tokenizer_result print("Splitting and tokenizing dataset") tokenized_datasets = current_dataset.map(tokenize_function, batched=True) small_train_dataset = tokenized_datasets["train"].select(range(100)) print("Preparing training arguments") training_args = TrainingArguments(output_dir=GPTJ_FINE_TUNED_FILE, report_to='all', logging_dir='./logs', per_device_train_batch_size=1, label_names=['input_ids', 'attention_mask'], # 'logits', 'past_key_values' num_train_epochs=1, no_cuda=True ) metric = evaluate.load("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset ) print("Starting training") trainer.train() print(f"Finished fine-tuning in {time.time() - start}") Which leads to the error and stacktrace: File "xxx\ft_v3.py", line 66, in <module> File "xxx\venv\lib\site-packages\transformers\trainer.py", line 1521, in train return inner_training_loop( File "xxx\venv\lib\site-packages\transformers\trainer.py", line 1763, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "xxx\venv\lib\site-packages\transformers\trainer.py", line 2499, in training_step loss = self.compute_loss(model, inputs) File "xxx\venv\lib\site-packages\transformers\trainer.py", line 2544, in compute_loss raise ValueError( ValueError: The model did not 
return a loss from the inputs, only the following keys: logits,past_key_values. For reference, the inputs it received are input_ids,attention_mask.
I found what appears to work, though now I'm running low on memory and working through ways of handling it. The data_collator parameter seems to take care of the exact issue that I was having. from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, data_collator=data_collator, )
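For context: with mlm=False this collator copies input_ids into a labels field (padding positions are set to -100 so they are ignored), which is what lets the model return a loss instead of just logits. A quick way to inspect that, assuming the tokenizer from the question:
batch = data_collator([tokenizer("hello world")])
print(batch.keys())  # expect input_ids, attention_mask, labels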
https://stackoverflow.com/questions/74014379/
Installed Pytorch 1.12 in the environment but detects version 1.10.0+cpu
Recently, I installed PyTorch 1.12.1 in a conda environment. After installation, I checked the version using print(torch.__version__), but it returns 1.10.0+cpu. I also checked the available packages in the environment; it shows pytorch version 1.12.1 as shown in the figure below. I am unable to understand why it is detecting version 1.10.0+cpu. I even reinstalled Anaconda Python on Windows, and it still shows the same version 1.10.0+cpu, even in the base environment. Can someone please help me figure it out?
I tried uninstalling the existing PyTorch version 1.10.0+cpu using pip/conda, but nothing worked. I was finally able to get rid of it by deleting the old Python folder at 'C:\Users\Vikrant\AppData\Roaming\Python'. I created a new environment and installed PyTorch 1.12.1. Now everything is working fine. Thanks!
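For anyone hitting the same symptom, a quick way to see which installation Python is actually importing (a stale per-user copy such as the one under AppData\Roaming\Python typically shadows the conda one, because user site-packages come earlier on sys.path):
import sys
import torch

print(torch.__version__)  # the version actually imported
print(torch.__file__)     # the path it was loaded from
print(sys.path)           # check whether a user site-packages dir precedes the env's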
https://stackoverflow.com/questions/74014816/
Unflatten in pytorch
I need to change the shape of a tensor from [2, 48, 196] to [2, 48, 14, 14]. I read there is an "unflatten" in PyTorch, but I couldn't understand how to use it. Is there any example?
Here is an example for your question. import torch input = torch.randn([2,48,196]) unflatten = torch.nn.Unflatten(2, (14,14)) output = unflatten(input) If you check output.shape, the shape is [2,48,14,14]. The Unflatten function expands a specific dim to a desired shape. In your case, you want to expand the size 196 in dim 2 into the new shape (14,14). There are two parameters in the Unflatten function. The first parameter is dim: the specific dimension which you want to unflatten; in your case, it is 2. The second parameter is unflattened_size: the new shape of the unflattened dimension of the tensor, so it is (14,14). Therefore, your Unflatten function should look like unflatten = torch.nn.Unflatten(2, (14,14))
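As a side note, recent PyTorch versions also expose this as a tensor method, so the module object is optional here; a minimal sketch:
import torch

input = torch.randn([2, 48, 196])
# expand dim 2 (size 196) into (14, 14) in one call
output = input.unflatten(2, (14, 14))
print(output.shape)  # torch.Size([2, 48, 14, 14])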
https://stackoverflow.com/questions/74019037/
AttributeError: 'DataLoader' object has no attribute '__getitem__'
I want to get the first image within Pytorch's dataloader as per this doc here: https://pytorch.org/vision/stable/generated/torchvision.datasets.CIFAR100.html However I get the error in the title when I try doing this: trainloader.__getitem__(0) Am I misunderstanding their docs? Here's my code: preprocess = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) trainset = torchvision.datasets.CIFAR100(root= "./data", train = True, transform=preprocess, download=True) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) trainloader.__getitem__(0)
Those are the docs for a torch Dataset, which does in fact have a __getitem__ implemented. However, you are calling __getitem__ on trainloader, a DataLoader which does not have that method. So this should do what you expect: trainset.__getitem__(0) When you get something from a Dataset, you are getting a single observation. DataLoaders should not be used to return single items in this manner; instead, they are for fetching a sampled batch of observations. That's why they don't support getting a single obs through __getitem__. It's also worth reading a bit about how __getitem__ works (for example here); you can use it to subscript an object directly. trainset[0] is equivalent to trainset.__getitem__(0).
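If the goal is really to grab a first sample through the loader, the idiomatic pattern is to pull one batch and index into it, for example:
images, labels = next(iter(trainloader))  # one batch of shape (batch_size, 3, 224, 224)
first_image, first_label = images[0], labels[0]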
https://stackoverflow.com/questions/74020283/
Unexpected number of fully connected neurons after padding with stride in pytorch
I'm trying to replicate the procedure of this paper (Re-)Imag(in)ing Price Trends, which trains a 2D CNN based on OHLC charts. They have images of different dimensions (32x15, 64x60 and 96x180) corresponding to 5, 20 and 60 daily bars, and thus three dimension-specific architectures. But I end up with a different number of neurons than they do in the fully connected layer for the 20-day horizon (64x60)... I followed the specification of their architecture, which can be summarised as: Number of blocks: (32x15): 2; (64x60): 3 and (96x180): 4 Fixed number of filters in 1st block: 64, else number doubles each convolutional block 5x3 (kernel size) convolutional filters (for all image types) 2x1 max-pooling filters (for all image types) vertical stride of 1, 3, and 3 (only in first layer) for 32x15, 64x60 and 96x180 respectively vertical dilation rate of 1, 2, and 3 (only in the first layer) for 32x15, 64x60 and 96x180 respectively padding such that output has SAME dimension as the image itself I suspect that my issue has something to do with the workaround for padding="same" in pytorch with asymmetric strides. Since strides can be >1, I went for the Conv2d workaround provided in this answer. According to their paper (see figure), the CNN for the 20-day horizon (64x60) should end up having 46080 neurons in the FC layer. Below is the code for my architecture, which gets the following error when resizing. RuntimeError: shape '[-1, 46080]' is invalid for input of size 30720: Clearly, the dimensions are incorrect and sensitive to changing the padding calculations. I cannot seem to get this right, and I'm not sure how I would otherwise figure out padding for each block in each specific model... Hope someone can help me out. Thanks in advance. import torch from torch import nn import math from functools import reduce from operator import __add__ import torch.nn.functional as F class Conv2dSame(nn.Conv2d): """ https://github.com/pytorch/captum/blob/optim-wip/captum/optim/models/_common.py#L144 """ def calc_same_pad(self, i: int, k: int, s: int, d: int) -> int: pad = max((math.ceil(i / s) - 1) * s + (k - 1) * d + 1 - i, 0) return pad def forward(self, x: torch.Tensor) -> torch.Tensor: ih, iw = x.size()[-2:] kh, kw = self.weight.size()[-2:] pad_h = self.calc_same_pad(i=ih, k=kh, s=self.stride[0], d=self.dilation[0]) pad_w = self.calc_same_pad(i=iw, k=kw, s=self.stride[1], d=self.dilation[1]) if pad_h > 0 or pad_w > 0: x = F.pad( x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2] ) return F.conv2d( x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups, ) class Net20(nn.Module): def __init__(self): super().__init__() self.layer1 = nn.Sequential( Conv2dSame(1, 64, kernel_size=(5,3), stride=(3,1), dilation=(2,1)), nn.BatchNorm2d(64), nn.LeakyReLU(negative_slope=0.01, inplace=True), nn.MaxPool2d((2, 1)) ) self.layer2 = nn.Sequential( Conv2dSame(64, 128, kernel_size=(5,3)), nn.BatchNorm2d(128), nn.LeakyReLU(negative_slope=0.01, inplace=True), nn.MaxPool2d((2, 1)) ) self.layer3 = nn.Sequential( Conv2dSame(128, 256, kernel_size=(5,3)), nn.BatchNorm2d(256), nn.LeakyReLU(negative_slope=0.01, inplace=True), nn.MaxPool2d((2, 1)) ) self.fc1 = nn.Sequential( nn.Dropout(p=0.5), nn.Linear(46080, 1), ) def forward(self, x): x = x.reshape(-1,1,64,60) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = x.reshape(-1,46080) # FC neurons according to paper x = self.fc1(x) return x
You're right, torch doesn't support 'same' padding for strided convolutions, so you have to implement it yourself, handling odd sizes with F.pad. First, I suggest you inspect MaxPool2d. By default, the output of a 3x3 image through MaxPool2d((2, 2)) has shape 1x1 (ignoring borders), but you may expect 2x2. Typically, if your height or width becomes odd you can have a problem. You can try adding the ceil_mode=True argument (see this for details). Now regarding the padding calculation, note that there are many possible implementations, because "same" padding is ambiguous in your case. I found another implementation that should also mimic TensorFlow's behaviour (see here). It does not seem to be exactly what you used, so you can try it: class Conv2dSame(nn.Conv2d): def calc_same_pad(self, i: int, k: int, s: int, d: int) -> int: # (i + s - 1) // s instead of ceil(i / s) pad = max(0, ((i + s - 1) // s - 1) * s + (k - 1) * d + 1 - i) return pad def forward(self, x: torch.Tensor) -> torch.Tensor: # ... pad_w, pad_h = self.calc_same_pad(...), self.calc_same_pad(...) w_odd, h_odd = pad_w % 2 == 1, pad_h % 2 == 1 if w_odd or h_odd: # Add 1 padding now for odd size x = F.pad(x, [0, int(w_odd), 0, int(h_odd)]) return F.conv2d( x, self.weight, bias=self.bias, stride=self.stride, padding=(pad_w // 2, pad_h // 2), # add the rest of the padding here dilation=self.dilation, groups=self.groups, ) Basically, if for instance i = s + 1 = 4, the two methods don't return the same thing. I'll let you explore that. Notes: the padding argument in Conv2dSame has no effect (which is logical, since the padding is calculated on the fly); the padding in two steps should be similar to what you did with [pad_w // 2, pad_w - pad_w // 2, ...], but I keep it to respect the original code (in case there is a strange behaviour in Conv2d, though I don't think so). Hope it solves the problem.
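One more sanity check that may help here: probe the flattened size empirically before wiring up fc1, using the conv blocks of the Net20 class from the question:
import torch

net = Net20()
x = torch.zeros(1, 1, 64, 60)  # one dummy 64x60 chart
x = net.layer3(net.layer2(net.layer1(x)))
print(x.shape, x.numel())
With floor pooling the vertical sizes go 64 -> 22 (stride-3 'same' conv) -> 11 -> 5 -> 2, i.e. 256*2*60 = 30720, exactly the size in the error message; with ceil_mode=True in the MaxPool2d layers they go 11 -> 6 -> 3 instead, i.e. 256*3*60 = 46080, matching the paper.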
https://stackoverflow.com/questions/74029235/