Weights & Biases sweep cannot import modules with pytorch lightning
I am training a variational autoencoder, using pytorch-lightning. My pytorch-lightning code works with a Weights and Biases logger. I am trying to do a hyperparameter search using a W&B parameter sweep. The hyperparameter search procedure is based on what I followed from this repo. The runs initialise correctly, but when my training script is run with the first set of hyperparameters, I get the following error: 2020-08-14 14:09:07,109 - wandb.wandb_agent - INFO - About to run command: /usr/bin/env python train_sweep.py --LR=0.02537477586974176 Traceback (most recent call last): File "train_sweep.py", line 1, in <module> import yaml ImportError: No module named yaml yaml is installed and is working correctly. I can train the network by setting the parameters manually, but not with the parameter sweep. Here is my sweep script to train the VAE: import yaml import numpy as np import ipdb import torch from vae_experiment import VAEXperiment import torch.backends.cudnn as cudnn from pytorch_lightning import Trainer from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.callbacks import EarlyStopping from vae_network import VanillaVAE import os import wandb from utils import get_config, log_to_wandb # Sweep parameters hyperparameter_defaults = dict( root='data_semantics', gpus=1, batch_size = 2, lr = 1e-3, num_layers = 5, features_start = 64, bilinear = False, grad_batches = 1, epochs = 20 ) wandb.init(config=hyperparameter_defaults) config = wandb.config def main(hparams): model = VanillaVAE(hparams['exp_params']['img_size'], **hparams['model_params']) model.build_layers() experiment = VAEXperiment(model, hparams['exp_params'], hparams['parameters']) logger = WandbLogger( project='vae', name=config['logging_params']['name'], version=config['logging_params']['version'], save_dir=config['logging_params']['save_dir'] ) wandb_logger.watch(model.net) early_stopping = EarlyStopping( monitor='val_loss', min_delta=0.00, patience=3, verbose=False, mode='min' ) runner = Trainer(weights_save_path="../../Logs/", min_epochs=1, logger=logger, log_save_interval=10, train_percent_check=1., val_percent_check=1., num_sanity_val_steps=5, early_stop_callback = early_stopping, **config['trainer_params'] ) runner.fit(experiment) if __name__ == '__main__': main(config) Why am I getting this error?
The problem was that the structure of my code and the order in which I was running the wandb commands were incorrect. Looking at this pytorch-lightning with wandb example shows the correct structure to follow. Here is my refactored code: #!/usr/bin/env python import wandb from utils import get_config #--------------------------------------------------------------------------------------------- def main(): """ The training function used in each sweep of the model. For every sweep, this function will be executed as if it is a script on its own. """ import wandb import yaml import numpy as np import torch from vae_experiment import VAEXperiment import torch.backends.cudnn as cudnn from pytorch_lightning import Trainer from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.callbacks import EarlyStopping from vae_network import VanillaVAE import os from utils import log_to_wandb, format_config path_to_config = 'sweep.yaml' config = get_config(path_to_config) path_to_defaults = 'defaults.yaml' param_defaults = get_config(path_to_defaults) wandb.init(config=param_defaults) config = format_config(config, wandb.config) model = VanillaVAE(config['meta']['img_size'], hidden_dims = config['hidden_dims'], latent_dim = config['latent_dim']) model.build_layers() experiment = VAEXperiment(model, config) early_stopping = EarlyStopping( monitor='val_loss', min_delta=0.00, patience=3, verbose=False, mode='min' ) runner = Trainer(weights_save_path=config['meta']['save_dir'], min_epochs=1, train_percent_check=1., val_percent_check=1., num_sanity_val_steps=5, early_stop_callback = early_stopping, **config['trainer_params']) runner.fit(experiment) log_to_wandb(config, runner, experiment, path_to_config) #--------------------------------------------------------------------------------------------- path_to_yaml = 'sweep.yaml' sweep_config = get_config(path_to_yaml) sweep_id = wandb.sweep(sweep_config) wandb.agent(sweep_id, function=main) #---------------------------------------------------------------------------------------------
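For reference, here is a minimal sketch of what the sweep configuration can look like when defined as a Python dict rather than loaded from sweep.yaml. The parameter names and ranges below are illustrative assumptions, not taken from the code above; adjust them to match your own training script.

import wandb

# hypothetical sweep configuration; wandb.sweep also accepts a plain dict
sweep_config = {
    'method': 'random',  # random search over the parameter space
    'metric': {'name': 'val_loss', 'goal': 'minimize'},
    'parameters': {
        'lr': {'min': 1e-4, 'max': 1e-1},
        'batch_size': {'values': [2, 4, 8]},
    },
}

sweep_id = wandb.sweep(sweep_config, project='vae')
wandb.agent(sweep_id, function=main, count=10)  # run 10 trials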
https://stackoverflow.com/questions/63412757/
tensor transformation in pytorch?
I have a tensor of shape (size, 1) and I want to convert it into one of shape (size, lookback, 1) by shifting its values. A pandas equivalent is below: size = 7 lookback = 3 data = pd.DataFrame(np.arange(size), columns=['out']) # input y = np.full((len(data), lookback, 1), np.nan) # required/output for j in range(lookback): y[:, j, 0] = data['out'].shift(lookback - j - 1).fillna(method="bfill") How can I achieve something similar in PyTorch? Example input: [0, 1, 2, 3, 4, 5, 6] Desired output: [[0. 0. 0.] [0. 0. 1.] [0. 1. 2.] [1. 2. 3.] [2. 3. 4.] [3. 4. 5.] [4. 5. 6.]]
You can use Tensor.unfold for this. First, though, you will need to pad the front of the tensor; for that you can use nn.functional.pad. E.g. import torch import torch.nn.functional as F size = 7 lookback = 3 data = torch.arange(size, dtype=torch.float) # pad front of data with 2 values # replicate padding requires a 3d, 4d, or 5d tensor, hence the creation of two unitary dimensions before padding data_padded = F.pad(data[None, None, ...], (lookback - 1, 0), 'replicate')[0, 0, ...] # unfold with window size of 3 and step size of 1 y = data_padded.unfold(dimension=0, size=lookback, step=1) which gives the output tensor([[0., 0., 0.], [0., 0., 1.], [0., 1., 2.], [1., 2., 3.], [2., 3., 4.], [3., 4., 5.], [4., 5., 6.]])
https://stackoverflow.com/questions/63416659/
How to solve "NotImplementedError"
I defined a three-layer convolution module (self.convs); the input tensor has shape [100, 10, 24]. x_convs = self.convs(Variable(torch.from_numpy(X).type(torch.FloatTensor))) >>Variable(torch.from_numpy(X).type(torch.FloatTensor)).shape torch.Size([100, 10, 24]) >>self.convs ModuleList( (0): ConvBlock( (conv): Conv1d(24, 8, kernel_size=(5,), stride=(1,), padding=(2,)) (relu): ReLU() (maxpool): AdaptiveMaxPool1d(output_size=10) (zp): ConstantPad1d(padding=(1, 0), value=0) ) (1): ConvBlock( (conv): Conv1d(8, 8, kernel_size=(5,), stride=(1,), padding=(2,)) (relu): ReLU() (maxpool): AdaptiveMaxPool1d(output_size=10) (zp): ConstantPad1d(padding=(1, 0), value=0) ) (2): ConvBlock( (conv): Conv1d(8, 8, kernel_size=(5,), stride=(1,), padding=(2,)) (relu): ReLU() (maxpool): AdaptiveMaxPool1d(output_size=10) (zp): ConstantPad1d(padding=(1, 0), value=0) ) ) When I execute x_convs = self.convs(Variable(torch.from_numpy(X).type(torch.FloatTensor))), it gives me the error `94 registered hooks while the latter silently ignores them. 95 """ ---> 96 raise NotImplementedError` The ConvBlock is defined as below: class ConvBlock(nn.Module): def __init__(self, T, in_channels, out_channels, filter_size): super(ConvBlock, self).__init__() padding = self._calc_padding(T, filter_size) self.conv=nn.Conv1d(in_channels, out_channels, filter_size, padding=padding) self.relu=nn.ReLU() self.maxpool=nn.AdaptiveMaxPool1d(T) self.zp=nn.ConstantPad1d((1, 0), 0) def _calc_padding(self, Lin, kernel_size, stride=1, dilation=1): p = int(((Lin-1)*stride + 1 + dilation*(kernel_size - 1) - Lin)/2) return p def forward(self, x): x = x.permute(0,2,1) x = self.conv(x) x = self.relu(x) x = self.maxpool(x) x = x.permute(0,2,1) return x The forward function has the correct indentation, so I cannot figure out what is going on.
You are trying to call a ModuleList, which is a list (i.e. a list object in Python), slightly modified for being used with PyTorch. A quick fix would be to call the self.convs as: x_convs = self.convs[0](Variable(torch.from_numpy(X).type(torch.FloatTensor))) if len(self.convs) > 1: for conv in self.convs[1:]: x_convs = conv(x_convs) That is, although self.convs is a list, each member of it is a Module. You can directly call each member of self.convs using its index, e.g. self.convs[an_index]. Or, you can do it with the help of the functools module: from functools import reduce def apply_layer(layer_input, layer): return layer(layer_input) output_of_self_convs = reduce(apply_layer, self.convs, Variable(torch.from_numpy(X).type(torch.FloatTensor))) P.S. The Variable wrapper is deprecated and no longer needed.
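The NotImplementedError itself comes from nn.Module.forward: nn.ModuleList registers its sub-modules but never defines a forward, so calling it directly falls through to the base class stub. The idiomatic fix is to iterate over the list inside your own module's forward. A minimal sketch (the surrounding ConvStack module is hypothetical, standing in for the model that owns self.convs):

import torch
import torch.nn as nn

class ConvStack(nn.Module):
    def __init__(self, blocks):
        super().__init__()
        # ModuleList registers the sub-modules but is not callable itself
        self.convs = nn.ModuleList(blocks)

    def forward(self, x):
        # apply each block in sequence instead of calling self.convs(x)
        for conv in self.convs:
            x = conv(x)
        return x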
https://stackoverflow.com/questions/63422307/
PyTorch Input and hidden tensors not on the same device
I'm creating a simple LSTM model to predict some sales data. I am trying to train it on a GPU, but there seems to be a problem with casting the hidden state tensor to cuda. I get the following error: RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu. How can I train the model on a GPU? I cast the training data, initial hidden states, and the model to cuda, yet I still get the error. Here's my code: # Convert train_norm from an array to a tensor train_norm = torch.FloatTensor(train_norm).view(-1).cuda() # define a window size window_size = 12 # Define function to create seq/label tuples def input_data(seq, ws): # ws is window size out = [] L = len(seq) for i in range(L-ws): window = seq[i:i+ws] label = seq[i+ws:i+ws+1] out.append((window, label)) return out # Apply the input_data function to the train_norm train_data = input_data(train_norm, window_size) class LSTM(nn.Module): def __init__(self, input_size=1, hidden_size=100, output_size=1): super().__init__() self.hidden_size = hidden_size # Add an LSTM layer: self.lstm = nn.LSTM(input_size, hidden_size) # Add a fully connected linear layer: self.linear = nn.Linear(hidden_size, output_size) # Initialize h0 and c0: self.hidden = (torch.zeros(1, 1, hidden_size).cuda(), torch.zeros(1, 1, hidden_size).cuda()) def forward(self, seq): lstm_out, self.hidden = self.lstm(seq.view(len(seq), 1, -1), self.hidden) pred = self.linear(lstm_out.view(len(seq), -1)) return pred[-1] # get only the last value model = LSTM().cuda() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) epochs = 200 import time start_time = time.time() for epoch in range(epochs): # Extract the sequence and label from the training data for seq, y_train in train_data: # Reset the parameters and hidden states optimizer.zero_grad() hidden = (torch.zeros(1, 1, model.hidden_size), torch.zeros(1, 1, model.hidden_size)) model.hidden = hidden # Predict the values y_pred = model(seq) # Calculate loss and perform backpropagation loss = criterion(y_pred, y_train) loss.backward() optimizer.step() print(f'epoch: {epoch+1:2} loss: {loss.item():10.8f}') print(f'Training took {time.time() - start_time:.0f} seconds')
First of all you are initializing hidden when there is absolutely no point to do it. If hidden isn't passed to LSTM layer it will be zero by default, please see documentation. This gives us the following model: class LSTM(nn.Module): def __init__(self, input_size=1, hidden_size=100, output_size=1): super().__init__() self.hidden_size = hidden_size # Add an LSTM layer: self.lstm = nn.LSTM(input_size, hidden_size) # Add a fully connected linear layer: self.linear = nn.Linear(hidden_size, output_size) def forward(self, seq): lstm_out, _ = self.lstm(seq.view(len(seq), 1, -1)) return self.linear(lstm_out.view(len(seq), -1)) Your pred[-1] is probably wrong as well as you are only returning the last element of batch from linear layer... Also your training should be this (see hidden removed and added cuda to seq and y_train): for epoch in range(epochs): # Extract the sequence and label from the training data for seq, y_train in train_data: # Reset the parameters and hidden states optimizer.zero_grad() # Predict the values # Add cuda to sequence y_pred = model(seq.cuda()) # Calculate loss and perform backpropagation loss = criterion(y_pred, y_train.cuda()) loss.backward() optimizer.step() print(f'epoch: {epoch+1:2} loss: {loss.item():10.8f}') print(f'Training took {time.time() - start_time:.0f} seconds') This alleviates problems with cuda (it's not a solution to hardcode it everywhere you possibly can...) and makes your code more readable.
https://stackoverflow.com/questions/63424990/
best way of tqdm for data loader
How do I use tqdm with a data_loader? Is this the correct way? for i,j in enumerate(data_loader,total = 100): pass
You need to wrap the iterable with tqdm, as their documentation clearly says: Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you’re done! If you're enumerating over an iterable, you can do something like the following. Sleep is only for visualizing it. from tqdm import tqdm from time import sleep data_loader = list(range(1000)) for i, j in enumerate(tqdm(data_loader)): sleep(0.01)
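Applied to a PyTorch DataLoader specifically, a minimal sketch might look like this (the TensorDataset here is a stand-in for your real dataset):

import torch
from torch.utils.data import DataLoader, TensorDataset
from tqdm import tqdm

dataset = TensorDataset(torch.randn(1000, 3), torch.randint(0, 2, (1000,)))
data_loader = DataLoader(dataset, batch_size=32)

# wrap the loader itself; tqdm infers the total from len(data_loader)
for i, (inputs, labels) in enumerate(tqdm(data_loader)):
    pass  # training step goes here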
https://stackoverflow.com/questions/63426545/
torch.save() gives : RuntimeError: CUDA error: no CUDA-capable device is detected
I was training a neural network model on a GPU, but I get the above mentioned error when I use torch.save() to save checkpoints. My question is: even though I have a CUDA device, why am I getting this error? My model was running okay on the GPU; please see below the output of the nvidia-smi command. $ nvidia-smi Sat Aug 15 09:51:58 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 2060 Off | 00000000:01:00.0 Off | N/A | | N/A 55C P3 33W / N/A | 4774MiB / 5934MiB | 97% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 7080 C python3 4763MiB | +-----------------------------------------------------------------------------+ $ python --version Python 3.8.2 $ python -c "import torch; print(torch.__version__)" 1.5.1 $ python -c "import torchvision as torch; print(torch.__version__)" 0.6.1 I have even tried the following: os.environ["CUDA_VISIBLE_DEVICES"] = '0' torch.save({ 'epoch': epoch + 1, 'metrics': metrics, 'model': model.state_dict(), 'optimizer' : optimizer.state_dict(), }, name) But nothing worked. I am new to deep learning and still learning PyTorch. Please do pardon my ignorance.
It seems to be some other problem. Test this toy example and see if the problem is in torch.save() or any other command. import torch import torchvision model = torchvision.models.resnet18() model = model.cuda() torch.save(model.state_dict(), 'net')
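If the CUDA device does turn out to be the problem, one workaround worth trying is to move the state dict to the CPU before saving, so torch.save never needs to touch the GPU. A minimal sketch:

import torch
import torchvision

model = torchvision.models.resnet18().cuda()
# copy every tensor in the state dict to CPU memory before serializing
cpu_state = {k: v.cpu() for k, v in model.state_dict().items()}
torch.save(cpu_state, 'net_cpu.pth')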
https://stackoverflow.com/questions/63426905/
How does torch.argmax work for 4-dimensions
I am a newbie in PyTorch. Even though I read the documentation, it is unclear to me how torch.argmax() works when applied to the first dimension of a 4-dimensional input. Also, how does keepdims=True change the output? Here is an example of each case: k = torch.rand(2, 3, 4, 4) print(k): tensor([[[[0.2912, 0.4818, 0.1123, 0.3196], [0.6606, 0.1547, 0.0368, 0.9475], [0.4753, 0.7428, 0.5931, 0.3615], [0.6729, 0.7069, 0.1569, 0.3086]], [[0.6603, 0.7777, 0.3546, 0.2850], [0.3681, 0.5295, 0.8812, 0.6093], [0.9165, 0.2842, 0.0260, 0.1768], [0.9371, 0.9889, 0.6936, 0.7018]], [[0.5880, 0.0349, 0.0419, 0.3913], [0.5884, 0.9408, 0.1707, 0.1893], [0.3260, 0.4410, 0.6369, 0.7331], [0.9448, 0.7130, 0.3914, 0.2775]]], [[[0.9433, 0.8610, 0.9936, 0.1314], [0.8627, 0.3103, 0.3066, 0.3547], [0.3396, 0.1892, 0.0385, 0.5542], [0.4943, 0.0256, 0.7875, 0.5562]], [[0.2338, 0.2498, 0.4749, 0.2520], [0.4405, 0.1605, 0.6219, 0.8955], [0.2326, 0.1816, 0.5032, 0.8732], [0.2089, 0.6131, 0.1898, 0.0517]], [[0.1472, 0.8059, 0.6958, 0.9047], [0.6403, 0.2875, 0.5746, 0.5908], [0.8668, 0.4602, 0.8224, 0.9307], [0.2077, 0.5665, 0.8671, 0.4365]]]]) argmax = torch.argmax(k, axis=1) print(argmax): tensor([[[1, 1, 1, 2], [0, 2, 1, 0], [1, 0, 2, 2], [2, 1, 1, 1]], [[0, 0, 0, 2], [0, 0, 1, 1], [2, 2, 2, 2], [0, 1, 2, 0]]]) argmax = torch.argmax(k, axis=1, keepdims=True) print(argmax): tensor([[[[1, 1, 1, 2], [0, 2, 1, 0], [1, 0, 2, 2], [2, 1, 1, 1]]], [[[0, 0, 0, 2], [0, 0, 1, 1], [2, 2, 2, 2], [0, 1, 2, 0]]]])
If k is a tensor of shape (2, 3, 4, 4), by definition, torch.argmax with axis=1 should give you an output of shape (2, 4, 4). To understand why this happens, you have to understand what happens in lower dimensions first. If I have a 2D (2, 2) tensor A, like: [[1,2], [3,4]] Then torch.argmax(A, axis=1) gives the output of shape (2) with values (1, 1). The axis argument means the axis along which to operate. So setting axis=1 means that it will look at values from each column one by one, before deciding a max. For row 0, it looks at column values 1, 2 and decides that 2 (at index 1) is the max. For row 1, it looks at column values 3, 4 and decides that 4 (at index 1) is the max. So the argmax result is [1, 1]. Moving up to 3D, let's have a hypothetical array of dimensions (I, J, K). If we call argmax with axis = 1, we can break it down to the following: I, J, K = 3, 4, 5 A = torch.rand(I, J, K) out = torch.zeros((I, K), dtype=torch.int32) for i in range(I): for k in range(K): out[i,k] = torch.argmax(A[i,:,k]) print(out) print(torch.argmax(A, axis=1)) Out: tensor([[3, 3, 2, 3, 2], [1, 1, 0, 1, 0], [0, 1, 0, 3, 3]], dtype=torch.int32) tensor([[3, 3, 2, 3, 2], [1, 1, 0, 1, 0], [0, 1, 0, 3, 3]]) So what happens is, in your 3D tensor, you're once again calculating argmax along the columns/axis 1. So for each unique pair of (i, k), you have exactly J values along the axis 1, right? The index of the maximum value within those J values is inserted into position (i,k) of the output. If you understand this, then you can understand what happens in 4D. For any 4D tensor of dimensions (I, J, K, L), if you call argmax with axis=1, then for each combination of (i, k, l) you'll have exactly J values along axis 1 - and the argmax of those J values will be present at output[i,k,l]. The keepdims argument merely preserves the number of dimensions of your tensor. For example, argmax at axis 1 on the 4D tensor gives a 3D result of shape (I,K,L), but using keepdims, the result will be 4D as well with the shape (I,1,K,L).
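To make the 4D case concrete, here is a short sketch that verifies the description above with explicit loops, in the same spirit as the 3D example:

import torch

I, J, K, L = 2, 3, 4, 4
A = torch.rand(I, J, K, L)
out = torch.zeros((I, K, L), dtype=torch.int64)
for i in range(I):
    for k in range(K):
        for l in range(L):
            # argmax over the J values along axis 1 for this (i, k, l)
            out[i, k, l] = torch.argmax(A[i, :, k, l])

# matches the built-in reduction along axis 1
assert torch.equal(out, torch.argmax(A, dim=1))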
https://stackoverflow.com/questions/63427246/
Extracting Intermediate layer outputs of a CNN in PyTorch
I am using a Resnet18 model. ResNet( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) (fc): Linear(in_features=512, out_features=1000, bias=True) ) I want to extract the outputs only from layer2, layer3, and layer4; I don't want the avgpool and fc outputs. How do I achieve this? class BasicBlock(nn.Module): def __init__(self, in_channels, out_channels, stride=1, padding=1) -> None: super(BasicBlock, self).__init__() self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, padding=padding, bias=False) self.bn1 = nn.BatchNorm2d(out_channels) self.relu = nn.ReLU(inplace=True) self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(out_channels) if in_channels != out_channels: l1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False) l2 = nn.BatchNorm2d(out_channels) self.downsample = nn.Sequential(l1, l2) else: self.downsample = None def forward(self, xb): prev = xb x = self.relu(self.bn1(self.conv1(xb))) x = self.bn2(self.conv2(x)) if self.downsample is not None: prev = self.downsample(xb) x = x + prev return self.relu(x) class CustomResnet(nn.Module): def __init__(self, pretrained:bool=True) -> None: super(CustomResnet, self).__init__() self.conv1 = nn.Conv2d(3, 64, kernel_size=7,stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = nn.Sequential(BasicBlock( 64, 64, stride=1), BasicBlock(64, 64)) self.layer2 = nn.Sequential(BasicBlock(64, 128, stride=2), BasicBlock(128, 128)) self.layer3 = nn.Sequential(BasicBlock(128, 256, stride=2), BasicBlock(256, 256)) self.layer4 = nn.Sequential(BasicBlock(256, 512, stride=2), BasicBlock(512, 512)) def forward(self, xb): x = self.maxpool(self.relu(self.bn1(self.conv1(xb)))) x = self.layer1(x) x2 = x = self.layer2(x) x3 = x = self.layer3(x) x4 = x = self.layer4(x) return [x2, x3, x4] I guess one solution would be the code above, but is there any other way without writing this whole lot of code? Also, is it possible to load the pre-trained weights provided by torchvision into the above modified ResNet model?
If you know how the forward method is implemented, then you can subclass the model and override the forward method only. If you are using the pre-trained weights of a model in PyTorch, then you already have access to the code of the model. So, find where the code of the model is, import it, subclass the model, and override the forward method. For example: class MyResNet18(ResNet): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def forward(self, xb): x = self.maxpool(self.relu(self.bn1(self.conv1(xb)))) x = self.layer1(x) x2 = x = self.layer2(x) x3 = x = self.layer3(x) x4 = x = self.layer4(x) return [x2, x3, x4] and you are done.
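To make that concrete for the torchvision ResNet-18 in the question, a minimal sketch could look like the following. This assumes the torchvision 0.6-era API with the pretrained flag; since the subclass has exactly the same layers as the stock model, every parameter name lines up and the pretrained state dict loads cleanly.

import torch
from torchvision.models import resnet

class MyResNet18(resnet.ResNet):
    def __init__(self):
        # same block/layer configuration as torchvision's resnet18
        super().__init__(resnet.BasicBlock, [2, 2, 2, 2])

    def forward(self, xb):
        x = self.maxpool(self.relu(self.bn1(self.conv1(xb))))
        x = self.layer1(x)
        x2 = x = self.layer2(x)
        x3 = x = self.layer3(x)
        x4 = x = self.layer4(x)
        return [x2, x3, x4]

model = MyResNet18()
# copy the pretrained weights into the subclass
model.load_state_dict(resnet.resnet18(pretrained=True).state_dict())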
https://stackoverflow.com/questions/63427771/
Getting model class labels from torchvision pretrained models
I am using a pre-trained Alexnet model (without fine-tuning) from torchvision. The issue is that even though I am able to run the model on some data and get the output probability distribution, I am unable to find class labels to map it to. Following this official documentation import torch model = torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True) model.eval() AlexNet( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU(inplace=True) (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (9): ReLU(inplace=True) (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(6, 6)) (classifier): Sequential( (0): Dropout(p=0.5, inplace=False) (1): Linear(in_features=9216, out_features=4096, bias=True) (2): ReLU(inplace=True) (3): Dropout(p=0.5, inplace=False) (4): Linear(in_features=4096, out_features=4096, bias=True) (5): ReLU(inplace=True) (6): Linear(in_features=4096, out_features=1000, bias=True) ) ) Following some steps on processing the image, I am able to use it to get an output for a single image as a (1,1000) dim vector which I will use a softmax on to get a probability distribution - #Output - tensor([-1.6531e+00, -4.3505e+00, -1.8172e+00, -4.2143e+00, -3.1914e+00, 3.4163e-01, 1.0877e+00, 5.9350e+00, 8.0425e+00, -7.0242e-01, -9.4130e-01, -6.0822e-01, -2.4097e-01, -1.9946e+00, -1.5288e+00, -3.2656e+00, -5.5800e-01, 1.0524e+00, 1.9211e-01, -4.7202e+00, -3.3880e+00, 4.3048e+00, -1.0997e+00, 4.6132e+00, -5.7404e-03, -5.3437e+00, -4.7378e+00, -3.3974e+00, -4.1287e+00, 2.9064e-01, -3.2955e+00, -6.7051e+00, -4.7232e+00, -4.1778e+00, -2.1859e+00, -2.9469e+00, 3.0465e+00, -3.5882e+00, -6.3890e+00, -4.4203e+00, -3.3685e+00, -5.0983e+00, -4.9006e+00, -5.5235e+00, -3.7233e+00, -4.0204e+00, 2.6998e-01, -4.4702e+00, -5.6617e+00, -5.4880e+00, -2.6801e+00, -3.2129e+00, -1.6294e+00, -5.2289e+00, -2.7495e+00, -2.6286e+00, -1.8206e+00, -2.3196e+00, -5.2806e+00, -3.7652e+00, -3.0987e+00, -4.1421e+00, -5.2531e+00, -4.6505e+00, -3.5815e+00, -4.0189e+00, -4.0008e+00, -4.5512e+00, -3.2248e+00, -7.7903e+00, -1.4484e+00, -3.8347e+00, -4.5611e+00, -4.3681e+00, 2.7234e-01, -4.0162e+00, -4.2136e+00, -5.4524e+00, 1.1744e+00, -4.7785e+00, -1.8335e+00, 4.1288e-01, 2.2239e+00, -9.9919e-02, 4.8216e+00, -8.4304e-01, 5.6911e-01, -4.0484e+00, -3.3013e+00, 2.8698e+00, -1.1419e+00, -9.1690e-01, -2.9284e+00, -2.6097e+00, -1.8213e-01, -2.5429e+00, -2.1095e+00, 2.2419e+00, -1.6280e+00, 7.4458e+00, 2.3184e+00, -5.7408e+00, -7.4332e-01, -5.4066e+00, 1.5177e+01, -4.4737e-02, 1.8237e+00, -3.7741e+00, 9.2271e-01, -4.3687e-01, -1.4003e+00, -4.3026e+00, 6.3782e-01, -1.0808e+00, -1.4173e+00, 2.6194e+00, -3.8418e+00, 1.1598e+00, -2.6876e+00, -3.6103e+00, -4.9281e+00, -4.1411e+00, -3.3603e+00, -3.4296e+00, -1.4997e+00, -2.8381e+00, -1.2843e+00, 1.5745e+00, -1.7449e+00, 4.2903e-01, 3.1234e-01, -2.8206e+00, 3.6688e-01, -2.1033e+00, 1.6481e+00, 1.4222e+00, -2.7303e+00, -3.6292e+00, 1.2864e+00, -2.5541e+00, -2.9663e+00, -4.1575e+00, -3.1954e+00, 
-4.6487e-01, 1.8916e+00, -7.4721e-01, 4.5986e+00, -2.5443e+00, -6.2003e+00, -1.3215e+00, -2.6225e+00, 9.9639e+00, 9.7772e+00, 9.6715e+00, 9.0857e+00,... Where do I get the class labels from? I couldn't find any method that lets me get that from the model object.
You cannot, unfortunately, get class label names directly from the torchvision models. However, these models are trained on the ImageNet dataset (hence the 1000 classes). You have to get the class name mapping off the web as far as I know; there's no way to get it off torch. Previously, you could download ImageNet directly using torchvision.datasets.ImageNet, which had a built-in label to class name converter. Now the download link isn't publicly available and requires a manual download, before it can be used by datasets.ImageNet. So you can simply search for the class to label mapping of ImageNet online, rather than downloading the data or attempting with torch. Try here for example.
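As a concrete sketch, one widely mirrored copy of the 1000 ImageNet class names is the imagenet_classes.txt file used in the PyTorch Hub examples. Assuming that URL is still live (it is an external resource, not part of torch), mapping the model output to labels could look like this:

import urllib.request
import torch

# widely used mirror of the ImageNet class names (assumption: still available)
url = 'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
classes = [line.strip() for line in
           urllib.request.urlopen(url).read().decode('utf-8').splitlines()]

output = torch.randn(1, 1000)             # stand-in for the model output
probs = torch.softmax(output, dim=1)
top_prob, top_idx = probs.topk(5, dim=1)  # five most likely classes
for p, i in zip(top_prob[0], top_idx[0]):
    print(classes[int(i)], float(p))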
https://stackoverflow.com/questions/63429260/
Simultaneous reads of the same PyTorch torchvision.datasets object
Consider the following piece of code to fetch a data set for training from torchvision.datasets and to create a DataLoader for it. import torch from torchvision import datasets, transforms training_set_mnist = datasets.MNIST('./mnist_data', train=True, download=True) train_loader_mnist = torch.utils.data.DataLoader(training_set_mnist, batch_size=128, shuffle=True) Assume that several Python processes have access to the folder ./mnist_data and execute the above piece of code simultaneously; in my case, each process is a different machine on a cluster and the data set is stored in an NFS location accessible by everyone. You may also assume that the data is already downloaded in this folder so download=True should have no effect. Moreover, each process may use a different seed, as set by torch.manual_seed(). I would like to know whether this scenario is allowed in PyTorch. My main concern is whether the above code can change the data folders or files in ./mnist_data such that if ran by multiple processes it can potentially lead to unexpected behavior or other issues. Also, given that shuffle=True I would expect that if 2 or more processes try to create the DataLoader each of them will get a different shuffling of the data assuming that the seeds are different. Is this true?
My main concern is whether the above code can change the data folders or files in ./mnist_data such that if ran by multiple processes it can potentially lead to unexpected behavior or other issues. You will be fine, as the processes are only reading the data, not modifying it (loading tensors with data into RAM, in the case of MNIST). Please notice that processes do not share memory addresses, hence the tensor with data will be loaded multiple times (which shouldn't be a big problem in the case of MNIST). Also, given that shuffle=True I would expect that if 2 or more processes try to create the DataLoader each of them will get a different shuffling of the data assuming that the seeds are different. shuffle=True has nothing to do with the data itself. What it does is get the __len__() of the provided dataset, make the range [0, __len__()), shuffle that range, and use it to index the dataset's __getitem__. Check out this section for more info about Samplers.
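To confirm the second point, a small sketch: with different manual seeds, two processes building the same DataLoader draw different shuffles (the toy TensorDataset here is a stand-in for MNIST):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float())

def first_batch(seed):
    # each process would set its own seed before building the loader
    torch.manual_seed(seed)
    loader = DataLoader(dataset, batch_size=10, shuffle=True)
    return next(iter(loader))[0]

print(first_batch(0))  # one permutation of 0..9
print(first_batch(1))  # a different permutation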
https://stackoverflow.com/questions/63431889/
Pytorch: The size of tensor a (24) must match the size of tensor b (48) at non-singleton dimension 3
The code below works fine and generates proper results. import torch import torch.nn as nn import torch.nn.functional as F from modules import ConvLSTMCell, Sign class EncoderCell(nn.Module): def __init__(self): super(EncoderCell, self).__init__() self.conv = nn.Conv2d( 3, 64, kernel_size=3, stride=2, padding=1, bias=False) self.rnn1 = ConvLSTMCell( 64, 256, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) self.rnn2 = ConvLSTMCell( 256, 512, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) self.rnn3 = ConvLSTMCell( 512, 512, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) def forward(self, input, hidden1, hidden2, hidden3): x = self.conv(input) hidden1 = self.rnn1(x, hidden1) x = hidden1[0] hidden2 = self.rnn2(x, hidden2) x = hidden2[0] hidden3 = self.rnn3(x, hidden3) x = hidden3[0] return x, hidden1, hidden2, hidden3 class Binarizer(nn.Module): def __init__(self): super(Binarizer, self).__init__() self.conv = nn.Conv2d(512, 32, kernel_size=1, bias=False) self.sign = Sign() def forward(self, input): feat = self.conv(input) x = F.tanh(feat) return self.sign(x) class DecoderCell(nn.Module): def __init__(self): super(DecoderCell, self).__init__() self.conv1 = nn.Conv2d( 32, 512, kernel_size=1, stride=1, padding=0, bias=False) self.rnn1 = ConvLSTMCell( 512, 512, kernel_size=3, stride=1, padding=1, hidden_kernel_size=1, bias=False) self.rnn2 = ConvLSTMCell( 128, 512, kernel_size=3, stride=1, padding=1, hidden_kernel_size=1, bias=False) self.rnn3 = ConvLSTMCell( 128, 256, kernel_size=3, stride=1, padding=1, hidden_kernel_size=3, bias=False) self.rnn4 = ConvLSTMCell( 64, 128, kernel_size=3, stride=1, padding=1, hidden_kernel_size=3, bias=False) self.conv2 = nn.Conv2d( 32, 3, kernel_size=1, stride=1, padding=0, bias=False) def forward(self, input, hidden1, hidden2, hidden3, hidden4): x = self.conv1(input) hidden1 = self.rnn1(x, hidden1) x = hidden1[0] x = F.pixel_shuffle(x, 2) hidden2 = self.rnn2(x, hidden2) x = hidden2[0] x = F.pixel_shuffle(x, 2) hidden3 = self.rnn3(x, hidden3) x = hidden3[0] x = F.pixel_shuffle(x, 2) hidden4 = self.rnn4(x, hidden4) x = hidden4[0] x = F.pixel_shuffle(x, 2) x = F.tanh(self.conv2(x)) / 2 return x, hidden1, hidden2, hidden3, hidden4 Now I have changed self.conv to use a layer from a pretrained ResNet, and it shows a tensor mismatch error during training. Everything else is the same; I only added the lines below.
I put ** around those lines: import torch import torch.nn as nn import torch.nn.functional as F import torchvision.models as models from modules import ConvLSTMCell, Sign class EncoderCell(nn.Module): def __init__(self): super(EncoderCell, self).__init__() #self.conv = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False) **resConv = models.resnet50(pretrained=True) resConv.layer4 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False) self.conv = resConv.layer4** self.rnn1 = ConvLSTMCell( 64, 256, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) self.rnn2 = ConvLSTMCell( 256, 512, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) self.rnn3 = ConvLSTMCell( 512, 512, kernel_size=3, stride=2, padding=1, hidden_kernel_size=1, bias=False) def forward(self, input, hidden1, hidden2, hidden3): x = self.conv(input) hidden1 = self.rnn1(x, hidden1) x = hidden1[0] hidden2 = self.rnn2(x, hidden2) x = hidden2[0] hidden3 = self.rnn3(x, hidden3) x = hidden3[0] return x, hidden1, hidden2, hidden3 class Binarizer(nn.Module): def __init__(self): super(Binarizer, self).__init__() self.conv = nn.Conv2d(512, 32, kernel_size=1, bias=False) self.sign = Sign() def forward(self, input): feat = self.conv(input) x = F.tanh(feat) return self.sign(x) class DecoderCell(nn.Module): def __init__(self): super(DecoderCell, self).__init__() **resConv = models.resnet50(pretrained=True) resConv.layer4 = nn.Conv2d(32, 512, kernel_size=3, stride=2, padding=1, bias=False) self.conv1 = resConv.layer4** self.rnn1 = ConvLSTMCell( 512, 512, kernel_size=3, stride=1, padding=1, hidden_kernel_size=1, bias=False) self.rnn2 = ConvLSTMCell( 128, 512, kernel_size=3, stride=1, padding=1, hidden_kernel_size=1, bias=False) self.rnn3 = ConvLSTMCell( 128, 256, kernel_size=3, stride=1, padding=1, hidden_kernel_size=3, bias=False) self.rnn4 = ConvLSTMCell( 64, 128, kernel_size=3, stride=1, padding=1, hidden_kernel_size=3, bias=False) **resConv2 = models.resnet50(pretrained=True) resConv2.layer4 = nn.Conv2d(32, 3, kernel_size=1, stride=1, padding=0, bias=False) self.conv2 = resConv2.layer4** def forward(self, input, hidden1, hidden2, hidden3, hidden4): x = self.conv1(input) hidden1 = self.rnn1(x, hidden1) x = hidden1[0] x = F.pixel_shuffle(x, 2) hidden2 = self.rnn2(x, hidden2) x = hidden2[0] x = F.pixel_shuffle(x, 2) hidden3 = self.rnn3(x, hidden3) x = hidden3[0] x = F.pixel_shuffle(x, 2) hidden4 = self.rnn4(x, hidden4) x = hidden4[0] x = F.pixel_shuffle(x, 2) x = F.tanh(self.conv2(x)) / 2 return x, hidden1, hidden2, hidden3, hidden4
You are doing it the wrong way; some explanation: **resConv = models.resnet50(pretrained=True) # you are reading a model; now you are replacing the layer in that model with a newly initialized layer. Secondly, layer4 in resnet50 is a sequential block containing multiple layers. Use print to see the exact layers in the model. resConv.layer4 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False) here you are using the new layer. self.conv = resConv.layer4** As per your query regarding the usage of a pretrained layer, you should do it like this: resConv = models.resnet50(pretrained=True) print(resConv) #see the layer which you want to use self.conv = resConv.conv1 # replace conv1 with that layer # note: conv1 is the name of the first conv layer in resnet To add to this, I would also recommend acquiring and adding this layer (or the weights and biases) outside of the object initialization. Something like: enc = EncoderCell() resnet50 = models.resnet50(pretrained=True) and then either enc.conv = resnet50.conv1 or, more ideally, enc.conv.load_state_dict(resnet50.conv1.state_dict()) The reason being: calling state_dict() on an nn.Module creates a clone of the parameters (weights and biases in this case) which can be loaded via the nn.Module.load_state_dict() method, as long as the two instances of nn.Module share the same shapes. So you get the pretrained weights and they are completely detached from the pretrained model. Then you can get rid of the pretrained model since it could be rather large in memory. del resnet50
https://stackoverflow.com/questions/63432886/
RuntimeError: cuda runtime error (30) : unknown error at ..\aten\src\THC\THCGeneral.cpp:87
I have tried installing CUDA on Windows 10 to train neural networks on my GPU (NVIDIA GeForce 710), but I get the following error when I try to load an initial model. Here's the code I am running: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device """# 1. Create the classifier""" C = nn.Sequential(Flatten(), nn.Linear(784,200), nn.ReLU(), nn.Linear(200,100), nn.ReLU(), nn.Linear(100,100), nn.ReLU(), nn.Linear(100,10)) """Upload the trained model""" C.load_state_dict(torch.load("C.pt",map_location='cuda')) And this is the error I am getting: C.load_state_dict(torch.load("C.pt",map_location='cuda')) Traceback (most recent call last): File "<ipython-input-2-358f76f483ed>", line 1, in <module> C.load_state_dict(torch.load("C.pt",map_location='cuda')) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 368, in load return _load(f, map_location, pickle_module) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 542, in _load result = unpickler.load() File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 505, in persistent_load data_type(size), location) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 385, in restore_location return default_restore_location(storage, map_location) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 114, in default_restore_location result = fn(storage, location) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 96, in _cuda_deserialize return obj.cuda(device) File "C:\Users\usuario\anaconda3\lib\site-packages\torch\_utils.py", line 68, in _cuda with torch.cuda.device(device): File "C:\Users\usuario\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 229, in __enter__ _lazy_init() File "C:\Users\usuario\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 162, in _lazy_init torch._C._cuda_init() RuntimeError: cuda runtime error (30) : unknown error at ..\aten\src\THC\THCGeneral.cpp:87 I already installed cuDNN, and these are the versions I am using. Python 3.7.6 CUDA 8.0 Pytorch 1.0.1
Restarting your computer might fix the issue (see this reference). If it doesn't, make sure that you: re-install the latest GPU driver, reboot, and ensure you have admin access.
https://stackoverflow.com/questions/63435050/
Computing matrix derivatives with torch.autograd.grad (PyTorch)
I am trying to compute matrix derivatives in PyTorch using torch.autograd.grad; however, I am running into a few issues. Here is a minimal working example to reproduce the error. theta = torch.tensor(np.random.uniform(low=-np.pi, high=np.pi), requires_grad=True) rot_mat = torch.tensor([[torch.cos(theta), torch.sin(theta), 0], [-torch.sin(theta), torch.cos(theta), 0]], dtype=torch.float, requires_grad=True) torch.autograd.grad(outputs=rot_mat, inputs=theta, grad_outputs=torch.ones_like(rot_mat), create_graph=True, retain_graph=True) This code results in the error "One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior." I tried using allow_unused=True but the gradients are returned as None. I am not sure what is causing the graph to be disconnected here.
The PyTorch autograd graph is created only if PyTorch functions are used. I think the Python 2D list used while creating rot_mat disconnects the graph. So create the rotation matrix using torch functions, and just use the backward() function to compute the gradients. Here's sample code: import torch import numpy as np theta = torch.tensor(np.random.uniform(low=-np.pi, high=np.pi), requires_grad=True) # create required values and convert it to torch 1d tensor cos_t = torch.cos(theta).view(1) sin_t = torch.sin(theta).view(1) msin_t = -sin_t zero = torch.zeros(1) # create rotation matrix using only pytorch functions rot_1d = torch.cat((cos_t, sin_t, zero, msin_t, cos_t, zero)) rot_mat = rot_1d.view((2, 3)) # Autograd rot_mat.backward(torch.ones_like(rot_mat)) # gradient print(theta.grad)
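If you specifically want torch.autograd.grad as in the question rather than backward(), the same graph-connected rot_mat works there too. A minimal sketch (the zero tensor is created with theta's dtype so torch.cat does not mix float32 and float64):

import numpy as np
import torch

theta = torch.tensor(np.random.uniform(low=-np.pi, high=np.pi), requires_grad=True)
cos_t = torch.cos(theta).view(1)
sin_t = torch.sin(theta).view(1)
zero = torch.zeros(1, dtype=theta.dtype)

rot_mat = torch.cat((cos_t, sin_t, zero, -sin_t, cos_t, zero)).view(2, 3)

# grad no longer complains, because rot_mat is connected to theta in the graph
(grad,) = torch.autograd.grad(outputs=rot_mat, inputs=theta,
                              grad_outputs=torch.ones_like(rot_mat))
print(grad)  # equals -2*sin(theta), the derivative of the summed entries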
https://stackoverflow.com/questions/63437478/
Pytorch CUDA OutOfMemory Error while training
I'm trying to train a PyTorch FLAIR model in AWS Sagemaker. While doing so, I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 84.00 MiB (GPU 0; 11.17 GiB total capacity; 9.29 GiB already allocated; 7.31 MiB free; 10.80 GiB reserved in total by PyTorch) For training I used the sagemaker.pytorch.estimator.PyTorch class. I tried different variants of instance types, from ml.m5 and g4dn to p3 (even one with 96GB of memory). On ml.m5 I get a CPU memory error, on g4dn a GPU memory error, and on p3 also a GPU memory error, mostly because PyTorch uses only one 12GB GPU out of the 8x12GB available. I am not getting anywhere with this training; I even tried locally with a CPU machine and got the following error: RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 67108864 bytes. Buy new RAM! The model training script: corpus = ClassificationCorpus(data_folder, test_file='../data/exports/val.csv', train_file='../data/exports/train.csv') print("finished loading corpus") word_embeddings = [WordEmbeddings('glove'), FlairEmbeddings('news-forward-fast'), FlairEmbeddings('news-backward-fast')] document_embeddings = DocumentLSTMEmbeddings(word_embeddings, hidden_size=512, reproject_words=True, reproject_words_dimension=256) classifier = TextClassifier(document_embeddings, label_dictionary=corpus.make_label_dictionary(), multi_label=False) trainer = ModelTrainer(classifier, corpus, optimizer=Adam) trainer.train('../model_files', max_epochs=12,learning_rate=0.0001, train_with_dev=False, embeddings_storage_mode="none") P.S.: I was able to train the same architecture with a smaller dataset on my local GPU machine with a 4GB GTX 1650 (GDDR5) and it was really quick.
Okay, so after 2 days of continuous debugging I was able to find the root cause. What I understood is that Flair does not impose any limit on sentence length, in the sense of word count; it takes the longest sentence as the maximum. That was causing the issue, as in my case there were a few documents with 1.5 lakh (150,000) words, which is too much to load embeddings for into memory, even on a 16GB GPU. So it was breaking there. To solve this: for content with this many words, you can take a chunk of n words (10K in my case) from any portion of such content (left/right/middle, anywhere) and truncate the rest, or simply ignore those records for training if their number is comparatively very small. After this, I hope you will be able to progress with your training, as happened in my case. P.S.: If you are following this thread and face a similar issue, feel free to comment back so that I can explore and help with your case of the issue.
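As a sketch of the truncation idea, assuming the training data lives in the train.csv from the question and has a text column (the column name 'text' is an assumption):

import pandas as pd

MAX_WORDS = 10_000  # the chunk size mentioned above

df = pd.read_csv('../data/exports/train.csv')
# keep only the first MAX_WORDS words of each document
df['text'] = df['text'].str.split().str[:MAX_WORDS].str.join(' ')
df.to_csv('../data/exports/train_truncated.csv', index=False)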
https://stackoverflow.com/questions/63441299/
How to train Pytorch CNN with two or more inputs
I have a big image; multiple events in the image can impact the classification. I am thinking of splitting the big image into small chunks, getting features from each chunk, and concatenating the outputs together for prediction. My code is like: train_load_1 = DataLoader(dataset=train_dataset_1, batch_size=100, shuffle=False) train_load_2 = DataLoader(dataset=train_dataset_2, batch_size=100, shuffle=False) train_load_3 = DataLoader(dataset=train_dataset_3, batch_size=100, shuffle=False) test_load_1 = DataLoader(dataset=test_dataset_1, batch_size=100, shuffle=True) test_load_2 = DataLoader(dataset=test_dataset_2, batch_size=100, shuffle=True) test_load_3 = DataLoader(dataset=test_dataset_3, batch_size=100, shuffle=True) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv = nn.Conv2d( ... ) # set up your layer here self.fc1 = nn.Linear( ... ) # set up first FC layer self.fc2 = nn.Linear( ... ) # set up the other FC layer def forward(self, x1, x2, x3): o1 = self.conv(x1) o2 = self.conv(x2) o3 = self.conv(x3) combined = torch.cat((o1.view(c.size(0), -1), o2.view(c.size(0), -1), o3.view(c.size(0), -1)), dim=1) out = self.fc1(combined) out = self.fc2(out) return F.softmax(x, dim=1) model = Net().to(device) optimizer = optim.SGD(model.parameters(), lr=0.01) for epoch in epochs: model.train() for batch_idx, (inputs, labels) in enumerate(train_loader_1): **### I am stuck here: how do I enumerate all three train_loaders to pass input_1, input_2, input_3 into the model and share the same label? Please note in the train loaders I have set shuffle=False; this is to make sure train_loader_1, train_loader_2, train_loader_3 are getting the same label ** Thank you for your help!
Instead of using 3 separate dataLoader elements, you can use a single dataLoader element where each datapoint contains 3 separate parts of the image. Like this: dataLoader = [[[img1_part1],[img1_part2],[img1_part3], label1], [[img2_part1],[img2_part2],[img2_part3], label2]....] This way you can use it in the training loop as: for img in dataLoader: part1,part2,part3,label = img out = model.forward(part1,part2,part3) loss = loss_fn(out, label) loss.backward() optimizer.step()
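A sketch of how such a combined dataset might be implemented with a real Dataset class (the random part tensors and labels here are placeholders for your chunked images):

import torch
from torch.utils.data import Dataset, DataLoader

class MultiPartDataset(Dataset):
    def __init__(self, parts1, parts2, parts3, labels):
        # each parts tensor: shape (N, C, H, W); labels: shape (N,)
        self.parts1, self.parts2, self.parts3 = parts1, parts2, parts3
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # all three parts come from the same image, so the label is shared
        return self.parts1[idx], self.parts2[idx], self.parts3[idx], self.labels[idx]

N = 8
dataset = MultiPartDataset(torch.randn(N, 3, 32, 32), torch.randn(N, 3, 32, 32),
                           torch.randn(N, 3, 32, 32), torch.randint(0, 10, (N,)))
loader = DataLoader(dataset, batch_size=4, shuffle=True)
for part1, part2, part3, label in loader:
    pass  # out = model(part1, part2, part3), etc.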
https://stackoverflow.com/questions/63443348/
can anyone explain what "out = self(images)" do in below code
I am not able to understand: if the prediction is calculated in the forward method, then why is there a need for out = self(images), and what does it do? I am a bit confused about this code. class MnistModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(input_size, num_classes) def forward(self, xb): xb = xb.reshape(-1, 784) out = self.linear(xb) return out def training_step(self, batch): images, labels = batch out = self(images) # Generate predictions loss = F.cross_entropy(out, labels) # Calculate loss return loss def validation_step(self, batch): images, labels = batch out = self(images) # Generate predictions loss = F.cross_entropy(out, labels) # Calculate loss acc = accuracy(out, labels) # Calculate accuracy return {'val_loss': loss, 'val_acc': acc} def validation_epoch_end(self, outputs): batch_losses = [x['val_loss'] for x in outputs] epoch_loss = torch.stack(batch_losses).mean() # Combine losses batch_accs = [x['val_acc'] for x in outputs] epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()} def epoch_end(self, epoch, result): print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc'])) model = MnistModel()
In Python, self refers to the instance that you have created from a class (similar to this in Java and C++). An instance can be made callable, which means it may be called like a function itself, if the __call__ method has been overridden. Example: class A: def __init__(self): pass def __call__(self, x, y): return x + y a = A() print(a(3,4)) # Prints 7 In your case, the __call__ method is implemented in the superclass nn.Module. Since it is a neural network module, it needs an input: out holds the output of the module, which is then forwarded to the next layer or module of your model. For instances of the nn.Module class (and those that inherit from it), the forward method is what gets invoked by __call__; that is how it is defined in the nn.Module class.
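Applied to the nn.Module case specifically, here is a minimal sketch showing that calling the instance routes through forward (plus hooks), so out = self(images) is just the standard way to run the model:

import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, xb):
        return self.linear(xb)

model = Tiny()
x = torch.randn(3, 4)
# model(x) invokes nn.Module.__call__, which runs hooks and then forward(x)
assert torch.equal(model(x), model.forward(x))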
https://stackoverflow.com/questions/63445488/
Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
Here is my code for an LSTM network. I instantiated it and moved it to the CUDA device, but I am still getting the error that the hidden and input tensors are not on the same device. class LSTM_net(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(LSTM_net, self).__init__() self.hidden_size = hidden_size self.lstm_cell = nn.LSTM(input_size, hidden_size) self.h2o = nn.Linear(hidden_size, output_size) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input, hidden_0=None, hidden_1=None, hidden_2=None): input=resnet(input) input=input.unsqueeze(0) out_0, hidden_0 = self.lstm_cell(input, hidden_0) out_1, hidden_1 = self.lstm_cell(out_0+input, hidden_1) out_2, hidden_2 = self.lstm_cell(out_1+input, hidden_2) output = self.h2o(hidden_2[0].view(-1, self.hidden_size)) output = self.softmax(output) return output,hidden_0,hidden_1, hidden_2 def init_hidden(self, batch_size = 1): return (torch.zeros(1, batch_size, self.hidden_size), torch.zeros(1, batch_size, self.hidden_size)) net1=LSTM_net(input_size=1000,hidden_size=1000, output_size=100) net1=net1.to(device) Here is a picture of the connections that I want to make; please guide me to implement it. Click here for an image of the error message.
Edit: I think I see the problem now. Try changing def init_hidden(self, batch_size = 1): return (torch.zeros(1, batch_size, self.hidden_size), torch.zeros(1, batch_size, self.hidden_size)) to def init_hidden(self, batch_size = 1): return (torch.zeros(1, batch_size, self.hidden_size).cuda(), torch.zeros(1, batch_size, self.hidden_size).cuda()) This is because the tensors created by the init_hidden method are not data attributes of the parent object, so they do not have cuda() applied to them when you apply cuda() to an instance of the model. Try calling .cuda() on all the tensors/variables and models involved. net1.cuda() # net1.to(device) for device == cuda:0 works fine also # cuda() is more succinct, though input.cuda() # now, calling net1 on a tensor named input should not produce the error. out = net1(input)
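A slightly more portable variant is to derive the device from the model's own parameters instead of hardcoding .cuda(), so the same code runs on CPU and GPU. A sketch of a drop-in replacement for the init_hidden method in the class above (torch is assumed to be imported as in the question):

def init_hidden(self, batch_size=1):
    # put the hidden state on whatever device the model weights live on
    device = next(self.parameters()).device
    return (torch.zeros(1, batch_size, self.hidden_size, device=device),
            torch.zeros(1, batch_size, self.hidden_size, device=device))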
https://stackoverflow.com/questions/63446791/
Why do I get CUDA out of memory when running PyTorch model [with enough GPU memory]?
I am asking this question because I am successfully training a segmentation network on my laptop's GTX 2070 with 8GB of VRAM, and I use exactly the same code and exactly the same software libraries installed on my desktop PC with a GTX 1080TI, and it still throws an out-of-memory error. Why does this happen, considering that: The same Windows 10 + CUDA 10.1 + CUDNN 7.6.5.32 + Nvidia Driver 418.96 (comes along with CUDA 10.1) are both on the laptop and on the PC. The fact that training with TensorFlow 2.3 runs smoothly on the GPU on my PC, yet it fails to allocate memory for training only with PyTorch. PyTorch recognises the GPU (prints GTX 1080 TI) via the command: print(torch.cuda.get_device_name(0)) PyTorch allocates memory when running this command: torch.rand(20000, 20000).cuda() #allocated 1.5GB of VRAM. What is the solution to this?
Most of the people (even in the thread below) jump to suggest that decreasing the batch_size will solve this problem. In fact, it does not in this case. For example, it would have been illogical for a network to train on 8GB VRAM and yet to fail to train on 11GB VRAM, considering that there were no other applications consuming video memory on the system with 11GB VRAM and the exact same configuration is installed and used. The reason why this happened in my case was that, when using the DataLoader object, I set a very high (12) value for the workers parameter. Decreasing this value to 4 in my case solved the problem. In fact, although at the bottom of the thread, the answer provided by Yurasyk at https://github.com/pytorch/pytorch/issues/16417#issuecomment-599137646 pointed me in the right direction. Solution: Decrease the number of workers in the PyTorch DataLoader. Although I do not exactly understand why this solution works, I assume it is related to the threads spawned behind the scenes for data fetching; it may be the case that, on some processors, such an error appears.
https://stackoverflow.com/questions/63449011/
Optimizer for an RNN using pytorch
The PyTorch RNN tutorial uses for p in net.parameters(): p.data.add_(p.grad.data, alpha = -learning_rate) as the optimizer. Does anyone know the difference between doing that and calling the classical optimizer.step(), once an optimizer has been defined explicitly? Is there some special consideration one has to take into account when training RNNs with regard to the optimizer?
It looks like the example uses a simple gradient descent update, theta <- theta - learning_rate * dJ/dtheta, where J is the cost. If the optimizer you're using is a simple gradient descent tool, then there is no difference between using optimizer.step() and the code in the example. I know that's not a super exciting answer to your question, because it depends on how the step() function is written. Check out this page to learn about step() and this page to learn more about torch.optim.
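A minimal sketch of the equivalence, assuming plain SGD with no momentum or weight decay (in which case optimizer.step() performs exactly the manual update from the tutorial):

import torch
import torch.nn as nn

net = nn.Linear(4, 2)
learning_rate = 0.1
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

loss = net(torch.randn(8, 4)).pow(2).mean()
optimizer.zero_grad()
loss.backward()

# these two updates are equivalent for vanilla SGD:
optimizer.step()
# for p in net.parameters():
#     p.data.add_(p.grad.data, alpha=-learning_rate)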
https://stackoverflow.com/questions/63460065/
How do I fix the Dataset to return desired output (pytorch)
I am trying to use information from the outside functions to decide which data to return. Here, I have added a simplified code to demonstrate the problem. When I use num_workers = 0, I get the desired behavior (the output after epoch 3 is 18). But when I increase the value of num_workers, the output after each epoch is the same, and the global variable remains unchanged. from torch.utils.data import Dataset, DataLoader x = 6 def getx(): global x x+=1 print("x: ", x) return x class MyDataset(Dataset): def __init__(self): pass def __getitem__(self, index): global x x = getx() return x def __len__(self): return 3 dataset = MyDataset() loader = DataLoader( dataset, num_workers=0, shuffle=False ) for epoch in range(4): for idx, data in enumerate(loader): print('Epoch {}, idx {}, val: {}'.format(epoch, idx, data)) The final output when num_workers=0 is 18, as expected. But when num_workers>0, x remains unchanged (the final output is 6). How can I get the same behavior as num_workers=0 while using num_workers>0 (i.e., how do I ensure that the __getitem__ function of the dataloader changes the global variable x's value)?
The reason for this is the underlying nature of multiprocessing in python. Setting num_workers means that your DataLoader creates that number of sub-processes. Each sub-process is effectively a separate python instance with its own global state, and has no idea of what's going on in the other processes. A typical solution for this in python's multiprocessing is using a Manager. However, since your multiprocessing is being provided through the DataLoader, you have no way to work this in. Fortunately, something else can be done. DataLoader actually relies on torch.multiprocessing, which in turn allows sharing of tensors between processes as long as they are in shared memory. So what you can do is, simply use x as a shared tensor. from torch.utils.data import Dataset, DataLoader import torch x = torch.tensor([6]) x.share_memory_() def getx(): global x x+=1 print("x: ", x.item()) return x class MyDataset(Dataset): def __init__(self): pass def __getitem__(self, index): global x x = getx() return x def __len__(self): return 3 dataset = MyDataset() loader = DataLoader( dataset, num_workers=2, shuffle=False ) for epoch in range(4): for idx, data in enumerate(loader): print('Epoch {}, idx {}, val: {}'.format(epoch, idx, data)) Out: x: 7 x: 8 x: 9 Epoch 0, idx 0, val: tensor([[7]]) Epoch 0, idx 1, val: tensor([[8]]) Epoch 0, idx 2, val: tensor([[9]]) x: 10 x: 11 x: 12 Epoch 1, idx 0, val: tensor([[10]]) Epoch 1, idx 1, val: tensor([[12]]) Epoch 1, idx 2, val: tensor([[12]]) x: 13 x: 14 x: 15 Epoch 2, idx 0, val: tensor([[13]]) Epoch 2, idx 1, val: tensor([[15]]) Epoch 2, idx 2, val: tensor([[14]]) x: 16 x: 17 x: 18 Epoch 3, idx 0, val: tensor([[16]]) Epoch 3, idx 1, val: tensor([[18]]) Epoch 3, idx 2, val: tensor([[17]]) While this works, it isn't perfect. Look at epoch 1, and notice that there are 2 12s rather than 11 and 12. This means that two separate processes have executed the line x+=1 before executing print. This is unavoidable as parallel processes are working on shared memory. If you're familiar with operating system concepts, you may be able to further implement some sort of semaphore with an extra variable to control the access to x as needed - but as this goes beyond the scope of the question, I won't elaborate further.
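If the duplicate reads bother you, here is a hedged sketch of the lock idea mentioned above (it assumes the default fork start method on Linux, so that the module-level lock is inherited by the DataLoader worker processes, and it returns a clone so that a later increment by another worker cannot change an already-returned value):

import torch
import torch.multiprocessing as mp

x = torch.tensor([6])
x.share_memory_()
lock = mp.Lock()  # shared with forked DataLoader workers

def getx():
    global x
    with lock:            # make the increment-and-read atomic across workers
        x += 1
        return x.clone()  # private snapshot of the current value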
https://stackoverflow.com/questions/63460992/
Finding the top k matches in Pytorch
I'm using the following code to find the topk matches using pytorch: def find_top(self, x, y, n_neighbors, unit_vectors=False, cuda=False): if not unit_vectors: x = __to_unit_torch__(x, cuda=cuda) y = __to_unit_torch__(y, cuda=cuda) with torch.no_grad(): d = 1. - torch.matmul(x, y.transpose(0, 1)) values, indices = torch.topk(d, n_neighbors, dim=1, largest=False, sorted=True) return indices.cpu().numpy() Unfortunately, it is throwing the following error: values, indices = torch.topk(d, n_neighbors, dim=1, largest=False, sorted=True) RuntimeError: invalid argument 5: k not in range for dimension at /pytorch/aten/src/THC/generic/THCTensorTopK.cu:23 The size of d is (1793,1) . What am I missing?
This error occurs when you call torch.topk with a k larger than the size of the dimension you are selecting along. Here d has shape (1793, 1), so dim=1 only holds a single element and any n_neighbors > 1 is out of range. Reduce your argument (or take the top k along dim=0) and it should run fine.
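A small sketch of the constraint, using the shape from the question:

import torch

d = torch.rand(1793, 1)
# dim=1 has size 1, so k must be 1 here:
values, indices = torch.topk(d, 1, dim=1, largest=False, sorted=True)
# dim=0 has 1793 entries, so a larger k is fine along that dimension:
values, indices = torch.topk(d, 5, dim=0, largest=False, sorted=True)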
https://stackoverflow.com/questions/63463510/
CNN trained model doesn't appear to be working
I've trained a CNN model and I would like to run the trained model against new data. However, it seems that the trained model isn't predicting the count correctly as it done during the training. I have a feeling that the model is not using the PTH file. Could someone please advise what I am doing wrong, please? import argparse import datetime import glob import os import random import shutil import time from os.path import join import numpy as np import pandas as pd import torch import torch.nn as nn from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter from torchvision.transforms import ToTensor from tqdm import tqdm import torch.optim as optim from convnet3_eval import Convnet from dataset2_eval import CellsDataset parser = argparse.ArgumentParser('Predicting hits from pixels') parser.add_argument('name',type=str,help='Name of experiment') parser.add_argument('data_dir',type=str,help='Path to data directory containing images and gt.csv') parser.add_argument('--weight_decay',type=float,default=0.0,help='Weight decay coefficient (something like 10^-5)') parser.add_argument('--lr',type=float,default=0.0001,help='Learning rate') args = parser.parse_args() metadata = pd.read_csv(join(args.data_dir,'gt.csv')) metadata.set_index('filename', inplace=True) dataset = CellsDataset(args.data_dir,transform=ToTensor(),return_filenames=True) dataset = DataLoader(dataset,num_workers=4,pin_memory=True) model_path = '/base_model.pth' model = Convnet() optimizer = torch.optim.Adam(model.parameters(),lr=args.lr,weight_decay=args.weight_decay) for images, paths in tqdm(dataset): targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B targets = targets.float() # code to print training data to a csv file filename=CellsDataset(args.data_dir,transform=ToTensor(),return_filenames=True) output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap) preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B) print(preds) paths_test = np.array([paths]) names_preds = np.hstack(paths) print(names_preds) df=pd.DataFrame({'Image_Name':names_preds, 'Target':targets.detach(), 'Prediction':preds.detach()}) print(df) # save image name, targets, and predictions df.to_csv(r'model.csv', index=False, mode='a') model.load_state_dict(torch.load(model_path)) model.eval()
Move the last two lines, where you load the weights, model.load_state_dict(torch.load(model_path)) model.eval() above the for loop, right below where you initialize the model. Otherwise every prediction in the loop is made with the randomly initialized weights, and the trained weights are only loaded after all the inference is done.
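A sketch of the reordered section, using only code from the question:

model_path = '/base_model.pth'
model = Convnet()
model.load_state_dict(torch.load(model_path))  # load the trained weights first
model.eval()                                   # then switch to evaluation mode

optimizer = torch.optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)

for images, paths in tqdm(dataset):
    # ... predictions in this loop now use the trained weights
    output = model(images)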
https://stackoverflow.com/questions/63463535/
RuntimeError: cudnn RNN backward can only be called in training mode
I have seen this problem the first time, I never encountered such an error in previous Python projects. Here is my training code: def train(net, opt, criterion,ucf_train, batchsize,i): opt.zero_grad() total_loss = 0 net=net.eval() net=net.train() for vid in range(i*batchsize,i*batchsize+batchsize,1): output=infer(net,ucf_train[vid]) m=get_label_no(ucf_train[vid]) m=m.cuda( ) loss = criterion(output,m) loss.backward(retain_graph=True) total_loss += loss opt.step() #updates wghts and biases return total_loss/n_points code for infer(net,input) def infer(net, name): net.eval() hidden_0 = net.init_hidden() hidden_1 = net.init_hidden() hidden_2 = net.init_hidden() video_path = fetch_ucf_video(name) cap = cv2.VideoCapture(video_path) resize=(224,224) T=FrameCapture(video_path) print(T) lim=T-(T%20)-2 i=0 while(1): ret, frame2 = cap.read() frame2= cv2.resize(frame2, resize) # print(type(frame2)) if (i%20==0 and i<lim): input=normalize(frame2) input=input.cuda() output,hidden_0,hidden_1, hidden_2 = net(input, hidden_0, hidden_1, hidden_2) elif (i>=lim): break i=i+1 op=output torch.cuda.empty_cache() op=op.cuda() return op I am getting this error, I tried with model.train() following this where net is my model: RuntimeError Traceback (most recent call last) <ipython-input-62-42238f3f6877> in <module>() ----> 1 train(net1,opt,criterion,ucf_train,1,0) 2 frames /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 125 Variable._execution_engine.run_backward( 126 tensors, grad_tensors, retain_graph, create_graph, --> 127 allow_unreachable=True) # allow_unreachable flag 128 129 RuntimeError: cudnn RNN backward can only be called in training mode
You should remove the net.eval() call that comes right after def infer(net, name): It needs to be removed because you call this infer function inside your training code. Your model needs to be in train mode throughout the whole training. You also never set your model back to train mode after calling eval, so that is the root of the exception you are getting. If you want to use this infer code in your test cases, you can cover that case with an if. Also, the net.eval() that comes right after the total_loss = 0 assignment is not useful since you call net.train() right after that; you can remove that one as well, since it gets neutralized in the next line. The updated code: def train(net, opt, criterion,ucf_train, batchsize,i): opt.zero_grad() total_loss = 0 net=net.train() for vid in range(i*batchsize,i*batchsize+batchsize,1): output=infer(net,ucf_train[vid]) m=get_label_no(ucf_train[vid]) m=m.cuda( ) loss = criterion(output,m) loss.backward(retain_graph=True) total_loss += loss opt.step() #updates wghts and biases return total_loss/n_points code for infer(net,input) def infer(net, name, is_train=True): if not is_train: net.eval() hidden_0 = net.init_hidden() hidden_1 = net.init_hidden() hidden_2 = net.init_hidden() video_path = fetch_ucf_video(name) cap = cv2.VideoCapture(video_path) resize=(224,224) T=FrameCapture(video_path) print(T) lim=T-(T%20)-2 i=0 while(1): ret, frame2 = cap.read() frame2= cv2.resize(frame2, resize) # print(type(frame2)) if (i%20==0 and i<lim): input=normalize(frame2) input=input.cuda() output,hidden_0,hidden_1, hidden_2 = net(input, hidden_0, hidden_1, hidden_2) elif (i>=lim): break i=i+1 op=output torch.cuda.empty_cache() op=op.cuda() return op
https://stackoverflow.com/questions/63465187/
Why does `is_same` in the C++ PyTorch API fail when comparing with the same tensor that is read from a file?
Why does torch::Tensor::is_same fail the following assertion? A tensor is written to a file using the C++ PyTorch API, then read again into another tensor, and is_same compares both tensors: torch::Tensor x_sequence = torch::linspace(0, M_PI, 1000); torch::save(x_sequence, "x_sequence.dat"); torch::Tensor x_read; torch::load(x_read, "x_sequence.dat"); assert(x_read.is_same(x_sequence)); This results in: int main(int, char**): Assertion `x_read.is_same(x_sequence)' failed. using python-pytorch, version 1.6.0-2 on Arch Linux g++ (GCC) 10.1.0
torch::Tensor::is_same(const torch::Tensor& other) is defined here. It is important to notice that a Tensor is actually a pointer to an underlying TensorImpl class (which actually holds the data). Thus, when you call is_same, what is checked is actually whether or not your pointers are the same, i.e. whether your 2 tensors are pointing to the same underlying memory. Here is a very simple example to understand it well: auto x = torch::randn({4,4}); auto copy = x; auto clone = x.clone(); std::cout << x.is_same(copy) << " " << x.is_same(clone) << std::endl; >>> 1 0 Here, the call to clone forces pytorch to copy the data to another memory location. Consequently, the pointers are different and is_same returns false, while the plain assignment shares the same TensorImpl and is_same returns true. If you want to actually compare the values, you need to compare the tensors' contents instead, for example by computing the difference between the two tensors and checking how close to 0 this difference is.
https://stackoverflow.com/questions/63470907/
In a python iterator select other iterator based on conditions
In python I have an iterator returning an infinite stream of indices in a fixed range [0, N], called Sampler. Actually I have a list of those, and all they do is return indices in the range [0, N_0], [N_0, N_1], ..., [N_{n-1}, N_n]. What I now want to do is first select one of these iterators based on the length of their range, so I have a weights list [N_0, N_1 - N_0, ...] and I select one of these with: iterator_idx = random.choices(range(len(weights)), weights=weights/weights.sum())[0] Next, what I want to do is create an iterator which randomly selects one of the iterators and selects a batch of M samples. class BatchSampler: def __init__(self, M): self.M = M self.weights = [weight_list] self.samplers = [list_of_iterators] self._batch_samplers = [ self.batch_sampler(sampler) for sampler in self.samplers ] def batch_sampler(self, sampler): batch = [] for batch_idx in sampler: batch.append(batch_idx) if len(batch) == self.M: yield batch if len(batch) > 0: yield batch def __iter__(self): # First select one of the datasets. iterator_idx = random.choices( range(len(self.weights)), weights=self.weights / self.weights.sum() )[0] return self._batch_samplers[iterator_idx] The issue with this is that iter() only seems to be called once, so iterator_idx is only selected the first time. Obviously this is wrong... What is the way around this? This is a possible case when you would have multiple datasets in pytorch, but you want to sample only batches from one of the datasets.
Seems to me that you want to define your own container type. I'll try to provide examples of a few standard ways to do so (hopefully without missing too many details); you should be able to reuse one of these simple examples, into your own class. Using just __getitem__ (support indexing & looping): object.__getitem__ Called to implement evaluation of self[key]. class MyContainer: def __init__(self, sequence): self.elements = sequence # Just something to work with. def __getitem__(self, key): # If we're delegating to sequences like built-in list, # invalid indices are handled automatically by them # (throwing IndexError, as per the documentation). return self.elements[key] t = (1, 2, 'a', 'b') c = MyContainer(t) elems = [e for e in c] assert elems == [1, 2, 'a', 'b'] assert c[1:-1] == t[1:-1] == (2, 'a') Using the iterator protocol: object.__iter__ object.__iter__(self) This method is called when an iterator is required for a container. This method should return a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container. Iterator objects also need to implement this method; they are required to return themselves. For more information on iterator objects, see Iterator Types. Iterator Types container.__iter__() Return an iterator object. The object is required to support the iterator protocol described below. The iterator objects themselves are required to support the following two methods, which together form the iterator protocol: iterator.__iter__() Return the iterator object itself. This is required to allow both containers and iterators to be used with the for and in statements. iterator.__next__() Return the next item from the container. If there are no further items, raise the StopIteration exception. Once an iterator's __next__() method raises StopIteration, it must continue to do so on subsequent calls. class MyContainer: class Iter: def __init__(self, container): self.cont = container self.pos = 0 self.len = len(container.elements) def __iter__(self): return self def __next__(self): if self.pos == self.len: raise StopIteration curElem = self.cont.elements[self.pos] self.pos += 1 return curElem def __init__(self, sequence): self.elements = sequence # Just something to work with. def __iter__(self): return MyContainer.Iter(self) t = (1, 2, 'a', 'b') c = MyContainer(t) elems = [e for e in c] assert elems == [1, 2, 'a', 'b'] Using a generator: Generator Types Python's generators provide a convenient way to implement the iterator protocol. If a container object's iter() method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the iter() and next() methods. generator A function which returns a generator iterator. It looks like a normal function except that it contains yield expressions for producing a series of values usable in a for-loop or that can be retrieved one at a time with the next() function. Usually refers to a generator function, but may refer to a generator iterator in some contexts. generator iterator An object created by a generator function. 6.2.9. Yield expressions Using a yield expression in a function's body causes that function to be a generator class MyContainer: def __init__(self, sequence): self.elements = sequence # Just something to work with. def __iter__(self): for e in self.elements: yield e t = (1, 2, 'a', 'b') c = MyContainer(t) elems = [e for e in c] assert elems == [1, 2, 'a', 'b']
https://stackoverflow.com/questions/63475956/
Correct Way to Fine-Tune/Train HuggingFace's Model from scratch (PyTorch)
For example, I want to train a BERT model from scratch but using the existing configuration. Is the following code the correct way to do so? model = BertModel.from_pretrained('bert-base-cased') model.init_weights() Because I think the init_weights method will re-initialize all the weights. Second question, if I want to change a bit the configuration, such as the number of hidden layers. model = BertModel.from_pretrained('bert-base-cased', num_hidden_layers=10) model.init_weights() I wonder if the above is the correct way to do so. Because they don't appear to have an error when I run the above code.
In this way, you would unnecessarily download and load the pre-trained model weights. You can avoid that by downloading only the BERT config: config = transformers.AutoConfig.from_pretrained("bert-base-cased") model = transformers.AutoModel.from_config(config) Both yours and this solution assume you want to tokenize the input in the same way as the original BERT and use the same vocabulary. If you want to use a different vocabulary, you can change it in the config before instantiating the model: config.vocab_size = 123456 Similarly, you can change any hyperparameter that you want to have different from the original BERT.
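For the second question, the same pattern applies; a short sketch (num_hidden_layers is a standard field of the BERT config):

import transformers

config = transformers.AutoConfig.from_pretrained("bert-base-cased")
config.num_hidden_layers = 10  # instead of the original 12
model = transformers.AutoModel.from_config(config)  # randomly initialized, 10 layers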
https://stackoverflow.com/questions/63478947/
Missing XLA configuration when running pytorch/xla
I am trying to run GCP TPU with Pytorch/XLA, I am using a VM with debian-9-torch-xla-v20200818 image, I initiate the TPU and check it is running using ctpu status which shows that both the CPU and TPU are running, I then activate the torch-xla-nightly environment, but when I try to invoke this simple code: import torch import torch_xla import torch_xla.core.xla_model as xm dev = xm.xla_device() t1 = torch.ones(3, 3, device = dev) print(t1) this error comes up: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 231, in xla_device devkind=devkind if devkind is not None else None) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 136, in get_xla_supported_devices xla_devices = _DEVICES.value File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 32, in value self._value = self._gen_fn() File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda> _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices()) RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:274 : Missing XLA configuration I tried everything but nothing seem to work.
Take a look at this link as it seems to pertain to the issue. Maybe you didn't setup the XRT_TPU_CONFIG: (vm)$ export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" Follow the instructions here and you should be fine.
https://stackoverflow.com/questions/63486381/
Iterator doesn't work with DataLoader on GPU
I'm using PyTorch on Google Colab, and I'm getting this error when using the GPU: TypeError Traceback (most recent call last) <ipython-input-33-41cdbc758ecd> in <module>() ----> 1 dataiter= iter(trainloader) TypeError: '_SingleProcessDataLoaderIter' object is not callable but when using the normal CPU there is no error. My code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import torch import numpy as np from torchvision import datasets, transforms from collections import OrderedDict from torch import nn from torch import optim import torch.nn.functional as F import helper transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset= datasets.MNIST("MINIST_data/", download= True, train=True, transform=transform) trainloader= torch.utils.data.DataLoader(trainset, batch_size= 64, shuffle=True) dataiter= iter(trainloader) Using enumerate instead of iter works with GPU but I don't know why; can someone explain the error to me and why it is happening?
You don't have to use iter. trainloader is already iterable. The loop should be done like this for data in trainloader: or for index, data in enumerate(trainloader):
https://stackoverflow.com/questions/63487775/
PyTorch method for returning element with highest count in a 1D tensor?
Is there a built-in PyTorch method that takes a 1D tensor, and returns the element in the tensor with the highest count? For example, if we input torch.tensor([2,2,2,3,4,5]), the method should return 2 as it occurs the most. In case of a tie in frequency, the element with the lower value should be returned; inputting torch.tensor([1,1,2,2,4,5]) should return 1. Just to be clear, I only wish to know if there's an existing built-in PyTorch method that does exactly this. If there's no such method, please refrain from posting the solution, as I'd like to try solving it on my own.
Yes, torch.mode() is a built-in function (read here) which handles both of your conditions. torch.mode(alpha, 0) # alpha being the name of the tensor
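A quick sketch using the example from the question (torch.mode returns a (values, indices) namedtuple):

import torch

t = torch.tensor([2, 2, 2, 3, 4, 5])
values, indices = torch.mode(t, 0)
print(values.item())  # 2, the most frequent element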
https://stackoverflow.com/questions/63489963/
Denoising linear autoencoder learns to output a constant instead of denoising
I am trying to create a denoising autoencoder for 1d cyclic signals like cos(x) etc. The process of creating the dataset is that I pass a list of cyclic functions and for each example generated it rolls random coefficients for each function in the list so every function generated is different yet cyclic. eg - 0.856cos(x) - 1.3cos(0.1x) Then I add noise and normalize the signal to be between [0, 1). Next, I train my autoencoder on it but it learns to output a constant (usually 0.5). my guess is that it happens because 0.5 is the usual mean value of the normalized functions. But this is not the result im aspiring to get at all. I am providing the code I wrote for the autoencoder, the data generator and the training loop as well as two pictures depicting the problem im having. first example: second example: Linear autoencoder: class LinAutoencoder(nn.Module): def __init__(self, in_channels, K, B, z_dim, out_channels): super(LinAutoencoder, self).__init__() self.in_channels = in_channels self.K = K # number of samples per 2pi interval self.B = B # how many intervals self.out_channels = out_channels encoder_layers = [] decoder_layers = [] encoder_layers += [ nn.Linear(in_channels * K * B, 2*z_dim, bias=True), nn.ReLU(), nn.Linear(2*z_dim, z_dim, bias=True), nn.ReLU(), nn.Linear(z_dim, z_dim, bias=True), nn.ReLU() ] decoder_layers += [ nn.Linear(z_dim, z_dim, bias=True), nn.ReLU(), nn.Linear(z_dim, 2*z_dim, bias=True), nn.ReLU(), nn.Linear(2*z_dim, out_channels * K * B, bias=True), nn.Tanh() ] self.encoder = nn.Sequential(*encoder_layers) self.decoder = nn.Sequential(*decoder_layers) def forward(self, x): batch_size = x.shape[0] x_flat = torch.flatten(x, start_dim=1) enc = self.encoder(x_flat) dec = self.decoder(enc) res = dec.view((batch_size, self.out_channels, self.K * self.B)) return res The data generator: def lincomb_generate_data(batch_size, intervals, sample_length, functions, noise_type="gaussian", **kwargs)->torch.tensor: channels = 1 mul_term = 2 * np.pi / sample_length positions = np.arange(0, sample_length * intervals) x_axis = positions * mul_term X = np.tile(x_axis, (channels, 1)) y = X Y = np.repeat(y[np.newaxis, :], batch_size, axis=0) if noise_type == "gaussian": # defaults to 0, 0.4 noise_mean = kwargs.get("noise_mean", 0) noise_std = kwargs.get("noise_std", 0.4) noise = np.random.normal(noise_mean, noise_std, Y.shape) if noise_type == "uniform": # defaults to 0, 1 noise_low = kwargs.get("noise_low", 0) noise_high = kwargs.get("noise_high", 1) noise = np.random.uniform(noise_low, noise_high, Y.shape) coef_lo = -2 coef_hi = 2 coef_mat = np.random.uniform(coef_lo, coef_hi, (batch_size, len(functions))) # creating a matrix of coefficients coef_mat = np.where(np.abs(coef_mat) < 10**-1, 0, coef_mat) for i in range(batch_size): curr_res = np.zeros((channels, sample_length * intervals)) for func_id, function in enumerate(functions): curr_func = functions[func_id] curr_coef = coef_mat[i][func_id] curr_res += curr_coef * curr_func(Y[i, :, :]) Y[i, :, :] = curr_res clean = Y noisy = clean + noise # Normalizing clean -= clean.min(axis=2, keepdims=2) clean /= clean.max(axis=2, keepdims=2) + 1e-5 #avoiding zero division noisy -= noisy.min(axis=2, keepdims=2) noisy /= noisy.max(axis=2, keepdims=2) + 1e-5 #avoiding zero division clean = torch.from_numpy(clean) noisy = torch.from_numpy(noisy) return x_axis, clean, noisy Training loop: functions = [lambda x: np.cos(0.1*x), lambda x: np.cos(x), lambda x: np.cos(3*x)] num_epochs = 200 lin_loss_list = [] criterion = torch.nn.MSELoss() 
lin_optimizer = torch.optim.SGD(lin_model.parameters(), lr=0.01, momentum=0.9) _, val_clean, val_noisy = util.lincomb_generate_data(batch_size, B, K, functions, noise_type="gaussian") print("STARTED TRAINING") for epoch in range(num_epochs): # generate data returns the x-axis used for plotting as well as the clean and noisy data _, t_clean, t_noisy = util.lincomb_generate_data(batch_size, B, K, functions, noise_type="gaussian") # ===================forward===================== lin_output = lin_model(t_noisy.float()) lin_loss = criterion(lin_output.float(), t_clean.float()) lin_loss_list.append(lin_loss.data) # ===================backward==================== lin_optimizer.zero_grad() lin_loss.backward() lin_optimizer.step() val_lin_loss = F.mse_loss(lin_model(val_noisy.float()), val_clean.float()) print("DONE TRAINING") edit: shared the parameters requested L = 1 K = 512 B = 2 batch_size = 64 z_dim = 64 noise_mean = 0 noise_std = 0.4
The problem was that I didn't use nn.BatchNorm1d in my model, so I guess something went wrong during training (probably vanishing gradients).
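A hedged sketch of what that might look like in the encoder from the question (not the author's exact fix; it simply inserts a nn.BatchNorm1d after each Linear layer, and the decoder would be changed the same way):

encoder_layers += [
    nn.Linear(in_channels * K * B, 2*z_dim, bias=True),
    nn.BatchNorm1d(2*z_dim),
    nn.ReLU(),
    nn.Linear(2*z_dim, z_dim, bias=True),
    nn.BatchNorm1d(z_dim),
    nn.ReLU(),
    nn.Linear(z_dim, z_dim, bias=True),
    nn.BatchNorm1d(z_dim),
    nn.ReLU()
]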
https://stackoverflow.com/questions/63500337/
Shouldn't `randperm` in the PyTorch C++ API return a tensor with default type int?
When I try to generate a list of permuted integer indices with randperm using the C++ PyTorch API, the resulting tensor has the element type of CPUFloatType{10} instead of an integer type: int N_SAMPLES = 10; torch::Tensor shuffled_indices = torch::randperm(N_SAMPLES); cout << shuffled_indices << endl; returns 9 3 8 6 2 5 4 7 1 0 [ CPUFloatType{10} ] which cannot be used for indexing of tensors because the element type is float and not an integer type. When trying to use my_tensor.index(shuffled_indices) I get terminate called after throwing an instance of 'c10::IndexError' what(): tensors used as indices must be long, byte or bool tensors Environment: python-pytorch, version 1.6.0-2 on Arch Linux g++ (GCC) 10.1.0 Why does this happen?
That's because the default type of any tensor that you create with torch is always float. If you want otherwise, you have to specify it with the TensorOptions parameter struct : int N_SAMPLES = 10; torch::Tensor shuffled_indices = torch::randperm(N_SAMPLES, torch::TensorOptions().dtype(at::kLong)); cout << shuffled_indices.dtype() << endl; >>> long
https://stackoverflow.com/questions/63500671/
Why does regularization in pytorch and scratch code not match, and what is the formula used for regularization in pytorch?
I have been trying to do L2 regularization on a binary classification model in PyTorch but when I match the results of PyTorch and scratch code it doesn't match, Pytorch code: class LogisticRegression(nn.Module): def __init__(self,n_input_features): super(LogisticRegression,self).__init__() self.linear=nn.Linear(4,1) self.linear.weight.data.fill_(0.0) self.linear.bias.data.fill_(0.0) def forward(self,x): y_predicted=torch.sigmoid(self.linear(x)) return y_predicted model=LogisticRegression(4) criterion=nn.BCELoss() optimizer=torch.optim.SGD(model.parameters(),lr=0.05,weight_decay=0.1) dataset=Data() train_data=DataLoader(dataset=dataset,batch_size=1096,shuffle=False) num_epochs=1000 for epoch in range(num_epochs): for x,y in train_data: y_pred=model(x) loss=criterion(y_pred,y) loss.backward() optimizer.step() optimizer.zero_grad() Scratch Code: def sigmoid(z): s = 1/(1+ np.exp(-z)) return s def yinfer(X, beta): return sigmoid(beta[0] + np.dot(X,beta[1:])) def cost(X, Y, beta, lam): sum = 0 sum1 = 0 n = len(beta) m = len(Y) for i in range(m): sum = sum + Y[i]*(np.log( yinfer(X[i],beta)))+ (1 -Y[i])*np.log(1-yinfer(X[i],beta)) for i in range(0, n): sum1 = sum1 + beta[i]**2 return (-sum + (lam/2) * sum1)/(1.0*m) def pred(X,beta): if ( yinfer(X, beta) > 0.5): ypred = 1 else : ypred = 0 return ypred beta = np.zeros(5) iterations = 1000 arr_cost = np.zeros((iterations,4)) print(beta) n = len(Y_train) for i in range(iterations): Y_prediction_train=np.zeros(len(Y_train)) Y_prediction_test=np.zeros(len(Y_test)) for l in range(len(Y_train)): Y_prediction_train[l]=pred(X[l,:],beta) for l in range(len(Y_test)): Y_prediction_test[l]=pred(X_test[l,:],beta) train_acc = format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100) test_acc = 100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100 arr_cost[i,:] = [i,cost(X,Y_train,beta,lam),train_acc,test_acc] temp_beta = np.zeros(len(beta)) ''' main code from below ''' for j in range(n): temp_beta[0] = temp_beta[0] + yinfer(X[j,:], beta) - Y_train[j] temp_beta[1:] = temp_beta[1:] + (yinfer(X[j,:], beta) - Y_train[j])*X[j,:] for k in range(0, len(beta)): temp_beta[k] = temp_beta[k] + lam * beta[k] #regularization here temp_beta= temp_beta / (1.0*n) beta = beta - alpha*temp_beta graph of the losses graph of training accuracy graph of testing accuracy Can someone please tell me why this is happening? L2 value=0.1
Great question. I dug a lot through PyTorch documentation and found the answer. The answer is very tricky. Basically there are two ways to calculate regularization. (For a summary, jump to the last section.) PyTorch uses the first type (in which the regularization factor is not divided by the batch size). Here's a sample code which demonstrates that: import torch import torch.nn as nn import torch.nn.functional as F import numpy as np import torch.optim as optim class model(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(1, 1) self.linear.weight.data.fill_(1.0) self.linear.bias.data.fill_(1.0) def forward(self, x): return self.linear(x) model = model() optimizer = optim.SGD(model.parameters(), lr=0.1, weight_decay=1.0) input = torch.tensor([[2], [4]], dtype=torch.float32) target = torch.tensor([[7], [11]], dtype=torch.float32) optimizer.zero_grad() pred = model(input) loss = F.mse_loss(pred, target) print(f'input: {input[0].data, input[1].data}') print(f'prediction: {pred[0].data, pred[1].data}') print(f'target: {target[0].data, target[1].data}') print(f'\nMSEloss: {loss.item()}\n') loss.backward() print('Before updation:') print('--------------------------------------------------------------------------') print(f'weight [data, gradient]: {model.linear.weight.data, model.linear.weight.grad}') print(f'bias [data, gradient]: {model.linear.bias.data, model.linear.bias.grad}') print('--------------------------------------------------------------------------') optimizer.step() print('After updation:') print('--------------------------------------------------------------------------') print(f'weight [data]: {model.linear.weight.data}') print(f'bias [data]: {model.linear.bias.data}') print('--------------------------------------------------------------------------') which outputs: input: (tensor([2.]), tensor([4.])) prediction: (tensor([3.]), tensor([5.])) target: (tensor([7.]), tensor([11.])) MSEloss: 26.0 Before updation: -------------------------------------------------------------------------- weight [data, gradient]: (tensor([[1.]]), tensor([[-32.]])) bias [data, gradient]: (tensor([1.]), tensor([-10.])) -------------------------------------------------------------------------- After updation: -------------------------------------------------------------------------- weight [data]: tensor([[4.1000]]) bias [data]: tensor([1.9000]) -------------------------------------------------------------------------- Here m = batch size = 2, lr = alpha = 0.1, lambda = weight_decay = 1. Now consider the tensor weight, which has value = 1 and grad = -32 case 1 (type 1 regularization): weight = weight - lr * (grad + weight_decay * weight) weight = 1 - 0.1 * (-32 + 1 * 1) weight = 4.1 case 2 (type 2 regularization): weight = weight - lr * (grad + (weight_decay / batch_size) * weight) weight = 1 - 0.1 * (-32 + (1/2) * 1) weight = 4.15 From the output we can see that the updated weight = 4.1000. This confirms that PyTorch uses type 1 regularization. So, finally, in your code you are following type 2 regularization. Just change the last lines to this: # for k in range(0, len(beta)): # temp_beta[k] = temp_beta[k] + lam * beta[k] #regularization here temp_beta= temp_beta / (1.0*n) beta = beta - alpha*(temp_beta + lam * beta) Also, PyTorch loss functions don't include the regularization term (it is implemented inside the optimizers), so also remove the regularization terms inside your custom cost function. In summary: PyTorch uses this regularization: gradient = gradient + weight_decay * weight, which is equivalent to minimizing Loss + (lambda/2) * sum(weight**2), with no division of the regularization term by the batch size. Regularization is implemented inside Optimizers (weight_decay parameter).
PyTorch loss functions don't include the regularization term. The bias is also regularized if regularization is used. To use regularization, try: torch.optim.OptimizerName(model.parameters(), lr, weight_decay=lam) (where OptimizerName is e.g. SGD or Adam, and lam holds your lambda value, since lambda itself is a reserved word in Python).
https://stackoverflow.com/questions/63502430/
Different output from Libtorch C++ and pytorch
I'm using the same traced model in pytorch and libtorch but I'm getting different outputs. Python Code: import cv2 import numpy as np import torch import torchvision from torchvision import transforms as trans # device for pytorch device = torch.device('cuda:0') torch.set_default_tensor_type('torch.cuda.FloatTensor') model = torch.jit.load("traced_facelearner_model_new.pt") model.eval() # read the example image used for tracing image=cv2.imread("videos/example.jpg") test_transform = trans.Compose([ trans.ToTensor(), trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) resized_image = cv2.resize(image, (112, 112)) tens = test_transform(resized_image).to(device).unsqueeze(0) output = model(tens) print(output) C++ Code: #include <iostream> #include <algorithm> #include <opencv2/opencv.hpp> #include <torch/script.h> int main() { try { torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt"); model.to(torch::kCUDA); model.eval(); cv::Mat visibleFrame = cv::imread("example.jpg"); cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112)); at::Tensor tensor_image = torch::from_blob(visibleFrame.data, { 1, visibleFrame.rows, visibleFrame.cols, 3 }, at::kByte); tensor_image = tensor_image.permute({ 0, 3, 1, 2 }); tensor_image = tensor_image.to(at::kFloat); tensor_image[0][0] = tensor_image[0][0].sub(0.5).div(0.5); tensor_image[0][1] = tensor_image[0][1].sub(0.5).div(0.5); tensor_image[0][2] = tensor_image[0][2].sub(0.5).div(0.5); tensor_image = tensor_image.to(torch::kCUDA); std::vector<torch::jit::IValue> input; input.emplace_back(tensor_image); // Execute the model and turn its output into a tensor. auto output = model.forward(input).toTensor(); output = output.to(torch::kCPU); std::cout << "Embds: " << output << std::endl; std::cout << "Done!\n"; } catch (std::exception e) { std::cout << "exception" << e.what() << std::endl; } } The model gives (1x512) size output tensor as shown below. Python output tensor([[-1.6270e+00, -7.8417e-02, -3.4403e-01, -1.5171e+00, -1.3259e+00, -1.1877e+00, -2.0234e-01, -1.0677e+00, 8.8365e-01, 7.2514e-01, 2.3642e+00, -1.4473e+00, -1.6696e+00, -1.2191e+00, 6.7770e-01, ... -7.1650e-01, 1.7661e-01]], device=‘cuda:0’, grad_fn=) C++ output Embds: Columns 1 to 8 -84.6285 -14.7203 17.7419 47.0915 31.8170 57.6813 3.6089 -38.0543 Columns 9 to 16 3.3444 -95.5730 90.3788 -10.8355 2.8831 -14.3861 0.8706 -60.7844 ... Columns 505 to 512 36.8830 -31.1061 51.6818 8.2866 1.7214 -2.9263 -37.4330 48.5854 [ CPUFloatType{1,512} ] Using Pytorch 1.6.0 Libtorch 1.6.0 Visual studio 2019 Windows 10 Cuda 10.1
Before the final normalization, you need to scale your input to the range 0-1 and then carry on the normalization you are doing. Converting to float and then dividing by 255 should get you there. Here is the snippet I wrote; there might be some syntax errors, but those should be visible. Try this: #include <iostream> #include <algorithm> #include <opencv2/opencv.hpp> #include <torch/script.h> int main() { try { torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt"); model.to(torch::kCUDA); cv::Mat visibleFrame = cv::imread("example.jpg"); cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112)); at::Tensor tensor_image = torch::from_blob(visibleFrame.data, { visibleFrame.rows, visibleFrame.cols, 3 }, at::kByte); tensor_image = tensor_image.to(at::kFloat).div(255).unsqueeze(0); tensor_image = tensor_image.permute({ 0, 3, 1, 2 }); tensor_image.sub_(0.5).div_(0.5); tensor_image = tensor_image.to(torch::kCUDA); // Execute the model and turn its output into a tensor. auto output = model.forward({tensor_image}).toTensor(); output = output.cpu(); std::cout << "Embds: " << output << std::endl; std::cout << "Done!\n"; } catch (std::exception e) { std::cout << "exception" << e.what() << std::endl; } } I don't have access to a system to run this, so if you face any issues, comment below.
https://stackoverflow.com/questions/63502473/
Pytorch is really slow and uses a lot of GPU memory when used in Starlette with WEB_CONCURRENCY > 1
I am trying to build an API that uses a Pytorch model. However, as soon as I increase WEB_CONCURRENCY to something above 1, it creates substantially more threads than expected and slows down by a lot, even when sending a single request. Example code: api.sh export WEB_CONCURRENCY=2 python api.py api.py from starlette.applications import Starlette from starlette.responses import UJSONResponse from starlette.middleware.gzip import GZipMiddleware from mymodel import Model model = Model() app = Starlette(debug=False) app.add_middleware(GZipMiddleware, minimum_size=1000) @app.route('/process', methods=['GET', 'POST', 'HEAD']) async def add_styles(request): if request.method == 'GET': params = request.query_params elif request.method == 'POST': params = await request.json() elif request.method == 'HEAD': return UJSONResponse([], headers=response_header) print('===Request body===') print(params) model_output = model(params.get('data', [])) # It is very simplified. Inside there are # many things that are happening, which # involve file reading/writing # and spawning processes with `popen` that # do even more processing. But I don't # think that should be an issue here. return model_output if __name__ == '__main__': uvicorn.run('api:app', host='0.0.0.0', port=int(os.environ.get('PORT', 8080))) When WEB_CONCURRENCY=1 in api.sh, there is only 1 python process seen when nvidia-smi is ran and model uses 1.2GB or VRAM. Request takes ~0.7s When WEB_CONCURRENCY=2 in api.sh, there can be upwards of 8 python processes seen in nvidia-smi and they will use upwards of ~8GB of VRAM. Then one single request can take up to 3s, if you're lucky and don't get an out of memory error. I am using Python3.8 Why isn't Pytorch using the expected VRAM of 2.4GB when WEB_CONCURRENCY=2? And why is it slowing down so much?
If anyone else stumbles upon this issue, just use gunicorn. It uses separate threads/processes, so there's no internal conflict going on. So instead of running it with: python api.py, just run with: gunicorn -w 2 api:app -k uvicorn.workers.UvicornWorker
https://stackoverflow.com/questions/63504148/
Pytorch, get rid of a for loop when adding permutation of one vector to entries of a matrix?
I'm trying to implement this paper, and am stuck on this simple step. Although this is to do with attention, the thing I'm stuck on is just how to implement a permutation of a vector added to a matrix without using for loops. The attention scores have a learned bias vector added to them; the theory is that it encodes the relative position (j-i) of the two tokens the score represents, so alpha is a T x T matrix, T depends on the batch being forwarded, and B is a learned bias vector whose length has to be fixed and as large as 2T. My current implementation, which I believe does what the paper suggests, is: def __init__(...): ... self.bias = torch.nn.Parameter(torch.randn(config.n),requires_grad = True) stdv = 1. / math.sqrt(self.bias.data.size(0)) self.bias.data.uniform_(-stdv, stdv) def forward(..) ... #n = 201 (2* max_seq_len + 1) B_matrix = torch.zeros(self.T, self.T) # 60 x 60 for i in range(self.T): B_matrix[i] = self.bias[torch.arange(start=n//2-i, end=n//2-i+T)] attention_scores = attention_scores + B_matrix.unsqueeze(0) # 64 x 60 x 60 ... This is the only relevant part: B_matrix = torch.zeros(self.T, self.T) # 60 x 60 for i in range(self.T): B_matrix[i] = self.bias[torch.arange(start=n//2-i, end=n//2-i+T)] Basically I'm trying not to use a for loop to go over each row, because I know this must be really inefficient and costly when this model is very large. I'm doing an explicit for loop over each row to get a permutation of the learned bias vector. Can anyone help me out with a better way, through smart broadcasting perhaps? After thinking about it, I don't need to instantiate a zero matrix, but still can't get rid of the for loop, and can't use gather as the B_matrix is a different size than a tiled b vector. functor = lambda i : bias[torch.arange(start=n//2-i, end=n//2-i+T)] B_matrix = torch.stack([functor(i) for i in torch.arange(T)])
I couldn't figure out what n was supposed to be in your code but I think the following example using torch.meshgrid provides what you're looking for. Supposing n, m = 10, 20 # arbitrary a = torch.randn(n, m) b = torch.randn(n + m) then for i in range(n): for j in range(m): a[i, j] = a[i, j] + b[n - i + j] is equivalent to ii, jj = torch.meshgrid(torch.arange(n), torch.arange(m)) a = a + b[n - ii + jj] though the latter is an out-of-place operation, which is usually a good thing. If you actually wanted an in-place operation then replace a = with a[...] =. Note that this is an example of integer array indexing where we index b using a tensor that is the same shape as a.
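Applied to the bias construction from the question (assuming n, T, and self.bias as defined there, and noting that the loop sets B_matrix[i][j] = self.bias[n//2 - i + j]), this becomes:

import torch

ii, jj = torch.meshgrid(torch.arange(T), torch.arange(T))
B_matrix = self.bias[n//2 - ii + jj]  # shape (T, T), no Python loop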
https://stackoverflow.com/questions/63506321/
How to avoid recalculating a function when we need to backpropagate through it twice?
In PyTorch, I want to do the following calculation: l1 = f(x.detach(), y) l1.backward(retain_graph=True) l2 = -1*f(x, y.detach()) l2.backward() where f is some function, and x and y are tensors that require gradient. Notice that x and y may both be the results of previous calculations which utilize shared parameters (for example, maybe x=g(z) and y=g(w) where g is an nn.Module). The issue is that l1 and l2 are both numerically identical, up to the minus sign, and it seems wasteful to repeat the calculation f(x,y) twice. It would be nicer to be able to calculate it once, and apply backward twice on the result. Is there any way of doing this? One possibility is to manually call autograd.grad and update the w.grad field of each nn.Parameter w. But I'm wondering if there is a more direct and clean way to do this, using the backward function.
I took this answer from here. We can calculate f(x,y) once, without detaching either x or y, if we ensure that we multiply the gradient flowing through x by -1. This can be done using register_hook: x.register_hook(lambda t: -t) l = f(x,y) l.backward() Here is code demonstrating that this works: import torch lin = torch.nn.Linear(1, 1, bias=False) lin.weight.data[:] = 1.0 a = torch.tensor([1.0]) b = torch.tensor([2.0]) loss_func = lambda x, y: (x - y).abs() # option 1: this is the inefficient option, presented in the original question lin.zero_grad() x = lin(a) y = lin(b) loss1 = loss_func(x.detach(), y) loss1.backward(retain_graph=True) loss2 = -1 * loss_func(x, y.detach()) # second invocation of `loss_func` - not efficient! loss2.backward() print(lin.weight.grad) # option 2: this is the efficient method, suggested in this answer. lin.zero_grad() x = lin(a) y = lin(b) x.register_hook(lambda t: -t) loss = loss_func(x, y) # only one invocation of `loss_func` - more efficient! loss.backward() print(lin.weight.grad) # the output of this is identical to the previous print, which confirms the method # option 3 - this should not be equivalent to the previous options, used just for comparison lin.zero_grad() x = lin(a) y = lin(b) loss = loss_func(x, y) loss.backward() print(lin.weight.grad)
https://stackoverflow.com/questions/63506411/
Mask certain indices for every entry in a batch, when using torch.max()
I am incrementally sampling a batch of size torch.Size([n, 8]). I also have a list valid_indices of length n which contains tuples of indices that are valid for each entry in the batch. For instance valid_indices[0] may look like this: (0,1,3,4,5,7), which suggests that indices 2 and 6 should be excluded from the first entry in the batch along dim 1. In particular, I need to exclude these values when I use torch.max(batch, dim=1, keepdim=True). Indices to be excluded (if any) may differ from entry to entry within the batch. Any ideas? Thanks in advance.
I assume that you are getting the good old IndexError: too many indices for tensor of dimension 1 error when you use your tuple indices directly on the tensor. At least that was the error that I was able to reproduce when I execute the following line t[0][valid_idx0] Where t is a random tensor with size (10,8) and valid_idx0 is a tuple with 4 elements. However, same line works just fine when you convert your tuple to a list as following t[0][list(valid_idx0)] >>> tensor([0.1847, 0.1028, 0.7130, 0.5093]) But when it comes to applying these indices to 2D tensors, things get a bit different, since we need to preserve the structure of our tensor for batch processing. Therefore, it would be reasonable to convert our indices to mask arrays. Let's say we have a list of tuples valid_indices at hand. First thing will be converting it to a list of lists. valid_idx_list = [list(tup) for tup in valid_indices] Second thing will be converting them to mask arrays. masks = np.zeros((t.size())) for i, indices in enumerate(valid_idx_list): masks[i][indices] = 1 Done. Now we can apply our mask and use the torch.max on the masked tensor. torch.max(t*masks) Kindly see the colab notebook that I've used to reproduce the problem. https://colab.research.google.com/drive/1BhKKgxk3gRwUjM8ilmiqgFvo0sfXMGiK?usp=sharing
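An alternative sketch that swaps in masked_fill with -inf instead of a multiplicative zero mask; it stays correct when the tensor contains negative values and works directly with torch.max(..., dim=1, keepdim=True). The tensor and index tuples below are made-up placeholders:

import torch

t = torch.randn(3, 8)
valid_indices = [(0, 1, 3, 4, 5, 7), (0, 2, 6), (1, 5)]

mask = torch.zeros_like(t, dtype=torch.bool)
for i, idx in enumerate(valid_indices):
    mask[i, list(idx)] = True

# invalid positions become -inf, so they can never win the max
masked = t.masked_fill(~mask, float('-inf'))
values, indices = torch.max(masked, dim=1, keepdim=True)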
https://stackoverflow.com/questions/63507027/
How to manually implement padding for pytorch convolutions
I'm trying to port some pytorch code to tensorflow 2.0 and am having difficulty figuring out how to translate the convolution functions between the two. The way both libraries deal with padding is the sticking point. Basically, I'd like to understand how I can manually produce the padding that pytorch does under the hood so that I can translate that to tensorflow. The code below works if I don't do any padding, but I can't figure out how to make the two implementations match once any padding is added. output_padding = SOME NUMBER padding = SOME OTHER NUMBER strides = 128 tensor = np.random.rand(2, 258, 249) filters = np.random.rand(258, 1, 256) out_torch = F.conv_transpose1d( torch.from_numpy(tensor).float(), torch.from_numpy(filters).float(), stride=strides, padding=padding, output_padding=output_padding) def pytorch_transpose_conv1d(inputs, filters, strides, padding, output_padding): N, L_in = inputs.shape[0], inputs.shape[2] out_channels, kernel_size = filters.shape[1], filters.shape[2] time_out = (L_in - 1) * strides - 2 * padding + (kernel_size - 1) + output_padding + 1 padW = (kernel_size - 1) - padding # HOW DO I PAD HERE TO GET THE SAME OUTPUT AS IN PYTORCH inputs = tf.pad(inputs, [(?, ?), (?, ?), (?, ?)]) return tf.nn.conv1d_transpose( inputs, tf.transpose(filters, perm=(2, 1, 0)), output_shape=(N, out_channels, time_out), strides=strides, padding="VALID", data_format="NCW") out_tf = pytorch_transpose_conv1d(tensor, filters, strides, padding, output_padding) assert np.allclose(out_tf.numpy(), out_torch.numpy())
Padding To translate the convolution and transpose convolution functions (with padding padding) between the Pytorch and Tensorflow we need to understand first F.pad() and tf.pad() functions. torch.nn.functional.pad(input, padding_size, mode='constant', value=0): padding size: The padding size by which to pad some dimensions of input are described starting from the last dimension and moving forward. to pad only the last dimension of the input tensor, then pad has the form (padding_left, padding_right) to pad the last 3 dimensions, (padding_left, padding_right, padding_top, padding_bottom, padding_front,padding_back) tensorflow.pad(input, padding_size, mode='CONSTANT',name=None,constant_values=0) padding_size: is an integer tensor with shape [n, 2], where n is the rank of the tensor. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of tensor in that dimension, and paddings[D, 1] indicates how many values to add after the contents of tensor in that dimension. Here's table representing F.pad and tf.pad equivalents along with output tensor for the input tensor [[[1, 1], [1, 1]]] which is of shape (1, 2, 2) Padding in Convolution Let's now move to PyTorch padding in Convolution layers F.conv1d(input, ..., padding, ...): padding controls the amount of implicit paddings on both sides for padding number of points. padding=(size) applies F.pad(input, [size, size]) i.e padding last dimension with (size, size) equivalent to tf.pad(input, [[0, 0], [0, 0], [size, size]]) F.conv2d(input, ..., padding, ...): padding=(size) applies F.pad(input, [size, size, size, size]) i.e padding last 2 dimensions with (size, size) equivalent to tf.pad(input, [[0, 0], [size, size], [size, size]]) padding=(size1, size2) applies F.pad(input, [size2, size2, size1, size1]) which is equivalent to tf.pad(input, [[0, 0], [size1, size1], [size2, size2]]) Padding in Transpose Convolution PyTorch padding in Transpose Convolution layers F.conv_transpose1d(input, ..., padding, output_padding, ...): dilation * (kernel_size - 1) - padding padding will be added to both sides of each dimension in the input. Padding in transposed convolutions can be seen as allocating fake outputs that will be removed output_padding controls the additional size added to one side of the output shape Check this to understand what exactly happens during transpose convolution in pytorch. 
Here's the formula to calculate the output size of a transpose convolution: output_size = (input_size - 1)*stride + (kernel_size - 1) + 1 + output_padding - 2*padding Codes Transpose Convolution import torch import torch.nn as nn import torch.nn.functional as F import tensorflow as tf import numpy as np # to stop tf checkfailed error not relevant to actual code import os os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "1" def tconv(tensor, filters, output_padding=0, padding=0, strides=1): ''' tensor : input tensor of shape (batch_size, channels, W) i.e. (NCW) filters : input kernel of shape (in_ch, out_ch, kernel_size) output_padding : single number, must be smaller than either stride or dilation padding : single number, should be less than or equal to ((valid output size + output padding) // 2) strides : single number ''' bs, in_ch, W = tensor.shape in_ch, out_ch, k_sz = filters.shape out_torch = F.conv_transpose1d(torch.from_numpy(tensor).float(), torch.from_numpy(filters).float(), stride=strides, padding=padding, output_padding=output_padding) out_torch = out_torch.numpy() # output_size = (input_size - 1)*stride + (kernel_size - 1) + 1 + output_padding - 2*padding # valid out size -> padding=0, output_padding=0 # -> valid_out_size = (input_size - 1)*stride + (kernel_size - 1) + 1 out_size = (W - 1)*strides + (k_sz - 1) + 1 # input shape -> (batch_size, W, in_ch) and filters shape -> (kernel_size, out_ch, in_ch) for tf conv valid_tf = tf.nn.conv1d_transpose(np.transpose(tensor, axes=(0, 2, 1)), np.transpose(filters, axes=(2, 1, 0)), output_shape=(bs, out_size, out_ch), strides=strides, padding='VALID', data_format='NWC') # output padding tf_outpad = tf.pad(valid_tf, [[0, 0], [0, output_padding], [0, 0]]) # NWC to NCW tf_outpad = np.transpose(tf_outpad, (0, 2, 1)) # padding -> input, begin, shape -> remove `padding` elements on both sides out_tf = tf.slice(tf_outpad, [0, 0, padding], [bs, out_ch, tf_outpad.shape[2]-2*padding]) out_tf = np.array(out_tf) print('output size(tf, torch):', out_tf.shape, out_torch.shape) # print('out_torch:\n', out_torch) # print('out_tf:\n', out_tf) print('outputs are close:', np.allclose(out_tf, out_torch)) tensor = np.random.rand(2, 1, 7) filters = np.random.rand(1, 2, 3) tconv(tensor, filters, output_padding=2, padding=5, strides=3) Results >>> tensor = np.random.rand(2, 258, 249) >>> filters = np.random.rand(258, 1, 7) >>> tconv(tensor, filters, output_padding=4, padding=9, strides=6) output size(tf, torch): (2, 1, 1481) (2, 1, 1481) outputs are close: True Some useful links: pytorch 'SAME' convolution How pytorch transpose conv works
https://stackoverflow.com/questions/63508818/
Gensim's word2vec has a loss of 0 from epoch 1?
I am using the Word2vec module of the Gensim library to train a word embedding; the dataset is 400k sentences with 100k unique words (it's not English). I'm using this code to monitor and calculate the loss: class MonitorCallback(CallbackAny2Vec): def __init__(self, test_words): self._test_words = test_words def on_epoch_end(self, model): print("Model loss:", model.get_latest_training_loss()) # print loss for word in self._test_words: # show wv logic changes print(model.wv.most_similar(word)) monitor = MonitorCallback(["MyWord"]) # monitor with demo words w2v_model = gensim.models.word2vec.Word2Vec(size=W2V_SIZE, window=W2V_WINDOW, min_count=W2V_MIN_COUNT , callbacks=[monitor]) w2v_model.build_vocab(tokenized_corpus) words = w2v_model.wv.vocab.keys() vocab_size = len(words) print("Vocab size", vocab_size) print("[*] Training...") # Train Word Embeddings w2v_model.train(tokenized_corpus, total_examples=len(tokenized_corpus), epochs=W2V_EPOCH) The problem is that from epoch 1 the loss is 0 and the vectors of the monitored words don't change at all! [*] Training... Model loss: 0.0 Model loss: 0.0 Model loss: 0.0 Model loss: 0.0 So what is the problem here? Is this normal? The tokenized corpus is a list of lists that are something like tokenized_corpus[0] = [ "word1" , "word2" , ...] I googled and it seems like some of the old versions of gensim had problems with calculating the loss function, but they are from almost a year ago and it seems like it should be fixed by now. I tried the code provided in the answer of this question as well but still the loss is 0: Loss does not decrease during training (Word2Vec, Gensim) EDIT1: after adding compute_loss=True, the loss shows up, but it keeps going higher and higher, and the top similar words and their similarity don't change at all: Model loss: 2187903.5 Model loss: 3245492.0 Model loss: 4103624.5 Model loss: 4798541.0 Model loss: 5413940.0 Model loss: 5993822.5 Model loss: 6532631.0 Model loss: 7048384.5 Model loss: 7547147.0
The top issue with your code is that you haven't used the Word2Vec initialization parameter necessary to toggle loss-tracking on: compute_loss=True (See the 'parameters' section of https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec ) Even with that fix, the loss-reporting is still quite buggy (as of gensim-3.8.3 & this writing in August 2020): it's not the per-epoch total or per-example average one might expect, but a running tally across the whole training run. (So if you need a per-epoch figure, as a workaround, your callback should remember the last value and compute the delta, or reset the internal counter to 0.0, at each epoch's end.) It definitely loses precision in larger training runs, eventually becoming useless. (This may not be an issue for you.) It might lose some tallies due to multithreaded value-overwriting. (This may not be a practical issue for you, depending on why you're consulting the loss value.)
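A hedged sketch of that delta workaround (get_latest_training_loss() returns a running total, so differencing it at each epoch's end recovers a per-epoch number):

from gensim.models.callbacks import CallbackAny2Vec

class EpochLossLogger(CallbackAny2Vec):
    def __init__(self):
        self.previous_cumulative_loss = 0.0

    def on_epoch_end(self, model):
        cumulative = model.get_latest_training_loss()  # cumulative, not per-epoch
        print("Epoch loss:", cumulative - self.previous_cumulative_loss)
        self.previous_cumulative_loss = cumulative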
https://stackoverflow.com/questions/63509864/
Resnet implementation: forward() takes 1 positional argument but 2 were given
I wrote this code and when I run it I get the following error: forward() takes 1 positional argument but 2 were given. As far as I know, I am passing only one argument to forward(). ResNet is a basic residual block class ResNet(nn.Module): def __init__(self, in_channels, mid_channels, mid2_channels ,out_channels): super().__init__() self.conv1 = nn.Conv2d(in_channels,mid_channels,kernel_size = 3, stride = 1, padding = 1) self.conv1_bn = nn.BatchNorm2d(mid_channels) self.conv2 = nn.Conv2d(mid_channels,mid2_channels,kernel_size = 3, stride = 1, padding = 1) self.conv2_bn = nn.BatchNorm2d(mid2_channels) self.conv3 = nn.Conv2d(mid2_channels,out_channels,kernel_size = 3, stride = 1, padding = 1) self.conv3_bn = nn.BatchNorm2d(out_channels) if (in_channels != out_channels): self.conv_shortcut = nn.Conv2d(in_channels, out_channels, kernel_size = 1, stride = 1, padding = 0 ) def forward(self, X): X_shortcut = X X = F.relu(self.conv1(X)) X = self.conv1_bn(X) X = F.relu(self.conv2(X)) X = self.conv2_bn(X) X = F.relu(self.conv2(X)) X = self.conv2_bn(X) if (in_channels == out_channels): X = self.conv3(X) + X_shortcut else: X = self.conv3(X) + self.conv_shortcut(X_shortcut) X = self.conv3_bn(F.relu(x)) return X This the method for generating a model using the given layers. class TotalNet(nn.Module): def __init__(self, Layers): super().__init__() self.hidden = nn.ModuleList() self.hidden.append(nn.BatchNorm2d(1)) for i in range(0,len(Layers)-1,3): in_channels, mid_channels, mid2_channels, out_channels = Layers[i:(i+4)] self.hidden.append(ResNet(in_channels, mid_channels, mid2_channels, out_channels)) self.hidden.append(nn.Flatten()) def forward(self, X): X = self.hidden(X) return X the following is how I am calling the function: test = TotalNet([9,2,9,9,9,9,9,9,9,9]) a = torch.rand((1,9,9), dtype = torch.float32) test(a)
I realized that I was passing X to the nn.ModuleList itself. This is incorrect: a ModuleList is just a container and is not callable, so the right way is to apply X to each element of the nn.ModuleList in turn, updating the value of X as you go. In other words, the forward function of TotalNet should be the following:

def forward(self, X):
    for operation in self.hidden:
        X = operation(X)
    return X
https://stackoverflow.com/questions/63512921/
Pytorch: after embedding layer, Unable to get repr for
I am new to PyTorch and trying to reproduce the project: https://github.com/eXascaleInfolab/ActiveLink However, an error occurs in the feedforward step, which has been bothering me for days. Here is part of the code (for the complete code of the model, please see https://github.com/eXascaleInfolab/ActiveLink/blob/master/models.py):

def forward(self, e1, rel, batch_size=None, weights=None):
    ......
    e1_embedded = self.emb_e(e1).view(-1, 1, 10, 20)
    rel_embedded = self.emb_rel(rel).view(-1, 1, 10, 20)
    stacked_inputs = torch.cat([e1_embedded, rel_embedded], 2)  # out: (128L, 1L, 20L, 20L)

That gives me the error (I am using a GPU):

THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=196 error=710 : device-side assert triggered
Traceback (most recent call last):
  File "main.py", line 147, in <module>
    main()
  File "main.py", line 136, in main
    model = run_meta_incremental(config, model, train_batcher, test_rank_batcher)
  File "/home/yonghui/yt/meta_incr_training.py", line 158, in run_meta_incremental
    g = run_inner(config, model, task)
  File "/home/yonghui/yt/meta_incr_training.py", line 120, in run_inner
    pred = model.forward(e1, rel)
  File "/home/yonghui/yt/models.py", line 136, in forward
    stacked_inputs = torch.cat([e1_embedded, rel_embedded], 2)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:196
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [189,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.

(The same `srcIndex < srcSelectDimSize` assertion failure is then repeated for threads [1,0,0] through [7,0,0].)

I used the debugger in an attempt to find out where it goes wrong: before e1 and rel are embedded, they are both int64 tensors with the shape torch.Size([128, 1]). e1 can be embedded as normal, converting into torch.float32 and torch.Size([128, 1, 10, 20]). However, after rel passes the embedding layer emb_rel, the debugger shows all tensors as Unable to get repr for <class 'torch.Tensor'>. What's going on? How can I fix that? Thank you for any possible help!!
This issue was solved by using the debugger and checking the input tensors. Inspecting the tensors right before the embedding showed that some index values exceeded the valid range of the embedding table. This is an easy off-by-one to hit, since indices start from 0: the largest valid index is num_embeddings - 1.
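A quick sanity check along those lines (a sketch; emb_e and emb_rel are the embedding layers from the model above). Running the model on the CPU also tends to surface the bad index as a plain IndexError instead of an opaque device-side assert:

# valid indices for nn.Embedding run from 0 to num_embeddings - 1
assert e1.min() >= 0 and e1.max() < self.emb_e.num_embeddings
assert rel.min() >= 0 and rel.max() < self.emb_rel.num_embeddings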
https://stackoverflow.com/questions/63518688/
Value error while converting tensor to numpy array
I'm using the following code to extract features from images.

def ext():
    imgPathList = glob.glob("images/"+"*.JPG")
    features = []
    for i, path in enumerate(tqdm(imgPathList)):
        feature = get_vector(path)
        feature = feature[0] / np.linalg.norm(feature[0])
        features.append(feature)
        paths.append(path)
    features = np.array(features, dtype=np.float32)
    return features, paths

However, the above code throws the following error:

features = np.array(features, dtype=np.float32)
ValueError: only one element tensors can be converted to Python scalars

How can I fix it?
The error occurs because your features variable is a Python list of multi-element tensors, and np.array can't convert such a list directly (only one-element tensors can be converted to Python scalars). One workaround is to use torch's concatenation function, torch.cat() (read here), instead of collecting the tensors with append. I tried to replicate the solution with a toy example, assuming that each feature is a 2D tensor:

import torch
import numpy as np

for i in range(1, 11):
    alpha = torch.rand(2, 2)
    if i < 2:
        beta = alpha  # the first sample initializes beta
    else:
        beta = torch.cat((beta, alpha), 0)  # concatenate later samples along dim 0

features = np.array(beta, dtype=np.float32)
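An alternative sketch: if every feature tensor has the same shape, you can also keep the append-based list and stack once at the end.

import torch
import numpy as np

features = [torch.rand(4) for _ in range(10)]  # stand-in for the extracted features
features = np.array(torch.stack(features), dtype=np.float32)  # or torch.stack(features).numpy()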
https://stackoverflow.com/questions/63522070/
PyTorch find keypoints: output nodes to be in a range and negative loss
I am a beginner in deep learning. I am using this dataset and I want my network to detect keypoints of a hand. How can I make my output layer's nodes be in the range [-1, 1] (the range of normalized 2D points)? Another problem is that when I train for more than 1 epoch the loss gets negative values. Criterion: torch.nn.MultiLabelSoftMarginLoss() and optimizer: torch.optim.SGD(). Here you can find my repo.

net = nnModel.Net()
net = net.to(device)
criterion = nn.MultiLabelSoftMarginLoss()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decay_rate)
You can use the Tanh activation function, since the image of the function lies in [-1, 1]. The problem of predicting keypoints in an image is more of a regression problem than a classification problem (especially if you're making your model outputs and targets fall within a continuous interval). This also explains the negative loss values: MultiLabelSoftMarginLoss is a classification criterion that expects targets in {0, 1}, so feeding it continuous coordinates in [-1, 1] can drive the loss negative. Therefore, I suggest you use the L2 loss. In fact, it could be a good exercise for you to determine, using cross-validation, which of the loss functions appropriate for regression problems provides the lowest expected generalization error. There are several such functions available in PyTorch.
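A minimal sketch of both suggestions (the head class and its parameter names are illustrative, not part of your repo): a Tanh on the final layer squashes every prediction into [-1, 1], and nn.MSELoss is PyTorch's L2 loss, which is never negative.

import torch
import torch.nn as nn

class KeypointHead(nn.Module):
    def __init__(self, in_features, num_keypoints):
        super().__init__()
        self.fc = nn.Linear(in_features, num_keypoints * 2)  # an (x, y) pair per keypoint
        self.tanh = nn.Tanh()

    def forward(self, x):
        return self.tanh(self.fc(x))  # every output lands in [-1, 1]

criterion = nn.MSELoss()  # L2 loss; always >= 0, so no more negative loss values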
https://stackoverflow.com/questions/63525242/
How to change the learning rate in PyTorch (1.6)
I am using PyTorch and I want to change the learning rate after some epochs. However, the code that is provided in most documentation, which is:

optimizer = torch.optim.Adam([
    dict(params=model.parameters(), lr=learning_rate),
])
# This line specifically
optimizer.params_group[0]['lr'] = learning_rate

does not work. Actually PyCharm hints at it: Unresolved attribute reference 'params_group' for class 'Adam' As a result, the error thrown is: AttributeError: 'Adam' object has no attribute 'params_group' How should one manually change the learning rate in PyTorch (1.6)?
The immediate cause of the AttributeError is a typo: the attribute is called optimizer.param_groups, not optimizer.params_group. That said, editing param_groups by hand is not the mechanism PyTorch intends for this; you should be using a scheduler from torch.optim.lr_scheduler instead. Read more about this in another Stack Overflow answer here.

from torch.optim.lr_scheduler import StepLR

# step learning rate: decay the lr by gamma every step_size epochs
scheduler = StepLR(optimizer, step_size=5, gamma=0.1)
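A usage sketch (num_epochs and train_loader are assumed to exist in your training script): the scheduler is stepped once per epoch, after that epoch's optimizer updates.

for epoch in range(num_epochs):
    for batch in train_loader:
        ...                       # forward, backward
        optimizer.step()
    scheduler.step()              # lr *= 0.1 every 5 epochs, per the config above
    print(scheduler.get_last_lr())  # inspect the lr currently in effect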
https://stackoverflow.com/questions/63528631/
Access 3D tensor (image) using a 2d tensor of indices
With the following 3D tensor representing an image, img.shape=[H,W,F], and a tensor representing the indices into that img, indices.shape=[N,2], e.g. indices = [[0,1],[5,3],...], I would like to create a new tensor of shape new.shape=[N,F] where new[k] == img[indices[k][0],indices[k][1]]. Currently, to solve this I flatten both tensors:

idx_flattened = idx_flattened[:,0] * (idx_flattened[:,1].max()+1) + idx_flattened[:,1]
img = img.reshape(-1,F)
new = img[idx_flattened]

But I'm certain there is a better way:) Here's a full minimal example:

img = torch.arange(8*10*3).reshape(8,10,3)
indices = torch.tensor([[0,0],[3,0],[1,2]])
new = img[indices]  # <- This does not work
new = [[ 0, 1, 2],[ 90, 91, 92],[ 36, 37, 38]]

Ideas?
Indexing with the two index columns does the job (integer "advanced" indexing rather than slicing):

img[indices[:,0], indices[:,1]]

tensor([[ 0,  1,  2],
        [90, 91, 92],
        [36, 37, 38]])
https://stackoverflow.com/questions/63533073/
how to see where exactly torch is installed pip vs conda torch installation
On my machine I can't "pip install torch": I get the infamous "single source externally managed" error. I could not fix it, so I used "conda install torch" from Anaconda. Still, checking the version is easy: torch.__version__. But how do I see where it is installed, the home dir of torch? And suppose I had both torches installed, via pip and via conda: how do I know which one is used in a project?

import torch
print(torch.__version__)
You can get the location of the torch module that is imported in your script:

import torch
print(torch.__file__)
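If you suspect both a pip and a conda copy exist, the path printed below tells you which one actually wins: it is decided by sys.path order, and torch.__file__ points into the winning installation (a sketch):

import sys
import torch

print(torch.__file__)  # e.g. .../envs/myenv/lib/python3.x/site-packages/torch/__init__.py
print(sys.prefix)      # root of the environment the interpreter is running from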
https://stackoverflow.com/questions/63533237/
Torchtext 0.7 shows Field is being deprecated. What is the alternative?
Looks like the previous paradigm of declaring Fields, Examples and using BucketIterator is deprecated and will move to legacy in 0.8. However, I don't seem to be able to find an example of the new paradigm for custom datasets (as in, not the ones included in torch.datasets) that doesn't use Field. Can anyone point me at an up-to-date example? Reference for deprecation: https://github.com/pytorch/text/releases
It took me a little while to find the solution myself. The new paradigm is like so for prebuilt datasets: from torchtext.experimental.datasets import AG_NEWS train, test = AG_NEWS(ngrams=3) or like so for custom built datasets: from torch.utils.data import DataLoader def collate_fn(batch): texts, labels = [], [] for label, txt in batch: texts.append(txt) labels.append(label) return texts, labels dataloader = DataLoader(train, batch_size=8, collate_fn=collate_fn) for idx, (texts, labels) in enumerate(dataloader): print(idx, texts, labels) I've copied the examples from the Source
https://stackoverflow.com/questions/63539809/
Why is the output of torch.lstsq drastically different than np.linalg.lstsq?
Pytorch provides a lstsq function, but the result it returns drastically differs from the numpy's version. Here is an example input and both of their results: import numpy as np import torch a = torch.tensor([[1., 1, 1], [2, 3, 4], [3, 5, 2], [4, 2, 5], [5, 4, 3]]) b = torch.tensor([[-10., -3], [ 12, 14], [ 14, 12], [ 16, 16], [ 18, 16]]) a1 = a.clone().numpy() b1 = b.clone().numpy() x, r = torch.lstsq(b, a) x1, res, r1, s = np.linalg.lstsq(b1, a1) print(f'torch_x: {x}') print(f'torch_r: {r}\n') print(f'np_x: {x1}') print(f'np_res: {res}') print(f'np_r1(rank): {r1}') print(f'np_s: {s}') Output: torch_x: tensor([[ 2.0000, 1.0000], [ 1.0000, 1.0000], [ 1.0000, 2.0000], [10.9635, 4.8501], [ 8.9332, 5.2418]]) torch_r: tensor([[-7.4162, -6.7420, -6.7420], [ 0.2376, -3.0896, 0.1471], [ 0.3565, 0.5272, 3.0861], [ 0.4753, -0.3952, -0.4312], [ 0.5941, -0.1411, 0.2681]]) np_x: [[-0.11452514 -0.10474861 -0.28631285] [ 0.35913807 0.33719075 0.54070234]] np_res: [ 5.4269753 10.197526 1.4185953] np_r1(rank): 2 np_s: [43.057705 5.199417] What am I missing here?
torch.lstsq(a, b) solves min_X ||bX - a||_2, while np.linalg.lstsq(a, b) solves min_X ||aX - b||_2. So swap the order of the parameters passed. Here's a sample:

import numpy as np
import torch

a = torch.tensor([[1., 1, 1],
                  [2, 3, 4],
                  [3, 5, 2],
                  [4, 2, 5],
                  [5, 4, 3]])
b = torch.tensor([[-10., -3],
                  [ 12, 14],
                  [ 14, 12],
                  [ 16, 16],
                  [ 18, 16]])
a1 = a.clone().numpy()
b1 = b.clone().numpy()

x, _ = torch.lstsq(a, b)
x1, res, r1, s = np.linalg.lstsq(b1, a1)

print(f'torch_x: {x[:b.shape[1]]}')
print(f'np_x: {x1}')

Results:

torch_x: tensor([[-0.1145, -0.1047, -0.2863],
        [ 0.3591,  0.3372,  0.5407]])
np_x: [[-0.11452514 -0.10474861 -0.28631285]
 [ 0.35913807  0.33719075  0.54070234]]

link to torch doc link to numpy doc Also note that the rank returned by numpy.linalg.lstsq is the rank of the first parameter. To get the rank in pytorch, use the torch.matrix_rank() function.
https://stackoverflow.com/questions/63543236/
In PyTorch is GPU memory freed when a GPU tensor is assigned a new value?
When a Cuda variable in PyTorch is assigned a new value, it becomes a CPU variable again (As illustrated by the code below). In this case, is the memory held by the variable on the GPU previously is freed automatically? import torch t1 = torch.empty(4,5) if torch.cuda.is_available(): t1 = t1.cuda() print(t1.is_cuda) t1 = torch.empty(4,5) print(t1.is_cuda) The output of the above code is: True False
In python an object is freed as soon as there are no remaining references to it. Since you assign t1 to reference a brand new tensor there are no more references to the original GPU tensor so that tensor is freed. That said, when PyTorch is instructed to free a GPU tensor it tends to cache that GPU memory for a while since it's usually the case that if we used GPU memory once we will probably want to use some again, and GPU memory allocation is relatively slow. If you want to force this cache of GPU memory to be cleared you can use torch.cuda.empty_cache. Using this won't directly increase the GPU memory available within a single PyTorch instance since PyTorch will call it automatically in an attempt to save you from an out of GPU memory error. To reiterate, the GPU tensor doesn't actually "become" a CPU tensor. In python, variable names are references to objects. What your code really does is assign t1 to refer to a new CPU tensor object. Internally, python counts the number of references for each object. When that count goes to zero that object is immediately freed. Caveat (Reference Cycles): Reference counting fails in the case of unreachable reference cycles. Unreachable reference cycles occur when objects contain references to one-another but no reference to any object in the cycle is reachable. To deal with this python employs a garbage collection module which executes intermittently. This module uses more sophisticated algorithms to detect and free objects that are part of unreachable reference cycles. In these cases, the memory is not necessarily freed when a cycle becomes unreachable and will instead be freed once the internal garbage collector is activated. This occurs automatically and relatively unpredictably. If desired, the garbage collector may be queried, configured, or manually executed using python's built in gc garbage collection interface. Based on the previous discussion if you really want to ensure your unreachable GPU memory is freed in PyTorch (even in the case of unreachable reference cycles) you could use import gc gc.collect() torch.cuda.empty_cache()
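A small sketch of watching this caching behavior directly, using PyTorch's own counters (on recent versions; older releases call the second counter memory_cached):

import torch

t = torch.empty(4, 5, device='cuda')
print(torch.cuda.memory_allocated())  # bytes held by live tensors
print(torch.cuda.memory_reserved())   # bytes held by the caching allocator
del t
torch.cuda.empty_cache()
print(torch.cuda.memory_reserved())   # cache handed back to the driver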
https://stackoverflow.com/questions/63546131/
How to extract feature vector from single image in Pytorch?
I am attempting to understand more about computer vision models, and I'm trying to do some exploring of how they work. In an attempt to understand how to interpret feature vectors more I'm trying to use Pytorch to extract a feature vector. Below is my code that I've pieced together from various places. import torch import torch.nn as nn import torchvision.models as models import torchvision.transforms as transforms from torch.autograd import Variable from PIL import Image img=Image.open("Documents/01235.png") # Load the pretrained model model = models.resnet18(pretrained=True) # Use the model object to select the desired layer layer = model._modules.get('avgpool') # Set model to evaluation mode model.eval() transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize(256), torchvision.transforms.CenterCrop(224), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) def get_vector(image_name): # Load the image with Pillow library img = Image.open("Documents/Documents/Driven Data Competitions/Hateful Memes Identification/data/01235.png") # Create a PyTorch Variable with the transformed image t_img = transforms(img) # Create a vector of zeros that will hold our feature vector # The 'avgpool' layer has an output size of 512 my_embedding = torch.zeros(512) # Define a function that will copy the output of a layer def copy_data(m, i, o): my_embedding.copy_(o.data) # Attach that function to our selected layer h = layer.register_forward_hook(copy_data) # Run the model on our transformed image model(t_img) # Detach our copy function from the layer h.remove() # Return the feature vector return my_embedding pic_vector = get_vector(img) When I do this I get the following error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead I'm sure this is an elementary error, but I can't seem to figure out how to fix this. It was my impression that the "totensor" transformation would make my data 4-d, but it seems it's either not working correctly or I'm misunderstanding it. Appreciate any help or resources I can use to learn more about this!
All the default nn.Modules in pytorch expect an additional batch dimension. If the input to a module is shape (B, ...) then the output will be (B, ...) as well (though the later dimensions may change depending on the layer). This behavior allows efficient inference on batches of B inputs simultaneously. To make your code conform you can just unsqueeze an additional unitary dimension onto the front of t_img tensor before sending it into your model to make it a (1, ...) tensor. You will also need to flatten the output of layer before storing it if you want to copy it into your one-dimensional my_embedding tensor. A couple of other things: You should infer within a torch.no_grad() context to avoid computing gradients since you won't be needing them (note that model.eval() just changes the behavior of certain layers like dropout and batch normalization, it doesn't disable construction of the computation graph, but torch.no_grad() does). I assume this is just a copy paste issue but transforms is the name of an imported module as well as a global variable. o.data is just returning a copy of o. In the old Variable interface (circa PyTorch 0.3.1 and earlier) this used to be necessary, but the Variable interface was deprecated way back in PyTorch 0.4.0 and no longer does anything useful; now its use just creates confusion. Unfortunately, many tutorials are still being written using this old and unnecessary interface. Updated code is then as follows: import torch import torchvision import torchvision.models as models from PIL import Image img = Image.open("Documents/01235.png") # Load the pretrained model model = models.resnet18(pretrained=True) # Use the model object to select the desired layer layer = model._modules.get('avgpool') # Set model to evaluation mode model.eval() transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize(256), torchvision.transforms.CenterCrop(224), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) def get_vector(image): # Create a PyTorch tensor with the transformed image t_img = transforms(image) # Create a vector of zeros that will hold our feature vector # The 'avgpool' layer has an output size of 512 my_embedding = torch.zeros(512) # Define a function that will copy the output of a layer def copy_data(m, i, o): my_embedding.copy_(o.flatten()) # <-- flatten # Attach that function to our selected layer h = layer.register_forward_hook(copy_data) # Run the model on our transformed image with torch.no_grad(): # <-- no_grad context model(t_img.unsqueeze(0)) # <-- unsqueeze # Detach our copy function from the layer h.remove() # Return the feature vector return my_embedding pic_vector = get_vector(img)
https://stackoverflow.com/questions/63552044/
Efficient way to compute Jacobian x Jacobian.T
Assume J is the Jacobian of some function f with respect to some parameters. Are there efficient ways (in PyTorch or perhaps Jax) to have a function that takes two inputs (x1 and x2) and computes J(x1)*J(x2).transpose() without instantiating the entire J matrices in memory? I have come across something like jvp(f, input, v=vjp(f, input)) but don't quite understand it and not sure is what I want.
In JAX, you can compute a full jacobian matrix using jax.jacfwd or jax.jacrev, or you can compute a jacobian operator and its transpose using jax.jvp and jax.vjp. So, for example, say you had a function Rᴺ → Rᴹ that looks something like this: import jax.numpy as jnp import numpy as np np.random.seed(1701) N, M = 10000, 5 f_mat = np.array(np.random.rand(M, N)) def f(x): return jnp.sqrt(f_mat @ x / N) Given two vectors x1 and x2, you can evaluate the Jacobian matrix at each using jax.jacfwd import jax x1 = np.array(np.random.rand(N)) x2 = np.array(np.random.rand(N)) J1 = jax.jacfwd(f)(x1) J2 = jax.jacfwd(f)(x2) print(J1 @ J2.T) # [[3.3123782e-05 2.5001222e-05 2.4946943e-05 2.5180108e-05 2.4940484e-05] # [2.5084497e-05 3.3233835e-05 2.4956826e-05 2.5108084e-05 2.5048916e-05] # [2.4969209e-05 2.4896170e-05 3.3232871e-05 2.5006309e-05 2.4947023e-05] # [2.5102483e-05 2.4947576e-05 2.4906987e-05 3.3327218e-05 2.4958186e-05] # [2.4981882e-05 2.5007204e-05 2.4966144e-05 2.5076926e-05 3.3595043e-05]] But, as you note, along the way to computing this 5x5 result, we instantiate two 5x10,000 matrices. How might we get around this? The answer is in jax.jvp and jax.vjp. These have somewhat unintuitive call signatures for the purposes of your question, as they are designed primarily for use in forward-mode and reverse-mode automatic differentiation. But broadly, you can think of them as a way to compute J @ v and J.T @ v for a vector v, without having to actually compute J explicitly. For example, you can use jax.jvp to compute the effect of J1 operating on a vector, without actually computing J1: J1_op = lambda v: jax.jvp(f, (x1,), (v,))[1] vN = np.random.rand(N) np.allclose(J1 @ vN, J1_op(vN)) # True Similarly, you can use jax.vjp to compute the effect of J2.T operating on a vector, without actually computing J2: J2T_op = lambda v: jax.vjp(f, x2)[1](v)[0] vM = np.random.rand(M) np.allclose(J2.T @ vM, J2T_op(vM)) # True Putting these together and operating on an identity matrix gives you the full jacobian matrix product that you're after: def direct(f, x1, x2): J1 = jax.jacfwd(f)(x1) J2 = jax.jacfwd(f)(x2) return J1 @ J2.T def indirect(f, x1, x2, M): J1J2T_op = lambda v: jax.jvp(f, (x1,), jax.vjp(f, x2)[1](v))[1] return jax.vmap(J1J2T_op)(jnp.eye(M)).T np.allclose(direct(f, x1, x2), indirect(f, x1, x2, M)) # True Along with the memory savings, this indirect method is also a fair bit faster than the direct method, depending on the sizes of the jacobians involved: %time direct(f, x1, x2) # CPU times: user 1.43 s, sys: 14.9 ms, total: 1.44 s # Wall time: 886 ms %time indirect(f, x1, x2, M) # CPU times: user 311 ms, sys: 0 ns, total: 311 ms # Wall time: 158 ms
https://stackoverflow.com/questions/63559139/
Difference between model.train(False) and required_grad = False
I use the PyTorch library and I'm looking for a way to freeze the weights and biases in my model. I saw these 2 options:

model.train(False)

for param in model.parameters():
    param.requires_grad = False

What is the difference (if there is any) and which one should I use to freeze the current state of my model?
They are very different. Independently of the backprop process, some layers have different behaviors when you are training or evaluating a model. In pytorch, there are only 2 of them: BatchNorm (which I think stops updating its running mean and deviation when you are evaluating) and Dropout (which only drops values in training mode). So model.train() and model.eval() (equivalently model.train(False)) just set a boolean flag to tell these 2 layers to "freeze yourselves". Note that these two layers do not have any parameters that are affected by the backward operation (the batchnorm buffer tensors are changed during the forward pass, I think). On the other hand, setting all your parameters to requires_grad=False just tells pytorch to stop recording gradients for backprop. That will not affect the BatchNorm and the Dropout layers. How to freeze your model kinda depends on your use-case, but I'd say the easiest way is to use torch.jit.trace. This will create a frozen copy of your model, exactly in the state it was when you called trace. Your model remains unaffected. Usually, you would call

model.eval()
traced_model = torch.jit.trace(model, input)
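A short sketch combining the two mechanisms when you want a model fully "frozen" for inference:

model.eval()                      # switch BatchNorm/Dropout to eval behavior
for param in model.parameters():
    param.requires_grad = False   # stop recording gradients for the weights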
https://stackoverflow.com/questions/63564098/
How to automatically disable register_hook when model is in eval() phase in PyTorch?
I need to update the grads of an intermediate tensor variable using the register_hook method. Since the variable isn't a leaf variable, I have to call the retain_grad() method on it, after which I can use the register_hook method to alter the grads.

score.retain_grad()
h = score.register_hook(lambda grad: grad * torch.FloatTensor(...))

This works perfectly fine during the training (model.train()) phase. However, it gives an error during the evaluation phase (model.eval()). The error:

File "/home/envs/darthvader/lib/python3.6/site-packages/torch/tensor.py", line 198, in register_hook
    raise RuntimeError("cannot register a hook on a tensor that "
RuntimeError: cannot register a hook on a tensor that doesn't require gradient

How can the model automatically disable the register_hook method when it is in the eval() phase?
Removing score.retain_grad() and guarding register_hook with if condition (if score.requires_grad) does the trick. if score.requires_grad: h = score.register_hook(lambda grad: grad * torch.FloatTensor(...)) Originally answered by Alban D here.
https://stackoverflow.com/questions/63564508/
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3
I am doing the following operation,

energy.masked_fill(mask == 0, float("-1e20"))

and my Python traceback is below:

File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
File "seq_sum.py", line 418, in forward
    enc_src = self.encoder(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
File "seq_sum.py", line 71, in forward
    src = layer(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
File "seq_sum.py", line 110, in forward
    _src, _ = self.self_attention(src, src, src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
File "seq_sum.py", line 191, in forward
    energy = energy.masked_fill(mask == 0, float("-1e20"))
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3

This is my attention layer's code:

Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)
#Q = [batch size, query len, hid dim]
#K = [batch size, key len, hid dim]
#V = [batch size, value len, hid dim]
# Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
energy = torch.matmul(Q, K.transpose(1,0)) / self.scale

I am following the github code below to do my seq-to-seq operation: seq2seq pytorch. The actual testing code is available at the location below: code to test a seq of 1024 to 1024 output, 2nd example tried here. I have commented out pos_embedding due to a CUDA error with large indices (RuntimeError: cuda runtime error (59)
I took a look at your code (which, by the way, didn't run with seq_len = 10) and the problem is that you hard-coded the batch_size to be equal to 1 (line 143 in your code). It looks like the example you are trying to run the model on has batch_size = 2. Just uncomment the previous line where you wrote batch_size = query.shape[0] and everything runs fine.
https://stackoverflow.com/questions/63566232/
Multiple-Input Linear Regression works for one dataset but not another
I am trying to run a logistic regression algorithm using Pytorch (employing a neural network with one hidden layer), and I stumbled upon a problem. I am running the same algorithm on two different sets of input data. My inputs are two-dimensional. The first set of data I created myself, while the second set comes from real-world data which I got from a csv file, converted to lists and then to pytorch tensors. For the first input data, the tensor I insert into the logistic regression code is: First tensor has shape torch.Size([1000, 2]) and it's given by: T1= tensor([[ 0.6258, 0.9683], [-0.0833, 0.5691], [-0.4657, -0.8722], ..., [ 0.5868, -1.0565], [ 0.1611, -0.1716], [-0.1515, -0.8408]]) While the tensor for the second set of data is: Second tensor has shape torch.Size([1064, 2]) and it's given by: T2= tensor([[918.0600, 74.8220], [917.3477, 71.4038], [923.0400, 60.6380], ..., [916.6000, 71.0960], [912.6000, 58.4060], [921.5300, 77.7020]]) Now, for the first set of data, I get the following result: So as you can see, the algorithm does a fairly good job with the red/blue decision region, as most of the red points end up in the red region (and the same with the blue ones). Now, for the second set of data, I get the following: As you can see, it paints the whole region in red. I tried playing around with the number of neurons in my hidden layer, the learning rate, the number of epochs and some other things, but nothing seems to work. I then thought that it might have to do with the x-axis data having much larger values than the y-axis ones, so I normalized them by dividing each with their mean, but this did not solve the problem. The algorithm is the same, but it just doesn't work for this set of data. I was wondering if somebody who's more of an expert than me could have a hunch as to what might be going wrong here.
When you don't normalize the data the model can be easily fooled. Your train set is composed of 1000 examples and, by the looks of it, the majority of the values are in the range [-1, 1]. When you test your model, however, you feed it much, much higher numbers. The solution is normalization. When you normalize your input, your model is free to learn the true distribution function of the data rather than "memorize" numbers. You should normalize both the training set and the test set. Then your values will range between 0 and 1 and your network will have a much better chance of picking up the desired correlation.

import torch
import torch.nn.functional as f

train = torch.rand((4, 2))*100

tensor([[36.9267,  7.3306],
        [63.5794, 42.9968],
        [61.3316, 67.6096],
        [88.4657, 11.7254]])

f.normalize(train, p=2, dim=1)

tensor([[0.9809, 0.1947],
        [0.8284, 0.5602],
        [0.6719, 0.7407],
        [0.9913, 0.1314]])
https://stackoverflow.com/questions/63570251/
Speeding up the training - RNN with LSTM in PyTorch
I am trying to train an LSTM for energy demand forecasting but it takes too long. I do not understand why, because the model looks "simple" and there is not much data. Might it be because I am not using the DataLoader? How could I use it with an RNN, since I have a sequence? The complete code is in Colab: https://colab.research.google.com/drive/130rG8_j1Lf8RQoVRrfXCeo5h_CcC5NU6?usp=sharing The interesting part to be improved may be this:

for seq, y_train in train_data:
    optimizer.zero_grad()
    model.hidden = (torch.zeros(1,1,model.hidden_size),
                    torch.zeros(1,1,model.hidden_size))
    y_pred = model(seq)
    loss = criterion(y_pred, y_train)
    loss.backward()
    optimizer.step()

Thanks in advance to anyone helping me.
Should you want to speed up the process of training, more data must be provided to the model per training step. In my case I was providing just 1 sample per step. The simplest way to solve this is using the DataLoader. A complete Colab with the solution can be found at this link: https://colab.research.google.com/drive/1QgtshCFETZ9oTvIYWy1Bdre-614kbwRX?usp=sharing

# This is to create the Dataset
from torch.utils.data import Dataset, DataLoader

class DemandDataset(Dataset):
    def __init__(self, X_train, y_train):
        self.X_train = X_train
        self.y_train = y_train

    def __len__(self):
        return len(self.y_train)

    def __getitem__(self, idx):
        data = self.X_train[idx]
        labels = self.y_train[idx]
        return data, labels

# This is to convert from typical RNN sequences
sq_0 = []
y_0 = []
for seq, y_train in train_data:
    sq_0.append(seq)
    y_0.append(y_train)

dataset = DemandDataset(sq_0, y_0)
dataloader = DataLoader(dataset, batch_size=20)

epochs = 30
t = 50

for i in range(epochs):
    print("New epoch")
    for data, label in dataloader:
        optimizer.zero_grad()
        model.hidden = (torch.zeros(1,1,model.hidden_size),
                        torch.zeros(1,1,model.hidden_size))
        y_pred = model(data)  # feed the current batch, not the leftover `seq` from above
        loss = criterion(y_pred, label)
        loss.backward()
        optimizer.step()

    print(f'Epoch: {i+1:2} Loss: {loss.item():10.8f}')

preds = train_set[-window_size:].tolist()
for f in range(t):
    seq = torch.FloatTensor(preds[-window_size:])
    with torch.no_grad():
        model.hidden = (torch.zeros(1,1,model.hidden_size),
                        torch.zeros(1,1,model.hidden_size))
        preds.append(model(seq).item())

loss = criterion(torch.tensor(preds[-window_size:]), y[-t:])
https://stackoverflow.com/questions/63576308/
How does Pytorch build the computation graph
Here is example pytorch code from the website: class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 3x3 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x In the forward function, we simply apply a series of transformations to x, but never explicitly define which objects are part of that transformation. Yet when computing the gradient and updating the weights, Pytorch 'magically' knows which weights to update and how the gradient should be calculated. How does this process work? Is there code analysis going on, or something else that I am missing?
Yes, there is implicit analysis on the forward pass. Examine the result tensor: there is an attribute like grad_fn=<CatBackward>, which is a link that allows you to unroll the whole computation graph. The graph is built during the real forward computation process, no matter how you defined your network module, object-oriented with 'nn' or the 'functional' way. You can exploit this graph for net analysis, as torchviz does here: https://github.com/szagoruyko/pytorchviz/blob/master/torchviz/dot.py
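A tiny sketch of walking that implicit graph by hand, starting from a result tensor:

import torch

x = torch.ones(2, requires_grad=True)
y = (x * 3).sum()
print(y.grad_fn)                 # <SumBackward0 object at ...>
print(y.grad_fn.next_functions)  # edges back to the ops that produced y's inputs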
https://stackoverflow.com/questions/63580218/
type 'NoneType' is not iterable error when training pytorch model with ray tunes Trainable API
I wrote a simple pytorch script to train MNIST and it worked fine. I reimplemented my script to be with Trainable class: import numpy as np import torch import torch.optim as optim import torch.nn as nn from torchvision import datasets, transforms from torch.utils.data import DataLoader import torch.nn.functional as F import ray from ray import tune # Change these values if you want the training to run quicker or slower. EPOCH_SIZE = 512 TEST_SIZE = 256 class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() # In this example, we don't change the model architecture # due to simplicity. self.conv1 = nn.Conv2d(1, 3, kernel_size=3) self.fc = nn.Linear(192, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 3)) x = x.view(-1, 192) x = self.fc(x) return F.log_softmax(x, dim=1) class AlexTrainer(tune.Trainable): def setup(self, config): self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Data Setup mnist_transforms = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) self.train_loader = DataLoader( datasets.MNIST("~/data", train=True, download=True, transform=mnist_transforms), batch_size=64, shuffle=True) self.test_loader = DataLoader( datasets.MNIST("~/data", train=False, transform=mnist_transforms), batch_size=64, shuffle=True) self.model = ConvNet() self.optimizer = optim.SGD(self.model.parameters(), lr=config["lr"], momentum=config["momentum"]) print('finished setup') def step(self): self.train() print("after train") acc = self.test() return {'acc': acc} def train(self): print("in train") self.model.train() for batch_idx, (data, target) in enumerate(self.train_loader): # We set this just for the example to run quickly. if batch_idx * len(data) > EPOCH_SIZE: return data, target = data.to(self.device), target.to(self.device) self.optimizer.zero_grad() print(type(data)) output = self.model(data) loss = F.nll_loss(output, target) loss.backward() self.optimizer.step() def test(self): self.model.eval() correct = 0 total = 0 with torch.no_grad(): for batch_idx, (data, target) in enumerate(self.test_loader): # We set this just for the example to run quickly. if batch_idx * len(data) > TEST_SIZE: break data, target = data.to(self.device), target.to(self.device) outputs = self.model(data) _, predicted = torch.max(outputs.data, 1) total += target.size(0) correct += (predicted == target).sum().item() return correct / total if __name__ == '__main__': ray.init() analysis = tune.run( AlexTrainer, stop={"training_iteration": 2}, # verbose=1, config={ "lr": tune.sample_from(lambda spec: 10 ** (-10 * np.random.rand())), "momentum": tune.uniform(0.1, 0.9) } ) How ever when i try to run, this time it fails: Traceback (most recent call last): File "/hdd/raytune/venv/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 473, in _process_trial is_duplicate = RESULT_DUPLICATE in result TypeError: argument of type 'NoneType' is not iterable Traceback (most recent call last): File "/hdd/raytune/test_3.py", line 116, in <module> "momentum": tune.uniform(0.1, 0.9) File "/hdd/raytune/venv/lib/python3.6/site-packages/ray/tune/tune.py", line 356, in run raise TuneError("Trials did not complete", incomplete_trials) ray.tune.error.TuneError: ('Trials did not complete', [AlexTrainer_9b3cd_00000]) What could be the reason for this?
It's because you're actually overriding the existing train method of Trainable. Tune's trial runner calls Trainable.train internally and expects it to return a result dict; your override returns None, which is what produces the 'NoneType' is not iterable error. If you rename your train method to something else it should work as expected.
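A sketch of that fix applied to the class from the question (bodies elided; only the rename matters, and train_model is just an illustrative name):

class AlexTrainer(tune.Trainable):
    def setup(self, config):
        ...  # unchanged

    def step(self):
        self.train_model()              # renamed call
        return {"acc": self.test()}

    def train_model(self):              # was `train`, which shadowed Trainable.train
        ...  # unchanged training loop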
https://stackoverflow.com/questions/63582370/
Why do we call .detach() before calling .numpy() on a Pytorch Tensor?
It has been firmly established that my_tensor.detach().numpy() is the correct way to get a numpy array from a torch tensor. I'm trying to get a better understanding of why. In the accepted answer to the question just linked, Blupon states that: You need to convert your tensor to another tensor that isn't requiring a gradient in addition to its actual value definition. In the first discussion he links to, albanD states: This is expected behavior because moving to numpy will break the graph and so no gradient will be computed. If you don’t actually need gradients, then you can explicitly .detach() the Tensor that requires grad to get a tensor with the same content that does not require grad. This other Tensor can then be converted to a numpy array. In the second discussion he links to, apaszke writes: Variable's can’t be transformed to numpy, because they’re wrappers around tensors that save the operation history, and numpy doesn’t have such objects. You can retrieve a tensor held by the Variable, using the .data attribute. Then, this should work: var.data.numpy(). I have studied the internal workings of PyTorch's autodifferentiation library, and I'm still confused by these answers. Why does it break the graph to to move to numpy? Is it because any operations on the numpy array will not be tracked in the autodiff graph? What is a Variable? How does it relate to a tensor? I feel that a thorough high-quality Stack-Overflow answer that explains the reason for this to new users of PyTorch who don't yet understand autodifferentiation is called for here. In particular, I think it would be helpful to illustrate the graph through a figure and show how the disconnection occurs in this example: import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.numpy() print(tensor1) print(type(tensor1))
I think the most crucial point to understand here is the difference between a torch.tensor and an np.ndarray: While both objects are used to store n-dimensional matrices (aka "Tensors"), torch.tensors have an additional "layer", which stores the computational graph leading to the associated n-dimensional matrix. So, if you are only interested in an efficient and easy way to perform mathematical operations on matrices, np.ndarray or torch.tensor can be used interchangeably. However, torch.tensors are designed to be used in the context of gradient descent optimization, and therefore they hold not only a tensor with numeric values, but (and more importantly) the computational graph leading to these values. This computational graph is then used (via the chain rule of derivatives) to compute the derivative of the loss function w.r.t each of the independent variables used to compute the loss. As mentioned before, an np.ndarray object does not have this extra "computational graph" layer and therefore, when converting a torch.tensor to an np.ndarray, you must explicitly remove the computational graph of the tensor using the detach() command. Computational Graph From your comments it seems like this concept is a bit vague. I'll try and illustrate it with a simple example. Consider a simple function of two (vector) variables, x and w:

x = torch.rand(4, requires_grad=True)
w = torch.rand(4, requires_grad=True)

y = x @ w  # inner-product of x and w
z = y ** 2  # square the inner product

If we are only interested in the value of z, we need not worry about any graphs; we simply move forward from the inputs, x and w, to compute y and then z. However, what would happen if we do not care so much about the value of z, but rather want to ask the question "what is the w that minimizes z for a given x"? To answer that question, we need to compute the derivative of z w.r.t w. How can we do that? Using the chain rule we know that dz/dw = dz/dy * dy/dw. That is, to compute the gradient of z w.r.t w we need to move backward from z back to w, computing the gradient of the operation at each step as we trace back our steps from z to w. This "path" we trace back is the computational graph of z and it tells us how to compute the derivative of z w.r.t the inputs leading to z:

z.backward()  # ask pytorch to trace back the computation of z

We can now inspect the gradient of z w.r.t w:

w.grad  # the resulting gradient of z w.r.t w
tensor([0.8010, 1.9746, 1.5904, 1.0408])

Note that this is exactly equal to

2*y*x
tensor([0.8010, 1.9746, 1.5904, 1.0408], grad_fn=<MulBackward0>)

since dz/dy = 2*y and dy/dw = x. Each tensor along the path stores its "contribution" to the computation:

z
tensor(1.4061, grad_fn=<PowBackward0>)

And

y
tensor(1.1858, grad_fn=<DotBackward>)

As you can see, y and z store not only the "forward" value of <x, w> or y**2 but also the computational graph -- the grad_fn that is needed to compute the derivatives (using the chain rule) when tracing back the gradients from z (output) to w (inputs). These grad_fn are essential components of torch.tensors and without them one cannot compute derivatives of complicated functions. However, np.ndarrays do not have this capability at all and they do not have this information. Please see this answer for more information on tracing back the derivative using the backward() function. Since both np.ndarray and torch.tensor have a common "layer" storing an n-d array of numbers, pytorch uses the same storage to save memory: numpy() → numpy.ndarray Returns self tensor as a NumPy ndarray.
This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa. The other direction works in the same way as well: torch.from_numpy(ndarray) → Tensor Creates a Tensor from a numpy.ndarray. The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. Thus, when creating an np.array from torch.tensor or vice versa, both object reference the same underlying storage in memory. Since np.ndarray does not store/represent the computational graph associated with the array, this graph should be explicitly removed using detach() when sharing both numpy and torch wish to reference the same tensor. Note, that if you wish, for some reason, to use pytorch only for mathematical operations without back-propagation, you can use with torch.no_grad() context manager, in which case computational graphs are not created and torch.tensors and np.ndarrays can be used interchangeably. with torch.no_grad(): x_t = torch.rand(3,4) y_np = np.ones((4, 2), dtype=np.float32) x_t @ torch.from_numpy(y_np) # dot product in torch np.dot(x_t.numpy(), y_np) # the same dot product in numpy
https://stackoverflow.com/questions/63582590/
What is the Pytorch sub for this tensor flow code?
In converting this code from TensorFlow to PyTorch, I am having trouble:

datagen = ImageDataGenerator(
    shear_range=0.2,
    zoom_range=0.2,
)

def read_img(filename, size, path):
    img = image.load_img(os.path.join(path, filename), target_size=size)
    # convert image to array
    img = img_to_array(img) / 255
    return img

and then

corona_df = final_train_data[final_train_data['Label_2_Virus_category'] == 'COVID-19']
with_corona_augmented = []

# create a function for augmentation
def augment(name):
    img = read_img(name, (255,255), train_img_dir)
    i = 0
    for batch in tqdm(datagen.flow(tf.expand_dims(img, 0), batch_size=32)):
        with_corona_augmented.append(tf.squeeze(batch).numpy())
        if i == 20:
            break
        i = i+1

# apply the function
corona_df['X_ray_image_name'].apply(augment)

I tried doing

transform = transforms.Compose([transforms.Resize(255*255)])
train_loader = torch.utils.data.DataLoader(os.path.join(train_dir,corona_df), transform=transform, batch_size=32)

def read_img(path):
    img = train_loader()
    img = np.asarray(img, dtype='int32')
    img = img/255
    return img

I tried continuing but got so confused by the errors. I welcome any feedback. Tell me if I missed something; even a small piece of advice would help, thanks!
You can create a custom dataset to read the images. If you have a directory full of images you can use the ImageFolder default dataset. Otherwise, if you have a different folder layout, you can write your own custom dataset class. You can look at this link for custom datasets. What the dataloader does is automatically get the data from your dataset, read the images according to your dataset's __getitem__ function, and apply the transformation. So you don't need anything fancy to apply augmentation.

transform = transforms.Compose([
    transforms.RandomAffine(20, shear=20, scale=(0.8, 1.2)),  # zoom_range=0.2 maps to scale in [0.8, 1.2]; scale values must be positive
    transforms.Resize((255, 255)),  # a (height, width) tuple, not 255*255
    transforms.ToTensor(),  # PIL image -> tensor, so batches can be fed to the model
])
dataset = torchvision.datasets.ImageFolder(train_img_dir, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for batch, labels in loader:  # ImageFolder yields (image, label) pairs
    output = model(batch)
https://stackoverflow.com/questions/63583442/
Feature extraction in loop seems to cause memory leak in pytorch
I have spent considerable time trying to debug some pytorch code, of which I have created a minimal example for the purpose of helping to better understand what the issue might be. I have removed all portions of the code which are unrelated to the issue, so the remaining piece of code won't make much sense from a functional standpoint, but it still displays the error I'm facing. The overall task I'm working on is in a loop: every pass of the loop computes the embedding of the image and adds it to a variable storing it. It's effectively aggregating it (not concatenating, so the size remains the same). I don't expect the number of iterations to force the datatype to overflow, and I don't see this happening here nor in my code. I have added multiple metrics to evaluate the size of the tensors I'm working with to make sure they're not growing in memory footprint, and I'm checking the overall GPU memory usage to verify the issue leading to the final RuntimeError: CUDA out of memory. My environment is as follows:

- python 3.6.2
- Pytorch 1.4.0
- Cudatoolkit 10.0
- Driver version 410.78
- GPU: Nvidia GeForce GT 1030 (2GB VRAM) (though I've replicated this experiment with the same result on a Titan RTX with 24GB, same pytorch version, cuda toolkit and driver; it only goes out of memory further in the loop)

Complete code below. I have marked 2 lines as culprits, as deleting them removes the issue, though obviously I need to find a way to execute them without having memory issues. Any help would be much appreciated! You may try with any image named "source_image.bmp" to replicate the issue.

import torch
from PIL import Image
import torchvision
from torchvision import transforms
from pynvml import nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo, nvmlInit
import sys
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'  # this is necessary on my system to allow the environment to recognize my nvidia GPU for some reason
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # to debug by having all CUDA functions executed in place

torch.set_default_tensor_type('torch.cuda.FloatTensor')

# Preprocess image
tfms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])

img = tfms(Image.open('source_image.bmp')).unsqueeze(0).cuda()

model = torchvision.models.resnet50(pretrained=True).cuda()
model.eval()  # we put the model in evaluation mode, to prevent storage of gradient which might accumulate

nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'Total available memory : {info.total / 1000000000}')

feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
orig_embedding = feature_extractor(img)

embedding_depth = 2048

mem0 = 0

embedding = torch.zeros(2048, img.shape[2], img.shape[3])  #, dtype=torch.float)

patch_size = [4,4]
patch_stride = [2,2]
patch_value = 0.0

# Here, we iterate over the patch placement, defined at the top left location
for row in range(img.shape[2]-1):
    for col in range(img.shape[3]-1):
        print("######################################################")

        ######################################################
        # Isolated line, culprit 1 of the GPU memory leak
        ######################################################
        patched_embedding = feature_extractor(img)

        delta_embedding = (patched_embedding - orig_embedding).view(-1, 1, 1)

        ######################################################
        # Isolated line, culprit 2 of the GPU memory leak
        ######################################################
        embedding[:,row:row+1,col:col+1] = torch.add(embedding[:,row:row+1,col:col+1], delta_embedding)

        print("img size:\t\t", img.element_size() * img.nelement())
        print("patched_embedding size:\t", patched_embedding.element_size() * patched_embedding.nelement())
        print("delta_embedding size:\t", delta_embedding.element_size() * delta_embedding.nelement())
        print("Embedding size:\t\t", embedding.element_size() * embedding.nelement())

        del patched_embedding, delta_embedding
        torch.cuda.empty_cache()

        info = nvmlDeviceGetMemoryInfo(h)
        print("\nMem usage increase:\t", info.used / 1000000000 - mem0)
        mem0 = info.used / 1000000000

        print(f'Free:\t\t\t {(info.total - info.used) / 1000000000}')

print("Done.")
Add this to your code as soon as you load the model:

for param in model.parameters():
    param.requires_grad = False

from https://pytorch.org/docs/stable/notes/autograd.html#excluding-subgraphs-from-backward
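Equivalently (a sketch using the names from the question's code), you can avoid building the graph in the first place by wrapping the inference calls in a no_grad context:

with torch.no_grad():
    patched_embedding = feature_extractor(img)
    delta_embedding = (patched_embedding - orig_embedding).view(-1, 1, 1)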
https://stackoverflow.com/questions/63588069/
PyTorch: How to do inference in batches (inference in parallel)
How to do inference in batches in PyTorch? How to do inference in parallel to speed up that part of the code. I've started with the standard way of doing inference: with torch.no_grad(): for inputs, labels in dataloader['predict']: inputs = inputs.to(device) output = model(inputs) output = output.to(device) And I've researched and the only mention of doing inference in parallel (in the same machine) seems to be with the library Dask: https://examples.dask.org/machine-learning/torch-prediction.html Currently attempting to understand that library and create a working example. In the meanwhile do you know of a better way?
In pytorch, input tensors always have the batch dimension first. Thus doing inference by batch is the default behavior; you just need to make the batch dimension larger than 1. For example, if your single input is [1, 1], its input tensor is [[1, 1],] with shape (1, 2). If you have two inputs [1, 1] and [2, 2], generate the input tensor as [[1, 1], [2, 2],] with shape (2, 2). This is usually done by a batch generator function such as your dataloader.
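A minimal sketch (dataset, model and device are assumed to already exist in your script): batched inference is just a matter of the DataLoader's batch_size.

import torch

loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False)
model.eval()
with torch.no_grad():
    for inputs, labels in loader:
        outputs = model(inputs.to(device))  # one forward pass scores 64 inputs at once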
https://stackoverflow.com/questions/63603692/
Torch Tensor & Input Conflicting: "Tensor Object Is Not Callable"
Due to the code "torch.tensor," I am getting the error "Tensor object is not callable" when I add "input." Does anyone know how I can fix this? import torch from torch.nn import functional as F from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') text0 = "In order to" text = tokenizer.encode("In order to") input, past = torch.tensor([text]), None logits, past = model(input, past = past) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(5) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() for i in range(5): f = ('Generated {}: {}'.format(i, best_words[i])) print(f) option = input("Pick a Option:") z = text0.append(option) print(z) Error stacktrace: TypeError Traceback (most recent call last) <ipython-input-2-82e8d88e81c1> in <module>() 25 26 ---> 27 option = input("Pick a Option:") 28 z = text0.append(option) 29 print(z) TypeError: 'Tensor' object is not callable
The problem is that you have already defined a variable with the name input which will be used instead of the input function. Just use a different name for your variable and it will work as expected. Also a python string does not have an append method. import torch from torch.nn import functional as F from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') text0 = "In order to" text = tokenizer.encode("In order to") myinput, past = torch.tensor([text]), None logits, past = model(myinput, past = past) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(5) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() for i in range(5): f = ('Generated {}: {}'.format(i, best_words[i])) print(f) option = input("Pick a Option:") z = text0 + ' ' + option print(z)
https://stackoverflow.com/questions/63608183/
Preprocessing image for TensorFlow model instead of PyTorch preprocessing
I had a PyTorch ResNet101 encoder model whose input images got this preprocessing: import torchvision as tv from PIL import Image data_transforms = tv.transforms.Compose([ tv.transforms.Resize((224, 224)), tv.transforms.ToTensor(), tv.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) img = Image.open(img_path) img = img.convert('RGB') img = data_transforms(img) img = torch.FloatTensor(img) img = img.unsqueeze(0) print(img) pytorch image tensor The shape of the input for the encoder in this case is [1, 3, 224, 224], and this picture is normalized with the mean and std of ImageNet. Now I'm exporting this model to TensorFlow, so how do I do the same image preprocessing for the TF model? I tried to do something like this: from PIL import Image img = Image.open(img_path) img = img.convert('RGB') img = tf.keras.preprocessing.image.img_to_array(img) img = tf.image.resize(img, (224, 224)) img = tf.keras.applications.resnet.preprocess_input(img)# now shape is [224, 224, 3] img = tf.reshape(img, [1, 3, 224, 224]) print(img) tensorflow image tensor but I'm sure that I did something wrong, because the torch and tf tensors look very different for one image and give completely different output results from the same encoder model. Can anyone help? What should I fix in the tf preprocessing?
This: img = Image.open(img_path) img = img.convert('RGB') could be replaced with image = tf.io.read_file(filename=filepath) image = tf.image.decode_jpeg(image, channels=3) #or decode_png Also, the TensorFlow counterpart of unsqueeze and squeeze is expand_dims: img = tf.expand_dims(img, axis=0) Everything should work well; just ensure that tf.keras.applications.resnet.preprocess_input(img) and data_transforms() yield the desired/necessary transformations. As for the photos, I am quite sure that you either missed a /255.0 in the PyTorch case or added an extra /255.0 division in the TensorFlow case. In fact, when digging deep into the Keras backend, you can see that when you call your preprocessing function, it will call this function here: def _preprocess_numpy_input(x, data_format, mode): """Preprocesses a Numpy array encoding a batch of images. Arguments: x: Input array, 3D or 4D. data_format: Data format of the image array. mode: One of "caffe", "tf" or "torch". - caffe: will convert the images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. - tf: will scale pixels between -1 and 1, sample-wise. - torch: will scale pixels between 0 and 1 and then will normalize each channel with respect to the ImageNet dataset. Returns: Preprocessed Numpy array. """ if not issubclass(x.dtype.type, np.floating): x = x.astype(backend.floatx(), copy=False) if mode == 'tf': x /= 127.5 x -= 1. return x elif mode == 'torch': x /= 255. mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] else: if data_format == 'channels_first': # 'RGB'->'BGR' if x.ndim == 3: x = x[::-1, ...] else: x = x[:, ::-1, ...] else: # 'RGB'->'BGR' x = x[..., ::-1] mean = [103.939, 116.779, 123.68] std = None # Zero-center by mean pixel if data_format == 'channels_first': if x.ndim == 3: x[0, :, :] -= mean[0] x[1, :, :] -= mean[1] x[2, :, :] -= mean[2] if std is not None: x[0, :, :] /= std[0] x[1, :, :] /= std[1] x[2, :, :] /= std[2] else: x[:, 0, :, :] -= mean[0] x[:, 1, :, :] -= mean[1] x[:, 2, :, :] -= mean[2] if std is not None: x[:, 0, :, :] /= std[0] x[:, 1, :, :] /= std[1] x[:, 2, :, :] /= std[2] else: x[..., 0] -= mean[0] x[..., 1] -= mean[1] x[..., 2] -= mean[2] if std is not None: x[..., 0] /= std[0] x[..., 1] /= std[1] x[..., 2] /= std[2] return x The default mode parameter in Keras and TensorFlow for ResNet50 preprocessing is surprisingly not tf but caffe. Therefore, the preprocessing that is done to the image follows this else branch (I am adding the else branch and the code thereafter so that you can follow the transformations and see what you are missing): else: if data_format == 'channels_first': # 'RGB'->'BGR' if x.ndim == 3: x = x[::-1, ...] else: x = x[:, ::-1, ...] else: # 'RGB'->'BGR' x = x[..., ::-1] mean = [103.939, 116.779, 123.68] std = None # Zero-center by mean pixel if data_format == 'channels_first': if x.ndim == 3: x[0, :, :] -= mean[0] x[1, :, :] -= mean[1] x[2, :, :] -= mean[2] if std is not None: x[0, :, :] /= std[0] x[1, :, :] /= std[1] x[2, :, :] /= std[2] else: x[:, 0, :, :] -= mean[0] x[:, 1, :, :] -= mean[1] x[:, 2, :, :] -= mean[2] if std is not None: x[:, 0, :, :] /= std[0] x[:, 1, :, :] /= std[1] x[:, 2, :, :] /= std[2] else: x[..., 0] -= mean[0] x[..., 1] -= mean[1] x[..., 2] -= mean[2] if std is not None: x[..., 0] /= std[0] x[..., 1] /= std[1] x[..., 2] /= std[2] return x The description is: caffe: will convert the images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling.
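A minimal sketch of preprocessing that mirrors the PyTorch pipeline directly in TensorFlow, instead of relying on preprocess_input (the mean/std values are the ImageNet constants from the question; the final transpose to channels-first is only needed if the exported model really expects NCHW input):

import tensorflow as tf

image = tf.io.read_file(img_path)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, (224, 224))
image = tf.cast(image, tf.float32) / 255.0          # same scaling as ToTensor()
mean = tf.constant([0.485, 0.456, 0.406])
std = tf.constant([0.229, 0.224, 0.225])
image = (image - mean) / std                        # same normalization as torchvision
image = tf.transpose(image, [2, 0, 1])              # HWC -> CHW, matching the torch layout
image = tf.expand_dims(image, axis=0)               # batch dimension: (1, 3, 224, 224)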
https://stackoverflow.com/questions/63610604/
Pytorch geometric: Having issues with tensor sizes
This is the first time I'm using PyTorch and PyTorch Geometric. I'm trying to create a simple graph neural network with PyTorch Geometric. I'm creating a custom dataset by following the PyTorch Geometric documentation and extending the InMemoryDataset. After that I split the dataset into training, validation and test datasets, with sizes (3496, 437, 439) respectively. These are the numbers of graphs in each dataset. Here is my simple neural network: class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = GCNConv(dataset.num_node_features, 10) self.conv2 = GCNConv(10, dataset.num_classes) def forward(self, data): x, edge_index, batch = data.x, data.edge_index, data.batch x = self.conv1(x, edge_index) x = F.relu(x) x = F.dropout(x, training=self.training) x = self.conv2(x, edge_index) return F.log_softmax(x, dim=1) I get this error while training my model, which suggests that there's some issue with my input dimensions. Maybe the reason lies in my batch sizes? RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): File "E:\Users\abc\Anaconda3\lib\site-packages\torch_scatter\scatter.py", line 22, in scatter_add size[dim] = int(index.max()) + 1 out = torch.zeros(size, dtype=src.dtype, device=src.device) return out.scatter_add_(dim, index, src) ~~~~~~~~~~~~~~~~ <--- HERE else: return out.scatter_add_(dim, index, src) RuntimeError: index 13654 is out of bounds for dimension 0 with size 678 The error happens specifically on this line of code in the neural network: x = self.conv1(x, edge_index) EDIT: Added more information about edge_index and explained in more detail the data that I'm using. Here are the shapes of the variables that I'm trying to pass x: torch.Size([678, 43]) edge_index: torch.Size([2, 668]) torch.max(edge_index): tensor(541690) torch.min(edge_index): tensor(1920) I'm using a data list which contains Data(x=node_features, edge_index=edge_index, y=labels) objects. When I split the dataset into training, validation and test datasets, I get (3496, 437, 439) graphs in each dataset respectively. Originally I tried to create one single graph from my dataset, but I'm not sure how it would work with DataLoader and minibatches. train_loader = DataLoader(train_dataset, batch_size=batch_size) val_loader = DataLoader(val_dataset, batch_size=batch_size) test_loader = DataLoader(test_dataset, batch_size=batch_size) Here's the code that generates the graph from the dataframe. I've tried to create a simple graph where there is just some number of vertices with some number of edges connecting them. I've probably overlooked something, and that's why I have this issue.
I've tried to follow the Pytorch geometric documentation when creating this graph (Pytorch Geometric: Creating your own dataset) def process(self): data_list = [] grouped = df.groupby('EntityId') for id, group in grouped: node_features = torch.tensor(group.drop(['Labels'], axis=1).values) source_nodes = group.index[1:].values target_nodes = group.index[:-1].values labels = torch.tensor(group.Labels.values) edge_index = torch.tensor([source_nodes, target_nodes]) data = Data(x=node_features, edge_index=edge_index, y=labels) data_list.append(data) if self.pre_filter is not None: data_list = [data for data in data_list if self.pre_filter(data)] if self.pre_transform is not None: data_list = [self.pre_transform(data) for data in data_list] data, slices = self.collate(data_list) torch.save((data, slices), self.processed_paths[0]) If someone could help me with the process of creating a graph on any kind of data and using it with GCNConv, I would appreciate it.
I agree with @trialNerror -- it is a data problem. Your edge_index should refer to the data nodes and its max should not be that high. Since you don't want to show us the data and ask for "creating a graph on any kind of data ", here it is. I mostly left your Net unchanged. You can play around with the constants stated to match with your data. import torch import torch.nn as nn import torch.nn.functional as F from torch_geometric.nn import GCNConv from torch_geometric.data import Data num_node_features = 100 num_classes = 2 num_nodes = 678 num_edges = 1500 num_hidden_nodes = 128 x = torch.randn((num_nodes, num_node_features), dtype=torch.float32) edge_index = torch.randint(low=0, high=num_nodes, size=(2, num_edges), dtype=torch.long) y = torch.randint(low=0, high=num_classes, size=(num_nodes,), dtype=torch.long) class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = GCNConv(num_node_features, num_hidden_nodes) self.conv2 = GCNConv(num_hidden_nodes, num_classes) def forward(self, data): x, edge_index = data.x, data.edge_index x = self.conv1(x, edge_index) x = F.relu(x) x = F.dropout(x, training=self.training) x = self.conv2(x, edge_index) return F.log_softmax(x, dim=1) data = Data(x=x, edge_index=edge_index, y=y) net = Net() optimizer = torch.optim.Adam(net.parameters(), lr=1e-2) for i in range(1000): output = net(data) loss = F.cross_entropy(output, data.y) optimizer.zero_grad() loss.backward() optimizer.step() if i % 100 == 0: print('Accuracy: ', (torch.argmax(output, dim=1)==data.y).float().mean()) Output Accuracy: tensor(0.5059) Accuracy: tensor(0.8702) Accuracy: tensor(0.9159) Accuracy: tensor(0.9233) Accuracy: tensor(0.9336) Accuracy: tensor(0.9484) Accuracy: tensor(0.9602) Accuracy: tensor(0.9676) Accuracy: tensor(0.9705) Accuracy: tensor(0.9749) (yes we can overfit to random data)
https://stackoverflow.com/questions/63610626/
I'm having a problem trying to load a PyTorch model: "Can't find Identity in module"
When trying to load a pytorch model it gives the following attribute error model = torch.load('../input/melanoma-model/melanoma_model_0.pth') model = model.to(device) model.eval() AttributeError Traceback (most recent call last) in 1 arch = EfficientNet.from_pretrained('efficientnet-b2') 2 model = Net(arch=arch) ----> 3 torch.load('../input/melanoma-model/melanoma_model_0.pth') 4 model = model.to(device) 5 model.eval() /opt/conda/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 591 return torch.jit.load(f) 592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) --> 593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 594 595 /opt/conda/lib/python3.7/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args) 771 unpickler = pickle_module.Unpickler(f, **pickle_load_args) 772 unpickler.persistent_load = persistent_load --> 773 result = unpickler.load() 774 775 deserialized_storage_keys = pickle_module.load(f, **pickle_load_args) AttributeError: Can't get attribute 'Identity' on <module 'efficientnet_pytorch.utils' from '/opt/conda/lib/python3.7/site-packages/efficientnet_pytorch/utils.py'>
First you need a model class to load the parameters from the .pth file into, and you are missing one step: model = Model() # the model class (yours probably has another name) model.load_state_dict(torch.load('../input/melanoma-model/melanoma_model_0.pth')) model = model.to(device) model.eval() There you go, I hope that solved your problem!
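One hedged caveat on top of this: the traceback suggests the .pth file was saved with torch.save(model, ...) (a full pickled model). Unpickling such a file needs every class used at save time, here efficientnet_pytorch.utils.Identity, to still exist at load time, so it may also be necessary to match the efficientnet-pytorch version that was used for saving. Saving only the parameters avoids that pitfall; a minimal sketch, reusing the Net and EfficientNet classes from the question:

import torch
from efficientnet_pytorch import EfficientNet

# prefer saving only the parameters over pickling the whole module
torch.save(model.state_dict(), 'melanoma_model_0.pth')

# to load: rebuild the architecture first, then restore the weights
arch = EfficientNet.from_pretrained('efficientnet-b2')
model = Net(arch=arch)   # Net as defined in your own training code
model.load_state_dict(torch.load('melanoma_model_0.pth'))
model.eval()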
https://stackoverflow.com/questions/63615305/
Syntax of pytorch.nn functions in neural networks: classes or functions?
I am puzzled by the syntax of the functions used in neural networks in PyTorch. Here is an example of how one can define a linear transformation layer (cf. https://pytorch.org/docs/stable/generated/torch.nn.Linear.html): m = nn.Linear(20, 30) input = torch.randn(128, 20) output = m(input) print(output.size()) torch.Size([128, 30]) Can someone explain to me where the expression nn.Linear(20,30)(input) comes from? It disturbs me a bit. Indeed, one can define a neural network class with such a constructor (for example): class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p=0): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size, bias=True) self.fc2 = nn.Linear(hidden_size, hidden_size, bias=True) self.fc3 = nn.Linear(hidden_size, hidden_size, bias=True) self.fc4 = nn.Linear(hidden_size, num_classes, bias=False) self.dropout = nn.Dropout(p=p) and I was trying to write an attribute using a numpy function, like: self.enter_reshape = np.reshape(-1, input_size * input_size) self.exit_reshape = np.reshape(input_size, num_classes / input_size) or, using the view function from pytorch: self.reshape = view(-1, self.num_flat_features()) The closest thing I know about is partial functions and closures, where one could write f(z)(x)(y). I looked into the definition of Linear, and I saw that Linear is an object, but I don't see where they redefined the __call__ magic function, which I thought would be used here when one calls the object. So basically, can someone explain what is up with such writing, and also, would it be possible to give the neural network the numpy or view functions as attributes?
torch.nn.Linear inherits from torch.nn.Module (see source code), which in turn defines the __call__ method. You can see the source code for torch.nn.Module here. This class allows users to make their own Modules by inheritance (as you did in your example) and is a base for all PyTorch-defined modules like nn.Linear (see available methods, documentation here). Its __call__ essentially calls forward, but also runs registered hooks (and registers new ones), checks torchscript, etc. (see source code here, with the relevant line here). would it be possible to give to the neural network the numpy or view functions as attributes? From the example you've given, what you are trying to do is probably a partial function (or a lambda, as in the example below) saved as an attribute (though that is pretty uncommon and, to be honest, I have never seen it), like this: import torch class MyModule(torch.nn.Module): def __init__(self, shape: int = -1): super().__init__() # required self.reshape = lambda tensor: torch.reshape(tensor, (shape,)) def forward(self, tensor): return self.reshape(tensor) module = MyModule() module(torch.randn(4, 5, 6)).shape # [120] shape You shouldn't use numpy with pytorch unless you really need it and/or there is no sensible pytorch counterpart (although you can if you convert a torch.Tensor to numpy). Also you shouldn't do anything like the code above as it's really confusing; just save plain attributes (anything like input_shape, hidden_dim, output_size) and use them in forward: class MyModule(torch.nn.Module): def __init__(self, shape: int = -1): super().__init__() # required self.shape = shape def forward(self, tensor): return torch.reshape(tensor, (self.shape,))
https://stackoverflow.com/questions/63617394/
Perform Delta Function between elements in PyTorch tensor
I have a 1 dimensional pyTorch tensor (dtype: int32) and was wondering if there was a way to perform a Dirac Delta function on the elements in this tensor, i.e: f = tensor[1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1] f_after_dirac_delta = tensor[0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1] Thanks for any help in advance! EDIT: as @GirishDattatrayHegde mentioned, the term Dirac-Delta was misleading. The correct term should have been a Kronecker-Delta. My apologies.
If I understand correctly, you want to compare successive elements of your tensor. This should work: import torch f = torch.tensor([1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1]) f_dirac = (f[1:] == f[:-1]).to(torch.long)
https://stackoverflow.com/questions/63618047/
How to rotate a Torch Tensor by a random number of degrees
As part of training a CNN, I am working with an array inputs that contains <class 'torch.Tensor'> objects. I want to rotate an individual <class 'torch.Tensor'> object by some random number of degrees x, as shown here: def rotate(inputs, x): # Rotate inputs[0] by x degrees, x can take on any value from 0 - 180 degrees How can I do this? For existing implementations, I can only find that torch has a rot90 function, but that limits me to multiples of 90 degrees, which does not help my scenario. Thanks, Vinny
To rotate a torch.Tensor you can use the scipy.ndimage.rotate function (read here); it rotates the tensor but also converts it to a numpy.ndarray, so you have to convert it back to a torch.Tensor. See this toy example. Function: from scipy import ndimage import torch def rotate(inputs, x): return torch.from_numpy(ndimage.rotate(inputs, x, reshape=False)) Detailed explanation: import torch from scipy import ndimage alpha = torch.rand(3,3) print(alpha.dtype) #torch.float32 angle_in_degrees = 45 output = ndimage.rotate(alpha, angle_in_degrees, reshape=False) print(output.dtype) #numpy array output = torch.from_numpy(output) #convert it back to a torch tensor print(output.dtype) #torch.float32 Also, you can directly transform the PIL image before converting it to a tensor, if that's a possibility. To transform a PIL image you can use PyTorch's builtin torchvision.transforms.functional.rotate (read here).
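For the "random number of degrees" part, torchvision also ships transforms.RandomRotation(degrees), which samples an angle uniformly from (-degrees, +degrees). A minimal sketch of the same idea by hand, assuming the images are still PIL images at this point in the pipeline:

import random
import torchvision.transforms.functional as TF

class RandomRotate:
    """Rotate a PIL image by a random angle in [0, max_degrees]."""
    def __init__(self, max_degrees=180):
        self.max_degrees = max_degrees

    def __call__(self, img):
        angle = random.uniform(0, self.max_degrees)
        return TF.rotate(img, angle)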
https://stackoverflow.com/questions/63619435/
Python PyTorch Pyro - Multivariate Distributions
How does one sample a multivariate distribution in Pyro? I just want an (M, N) Beta distribution, but the following doesn't work: import torch import pyro with pyro.plate("theta_plate", M): theta = pyro.sample("theta", pyro.distributions.Beta(concentration0=torch.ones(N), concentration1=torch.ones(N)))
Use to_event(n) to declare dependent samples. import torch import pyro import pyro.distributions as dist def model(N, M): with pyro.plate("theta_plate", M): theta = pyro.sample("theta", dist.Beta(torch.ones(N),1.).to_event(1)) return theta if __name__ == '__main__': print(model(10,12).shape) # torch.Size([12, 10]): the plate dimension M comes first
https://stackoverflow.com/questions/63625449/
Non-MNIST Digit Recognition Pytorch
I'm looking for examples of pytorch being used to classify non-MNIST digits. After hours of searching, it appears the algorithms are against me. Does anyone have a good example? Thanks.
I am posting this as an answer since I do not have the rep to comment. Please look at the Google Street View House Numbers dataset (SVHN). It is like MNIST, but there is much more noise present in the data. Another option for you could be to use GANs and generate more images which practically wouldn't have existed before. You could also try your hand at non-English MNIST datasets (though that moves away from your original goal). Link to SVHN with PyTorch: https://github.com/potterhsu/SVHNClassifier-PyTorch Link to the original SVHN: https://pytorch.org/docs/stable/torchvision/datasets.html#svhn P.S. You could also try making a dataset on your own! This is quite fun to do.
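Since SVHN ships with torchvision, a minimal sketch of loading it directly (the root path is illustrative):

import torchvision
from torchvision import transforms

transform = transforms.Compose([transforms.ToTensor()])
train_set = torchvision.datasets.SVHN(root='./data', split='train',
                                      transform=transform, download=True)
test_set = torchvision.datasets.SVHN(root='./data', split='test',
                                     transform=transform, download=True)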
https://stackoverflow.com/questions/63626587/
Reset parameters of a neural network in pytorch
I have a neural network with the following structure: class myNetwork(nn.Module): def __init__(self): super(myNetwork, self).__init__() self.bigru = nn.GRU(input_size=2, hidden_size=100, batch_first=True, bidirectional=True) self.fc1 = nn.Linear(200, 32) torch.nn.init.xavier_uniform_(self.fc1.weight) self.fc2 = nn.Linear(32, 2) torch.nn.init.xavier_uniform_(self.fc2.weight) I need to return the model to an unlearned state by resetting the parameters of the neural network. I can do so for nn.Linear layers by using the method below: def reset_weights(self): torch.nn.init.xavier_uniform_(self.fc1.weight) torch.nn.init.xavier_uniform_(self.fc2.weight) But I could not find any such snippet for resetting the weights of the nn.GRU layer. My question is: how does one reset the nn.GRU layer? Any other way of resetting the network is also fine. Any help is appreciated.
You can use the reset_parameters method on each layer, as given here: for layer in model.children(): if hasattr(layer, 'reset_parameters'): layer.reset_parameters() Another way would be to save the model's initial state first and then reload that state later, using torch.save and torch.load (see the docs for more, or Saving and Loading Models).
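One caveat: model.children() only iterates over the immediate submodules, so layers nested inside containers like nn.Sequential would be skipped. A minimal sketch that covers the whole module tree uses model.apply, which visits every submodule recursively:

def reset_all_parameters(module):
    # reset_parameters exists on nn.Linear, nn.GRU, nn.Conv2d, etc.
    if hasattr(module, 'reset_parameters'):
        module.reset_parameters()

model.apply(reset_all_parameters)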
https://stackoverflow.com/questions/63627997/
How can I import/use two different versions of a library (pytorch) in one program in Python?
I need to use two different versions of pytorch in different parts of the same python webserver. Unfortunately, I can't install them both on the same conda environment that I'm using. I've tried importing one of them from the path itself: MODULE_PATH = "/home/abc/anaconda3/envs/env/lib/python3.7/site-packages/torch/__init__.py" MODULE_NAME = "torch" import importlib import sys spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH) module = importlib.util.module_from_spec(spec) sys.modules[spec.name] = module spec.loader.exec_module(module) Which works fine for importing a different version than the one in the active environment, but then I run into an error when trying to import the second one (I've tried simply 'import torch' and also the same as above): File "/home/abc/anaconda3/envs/env2/lib/python3.7/site-packages/torch/__init__.py", line 82, in <module> __all__ += [name for name in dir(_C) NameError: name '_C' is not defined Any ideas on how I can use both versions? Thanks!
In principle, importing two libraries with the same name is not possible. Sure, you might manage it with some import sorcery, but keep in mind that PyTorch is not a straightforward Python package. Now, even if you manage to solve this, it seems extremely strange to me that your own service needs two different versions; that situation will just be a headache for you in the long run. My advice would be to reconsider how you're doing it. Without knowing your situation, I'd recommend splitting the web service into two. This will allow you to have two environments and the two versions of PyTorch you need.
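A minimal sketch of one way to split the work (all paths and names here are hypothetical placeholders, not from the original post): keep the second model behind a small script in the other conda environment and invoke it via that environment's interpreter:

import subprocess

def run_in_other_env(script_path, *args):
    # hypothetical interpreter path for the second environment
    python_bin = '/home/abc/anaconda3/envs/env2/bin/python'
    result = subprocess.run([python_bin, script_path, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

For a web server, a cleaner variant of the same idea is two small HTTP services, one per environment, with the main app calling them over localhost.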
https://stackoverflow.com/questions/63629212/
Why is my loss function always returning nan?
I'm trying to learn about ML and I tried to make a simple linear model, but when I run it, the loss comes out as nan: So I tried to find out what the problem is. If I print the first 10 y_pred, only about 17 of them have numbers; the rest of them are nan. Maybe I'm doing something wrong; please help. import torch from torch.utils.data import TensorDataset, DataLoader import torch.nn as nn import numpy as np #Input (temp, rainfall, humidity) inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') #Target (apples, oranges) targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') inputs = torch.from_numpy(inputs) targets = torch.from_numpy(targets) #Define datasets train_ds = TensorDataset(inputs, targets) train_ds[0:3] #Hyperparameters batch_size = 5 num_epochs = 100 learning_rate = 0.01 train_dl = DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True) model = nn.Linear(3,2) #inputs(temp, rainfall, humidity) , targets(apples, oranges) loss_f = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for xb, yb in train_dl: y_pred = model(xb) loss = loss_f(y_pred, yb) loss.backward() optimizer.step() optimizer.zero_grad() if(epoch+1) % 10 == 0: print(f'epoch = {epoch+1}/{num_epochs}, loss = {loss.item():.4f}') print(f'Final loss = {loss.item():.4f}') edit: y_pred.shape = torch.Size([5, 2]), yb.shape = torch.Size([5, 2])
It is not about the loss function. Your model is predicting NaN and Inf values. Possible solutions: reduce the learning rate (e.g. learning_rate = 0.001), or reduce the batch size (e.g. batch_size = 2), or add more layers to the model with activation functions, or normalize the inputs.
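A minimal sketch of the last point for this question's data (standardizing each feature column before building the TensorDataset): the raw features here are around 100, and with lr = 0.01 that is large enough for plain SGD steps to blow up, which is one common source of the NaNs:

import numpy as np
import torch

inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58]], dtype='float32')  # abbreviated
inputs = torch.from_numpy(inputs)
mean, std = inputs.mean(dim=0), inputs.std(dim=0)
inputs = (inputs - mean) / std   # zero mean, unit variance per feature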
https://stackoverflow.com/questions/63630290/
Using PyTorch with Celery
I'm trying to run a PyTorch model in a Django app. As it is not recommended to execute the models (or any long-running task) in the views, I decided to run it in a Celery task. My model is quite big and it takes about 12 seconds to load and about 3 seconds to infer. That's why I decided that I couldn't afford to load it at every request. So I tried to load it at settings and save it there for the app to use it. So my final scheme is: When the Django app starts, in the settings the PyTorch model is loaded and it's accessible from the app. When views.py receives a request, it delays a celery task The celery task uses the settings.model to infer the result The problem here is that the celery task throws the following error when trying to use the model [2020-08-29 09:03:04,015: ERROR/ForkPoolWorker-1] Task app.tasks.task[458934d4-ea03-4bc9-8dcd-77e4c3a9caec] raised unexpected: RuntimeError("Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method") Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/tensor/lib/python3.7/site-packages/celery/app/trace.py", line 412, in trace_task R = retval = fun(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/tensor/lib/python3.7/site-packages/celery/app/trace.py", line 704, in __protected_call__ return self.run(*args, **kwargs) /*...*/ File "/home/ubuntu/anaconda3/envs/tensor/lib/python3.7/site-packages/torch/cuda/__init__.py", line 191, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method Here's the code in my settings.py loading the model: if sys.argv and sys.argv[0].endswith('celery') and 'worker' in sys.argv: #In order to load only for the celery worker import torch torch.cuda.init() torch.backends.cudnn.benchmark = True load_model_file() And the task code @task def getResult(name): print("Executing on GPU:", torch.cuda.is_available()) if os.path.isfile(name): try: outpath = model_inference(name) os.remove(name) return outpath except OSError as e: print("Error", name, "doesn't exist") return "" The print in the task shows "Executing on GPU: true" I've tried setting torch.multiprocessing.set_start_method('spawn') in the settings.py before and after the torch.cuda.init() but it gives the same error.
Setting this method works as long as you're also using Process from the same library: from torch.multiprocessing import Pool, Process Celery uses the "regular" multiprocessing library, thus this error. If I were you, I'd try the following: run it single-threaded to see if that helps; run it with eventlet to see if that helps; read this.
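A minimal sketch of the first suggestion, assuming a standard Celery app named app as in the question (the default prefork pool is what forks worker processes and breaks the already-initialized CUDA context):

# run the worker in a single process, without forking
celery -A app worker --pool=solo --loglevel=info

With --pool=solo the task executes in the worker's main process, so CUDA is never re-initialized in a forked child.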
https://stackoverflow.com/questions/63645357/
About pytorch the output is not what I expect
import torch a = torch.randn(2, 2) a = ((a * 3) / (a - 1)) print(a) The output is: tensor([[ -0.7242, 2.0021], [-280.8320, 0.6750]]) But I think it should be: tensor([[ 6, 6], [ 6, 6]]) Why am I wrong?
torch.randn will produce a random 2x2 tensor, as stated in the docs: Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). In your case you will get the desired output if you generate a tensor where all the values are 2: a = torch.ones([2, 2], dtype=torch.float64) * 2. will give you the desired one.
https://stackoverflow.com/questions/63645419/
About pytorch, I can't understand why that's the output
The code below is about PyTorch derivatives; I think the output should be 18, but it's 4.5, and I don't know why: import torch x = torch.ones(2, 2, requires_grad=True) y = x + 2 z = y * y * 3 out = z.mean() out.backward() print(x.grad) The output: tensor([[4.5000, 4.5000], [4.5000, 4.5000]]) I think the derivative is 2*3*(1+2), so it should be: tensor([[18, 18], [18, 18]]) Why is the output 4.5? Some people think it's the mean method that makes the derivative /4, but when I run "print(out)", the output is "tensor(27., grad_fn=)" rather than (4.5., grad_fn=). I'm a new learner of PyTorch, so I don't know what "tensor.mean()" does, but since the output of "print(out)" is 27, I don't think there is a "/4" process in "tensor.mean()", so I don't think it should include a "/4" process in the derivative computation. Is that correct? (Please help me~)
Here's how I think it goes: y = x + 2 and z = y * y * 3, so z is 3 * (x+2)^2 Next, out = z.mean(), or sigma z / n which is sigma z/4 since we had a total of 4 numbers in z. Thus, you find the derivative of sigma (3 * (x + 2)^2)/4 at x = 1. That gives (3/4) * 2(x + 2) at x = 1, which is 4.5 So I think you had it all figured out, except that in the last step you missed dividing by 4, which you need to do since there's a mean() function in there. Edit: Since you're confused with how the mean() is affecting the output, let's do a tensor of values [1,2,3,4] instead of torch.ones() to see the effect. x = torch.tensor([1.0,2.0,3.0,4.0], requires_grad=True) y = x + 2 z = y * y * 3 out = z.mean() out.backward() print(x.grad) this will output tensor([4.5000, 6.0000, 7.5000, 9.0000]) How? Remember we derived the equation for our derivative to be: (3/4) * 2(x + 2) Now you substitute x to be 1, and get 4.500. Then for x = 2, you get 6.000, for x = 3 you get 7.500 and so on. In the earlier example, you had four instances of x = 1 which is why you had x.grad to be [[4.5, 4.5], [4.5, 4.5]]
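A quick way to confirm this explanation, as a minimal sketch: swapping mean() for sum() removes the division by 4, so the gradient becomes exactly the 18 the question expected:

import torch

x = torch.ones(2, 2, requires_grad=True)
z = 3 * (x + 2) ** 2
z.sum().backward()        # d/dx of 3*(x+2)^2 summed over elements is 6*(x+2)
print(x.grad)             # tensor([[18., 18.], [18., 18.]])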
https://stackoverflow.com/questions/63646286/
Custom CNN gives wrong output shape
I need some help. I am trying to make a custom CNN, which should accept one channel images and do binary classification. This is the model: class custom_small_CNN(nn.Module): def __init__(self, input_channels=1, output_features=1): super(custom_small_CNN, self).__init__() self.input_channels = input_channels self.output_features = output_features self.conv1 = nn.Conv2d(self.input_channels, 8, kernel_size=(7, 7), stride=(2, 2), padding=(6, 6), dilation=(2, 2)) self.conv2 = nn.Conv2d(8, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), dilation=(1, 1)) self.pool = nn.MaxPool2d(kernel_size=(2, 2)) self.fc1 = nn.Linear(in_features=1024, out_features=self.output_features, bias=True) self.dropout = nn.Dropout(p=0.5) self.softmax = nn.Softmax(dim=1) self.net_name = 'Custom_Small_CNN' self.net = nn.Sequential(self.conv1, self.pool, self.conv2, self.pool, self.fc1) def forward(self, x): x = self.conv1(x) x = self.pool(x) #x = self.dropout(x) x = self.conv2(x) x = self.pool(x) x = x.view(-1, 1024) x = self.dropout(x) x = self.fc1(x) if not self.output_features == 1: x = self.softmax(x) return x However, when I put an example batch with 4 images (all zeros) in the model like this: x = torch.from_numpy(np.zeros((4, 1, 256, 256))).float() net = custom_small_CNN(output_features=2, input_channels=1).float() output = net(x) the output has shape torch.Size([16, 2]) instead of torch.Size([4, 2]), which is what I want and what e.g. a ResNet delivers as an output. What am I missing? Thanks!
When you apply the second pooling layer, its output has shape (batch_size, num_filters, height, width), which here is (4, 16, 16, 16) since PyTorch is channels-first. So when you reshape it with x = x.view(-1, 1024), the 4 * 16 * 16 * 16 = 16384 elements get regrouped into rows of 1024, which results in the (batch_size * 4, ...) shape you observed. Instead of reshaping like that, you should flatten (or average) the output of the pooling layer while keeping the batch dimension. Flattening is most commonly used here: replacing x = x.view(-1, 1024) with x = x.view(x.size(0), -1) (or x = nn.Flatten()(x)) keeps the batch size at 4 and yields the correct final output shape. Note that the flattened size per sample is then 16 * 16 * 16 = 4096 for a 256x256 input, so fc1's in_features must be 4096 rather than 1024.
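A minimal self-contained sketch of the corrected model (assuming 256x256 single-channel inputs as in the question, which makes the flattened size 4096):

import torch
import torch.nn as nn

class FixedSmallCNN(nn.Module):
    def __init__(self, input_channels=1, output_features=2):
        super().__init__()
        self.conv1 = nn.Conv2d(input_channels, 8, kernel_size=7, stride=2, padding=6, dilation=2)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(16 * 16 * 16, output_features)  # 4096 inputs for 256x256 images

    def forward(self, x):
        x = self.pool(self.conv1(x))       # (N, 8, 64, 64)
        x = self.pool(self.conv2(x))       # (N, 16, 16, 16)
        x = x.view(x.size(0), -1)          # keep the batch dimension: (N, 4096)
        return self.fc1(x)

x = torch.zeros(4, 1, 256, 256)
print(FixedSmallCNN()(x).shape)            # torch.Size([4, 2])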
https://stackoverflow.com/questions/63646368/
How to Find Confusion Matrix and Plot it for Image Classifier in PyTorch
Basically this is the VGG-16 model; I have performed transfer learning and fine-tuned the model. I trained this model 2 weeks ago and found both the test and train accuracy, but now I need the class-wise accuracy of the model too. I am trying to find the confusion matrix and want to plot the matrix too. Training code: # Training the model again from the last CNN Block to The End of the Network dataset = 'C:\\Users\\Sara Latif Khan\\OneDrive\\Desktop\\FYP_\\Scene15\\15-Scene' model = model.to(device) optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters())) #Training Fixed Feature Extractor for 15 epochs num_epochs = 5 batch_loss = 0 cum_epoch_loss = 0 #cumulative loss for each batch for e in range(num_epochs): cum_epoch_loss = 0 for batch, (images, labels) in enumerate(trainloader,1): images = images.to(device) labels = labels.to(device) optimizer.zero_grad() logps = model(images) loss = criterion(logps, labels) loss.backward() optimizer.step() batch_loss += loss.item() print(f'Epoch({e}/{num_epochs} : Batch number({batch}/{len(trainloader)}) : Batch loss : {loss.item()}') torch.save(model, dataset+'_model_'+str(e)+'.pt') print(f'Training loss : {batch_loss/len(trainloader)}') This is the code I am using to check the accuracy of my model based on data from the test loader. model.to('cpu') model.eval() with torch.no_grad(): num_correct = 0 total = 0 #set_trace () for batch, (images,labels) in enumerate(testloader,1): logps = model(images) output = torch.exp(logps) pred = torch.argmax(output,1) total += labels.size(0) num_correct += (pred==labels).sum().item() print(f'Batch ({batch} / {len(testloader)})') # to check the accuracy of model on 5 batches # if batch == 5: # break print(f'Accuracy of the model on {total} test images: {num_correct * 100 / total }% ') Next, I need to find the class-wise accuracy of the model. I am working in a Jupyter notebook. Should I reload a saved model and compute the confusion matrix from it, or what would be the appropriate way of doing it?
You have to save all the predictions and targets of the test set. predictions, targets = [], [] for images, labels in testloader: logps = model(images) output = torch.exp(logps) pred = torch.argmax(output, 1) # convert to numpy arrays pred = pred.detach().cpu().numpy() labels = labels.detach().cpu().numpy() for i in range(len(pred)): predictions.append(pred[i]) targets.append(labels[i]) Now you have all the predictions and the actual targets of the test set stored as integer class indices. The next step is to create the confusion matrix. I think I can just give you the function I always use, adjusted so that both y_true and y_pred are lists of integer class indices, matching what was collected above: import numpy as np import matplotlib.pyplot as plt def create_confusion_matrix(y_true, y_pred, classes): """ creates and plots a confusion matrix given two lists (targets and predictions) :param list y_true: list of all targets (as integer class indices) :param list y_pred: list of all predictions (as integer class indices) :param dict classes: a dictionary mapping the class names to their integer indices """ amount_classes = len(classes) confusion_matrix = np.zeros((amount_classes, amount_classes)) for idx in range(len(y_true)): target = y_true[idx] output = y_pred[idx] confusion_matrix[target][output] += 1 fig, ax = plt.subplots(1) ax.matshow(confusion_matrix) ax.set_xticks(np.arange(len(list(classes.keys())))) ax.set_yticks(np.arange(len(list(classes.keys())))) ax.set_xticklabels(list(classes.keys())) ax.set_yticklabels(list(classes.keys())) plt.setp(ax.get_xticklabels(), rotation=45, ha="left", rotation_mode="anchor") plt.setp(ax.get_yticklabels(), rotation=45, ha="right", rotation_mode="anchor") plt.show() So y_true are all the targets, y_pred all the predictions, and classes is a dictionary which maps the class names to their indices, for example: classes = {"dog": 0, "cat": 1} Then simply call: create_confusion_matrix(targets, predictions, classes) Probably you will have to adapt it to your code a little, but I hope this works for you. :)
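If scikit-learn is available, a minimal alternative sketch for building the matrix itself (the plotting code above still applies):

from sklearn.metrics import confusion_matrix

# targets and predictions are the integer lists collected above
cm = confusion_matrix(targets, predictions)
print(cm)   # rows are true classes, columns are predicted classes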
https://stackoverflow.com/questions/63647547/
Pytorch crossentropy loss with 3d input
I have a network which outputs a 3D tensor of size (batch_size, max_len, num_classes). My ground truth is of shape (batch_size, max_len); its values are integer class indices in the range [0, num_classes), so one-hot encoding the labels would give shape (batch_size, max_len, num_classes). Since the original code is too long, I have written a simpler version that reproduces the original error. criterion = nn.CrossEntropyLoss() batch_size = 32 max_len = 350 num_classes = 1000 pred = torch.randn([batch_size, max_len, num_classes]) label = torch.randint(0, num_classes,[batch_size, max_len]) pred = nn.Softmax(dim = 2)(pred) criterion(pred, label) The shapes of pred and label are, respectively, torch.Size([32, 350, 1000]) and torch.Size([32, 350]). The error encountered is ValueError: Expected target size (32, 1000), got torch.Size([32, 350, 1000]) If I one-hot encode the labels for computing the loss x = nn.functional.one_hot(label) criterion(pred, x) it'll throw the following error ValueError: Expected target size (32, 1000), got torch.Size([32, 350, 1000])
From the PyTorch documentation, CrossEntropyLoss expects the shape of its input to be (N, C, ...), so the second dimension is always the number of classes. Your code should work if you permute pred to size (batch_size, num_classes, max_len), e.g. with pred.permute(0, 2, 1). Note that a permute (not a plain reshape) is needed here, since reshaping would scramble which logit belongs to which position.
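A minimal sketch of the fixed call; note also that CrossEntropyLoss applies log-softmax internally, so the explicit nn.Softmax from the question should be dropped and the raw logits passed in:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
pred = torch.randn(32, 350, 1000)              # (batch_size, max_len, num_classes), raw logits
label = torch.randint(0, 1000, (32, 350))      # (batch_size, max_len), integer class indices
loss = criterion(pred.permute(0, 2, 1), label) # permute to (batch_size, num_classes, max_len)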
https://stackoverflow.com/questions/63648735/
How to reduce the second dimension in a PyTorch tensor operation
I have the below tensor with the following shape seq = dataset['features'][...] print(f'shape of seq before unsequeeze {seq.shape}') shape of seq before unsequeeze (461, 1024) I am trying to convert the shape in (461, 512) How should I achieve this in pytorch tensor operation. examples feature x as below, import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") x = torch.tensor([[0.1,0.8,0.8,0.4,0.9,0.4,0.4,0.5,0.4,0.2,0.3,0.8,0.2,0.7,0.6,0.1,0.2,0.1,0.1,0.4,0.2,0.1,0.0,0.5,0.2,0.4,0.3,0.3,0.7,0.1,0.4,0.6,0.5,1.0,0.1,0.8,0.9,0.0,0.2,0.9,0.8,0.0,0.9,0.7,0.2,0.2,0.9,0.6,0.1,0.2,0.6,0.0,0.1,0.1,0.3,0.5,0.8,0.8,0.4,0.4,0.7,0.7,0.4,0.2,0.1,1.0,0.3,0.8,0.1,0.7,0.7,0.9,0.6,0.3,0.8,0.2,0.9,0.6,0.7,0.8,0.2,0.1,1.0,0.6,0.5,0.5,0.5,0.8,0.8,0.3,0.1,0.2,0.5,0.9,0.6,0.8,0.0,0.6,0.2,0.1,0.8,0.4,0.8,0.5,0.8,0.4,0.7,0.6,0.8,0.1,0.4,0.8,1.0,0.9,0.4,0.4,0.4,0.1,0.7,0.3,0.8,0.6,0.4,0.5,0.9,0.1,0.9,0.7,0.4,0.7,0.1,0.8,0.2,0.2,0.7,0.2,0.9,0.6,0.2,0.9,0.1,0.9,0.2,1.0,0.9,0.6,0.3,0.6,0.9,0.6,0.0,0.3,0.4,0.6,0.7,0.9,0.2,0.6,0.2,0.5,0.3,0.3,0.4,0.4,0.1,0.2,0.6,0.0,0.7,0.5,0.5,0.2,0.5,0.6,0.5,0.5,0.7,0.8,0.4,0.5,0.8,0.8,0.1,0.5,0.7,0.8,0.1,0.1,0.8,0.6,0.6,0.4,1.0,0.4,0.6,0.9,0.1,0.6,0.3,1.0,0.7,0.2,0.5,0.5,1.0,0.5,0.4,0.3,0.7,0.1,1.0,0.9,0.4,0.6,0.6,0.6,0.2,0.0,0.9,0.9,0.2,0.1,0.5,0.5,0.8,0.7,0.8,0.0,0.0,0.1,0.5,0.5,0.5,0.8,0.1,0.5,1.0,0.3,0.2,0.8,0.9,0.4,0.4,0.9,0.2,0.4,0.9,0.9,0.3,0.7,0.4,0.9,0.5,0.7,0.8,0.5,0.5,0.5,0.8,0.7,0.9,0.2,0.8,1.0,0.1,0.9,0.6,0.5,0.0,0.2,0.8,0.2,0.8,0.5,0.9,0.9,0.5,0.6,0.1,0.8,1.0,0.3,0.1,0.5,0.9,0.1,0.0,0.5,0.3,0.1,0.5,0.8,0.3,0.4,0.4,0.3,0.2,0.8,0.7,0.6,0.3,0.5,0.1,0.7,0.4,0.2,0.1,0.1,0.4,0.2,0.8,0.8,0.4,0.1,0.0,0.3,0.2,0.0,1.0,0.2,0.6,0.5,0.7,0.7,0.7,0.1,0.2,0.1,0.1,0.9,0.6,0.5,1.0,0.4,0.4,0.8,0.7,0.5,0.6,0.9,0.0,0.8,0.3,0.1,0.5,0.9,0.9,0.9,0.7,0.7,1.0,0.6,0.6,1.0,0.8,1.0,0.4,0.3,0.2,1.0,0.9,0.2,0.7,0.1,0.3,0.1,0.1,0.7,0.6,0.8,0.8,0.7,0.7,0.4,0.8,0.4,0.1,0.0,1.0,0.2,0.6,0.8,0.3,0.9,0.3,0.6,0.6,0.4,0.7,0.0,0.2,0.9,0.2,0.1,0.4,0.9,0.5,0.2,0.4,1.0,0.1,0.3,0.8,0.8,0.2,0.2,0.6,0.8,0.1,0.0,0.5,1.0,0.5,0.7,0.3,0.5,0.0,0.2,0.6,0.7,0.6,0.4,0.2,0.0,0.4,0.4,0.0,0.3,0.3,0.8,0.5,0.7,0.4,0.1,0.8,0.4,0.1,0.3,1.0,0.3,0.6,0.5,0.6,0.2,0.9,0.4,0.4,0.8,0.0,0.3,0.8,0.3,0.1,0.0,0.5,0.5,0.8,0.6,1.0,0.7,0.8,0.7,0.7,0.6,0.0,0.6,0.6,0.3,0.7,0.2,1.0,0.6,0.4,0.8,0.4,0.7,0.3,0.8,0.8,0.1,0.1,0.2,0.2,0.7,0.1,0.8,0.4,1.0,0.6,1.0,0.3,0.9,0.9,0.9,0.9,1.0,0.2,0.3,0.9,0.5,0.5,0.4,0.1,0.4,0.0,0.7,0.2,0.6,0.8,0.2,0.8,0.2,0.6,0.9,0.1,0.3,0.4,0.2,0.9,0.3,0.9,0.1,0.1,0.7,1.0,0.4,0.2,0.9,0.2,0.5,0.1,0.3,0.6,0.5,0.6,0.5,0.3,0.4,0.3,0.9,0.7,0.1,0.2,0.8,1.0,0.5,0.0,0.8,0.2,0.2,0.0,1.0,0.2,1.0,0.5,1.0,0.9,0.5,0.2,0.5,0.8,0.4,0.9,0.9,0.2,0.5,0.5,0.2,0.6,0.3,0.3,0.8,0.3,0.5,0.4,0.2,0.7,0.8,0.9,0.2,0.9,0.6,0.0,0.3,0.8,0.5,0.3,0.9,0.9,0.7,0.4,0.9,0.3,0.7,0.4,0.3,0.5,0.8,0.9,0.7,0.6,0.5,0.1,0.9,0.6,0.5,0.2,0.7,0.3,0.3,0.1,0.0,0.2,0.5,0.9,0.7,0.3,0.3,1.0,0.3,0.6,0.9,0.1,0.9,0.3,0.7,0.1,0.7,0.6,0.6,0.5,0.1,0.1,0.3,0.5,0.7,0.1,0.7,0.4,0.8,0.4,0.6,0.8,0.7,0.6,0.0,0.1,0.3,0.8,0.2,0.5,0.7,0.0,0.4,1.0,0.2,0.2,0.4,0.3,0.9,0.2,0.4,0.3,0.4,0.2,0.5,0.6,0.6,0.8,0.7,0.3,0.1,0.7,0.5,0.1,0.4,1.0,0.2,0.8,0.5,0.7,0.3,0.7,0.6,0.7,0.5,1.0,0.2,0.8,0.0,0.1,0.2,0.6,0.0,0.2,0.1,0.2,0.4,0.6,0.2,1.0,0.3,0.1,0.1,0.7,0.0,0.7,0.0,0.7,0.9,0.1,0.2,0.8,0.7,0.5,0.3,0.8,0.3,0.0,0.1,0.1,0.8,0.9,0.2,0.5,0.5,0.4,0.4,0.8,0.9,0.4,1.0,0.8,0.4,0.2,0.1,0.3,0.1,0.7,0.9,0.2,0.9,0.8,0.7,0.2,0.7,0.4,0.0,1.0,0.7,0.3,0.6,0.9,0.1,0.5,0.2,0.5,0.7,0.3,0.9,0.7,0.2,1.0,0.6,0.4,0.3,0.1,0.1,0.0,0.3,0.9,0.7,0.5,0.9,0.8,0.6,0.8,0.1,0.4,0.5,0.8,0.7,0.4,0.8,0.4,0.1,0.
6,0.8,0.0,0.9,0.7,0.7,0.7,0.7,0.3,0.4,0.4,0.2,0.6,0.3,0.4,1.0,0.2,0.3,0.0,0.5,1.0,0.8,0.7,0.3,0.2,0.7,0.1,0.5,0.2,0.3,0.4,0.8,0.4,0.2,0.3,0.9,0.5,0.1,0.7,0.0,0.3,0.3,0.1,0.1,0.8,0.2,0.6,0.2,0.0,0.3,0.6,0.4,0.7,0.6,0.2,0.8,0.4,0.3,0.7,0.3,0.7,0.9,0.4,0.8,0.9,0.4,0.5,0.4,0.6,0.7,0.5,0.6,0.6,0.4,0.4,0.8,0.3,0.9,0.8,0.9,0.6,0.1,0.9,1.0,1.0,0.8,0.8,0.2,0.1,0.1,0.4,0.9,0.9,0.9,0.6,0.4,0.8,0.6,0.6,0.4,0.6,0.6,0.8,1.0,0.2,0.3,0.4,0.9,0.3,0.7,0.9,0.6,1.0,0.5,0.3,0.5,0.9,0.1,0.9,0.6,0.4,0.9,0.9,0.7,0.9,0.0,0.3,0.7,0.2,0.1,0.2,0.6,0.1,0.6,0.3,0.5,0.1,0.5,0.7,0.1,0.9,0.4,0.1,0.4,1.0,0.1,0.7,0.5,0.6,0.1,0.4,1.0,0.3,0.8,0.3,0.9,0.8,0.9,0.4,0.2,0.2,0.7,0.0,0.8,0.7,0.3,0.2,0.2,0.3,0.9,0.8,0.2,0.3,0.4,0.2,0.9,0.4,0.6,0.2,0.5,0.6,0.0,0.3,0.2,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.7,0.7,0.5,0.1,0.0,0.3,0.6,0.4,1.0,1.0,0.1,0.2,0.4,0.5,0.0,0.2,0.6,0.8,0.7,0.5,0.2,0.3,0.7,0.4,0.7,0.8,0.2,0.7,0.8,0.9,0.7,0.2,0.5,0.7,0.9,0.7,0.5,0.1,1.0,0.5,0.6,0.9,0.5,0.7,0.3,0.9,0.8], [0.5,0.6,0.0,0.9,0.9,0.4,0.4,0.9,0.1,0.7,0.8,0.7,1.0,0.5,0.6,0.5,0.9,0.7,0.2,0.4,0.6,0.7,0.4,0.2,0.3,0.3,0.9,1.0,0.0,0.5,0.5,0.6,0.1,0.6,0.1,1.0,0.8,0.4,0.2,0.6,0.9,0.2,0.1,0.5,0.0,0.5,0.3,0.9,0.5,0.0,0.9,0.4,0.4,0.5,0.7,0.9,0.1,0.9,0.0,0.2,0.6,0.8,0.7,0.1,0.6,0.2,0.2,0.8,0.7,0.2,0.1,0.2,0.6,0.8,0.6,0.4,0.8,0.8,0.9,0.7,0.8,0.4,0.5,0.1,0.7,0.9,0.2,0.3,0.0,0.7,0.0,0.1,0.7,0.8,0.9,0.7,0.6,0.3,0.7,0.7,0.2,0.1,0.3,0.7,0.3,0.8,0.2,0.1,0.8,0.9,0.2,0.4,0.5,0.5,0.9,0.9,0.3,0.7,0.1,0.6,0.7,0.2,0.6,0.9,0.8,0.7,0.0,0.4,0.1,0.6,0.5,0.1,0.8,0.7,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.5,0.3,0.4,0.8,0.4,0.1,0.3,0.4,0.3,0.4,0.7,0.4,0.7,0.9,0.2,0.8,0.3,0.8,0.3,0.8,0.7,0.3,0.4,0.4,0.6,0.1,0.3,0.6,0.5,0.9,0.7,0.3,0.6,0.5,0.3,0.4,0.2,0.8,0.3,0.1,0.9,0.9,0.6,0.1,0.4,0.2,0.4,0.8,0.9,0.1,0.4,0.8,0.5,0.4,0.8,0.9,1.0,0.1,0.8,0.8,0.8,0.8,0.8,0.3,0.1,1.0,0.2,0.9,0.2,0.9,0.7,0.9,1.0,0.4,0.2,0.5,0.4,0.3,0.2,0.1,0.1,0.8,0.7,0.0,0.3,1.0,1.0,0.0,0.5,0.0,0.5,0.6,0.8,0.2,0.4,0.0,0.8,0.5,0.8,0.6,0.3,0.4,0.7,0.9,0.0,0.8,0.7,0.9,0.9,0.2,0.3,0.3,0.9,0.3,0.3,0.3,0.6,0.8,0.5,0.5,0.0,0.5,0.8,1.0,0.4,1.0,0.3,0.5,0.5,0.6,0.6,0.7,0.1,0.3,0.6,0.4,0.2,0.8,1.0,0.6,0.9,0.7,0.5,0.1,0.7,0.6,1.0,0.4,0.9,0.3,0.6,0.1,1.0,0.8,0.7,0.7,0.5,0.0,0.6,0.5,1.0,0.6,0.9,0.8,0.9,0.7,1.0,0.9,1.0,0.3,0.2,0.5,0.3,0.8,0.1,0.9,0.6,0.9,0.9,0.3,0.4,0.1,0.6,0.0,0.0,0.2,0.2,0.9,0.9,0.6,1.0,0.2,0.7,1.0,0.8,1.0,0.2,0.3,0.3,0.9,0.5,0.1,0.2,0.5,0.9,0.1,0.5,0.2,1.0,0.7,0.4,0.2,0.1,0.4,0.4,0.7,0.8,0.3,0.6,0.0,1.0,0.8,1.0,0.1,0.2,0.9,0.4,0.8,0.0,0.0,1.0,0.1,0.3,0.0,0.7,0.6,0.9,0.4,0.4,0.9,0.4,0.8,0.7,0.7,0.5,0.3,0.6,0.5,0.5,0.5,0.9,0.8,0.4,0.8,0.6,0.4,0.2,0.9,1.0,0.8,0.2,0.2,0.8,0.9,0.7,0.1,0.8,0.7,0.3,0.1,0.2,0.3,0.6,0.6,0.6,0.7,0.4,0.1,0.9,0.5,0.5,0.5,0.4,0.6,0.2,0.7,0.6,0.3,0.3,0.2,0.4,0.2,0.9,0.9,0.9,0.7,0.8,0.3,0.0,0.4,0.1,0.9,0.6,0.3,0.0,0.7,0.1,0.8,0.6,0.3,0.6,0.8,0.2,0.1,0.4,0.8,0.9,1.0,0.7,0.8,0.1,0.4,0.1,0.4,0.9,0.4,0.6,0.7,0.2,0.5,0.6,0.8,0.6,0.6,0.9,0.7,0.4,0.3,0.5,0.1,0.8,0.9,0.4,0.0,0.4,0.0,0.3,0.6,0.8,0.1,0.4,0.1,0.6,0.7,0.1,0.0,0.0,0.0,0.8,0.7,0.6,0.8,0.6,0.9,0.1,0.4,0.0,0.4,0.0,0.4,0.7,0.5,0.1,0.9,0.3,0.3,0.1,0.3,0.6,0.6,0.8,0.8,0.9,0.2,0.0,0.6,0.3,1.0,0.6,0.7,1.0,1.0,0.9,0.4,0.1,0.6,0.9,0.1,0.1,0.1,0.2,0.5,0.0,0.8,0.5,0.0,0.8,0.4,0.1,0.2,0.2,0.8,0.9,0.6,0.3,0.2,0.5,0.0,0.1,0.1,0.8,0.9,1.0,0.8,0.2,0.8,0.3,0.8,0.2,0.0,0.1,1.0,0.7,0.1,0.8,0.2,0.5,0.3,0.6,0.1,0.7,0.7,0.5,0.2,0.3,0.5,0.5,1.0,0.2,0.3,0.4,0.1,0.1,0.7,1.0,0.7,0.6,0.9,1.0,0.4,0.8,0.1,0.4,0.1,0.9,0.7,0.4,0.0,0.0,0.3,0.3,0.5,0.6,0.3,0.8,0.5,0.3,0.1,0.9,0.5,0.1,0.3,0.9,0.4,0.3,0.4,0.2,0.9,0.5,0.4,0.9,0.8,0.9,0.9,0.9,0.6,0.6,0.3,0.4,0.3,0.3,0.4,0.4,0.2,0.3,0.7,0.1,0.4,0.1,0.
7,0.2,0.7,0.7,0.1,0.3,1.0,0.4,0.4,0.0,0.1,0.4,0.6,0.9,0.5,0.1,0.6,0.9,0.1,0.2,0.4,0.5,0.5,0.1,0.7,0.0,0.1,1.0,0.6,0.1,0.5,0.7,0.2,0.7,0.1,0.1,0.5,0.5,0.2,0.7,0.0,0.9,0.3,0.2,0.9,0.2,0.2,0.5,0.5,0.6,0.3,0.4,0.9,0.4,0.5,0.8,0.1,0.4,0.5,0.9,0.5,0.4,0.3,1.0,0.7,0.5,0.1,0.0,0.3,0.0,0.5,0.5,0.9,0.6,0.3,0.7,0.1,0.9,0.1,0.9,0.1,0.8,0.0,0.9,0.0,0.0,0.7,0.6,1.0,0.5,0.9,0.7,0.4,0.5,0.6,0.3,0.6,0.9,0.4,0.3,0.3,1.0,0.2,1.0,0.3,0.7,0.9,0.8,0.8,0.7,0.6,0.6,0.8,0.5,0.3,0.4,0.5,0.1,0.3,0.4,0.0,0.2,0.8,0.3,1.0,0.5,0.0,0.7,0.9,0.3,0.3,0.9,0.9,0.5,0.0,0.0,0.6,0.7,0.6,0.5,0.1,0.8,0.3,0.3,0.1,0.7,0.0,0.6,0.0,0.1,0.9,0.1,0.4,0.1,0.5,1.0,0.3,0.2,0.8,0.6,0.3,0.5,0.3,0.1,0.9,0.1,0.9,0.9,0.1,0.8,0.7,0.8,0.3,0.5,1.0,0.1,0.7,0.4,0.7,0.7,0.9,0.9,1.0,0.3,0.8,0.3,0.3,0.5,0.2,0.6,0.4,0.5,0.7,0.8,0.9,0.8,0.9,0.2,0.0,0.5,0.2,1.0,0.7,0.4,0.1,0.6,0.6,0.0,0.4,0.6,0.6,0.4,0.1,0.7,1.0,0.1,0.4,0.3,0.9,0.1,0.0,0.1,0.6,0.1,1.0,0.1,0.3,0.3,0.4,0.3,0.8,0.2,0.5,0.1,0.3,0.8,0.7,0.0,0.4,0.5,0.2,0.0,0.5,0.8,0.2,0.6,0.9,0.8,0.9,0.5,0.7,0.5,0.9,0.9,0.3,0.5,0.3,1.0,0.8,0.7,0.9,0.6,0.6,0.5,0.8,0.2,0.7,0.6,0.3,0.1,0.9,0.2,0.4,0.9,0.3,0.2,0.5,0.5,0.9,0.2,1.0,0.9,0.8,0.2,0.2,1.0,0.4,0.4,0.6,0.8,0.3,0.2,0.6,0.0,0.5,0.9,0.6,0.3,0.4,0.8,0.5,0.6,0.7,0.6,0.0,0.1,0.3,0.7,0.4,0.1,0.2,0.7,0.2,0.3,0.8,0.2,0.4,0.2,1.0,1.0,0.7,0.8,0.2,0.5,0.3,0.5,0.4,0.6,0.5,0.3,0.6,0.5,1.0,0.7,0.8,0.9,0.0,0.6,0.3,0.9,0.3,0.9,0.5,0.7,0.5,0.1,0.1,0.3,0.7,0.8,0.1,0.0,0.7,0.5,1.0,0.3,0.8,0.7,0.7,0.2,0.9,0.5,0.6,0.1,0.5,0.5,0.0,0.2,0.7,0.9,0.1,0.9,0.3,0.2]]).to( device, dtype=torch.int64 ) x.shape torch.Size([2, 1024]) I need to reduce the feature size to 512 keeping the first dim batch size in tact, x.shape torch.Size([2, 512]) Thanks
You can use torch.narrow: z = torch.narrow(x, 1, 0, 512) z.shape torch.Size([2, 512]) If you use x.unsqueeze(0) first, then the dimension to narrow becomes 2: z = torch.narrow(x, 2, 0, 512)
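Plain slicing is an equivalent, arguably more idiomatic sketch of the same thing; like narrow, it returns a view of the first 512 columns, so neither approach pools or compresses the dropped half of the features:

z = x[:, :512]
print(z.shape)   # torch.Size([2, 512])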
https://stackoverflow.com/questions/63650770/
pytorch DataLoader extremely slow first epoch
When I create a PyTorch DataLoader and start iterating, I get an extremely slow first epoch (x10--x30 slower than all subsequent epochs). Moreover, this problem occurs only with the train dataset from the Google Landmark Recognition 2020 competition on Kaggle. I can't reproduce this on synthetic images; also, I tried to create a folder with 500k images from GLR2020, and everything worked well. I found a few similar problems on the PyTorch forum, without any solutions. import argparse import pandas as pd import numpy as np import os, sys import multiprocessing, ray import time import cv2 import logging import albumentations as albu from torch.utils.data import Dataset, DataLoader samples = 50000 # count of samples to speed up test bs = 64 # batch size dir = '/hdd0/datasets/ggl_landmark_recognition_2020/train' # directory with train data all_files = pd.read_csv('/hdd0/datasets/ggl_landmark_recognition_2020/train.csv') files = np.random.choice(all_files.id.values, 50000) files = [os.path.join(_[0], _[1], _[2], _+'.jpg') for _ in files] # augmentations aug = albu.Compose([albu.Resize(400, 400), albu.Rotate(limit=15), albu.ChannelDropout(p=0.1), albu.Normalize(),]) class ImgDataset: def __init__(self, path, files, augmentation = None): self.path = path self.files = {k:v for k, v in enumerate(files)} self.augmentation = augmentation def __len__(self): return len(self.files) def __getitem__(self, idx): img_name = self.files[idx] img = np.array(cv2.imread(os.path.join(self.path, img_name))) if self.augmentation is not None: return self.augmentation(image=img)['image'] dtset = ImgDataset(dir,files, aug) torchloader = DataLoader(dataset= dtset, batch_size=64, num_workers=16, shuffle=True) for _ in range(3): t1 = time.time() for idx, val in enumerate(torchloader): pass t2 = time.time() print(str(t2-t1) +' sec') Here are some examples of execution speed with different num_workers values in the DataLoader #num_workers=0 273.1584792137146 sec 83.15653467178345 sec 83.67923021316528 sec # num_workers = 8 165.62366938591003 sec 10.405716896057129 sec 10.495309114456177 sec # num_workers = 16 156.60744667053223 sec 8.051618099212646 sec 7.922858238220215 sec It looks like the problem is not with the DataLoader, but with the dataset. When I delete and reinitialise the DataLoader object after the first "long" iteration, everything still works fine. When I reinitialise the dataset, the long first iteration appears again. Moreover, I tracked my CPU utilisation via htop during these epochs with num_workers set to 32; during the first epoch, utilisation is really low, with only 1-2 of 32 cores working, while during the other epochs ~all cores are working.
Slavka, TL;DR: this is a caching effect. I did not download the whole GLR2020 dataset, but I was able to observe this effect on an image dataset that I had locally (80000 jpg images of approx 400x400 size). To find the reasons for the difference in performance I tried the following: reducing the augmentation to just resizing; testing just the ImgDataset.__getitem__() function; ImgDataset.__getitem__() without augmentation; just loading the raw jpg image and passing it from the dataset without even numpy conversion. It turns out that the difference comes from the image loading timing. Python (or the OS itself) implements some kind of caching, which is observed when loading an image multiple times in the following test. for i in range(5): t0 = time.time() data = cv2.imread(filename) print (time.time() - t0) 0.03395271301269531 0.0010004043579101562 0.0010004043579101562 0.0010008811950683594 0.001001119613647461 The same is observed when just reading from a file into a variable: for i in range(5): t0 = time.time() with open(filename, mode='rb') as file: data = file.read() print (time.time() - t0) 0.036234378814697266 0.0028831958770751953 0.0020024776458740234 0.0031833648681640625 0.0028734207153320312 One way to reduce the loading time is to keep the data on a very fast local SSD. If size allows, try loading part of the dataset into RAM and writing a custom dataloader to feed from there (see the sketch below). BTW, based on my findings, this effect should be reproducible with any dataset; check whether you used different drives or whether caching kicked in.
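A minimal sketch of the in-RAM idea (an assumption on top of the answer, not code from it), reusing the ImgDataset class, cv2, and os from the question. One caveat: with num_workers > 0 each worker process keeps its own copy of the cache, so memory use multiplies accordingly:

import os
import cv2

class CachedImgDataset(ImgDataset):  # ImgDataset as defined in the question
    def __init__(self, path, files, augmentation=None):
        super().__init__(path, files, augmentation)
        self._cache = {}

    def __getitem__(self, idx):
        if idx not in self._cache:
            # decode once, then serve from RAM on every later epoch
            self._cache[idx] = cv2.imread(os.path.join(self.path, self.files[idx]))
        img = self._cache[idx]
        if self.augmentation is not None:
            return self.augmentation(image=img)['image']
        return img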
https://stackoverflow.com/questions/63654232/
Can't understand PyTorch tensor broadcasting
I have the following code: import torch d = 2 n = 50 X = torch.randn(n,d) z = torch.tensor([[-1.0], [2.0]]) y = X @ z X.size() z.size() y.size() The output is: torch.Size([50, 2]) torch.Size([2, 1]) torch.Size([50, 1]) My question is: why, after broadcasting, is the size of the result y [50,1] rather than [50,2]? I think it should be [50,2]; am I correct?
The @ is not broadcasting but matrix multiplication. In Python 3.5, the @ operator was introduced for matrix multiplication, following PEP 465. This is implemented e.g. in numpy as the matmul operator. So the size of y is fine: multiplying a matrix of size [50,2] with a vector of size [2,1] outputs a vector of size [50,1]. An example showing it more clearly is: import torch xx = torch.ones(3, 2) zz = torch.tensor([[-1.0], [2.0]]) yy = xx @ zz print(xx) print(zz) print(yy) # tensor([[1., 1.], # [1., 1.], # [1., 1.]]) # tensor([[-1.], # [ 2.]]) # tensor([[1.], # [1.], # [1.]]) As you can see, the third output is indeed just the matrix product of the two tensors. If you wish to learn about broadcasting, I recommend referring to https://medium.com/ai%C2%B3-theory-practice-business/understanding-broadcasting-in-pytorch-ca9e9533f05f
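To see the contrast concretely, a minimal sketch of what actual broadcasting would have produced for the question's shapes, next to the matrix product:

import torch

X = torch.randn(50, 2)
z = torch.tensor([-1.0, 2.0])      # shape (2,)
elementwise = X * z                # broadcasting: elementwise multiply, shape (50, 2)
matmul = X @ z                     # matrix-vector product, shape (50,)
print(elementwise.shape, matmul.shape)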
https://stackoverflow.com/questions/63654500/
How can I save my training progress in PyTorch for a certain batch no.?
I'm simply trying to train a ResNet18 model using the PyTorch library. The training dataset consists of 25,000 images, so even the first epoch takes a long time to complete. I therefore want to save progress after a certain number of batch iterations is completed, but I can't figure out how to modify my code and how to use the torch.save() and torch.load() functions to save periodic progress. My code is given below:

# BUILD THE NETWORK
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
import torchvision.models as models
from torchvision import transforms
from PIL import Image
import matplotlib.pyplot as plt

# DOWNLOAD PRETRAINED MODELS ON ImageNet
model_resnet18 = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model_resnet34 = torch.hub.load('pytorch/vision', 'resnet34', pretrained=True)

for name, param in model_resnet18.named_parameters():
    if 'bn' not in name:
        param.requires_grad = False

for name, param in model_resnet34.named_parameters():
    if 'bn' not in name:
        param.requires_grad = False

num_classes = 2

model_resnet18.fc = nn.Sequential(nn.Linear(model_resnet18.fc.in_features, 512),
                                  nn.ReLU(),
                                  nn.Dropout(),
                                  nn.Linear(512, num_classes))

model_resnet34.fc = nn.Sequential(nn.Linear(model_resnet34.fc.in_features, 512),
                                  nn.ReLU(),
                                  nn.Dropout(),
                                  nn.Linear(512, num_classes))

# FUNCTIONS FOR TRAINING AND LOADING DATA
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=5, device="cuda"):
    print("Inside Train Function\n")
    for epoch in range(epochs):
        print("Epoch : {} running".format(epoch))
        training_loss = 0.0
        valid_loss = 0.0
        model.train()
        k = 0
        for batch in train_loader:
            optimizer.zero_grad()
            inputs, targets = batch
            # move both tensors to the training device
            inputs = inputs.to(device)
            targets = targets.to(device)
            output = model(inputs)
            loss = loss_fn(output, targets)
            loss.backward()
            optimizer.step()
            training_loss += loss.data.item() * inputs.size(0)
            print("End of batch loop iteration {} \n".format(k))
            k = k + 1
        training_loss /= len(train_loader.dataset)

        model.eval()
        num_correct = 0
        num_examples = 0
        for batch in val_loader:
            inputs, targets = batch
            inputs = inputs.to(device)
            output = model(inputs)
            targets = targets.to(device)
            loss = loss_fn(output, targets)
            valid_loss += loss.data.item() * inputs.size(0)
            correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets).view(-1)
            num_correct += torch.sum(correct).item()
            num_examples += correct.shape[0]
        valid_loss /= len(val_loader.dataset)

        print('Epoch: {}, Training Loss: {:.4f}, Validation Loss: {:.4f}, accuracy = {:.4f}'
              .format(epoch, training_loss, valid_loss, num_correct / num_examples))

batch_size = 32
img_dimensions = 224

img_transforms = transforms.Compose([
    transforms.Resize((img_dimensions, img_dimensions)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

img_test_transforms = transforms.Compose([
    transforms.Resize((img_dimensions, img_dimensions)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

def check_image(path):
    try:
        im = Image.open(path)
        return True
    except:
        return False

train_data_path = "E:\\Image Recognition\\dogsandcats\\train\\"
train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=img_transforms, is_valid_file=check_image)

validation_data_path = "E:\\Image Recognition\\dogsandcats\\validation\\"
validation_data = torchvision.datasets.ImageFolder(root=validation_data_path, transform=img_test_transforms, is_valid_file=check_image)

test_data_path = "E:\\Image Recognition\\dogsandcats\\test\\"
test_data = torchvision.datasets.ImageFolder(root=test_data_path, transform=img_test_transforms, is_valid_file=check_image)

num_workers = 6
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers)
validation_data_loader = torch.utils.data.DataLoader(validation_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)

print(torch.cuda.is_available(), "\n")

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

print(f'Num training images: {len(train_data_loader.dataset)}')
print(f'Num validation images: {len(validation_data_loader.dataset)}')
print(f'Num test images: {len(test_data_loader.dataset)}')

def test_model(model):
    print("Inside Test Model Function\n")
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_data_loader:
            images, labels = data[0].to(device), data[1].to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('correct: {:d}  total: {:d}'.format(correct, total))
    print('accuracy = {:f}'.format(correct / total))

model_resnet18.to(device)
optimizer = optim.Adam(model_resnet18.parameters(), lr=0.001)
if __name__ == "__main__":
    train(model_resnet18, optimizer, torch.nn.CrossEntropyLoss(), train_data_loader, validation_data_loader, epochs=2, device=device)
    test_model(model_resnet18)

model_resnet34.to(device)
optimizer = optim.Adam(model_resnet34.parameters(), lr=0.001)
if __name__ == "__main__":
    train(model_resnet34, optimizer, torch.nn.CrossEntropyLoss(), train_data_loader, validation_data_loader, epochs=2, device=device)
    test_model(model_resnet34)

import os

def find_classes(dir):
    classes = os.listdir(dir)
    classes.sort()
    class_to_idx = {classes[i]: i for i in range(len(classes))}
    return classes, class_to_idx

def make_prediction(model, filename):
    labels, _ = find_classes('E:\\Image Recognition\\dogsandcats\\test\\test')
    img = Image.open(filename)
    img = img_test_transforms(img)
    img = img.unsqueeze(0)
    prediction = model(img.to(device))
    prediction = prediction.argmax()
    print(labels[prediction])

make_prediction(model_resnet34, 'E:\\Image Recognition\\dogsandcats\\test\\test\\3.jpg')  # dog
make_prediction(model_resnet34, 'E:\\Image Recognition\\dogsandcats\\test\\test\\5.jpg')  # cat

torch.save(model_resnet18.state_dict(), "./model_resnet18.pth")
torch.save(model_resnet34.state_dict(), "./model_resnet34.pth")

# Remember that you must call model.eval() to set dropout and batch normalization layers to
# evaluation mode before running inference. Failing to do this will yield inconsistent inference results.
resnet18 = torch.hub.load('pytorch/vision', 'resnet18')
resnet18.fc = nn.Sequential(nn.Linear(resnet18.fc.in_features, 512), nn.ReLU(), nn.Dropout(), nn.Linear(512, num_classes))
resnet18.load_state_dict(torch.load('./model_resnet18.pth'))
resnet18.eval()

resnet34 = torch.hub.load('pytorch/vision', 'resnet34')
resnet34.fc = nn.Sequential(nn.Linear(resnet34.fc.in_features, 512), nn.ReLU(), nn.Dropout(), nn.Linear(512, num_classes))
resnet34.load_state_dict(torch.load('./model_resnet34.pth'))
resnet34.eval()

# Test against the average of each prediction from the two models
models_ensemble = [resnet18.to(device), resnet34.to(device)]
correct = 0
total = 0
if __name__ == '__main__':
    with torch.no_grad():
        for data in test_data_loader:
            images, labels = data[0].to(device), data[1].to(device)
            predictions = [i(images).data for i in models_ensemble]
            avg_predictions = torch.mean(torch.stack(predictions), dim=0)
            _, predicted = torch.max(avg_predictions, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    if total != 0:
        print('accuracy = {:f}'.format(correct / total))
        print('correct: {:d}  total: {:d}'.format(correct, total))

To be very precise, I want to save my progress at the end of the for batch in train_loader: loop, for, say, k = 1500. If anyone can guide me on modifying my code so that I can save my progress and resume it later, it would be greatly appreciated.
Whenever you want to save your training progress, you need to save two things:

Your model's state dict
Your optimizer's state dict

This can be done in the following way:

def save_checkpoint(model, optimizer, save_path, epoch):
    torch.save({
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'epoch': epoch
    }, save_path)

To resume training, you can restore your model and optimizer's state dict:

def load_checkpoint(model, optimizer, load_path):
    checkpoint = torch.load(load_path)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    epoch = checkpoint['epoch']
    return model, optimizer, epoch

You can save your model at any point in training, wherever you need to, although it is usually best to save after finishing an epoch.
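Since the question asks about saving inside the batch loop (e.g. every 1500 iterations), here is a minimal sketch of how the save_checkpoint helper above could be called periodically from the train() function in the question; CHECKPOINT_EVERY and checkpoint_path are illustrative names, not part of the original code:

CHECKPOINT_EVERY = 1500                         # illustrative interval
checkpoint_path = "./periodic_checkpoint.pth"   # illustrative path

for batch in train_loader:
    optimizer.zero_grad()
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    k = k + 1
    # save progress every CHECKPOINT_EVERY batch iterations
    if k % CHECKPOINT_EVERY == 0:
        save_checkpoint(model, optimizer, checkpoint_path, epoch)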
https://stackoverflow.com/questions/63655048/
Cupy works well with TITAN V, but not with TITAN RTX
I am using CuPy to run a CUDA kernel alongside PyTorch. My env is Ubuntu 20, Anaconda Python 3.7.6, NVIDIA driver 440, CUDA 10.2, cupy-cuda102, torch 1.4.0.

First, I wrote a simple main script:

import data_load_test
from tqdm import tqdm
import torch
from torch.utils.data import DataLoader

def main():
    dataset = data_load_test.DataLoadTest()
    training_loader = DataLoader(dataset, batch_size=1)
    with torch.cuda.device(0):
        pbar = tqdm(training_loader)
        for epoch in range(3):
            for i, img in enumerate(pbar):
                print("see the message")

if __name__ == "__main__":
    main()

and a data loader like this:

from torch.utils.data import Dataset
import cv2
import cupy as cp

def read_cuda_file(cuda_path):
    f = open(cuda_path, 'r')
    source_line = ""
    while True:
        line = f.readline()
        if not line:
            break
        source_line = source_line + line
    f.close()
    return source_line

class DataLoadTest(Dataset):
    def __init__(self):
        source = read_cuda_file("cuda/cuda_code.cu")
        cuda_source = '''{}'''.format(source)
        module = cp.RawModule(code=cuda_source)
        self.myfunc = module.get_function('myfunc')
        self.input = cp.asarray(cv2.imread("hi.png", -1), cp.uint8)
        h, w, c = self.input.shape
        self.h = h
        self.w = w
        self.output = cp.zeros((w, h, 3), dtype=cp.uint8)
        self.block_size = (32, 32)
        self.grid_size = (h // self.block_size[1], w // self.block_size[0])

    def __len__(self):
        return 1

    def __getitem__(self, idx):
        self.myfunc(self.grid_size, self.block_size, (self.input, self.output, self.h, self.w))
        return cp.asnumpy(self.output)

And my CUDA code is:

#define PI 3.14159265358979323846f
extern "C"{
    __global__ void myfunc(const unsigned char* refImg, unsigned char* warpImg,
                           const long long cols, const long long rows)
    {
        long long x = blockDim.x * blockIdx.x + threadIdx.x;
        long long y = blockDim.y * blockIdx.y + threadIdx.y;
        long long indexImg = y * cols + x;
        warpImg[indexImg * 3] = 0;
        warpImg[indexImg * 3 + 1] = 1;
        warpImg[indexImg * 3 + 2] = 2;
    }
}

I have two GPUs: a TITAN V (device 0) and a TITAN RTX (device 1). When I run this code on the TITAN V (main function, 3rd line),

with torch.cuda.device(0):

it works fine, but on the TITAN RTX,

with torch.cuda.device(1):

it gives an error message like this:

File "cupy/core/raw.pyx", line 66, in cupy.core.raw.RawKernel.__call__
File "cupy/cuda/function.pyx", line 162, in cupy.cuda.function.Function.__call__
File "cupy/cuda/function.pyx", line 144, in cupy.cuda.function._launch
File "cupy/cuda/driver.pyx", line 293, in cupy.cuda.driver.launchKernel
File "cupy/cuda/driver.pyx", line 118, in cupy.cuda.driver.check_status
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_CONTEXT_IS_DESTROYED: context is destroyed

Please help.
In main(), when the DataLoadTest class is instantiated, it happens on the default device 0, so CuPy compiles myfunc() there. The next line, with torch.cuda.device(1):, is where you switch to device 1 in the version that fails. What happens if you call cp.cuda.Device(1).use() as the first line in main(), to make sure myfunc() gets instantiated on device 1?
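A minimal sketch of that suggestion, assuming the rest of the script from the question stays unchanged; cp.cuda.Device(...).use() is CuPy's way of setting the current device for subsequent allocations and compilations:

import cupy as cp
import torch
import data_load_test
from tqdm import tqdm
from torch.utils.data import DataLoader

def main():
    cp.cuda.Device(1).use()  # bind CuPy to the TITAN RTX before the kernel is compiled
    dataset = data_load_test.DataLoadTest()
    training_loader = DataLoader(dataset, batch_size=1)
    with torch.cuda.device(1):
        pbar = tqdm(training_loader)
        for epoch in range(3):
            for i, img in enumerate(pbar):
                print("see the message")

if __name__ == "__main__":
    main()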
https://stackoverflow.com/questions/63656680/
Why is the W_q matrix in torch.nn.MultiheadAttention square?
I am trying to implement nn.MultiheadAttention in my network. According to the docs,

embed_dim – total dimension of the model.

However, according to the source file,

embed_dim must be divisible by num_heads

and

self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))

If I understand properly, this means each head takes only a part of the features of each query, as the matrix is square. Is it a bug in the implementation, or is my understanding wrong?
Each head uses a different part of the projected query vector. You can imagine it as if the query gets split into num_heads vectors that are independently used to compute the scaled dot-product attention. So, each head operates on a different linear combination of the features in the queries (and keys and values, too). This linear projection is done using the self.q_proj_weight matrix, and the projected queries are passed to the F.multi_head_attention_forward function.

In F.multi_head_attention_forward, it is implemented by reshaping and transposing the query vector, so that the independent attentions for individual heads can be computed efficiently by matrix multiplication.

The attention head size is a design decision of PyTorch. In theory, you could have a different head size, so the projection matrix would have a shape of embedding_dim × (num_heads * head_dim). Some implementations of transformers (such as the C++-based Marian for machine translation, or Hugging Face's Transformers) allow that.
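To make the head-splitting concrete, here is a small sketch of what the reshape/transpose effectively does with the projected queries; the dimensions are illustrative and the shapes follow PyTorch's (seq_len, batch, embed_dim) convention:

import torch

seq_len, batch, embed_dim, num_heads = 5, 2, 12, 4
head_dim = embed_dim // num_heads  # 3

q = torch.randn(seq_len, batch, embed_dim)
w_q = torch.randn(embed_dim, embed_dim)  # square projection, as in the source

q_proj = q @ w_q.t()  # (seq_len, batch, embed_dim)
# reshape so each head sees its own head_dim-sized slice of the projection
q_heads = q_proj.contiguous().view(seq_len, batch * num_heads, head_dim).transpose(0, 1)
print(q_heads.shape)  # torch.Size([8, 5, 3]) -> (batch * num_heads, seq_len, head_dim)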
https://stackoverflow.com/questions/63657679/
Unable to install pillow 6.2.1 in conda
After upgrading pytorch / torchvision the following error occurs:

python -c "import torch; import torchvision as tv; print(torch.__version__, tv.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/__init__.py", line 4, in <module>
    from torchvision import datasets
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/datasets/__init__.py", line 9, in <module>
    from .fakedata import FakeData
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/datasets/fakedata.py", line 3, in <module>
    from .. import transforms
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/transforms/__init__.py", line 1, in <module>
    from .transforms import *
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 17, in <module>
    from . import functional as F
  File "/home/pointr/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 5, in <module>
    from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' (/home/pointr/anaconda3/lib/python3.7/site-packages/PIL/__init__.py)

This has been noted as being due to an incompatibility between torchvision and Pillow 7.0.0 (https://github.com/pytorch/vision/issues/1712), so I need to downgrade to Pillow 6.2.1. The command posted to do this is:

conda install pillow=6.2.1 -y

However that is failing:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |

That is hanging. So now what? The workaround itself needs a workaround. Is conda broken for torchvision now? The primary reason I am using conda in the first place is torch / torchvision.

Update

Conda tried to resolve conflicts. After 20 minutes of this it was 13% done. Ridiculous. This is a 2020 Core i7 mini-tower. No sane program takes more than low double-digit seconds to resolve dependencies. I finally killed it. I am going to try the suggestion to install directly from pip:

pip install Pillow==6.2.1

OK, that is hanging too. I am going to uninstall Pillow and reinstall it with that version.

Another update

@erip has recommended conda install -c conda-forge pillow=6.2.1, so here we go:

conda install -c conda-forge pillow=6.2.1
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: \

I will let the dust settle on that one, but it's looking quite unlikely.
The only thing that worked for me is to uninstall / reinstall conda:

https://docs.anaconda.com/anaconda/install/uninstall/
https://docs.conda.io/en/latest/miniconda.html

And here are the versions conda elected to install:

(base) pointr@alienware:~/anaconda3$ python -c "import cv2; import PIL; print('cv2: ' + cv2.__version__); print('PIL: ' + PIL.__version__)"
cv2: 4.1.0
PIL: 7.1.2
(base) pointr@alienware:~/anaconda3$ python -c "import torch; import torchvision as tv; print('torch: ' + torch.__version__); print('torchvision: ' + tv.__version__)"
torch: 1.3.1
torchvision: 0.4.2
https://stackoverflow.com/questions/63660023/
Normal distribution sampling in pytorch-lightning
In PyTorch Lightning you usually never have to specify cuda or gpu, but when I want to create a gaussian-sampled tensor using torch.normal I get

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

So, how do I have to change torch.normal such that PyTorch Lightning works properly? I use the code on different machines, on CPU and on GPU:

centers = data["centers"]  # already on GPU... sometimes...
lights = torch.normal(0, 1, size=[100, 3])
lights += centers
The recommended way is to do

lights = torch.normal(0, 1, size=[100, 3], device=self.device)

if this is inside the lightning class. You could also do

lights = torch.normal(0, 1, size=[100, 3]).type_as(tensor)

where tensor is some tensor which is already on cuda.
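For context, a minimal sketch of where self.device is available; the module name and the loss are made up for illustration, but the sampling line is the one from the answer:

import torch
import pytorch_lightning as pl

class LightsModule(pl.LightningModule):  # hypothetical module name
    def training_step(self, batch, batch_idx):
        centers = batch["centers"]  # Lightning has already moved this to the right device
        # sample on the same device as the module, CPU or GPU alike
        lights = torch.normal(0, 1, size=[100, 3], device=self.device)
        lights = lights + centers
        return lights.pow(2).mean()  # illustrative loss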
https://stackoverflow.com/questions/63660624/
Latent vector variance much larger in CoreML than PyTorch
I have a PyTorch model that I've converted to CoreML. In PyTorch, the inferred latent vector's values range to a limit of around ±1.6, but the converted mlmodel varies as much as ±55.0. What might be causing this huge discrepancy?

The conversion is pretty straightforward:

encoder = Encoder_CNN(latent_dim, n_c)
enc_fname = os.path.join(models_dir, encoder.name + '_153.pth.tar')
encoder.load_state_dict(torch.load(enc_fname, map_location={'cuda:0': 'cpu'}))
encoder.cpu()
encoder.eval()

img_in = torch.rand(1, 1, 28, 28).cpu()
traced_encoder_model = torch.jit.trace(encoder, img_in)
traced_encoder_model.save("{}_encoder_mobile_{}_{}.pt".format(model_type, latent_dim, n_c))

coreml_encoder = ct.convert(traced_encoder_model, inputs=[ct.ImageType(name=name, shape=img_in.shape)])
coreml_encoder.save("{}_encoder_{}_{}.mlmodel".format(model_type, latent_dim, n_c))
You are probably using different input normalization for PyTorch and Core ML. Your img_in consists of values between 0 and 1. I don't see the inference code for Core ML, but your input pixels are probably between 0 and 255 there. You can fix this by specifying image preprocessing settings when you convert the PyTorch model to Core ML.
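A sketch of what that might look like with coremltools' scale parameter, assuming the model expects inputs in [0, 1]; check this against your model's actual normalization rather than copying it verbatim:

import coremltools as ct

# scale is applied to each input pixel at inference time, so 0-255 image inputs
# arrive at the model as 0-1 values, matching the PyTorch training range
coreml_encoder = ct.convert(
    traced_encoder_model,
    inputs=[ct.ImageType(name=name, shape=img_in.shape, scale=1.0 / 255.0)],
)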
https://stackoverflow.com/questions/63661197/
Weight & Biases Detectron2 Google Colab - wandb: ERROR Unable to log event [Errno 95] Operation not supported
I am training a Faster R-CNN model with Detectron2 in Google Colab. I would like to track my experiments with Weights and Biases (WandB). My dataset is uploaded to Google Drive and mounted to the session via:

from google.colab import drive
drive.mount('/content/gdrive')

Following the suggestion from https://github.com/facebookresearch/detectron2/issues/774, I am trying to link WandB via Tensorboard with:

import wandb
wandb.init(sync_tensorboard=True)

Once the training starts I get the following error repeatedly:

wandb: ERROR Unable to log event [Errno 95] Operation not supported: '/content/gdrive/My Drive/Data/output/events.out.tfevents.1598810231.3dc4616192b5.103.0' -> '/content/gdrive/My Drive/Data/wandb/run-20200830_175618-3fp3tyhs/events.out.tfevents.1598810231.3dc4616192b5.103.0'

In this case, in my WandB account, I can see that there is an active experiment running, but there are no logs of losses, learning rate, etc.; only hardware info like the specs of the GPU shows up.

Interestingly, when I add the linking between Tensorboard and WandB in the Demo Colab Notebook of Detectron2 (https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5), it works perfectly: the logging of losses, learning rate, etc. shows up in my WandB account.

Can I get some tips regarding what is going wrong in my case?
After one week the problem disappeared. I assume that someone must have fixed the bug that caused this issue. I can now use

import wandb
wandb.init(sync_tensorboard=True)

and all the training metrics are synchronized to WandB without any problems.
https://stackoverflow.com/questions/63661337/
GCP AI Platform Notebook driver too old?
I am trying to run the following Hugging Face Transformers tutorial on GCP's AI Platform Notebook with 32 vCPUs, 208 GB RAM, and 2 NVIDIA Tesla T4s. However, when I try to run the part

model = DistillBERTClass()
model.to(device)

I get the following AssertionError:

AssertionError: The NVIDIA driver on your system is too old (found version 10010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.

However, when I run !nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   38C    P0    22W /  70W |     10MiB / 15079MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:00:05.0 Off |                    0 |
| N/A   39C    P8    10W /  70W |     10MiB / 15079MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

The version of the NVIDIA driver is compatible with the latest PyTorch version, which I am using. Has anyone else run into this error, and is there a way around it?
You can try a newer NVIDIA driver version; the latest CUDA 11 driver version is supported. Then install PyTorch on top of it:

gcloud beta notebooks instances create cuda11 \
  --vm-image-project=deeplearning-platform-release \
  --vm-image-family=common-cu110-notebooks-debian-9 \
  --machine-type=n1-standard-1 \
  --location=us-west1-a \
  --format=json

Image families:

common-cu110-notebooks-debian-9
common-cu110-notebooks-debian-10
https://stackoverflow.com/questions/63662548/
How do I retain grads and also change device type in pytorch?
When I change my input variable from CPU to CUDA, it loses its grad and also its is_leaf status. How do I circumvent this? I want to keep the gradients and also move the tensor to another device.
A leaf tensor is a tensor that was created directly by the user rather than as the result of an autograd-tracked operation (for tensors with requires_grad=False, every tensor counts as a leaf). When you do any out-of-place operation on a tensor that requires grad, the resulting tensor is no longer a leaf tensor. This includes creating a copy of the tensor on a different device using .to(device), .cuda(), or .cpu().

The recommended way to set the requires_grad attribute of an existing tensor is to use the in-place Tensor.requires_grad_ method. If you want the tensor on the GPU to be the leaf node, then you will need to set requires_grad after copying to the desired device. For example:

input = input.to('cuda')
input.requires_grad_(True)  # need to set requires_grad after copying to the GPU

or a little more concisely:

input = input.to('cuda').requires_grad_(True)
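A quick self-contained check of that behavior:

import torch

x = torch.randn(3, requires_grad=True)  # created by the user: a leaf
y = x.to('cuda')                        # result of an operation: not a leaf
print(x.is_leaf, y.is_leaf)             # True False

z = torch.randn(3).to('cuda').requires_grad_(True)  # set the flag after the copy
print(z.is_leaf, z.requires_grad)       # True True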
https://stackoverflow.com/questions/63662624/
How to replace every row that is filled with zeros with a certain value in a PyTorch tensor?
I have a PyTorch tensor of size bsize x 50 x 50 where some of the rows are completely filled with zeros:

[[0, 2, 0, ..., 0, 0, 0],
 [2, 0, 2, ..., 0, 0, 0],
 [0, 2, 0, ..., 0, 0, 0],
 ...,
 [0, 0, 0, ..., 0, 0, 0],
 [0, 0, 0, ..., 0, 0, 0],
 [0, 0, 0, ..., 0, 0, 0]],

[[0, 2, 0, ..., 0, 0, 0],
 [2, 0, 2, ..., 0, 0, 0],
 [0, 2, 0, ..., 0, 0, 0],

I want to replace the rows filled with zeros with a negative value, -100, throughout the tensor. Expected tensor:

[[0, 2, 0, ..., 0, 0, 0],
 [2, 0, 2, ..., 0, 0, 0],
 [0, 2, 0, ..., 0, 0, 0],
 ...,
 [-100, -100, -100, ..., -100, -100, -100],
 [-100, -100, -100, ..., -100, -100, -100],
 [-100, -100, -100, ..., -100, -100, -100]],

[[0, 2, 0, ..., 0, 0, 0],
 [2, 0, 2, ..., 0, 0, 0],
 [0, 2, 0, ..., 0, 0, 0],

What's the best way to do this while avoiding a loop over the row dimension?
Assuming x is your tensor with shape BxRxC (batch, rows, and columns), you can do something like this:

x[(x == 0).all(dim=-1)] = -100

Basically:

x == 0 returns a boolean tensor (shape BxRxC) with True where it is equal to zero;
then, .all(dim=-1) returns another boolean tensor, now with shape BxR because we chose to apply all in the last dimension (-1), with True where all the columns are True;
finally, we use this boolean tensor to index the original tensor, and -100 is assigned to the True positions.
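A small runnable demo of that one-liner on a 1x3x3 tensor:

import torch

x = torch.tensor([[[0, 2, 0],
                   [0, 0, 0],
                   [2, 0, 2]]])
x[(x == 0).all(dim=-1)] = -100
print(x)
# tensor([[[   0,    2,    0],
#          [-100, -100, -100],
#          [   2,    0,    2]]])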
https://stackoverflow.com/questions/63675583/
Hugging face - RuntimeError: Caught RuntimeError in replica 0 on device 0 on Azure Databricks
How do I run the run_language_modeling.py script from Hugging Face, using the pretrained roberta-base model, to fine-tune on my own data on Azure Databricks with a GPU cluster?

Using Transformers version 2.9.1 and 3.0, Python 3.6, Torch 1.5.0, torchvision 0.6.

This is the script I ran on Azure Databricks:

%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
  --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
  --model_type=roberta \
  --model_name_or_path=roberta-base \
  --do_train \
  --num_train_epochs 5 \
  --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
  --mlm

This is the error I get after running the above command:

/dbfs/FileStore/tables/dev/run_language_modeling.py in <module>
    279
    280 if __name__ == "__main__":
--> 281     main()

/dbfs/FileStore/tables/dev/run_language_modeling.py in main()
    243         else None
    244     )
--> 245     trainer.train(model_path=model_path)
    246     trainer.save_model()
    247     # For convenience, we also re-save the tokenizer to the same directory,

/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path)
    497                     continue
    498
--> 499                 tr_loss += self._training_step(model, inputs, optimizer)
    500
    501                 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (

/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)
    620             inputs["mems"] = self._past
    621
--> 622         outputs = model(**inputs)
    623         loss = outputs[0]  # model outputs are always tuple in transformers (see doc)
    624

/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
    153             return self.module(*inputs[0], **kwargs[0])
    154         replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 155         outputs = self.parallel_apply(replicas, inputs, kwargs)
    156         return self.gather(outputs, self.output_device)
    157

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
    163
    164     def parallel_apply(self, replicas, inputs, kwargs):
--> 165         return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
    166
    167     def gather(self, outputs, output_device):

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
    83         output = results[i]
    84         if isinstance(output, ExceptionWrapper):
--> 85             output.reraise()
    86         outputs.append(output)
    87     return outputs

/databricks/python/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
    393             # (https://bugs.python.org/issue2651), so we work around it.
    394             msg = KeyErrorMessage(msg)
--> 395             raise self.exc_type(msg)

RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 239, in forward
    output_hidden_states=output_hidden_states,
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 762, in forward
    output_hidden_states=output_hidden_states,
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 439, in forward
    output_attentions,
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 371, in forward
    hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 315, in forward
    hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
  File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 240, in forward
    attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 11.17 GiB total capacity; 10.68 GiB already allocated; 95.31 MiB free; 10.77 GiB reserved in total by PyTorch)

Please, how do I resolve this?
The out-of-memory error usually means a mini-batch of data does not fit in GPU memory (it can also happen when a previous session has not been cleaned up and is still holding the GPU). Reducing the batch size is the standard fix. From a similar GitHub issue:

It is because the mini-batch of data does not fit into GPU memory. Just decrease the batch size. When I set batch size = 256 for the cifar10 dataset I got the same error; then I set the batch size = 128, and it was solved.
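Applied to the command from the question, that means passing a smaller batch size to run_language_modeling.py. The flag name depends on the Transformers version (per_gpu_train_batch_size in 2.x, per_device_train_batch_size in 3.x), and the value 2 below is only illustrative, so treat this as a sketch:

%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
  --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
  --model_type=roberta \
  --model_name_or_path=roberta-base \
  --do_train \
  --num_train_epochs 5 \
  --per_device_train_batch_size 2 \
  --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
  --mlm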
https://stackoverflow.com/questions/63676307/
How to get stable output for torch.nn.Transformer
It looks like the Transformer layers of PyTorch give non-reproducible outputs. This happens both on CPU and GPU. I know that it sometimes happens because of parallel computations on GPU.

emb = nn.Embedding(10, 12).to(device)
inp1 = torch.LongTensor([1, 2, 3, 4]).to(device)
inp1 = emb(inp1).reshape(inp1.shape[0], 1, 12)  # S N E

encoder_layer = nn.TransformerEncoderLayer(d_model=12, nhead=4)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)

out1 = transformer_encoder(inp1)
out2 = transformer_encoder(inp1)

out1 and out2 are different. It could be multiprocessing on CPU, but the results look too shaky. How do I fix this?
nn.TransformerEncoderLayer has a default dropout rate of 0.1. The indices to be dropped will be randomized in every iteration when the model is in training mode.

If you want to train the model with dropout, just ignore this behavior in training and call model.eval() in testing. If you want to disable such random behavior in training, set dropout=0 like so:

nn.TransformerEncoderLayer(d_model=12, nhead=4, dropout=0)

Full testing script:

import torch
import torch.nn as nn

device = 'cpu'

emb = nn.Embedding(10, 12).to(device)
inp1 = torch.LongTensor([1, 2, 3, 4]).to(device)
inp1 = emb(inp1).reshape(inp1.shape[0], 1, 12)  # S N E

encoder_layer = nn.TransformerEncoderLayer(d_model=12, nhead=4, dropout=0).to(device)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=4).to(device)

out1 = transformer_encoder(inp1)
out2 = transformer_encoder(inp1)
print((out1 - out2).norm())
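Alternatively, keeping the default dropout and switching the module to evaluation mode also makes the outputs repeatable; a short check, reusing inp1 from the script above but with an encoder built without dropout=0:

encoder_layer = nn.TransformerEncoderLayer(d_model=12, nhead=4)  # default dropout=0.1
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)

transformer_encoder.eval()  # disables dropout
with torch.no_grad():
    out1 = transformer_encoder(inp1)
    out2 = transformer_encoder(inp1)
print(torch.equal(out1, out2))  # True on CPU: dropout no longer randomizes the output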
https://stackoverflow.com/questions/63680281/
Why can't PyCharm show PyTorch Module object attributes in debug mode
I define a subclass of PyTorch's Module in PyCharm and create an instance a:

from torch.nn import Module

class AModule(Module):
    def __init__(self):
        self.something = 10

    def __repr__(self):
        return "AModule"

a = AModule()

If I run the debugger and examine a, I can't see its attributes. I checked, and Module is written in Python (as opposed to being implemented in C), so why is that?
This is caused by not having properly initialized Module with a super call in the first line of __init__:

super(AModule, self).__init__()

However, PyCharm could have shown more useful information, so I created this issue.
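Put together, the fixed class looks like this:

from torch.nn import Module

class AModule(Module):
    def __init__(self):
        super(AModule, self).__init__()  # initialize Module's internal state first
        self.something = 10

    def __repr__(self):
        return "AModule"

a = AModule()  # attributes now show up in the debugger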
https://stackoverflow.com/questions/63680816/
How can I have submodules of a PyTorch Module that are not attributes of the module
I would like to have a PyTorch subclass of Module that keeps sub-modules in a list (because there may be a variable number of sub-modules depending on the constructor's arguments). I set this list in the following way:

self.hidden_layers = [torch.nn.Linear(i, o) for i, o in pairwise(self.layer_sizes)]

According to this and this question, a submodule is only registered by __setattr__ when a Module object is assigned to an attribute of self. Because hidden_layers is not assigned an object of type Module, the submodules in the list are not registered as submodules, and as a result self.parameters() does not iterate over the submodules' parameters.

I suppose I could explicitly call __setattr__ for each element of the list, but that would be quite ugly. Is there a more correct way to register a submodule that is not a direct attribute of Module?
As answered, nn.ModuleList is what you want. What you can also use is nn.Sequential: you can create a list of layers and then combine them via nn.Sequential, which will just act as a wrapper and combine all layers into essentially one layer/module. This has the advantage that you only need one call to forward through all the layers, which is nice if you have a dynamic count of modules, so you don't have to write the loops yourself. One example is in the PyTorch ResNet code: https://github.com/pytorch/vision/blob/497744b9d510ff2df756f479ee5a19fce0d579b6/torchvision/models/resnet.py#L177
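A sketch of both options, applied to the hidden_layers example from the question (pairwise and self.layer_sizes are assumed to come from the asker's code):

import torch.nn as nn

# Option 1: ModuleList - registers every layer; you write the forward loop yourself
self.hidden_layers = nn.ModuleList(
    nn.Linear(i, o) for i, o in pairwise(self.layer_sizes)
)

# Option 2: Sequential - registers every layer and forwards them in one call
self.hidden_layers = nn.Sequential(
    *[nn.Linear(i, o) for i, o in pairwise(self.layer_sizes)]
)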
https://stackoverflow.com/questions/63681985/
How to prune a Detectron2 model?
I'm a teacher who has been studying computer vision for months. I was very excited when I was able to train my first object detection model using Detectron2's Faster R-CNN model. And it works like a charm! Super cool!

But the problem is that, in order to increase the accuracy, I used the largest model in the model zoo. Now I want to deploy this as something people can use to ease their job, but the model is so large that it takes ~10 seconds to infer a single image on my CPU, an Intel i7-8750H. Therefore, it's really difficult to deploy this model even on a regular cloud server. I would need either GPU servers or latest-model CPU servers, which are really expensive, and I'm not sure I could even cover the server expenses for months. I need to make it smaller and faster for deployment.

So, yesterday I found that there's something called pruning the model!! I was very excited (since I'm not a computer or data scientist, don't blame me (((: ). I read the official pruning documentation of PyTorch, but it's really difficult for me to understand. I found global pruning to be one of the easiest to do, but the problem is, I have no idea what parameters I should write to prune.

Like I said, I used the Faster R-CNN X-101 model. I have it as "model_final.pth". It uses Base-RCNN-FPN.yaml, and its meta architecture is "GeneralizedRCNN". It seems like an easy configuration to do, but like I said, since it's not my field, it's very hard for a person like me.

I'd be more than happy if you could help me with this step by step. I'm leaving the cfg.yaml which I used to train the model (saved using the "dump" method in Detectron2's config class) just in case. Here's the Drive link.

Thank you very much in advance.
So I guess you are trying to optimize inference time while achieving satisfactory accuracy. Without knowing details about your object types, training size, and image size, it is hard to provide concrete suggestions. However, as you know, ML project development is an iterative process; you can have a look at the following page and compare inference time against accuracy:

https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md#coco-object-detection-baselines

I would suggest you try the R50-FPN backbone and see how your accuracy comes out. Then you will get a better understanding of what to do next.
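Switching backbones in Detectron2 is mostly a config change; a sketch using the model zoo helpers, with the dataset-specific settings left out because they depend on your training setup:

from detectron2.config import get_cfg
from detectron2 import model_zoo

cfg = get_cfg()
# start from the smaller R50-FPN baseline instead of X-101
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
# then register your dataset, set the number of classes, etc., and retrain as before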
https://stackoverflow.com/questions/63687033/
Extracting execution DAG from PyTorch Module
Let us consider a simple feed-forward deep net, or even a ResNet. In both cases, you can think of the execution flow as a DAG (directed acyclic graph) where nodes are layers (it's a chain if there are no skip connections). For some reason, I need this DAG for PyTorch models. Up to now I have been using custom objects subclassing nn.Module (that store the information I need), but ultimately I would like to be able to use an arbitrary model as input to my pipeline.

Is there a way to extract this DAG automatically? (It does not matter if it doesn't work in all cases; I am mainly looking for general ideas / principles to achieve my goal.)
That is indeed possible. Consider the following evaluation of a simple formula as a stand-in for a more complicated network:

import torch

a = torch.randn(3, requires_grad=True)
b = 3 * a + 1
c = torch.relu(b)
d = c.sum()

Then d has a .grad_fn attribute, from which you can recurse through the evaluation graph like so:

d.grad_fn
d.grad_fn.next_functions
d.grad_fn.next_functions[0][0].next_functions
d.grad_fn.next_functions[0][0].next_functions[0][0].next_functions

Basically, next_functions gives you a list of the arguments to the current module/operation, and each entry of this list is a tuple with the actual object as well as an integer that indicates the position of the argument. This is documented in more detail here.

If you don't want to do it yourself, you can also use the visualization tool for the computation graph that is built into TensorBoard, that is, SummaryWriter.add_graph(), as documented here.
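A small sketch of turning that recursion into an explicit traversal; print_graph is a made-up helper name, not part of PyTorch, and the exact grad_fn class names in the output vary between PyTorch versions:

import torch

def print_graph(fn, depth=0):
    # recursively print the autograd graph rooted at grad_fn `fn`
    if fn is None:  # constants and non-tracked inputs show up as None
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _input_pos in fn.next_functions:
        print_graph(next_fn, depth + 1)

a = torch.randn(3, requires_grad=True)
d = torch.relu(3 * a + 1).sum()
print_graph(d.grad_fn)
# prints something like (names vary by version):
# SumBackward0
#   ReluBackward0
#     AddBackward0
#       MulBackward0
#         AccumulateGrad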
https://stackoverflow.com/questions/63688636/
PyTorch Tensorboard not as described in documentation
I'm using PyTorch's utils.tensorboard.writer to log training of an RNN. For the add_hparams() function the docs say:

Params:
hparam_dict (dict) – Each key-value pair in the dictionary is the name of the hyper parameter and its corresponding value. The type of the value can be one of bool, string, float, int, or None.
metric_dict (dict) – Each key-value pair in the dictionary is the name of the metric and its corresponding value. Note that the key used here should be unique in the tensorboard record. Otherwise the value you added by add_scalar will be displayed in the hparam plugin. In most cases, this is unwanted.
hparam_domain_discrete – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold.
run_name (str) – Name of the run, to be included as part of the logdir. If unspecified, will use current timestamp.

Source: https://pytorch.org/docs/master/tensorboard.html

But when I try to use the run_name parameter, I get the error

TypeError: add_hparams() got an unexpected keyword argument 'run_name'

So I looked up the writer.py file I imported and found the cause to be the add_hparams() signature itself:

def add_hparams(self, hparam_dict, metric_dict):

I checked my installation of PyTorch, and it's up to date. Is this some sort of nightly feature? If so, how can I download the nightly version of torch?
The docs you linked to are the ones from the current master branch of PyTorch. So yes, it is a nightly feature: the docs of the stable version (1.6) do not mention the run_name parameter of add_hparams. You can get the command to download PyTorch nightly here by selecting Preview (Nightly) instead of Stable.
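For reference, the selector generates a pip command of roughly this shape; the CUDA variant in the URL depends on your setup, so verify against the site rather than copying this verbatim:

pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html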
https://stackoverflow.com/questions/63707055/