TypeError: __init__() got an unexpected keyword argument 'filepath'
I don't know how to solve this error, but I hope some of you know how to fix it.

Error: `TypeError: __init__() got an unexpected keyword argument 'filepath'`

Full error message:

```
File "train.py", line 167, in <module>
    main(args)
  File "train.py", line 113, in main
    checkpoint_callback=checkpoint_callback(),
  File "train.py", line 86, in checkpoint_callback
    return ModelCheckpoint(
TypeError: __init__() got an unexpected keyword argument 'filepath'
```

```python
from pytorch_lightning.callbacks import ModelCheckpoint

save_model_path = path/to/your/dir

def checkpoint_callback():
    return ModelCheckpoint(
        filepath=save_model_path,
        save_top_k=True,
        verbose=True,
        monitor='val_loss',
        mode='min',
        prefix=''
    )
```
ModelCheckpoint doesn't have a `filepath` keyword argument; it does, however, have a `dirpath` keyword (as you can see in the documentation). Replace `filepath` with `dirpath`, like this:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

save_model_path = path/to/your/dir

def checkpoint_callback():
    return ModelCheckpoint(
        dirpath=save_model_path,  # changed line
        save_top_k=True,
        verbose=True,
        monitor='val_loss',
        mode='min',
        prefix=''
    )
```
https://stackoverflow.com/questions/66489112/
Pytorch: split a tensor by column
How can I split a tensor by column (axis = 1)? For example, given the input

```
tensor([[1, 1],
        [2, 1],
        [3, 2],
        [1, 2],
        [2, 3],
        [3, 3]])
```

I want the result

```
(tensor([1, 2, 3, 1, 2, 3]), tensor([1, 1, 2, 2, 3, 3]))
```

The solution I came up with is to first transpose the input tensor, split it, and then flatten each of the split tensors. However, is there a simpler and more effective way of doing this? Thank you.

```python
import torch

x = torch.LongTensor([[1,1],[2,1],[3,2],[1,2],[2,3],[3,3]])
x1, x2 = torch.split(x.T, 1)
x1 = torch.flatten(x1)
x2 = torch.flatten(x2)
x1, x2  # output
```
Simply do:

```python
x1 = x[:, 0]
x2 = x[:, 1]
# x1: tensor([1, 2, 3, 1, 2, 3]), x2: tensor([1, 1, 2, 2, 3, 3])
```
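If you prefer a single call that splits all columns at once, `torch.unbind` does the same thing; a small equivalent sketch:

```python
# unbind removes dim 1 and returns one tensor per column (views, no copy)
x1, x2 = x.unbind(dim=1)
```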
https://stackoverflow.com/questions/66489810/
CUDA_OUT_OF_MEMORY in PyTorch head2head model
I am running the head2head model presented in the GitHub repo here. I launch the code with the following command:

```
./scripts/train/train_on_target.sh Obama head2headDataset
```

with the contents of `train_on_target.sh` being:

```bash
target_name=$1
dataset_name=$2

python train.py --checkpoints_dir checkpoints/$dataset_name \
                --target_name $target_name \
                --name head2head_$target_name \
                --dataroot datasets/$dataset_name/dataset \
                --serial_batches
```

Then I get the following error:

```
Traceback (most recent call last):
  File "train.py", line 108, in <module>
    flow_ref, conf_ref, t_scales, n_frames_D)
  File "/home/nitin/head2head/util/util.py", line 48, in get_skipped_flows
    flow_ref_skipped[s], conf_ref_skipped[s] = flowNet(real_B[s][:,1:], real_B[s][:,:-1])
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet.py", line 38, in forward
    flow, conf = self.compute_flow_and_conf(input_A, input_B)
  File "/home/nitin/head2head/models/flownet.py", line 55, in compute_flow_and_conf
    flow1 = self.flowNet(data1)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet2_pytorch/models.py", line 156, in forward
    flownetfusion_flow = self.flownetfusion(concat3)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet2_pytorch/networks/FlowNetFusion.py", line 62, in forward
    concat0 = torch.cat((out_conv0,out_deconv0,flow1_up),1)
RuntimeError: CUDA out of memory. Tried to allocate 82.00 MiB (GPU 0; 5.80 GiB total capacity; 4.77 GiB already allocated; 73.56 MiB free; 4.88 GiB reserved in total by PyTorch)
```

I have checked the batch size in the file `options/base_options.py`; it is already set to 1. How can I resolve this exception? My system has a 6 GB NVIDIA GTX 1660 Super GPU.
Data management: you can try reducing the dataset used for training to check whether the error is a hardware limitation. Moreover, if it is an image dataset, you can reduce the dimensions of the images by lowering their resolution. Model parameter management: another approach is to reduce the number of parameters of your model. The first suggestion would be to shrink the dense layer sizes, and after that the other network hyperparameters.
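If shrinking the data or the model is not enough, mixed-precision training can also cut activation memory roughly in half. A minimal sketch using `torch.cuda.amp` (available in PyTorch 1.6+); whether it drops into the head2head training loop cleanly is untested, and `model`, `criterion`, `optimizer`, and `loader` are placeholder names:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for inputs, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # run the forward pass in float16 where safe
        outputs = model(inputs)
        loss = criterion(outputs, targets)
    scaler.scale(loss).backward()        # scale the loss to avoid float16 underflow
    scaler.step(optimizer)               # unscales gradients, then steps
    scaler.update()
```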
https://stackoverflow.com/questions/66492008/
Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU
Following my previous question, I have written this code to train an autoencoder and then extract the features. (There might be some changes in the variable names.)

```python
# Autoencoder class
# https://medium.com/pytorch/implementing-an-autoencoder-in-pytorch-19baa22647d1
class AE_class(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.encoder_hidden_layer = nn.Linear(
            in_features=kwargs["input_shape"], out_features=128
        )
        self.encoder_output_layer = nn.Linear(in_features=128, out_features=128)
        self.decoder_hidden_layer = nn.Linear(in_features=128, out_features=128)
        self.decoder_output_layer = nn.Linear(
            in_features=128, out_features=kwargs["input_shape"]
        )

    def forward(self, features):
        #print("in forward")
        #print(type(features))
        activation = self.encoder_hidden_layer(features)
        activation = torch.relu(activation)
        code = self.encoder_output_layer(activation)
        code = torch.relu(code)
        activation = self.decoder_hidden_layer(code)
        activation = torch.relu(activation)
        activation = self.decoder_output_layer(activation)
        reconstructed = torch.relu(activation)
        return reconstructed

    def encode(self, features_h):
        activation_h = self.encoder_hidden_layer(features_h)
        activation_h = torch.relu(activation_h)
        code_h = self.encoder_output_layer(activation_h)
        code_h = torch.relu(code_h)
        return code_h
```

And then, for training:

```python
def retrieve_AE_features(X_before, n_voxel_region):
    # use gpu if available
    # https://discuss.pytorch.org/t/runtimeerror-tensor-for-out-is-on-cpu-tensor-for-argument-1-self-is-on-cpu-but-expected-them-to-be-on-gpu-while-checking-arguments-for-addmm/105453
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # create a model from the AE autoencoder class
    # and load it to the specified device, either gpu or cpu
    model_AE = AE_class(input_shape=n_voxel_region).to(device)

    # Adam optimizer with learning rate 1e-3
    optimizer = optim.Adam(model_AE.parameters(), lr=1e-3)

    # mean-squared error loss
    criterion = nn.MSELoss()

    X_tensor = torch.tensor(X_before, dtype=torch.float32)
    print(type(X_tensor))

    train_loader = torch.utils.data.DataLoader(
        X_tensor, batch_size=64, shuffle=True, num_workers=2, pin_memory=True
    )
    test_loader = torch.utils.data.DataLoader(
        X_tensor, batch_size=32, shuffle=False, num_workers=2
    )
    print(type(train_loader))

    for epoch in range(epochs_AE):
        loss = 0
        for batch_features in train_loader:
            # reshape mini-batch data to [N, 784] matrix and load it to the active device
            #batch_features = batch_features.view(-1, 784).to(device)
            #print(batch_features.shape)

            # reset the gradients back to zero;
            # PyTorch accumulates gradients on subsequent backward passes
            optimizer.zero_grad()

            # compute reconstructions
            outputs = model_AE(batch_features)

            # compute training reconstruction loss
            train_loss = criterion(outputs, batch_features)

            # compute accumulated gradients
            train_loss.backward()

            # perform parameter update based on current gradients
            optimizer.step()

            # add the mini-batch training loss to epoch loss
            loss += train_loss.item()

        # compute and display the epoch training loss
        loss = loss / len(train_loader)
        print("AE, epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs_AE, loss))

    # after training
    hidden_features = model_AE.encode(X_before)
    return hidden_features
```

However, I received the following error:

```
Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)
```

It seems some of my variables should be defined in another way to be executable on GPU. My questions: How can I tell which variables will be executed on GPU and which ones on CPU? How do I fix this, i.e., how do I define a variable so it can be executed on GPU? Thanks in advance.
I see that your model is moved to a device decided by this line: `device = torch.device("cuda" if torch.cuda.is_available() else "cpu")`. This is either `cpu` or `cuda`. Since your model is moved to the device, you should also move your input to the device: adding `batch_features = batch_features.to(device)` does exactly that. The code below has that change:

```python
for epoch in range(epochs_AE):
    loss = 0
    for batch_features in train_loader:
        batch_features = batch_features.to(device)  # this moves the input to your device
        optimizer.zero_grad()
        outputs = model_AE(batch_features)
        train_loss = criterion(outputs, batch_features)
        ...
```

Coming to your question: calling `.to(device)` moves a tensor to your specified device. If you want it hard-coded, call `.to('cpu')` or `.to('cuda')` on your torch tensor.
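To answer the first question directly: every tensor and every module parameter carries a `device` attribute you can print, which is usually the quickest way to find the mismatch. A small sketch:

```python
print(batch_features.device)               # e.g. cpu
print(next(model_AE.parameters()).device)  # e.g. cuda:0 -> mismatch found
```

Note that the same applies to the final `model_AE.encode(X_before)` call: `X_before` would need to be a tensor on the same device as the model, e.g. `model_AE.encode(X_tensor.to(device))`.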
https://stackoverflow.com/questions/66493943/
Pytorch pairwise concatenation of tensors
I'd like to compute a pairwise concatenation over a specific dimension in a batched manner. For instance,

```python
x = torch.tensor([[[0],[1],[2]],[[3],[4],[5]]])
x.shape = torch.Size([2, 3, 1])
```

I would like to get `y` such that `y` is the concatenation of all pairs of vectors across one dimension, i.e.:

```python
y = torch.tensor([[[[0,0],[0,1],[0,2]],
                   [[1,0],[1,1],[1,2]],
                   [[2,0],[2,1],[2,2]]],
                  [[[3,3],[3,4],[3,5]],
                   [[4,3],[4,4],[4,5]],
                   [[5,3],[5,4],[5,5]]]])
y.shape = torch.Size([2, 3, 3, 2])
```

So essentially, for each `x[i,:]`, you generate all pairs of vectors and concatenate them on the last dimension. Is there a straightforward way of doing that?
One possible way to do that would be:

```python
all_ordered_idx_pairs = torch.cartesian_prod(
    torch.arange(x.shape[1]), torch.arange(x.shape[1])
)
y = torch.stack([x[i][all_ordered_idx_pairs] for i in range(x.shape[0])])
```

After reshaping the tensor:

```python
y = y.view(x.shape[0], x.shape[1], x.shape[1], -1)
```

you get:

```python
y = torch.tensor([[[[0,0],[0,1],[0,2]],
                   [[1,0],[1,1],[1,2]],
                   [[2,0],[2,1],[2,2]]],
                  [[[3,3],[3,4],[3,5]],
                   [[4,3],[4,4],[4,5]],
                   [[5,3],[5,4],[5,5]]]])
```
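For a fully vectorized alternative without the Python loop, broadcasting with `expand` builds the same pairs; a sketch using the question's `x` of shape `[2, 3, 1]`:

```python
L, N, C = x.shape
a = x.unsqueeze(2).expand(L, N, N, C)  # a[i, j, k] = x[i, j]
b = x.unsqueeze(1).expand(L, N, N, C)  # b[i, j, k] = x[i, k]
y = torch.cat([a, b], dim=-1)          # y[i, j, k] = (x[i, j], x[i, k]); shape [2, 3, 3, 2]
```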
https://stackoverflow.com/questions/66495588/
TorchServe MAR multiple Python files per model
Is there a way to split a model between multiple files before calling MAR? For example, if I have the file `base_model.py`:

```python
import torch
from torch import nn

class BaseModel(nn.Module):
    ...
```

and a model that inherits from `BaseModel`:

```python
from base_model import BaseModel

class FullModel(BaseModel):
    ...
```

The same goes for the handler. I don't know how to split the code between multiple files.
Short answer: yes. You can pass the main model file with `--model-file` and the remaining files via `--extra-files`. For example: `full_model.py` using `--model-file ./xxx/aaa/full_model.py`, and all the other files using `--extra-files ./xxx/bbb/base_model_1.py,./xxx/ccc/base_model_2.py,./xxx/ddd/mini_handler.py,./xxx/eee/mini_handler_2.py`. All the files end up in a single folder when packaged into the `.mar` file, so with this in mind you might need to change the imports in your Python files a bit. All in all, it should be similar to:

```
torch-model-archiver \
    --model-name recommendation \
    --version 1.0 \
    --model-file src/model/full_model.py \
    --serialized-file model/model.pt \
    --handler model_handler.py \
    --extra-files "dd/utils/utils.py,ee/network_config.json,ff/utils/preprocess.py,gg/utils/data_loader.py"
```

Now you can use functions from `preprocess.py` or `data_loader.py` in the model handler with normal imports.
https://stackoverflow.com/questions/66496016/
Error in running backward() function in PyTorch
The code:

```python
import numpy as np
import torch

predictors = np.array([[73,67,43],[91,88,64],[87,134,58],[102,43,37],[69,96,70]], dtype='float32')
outputs = np.array([[56,70],[81,101],[119,133],[22,37],[103,119]], dtype='float32')

inputs = torch.from_numpy(predictors)
targets = torch.from_numpy(outputs)

weights = torch.randn(2, 3, requires_grad=True)
biases = torch.randn(2, requires_grad=True)

def loss_mse(x, y):
    d = x - y
    return torch.sum(d * d) / d.numel()

def model(w, b, x):
    return x @ w.t() + b

def train(x, y, w, b, lr, e):
    w = torch.tensor(w, requires_grad=True)
    b = torch.tensor(b, requires_grad=True)
    for epoch in range(e):
        preds = model(w, b, x)
        loss = loss_mse(y, preds)
        if epoch % 5 == 0:
            print("Loss at Epoch [{}/{}] is {}".format(epoch, e, loss))
        #loss.requires_grad=True
        loss.backward()
        with torch.no_grad():
            w = w - lr*w.grad
            b = b - lr*b.grad
            w.grad.zero_()
            b.grad.zero_()

train(inputs, targets, weights, biases, 1e-5, 100)
```

Running this gives different errors. Once it gave the error that the loss was of size 0. Then it gave an error in the update line `w = w - lr*w.grad`: that a float can't be subtracted from NoneType.
First, why do you wrap your weights and biases as tensors twice? You create

```python
weights = torch.randn(2, 3, requires_grad=True)
biases = torch.randn(2, requires_grad=True)
```

and then, inside the train function, you wrap them again:

```python
w = torch.tensor(w, requires_grad=True)
b = torch.tensor(b, requires_grad=True)
```

Second, change the part where you update your weights to:

```python
with torch.no_grad():
    w_new = w - lr*w.grad
    b_new = b - lr*b.grad
    w.copy_(w_new)
    b.copy_(b_new)
    w.grad.zero_()
    b.grad.zero_()
```

The problem with `w = w - lr*w.grad` is that it rebinds `w` to a brand-new tensor created inside the `no_grad` block; that new tensor has no `.grad`, which is why the next iteration fails with NoneType. Copying in place keeps the original leaf tensors (and their gradients) alive. You can check this discussion for a more comprehensive explanation: https://discuss.pytorch.org/t/updatation-of-parameters-without-using-optimizer-step/34244/20
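Putting both fixes together, a minimal corrected `train` (reusing the question's `model` and `loss_mse`, and the module-level `weights`/`biases` that already have `requires_grad=True`); here an in-place `-=` update keeps `w` and `b` as the same leaf tensors, an equivalent alternative to `copy_`:

```python
def train(x, y, w, b, lr, e):
    for epoch in range(e):
        preds = model(w, b, x)
        loss = loss_mse(y, preds)
        if epoch % 5 == 0:
            print("Loss at Epoch [{}/{}] is {}".format(epoch, e, loss))
        loss.backward()
        with torch.no_grad():
            w -= lr * w.grad   # in-place: w stays the same leaf tensor
            b -= lr * b.grad
            w.grad.zero_()
            b.grad.zero_()

train(inputs, targets, weights, biases, 1e-5, 100)
```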
https://stackoverflow.com/questions/66496647/
Purpose of '*' in front of Python3 function
I saw the code for a ResNet CNN in Python 3 and PyTorch here, as follows:

```python
def resnet_block(input_channels, num_channels, num_residuals, first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channels, num_channels, use_1x1conv=True, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk
```

To add the modules, the following code is used:

```python
b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 2))
b4 = nn.Sequential(*resnet_block(128, 256, 2))
b5 = nn.Sequential(*resnet_block(256, 512, 2))
```

What does `*resnet_block()` mean/do?
Basically, `*iterable` unpacks the items of an iterable object as positional arguments. In your case, `resnet_block` returns a list, and the items of that list are passed to `nn.Sequential` as individual arguments rather than as the list itself.
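A tiny self-contained example of the same mechanism:

```python
def f(a, b, c):
    return a + b + c

args = [1, 2, 3]
print(f(*args))  # equivalent to f(1, 2, 3) -> 6
```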
https://stackoverflow.com/questions/66503443/
Pytorch only sees 15GB memory when the device should have more
I just kicked off a g4dn.8xlarge instance on AWS. According to their website, this instance should have 128 GB of memory. However, I notice my model keeps running out of memory. When I investigated, I saw this:

```python
>>> print(torch.cuda.get_device_properties('cuda:0'))
_CudaDeviceProperties(name='Tesla T4', major=7, minor=5, total_memory=15109MB, multi_processor_count=40)
```

It turns out the device can only use about 15 GB of memory! Could someone please explain why this might be? Do I need to use a different AWS instance to get more memory, or is PyTorch just not able to use the rest of the memory that's available?
The g4dn.8xlarge uses an NVIDIA T4 GPU, which has 16 GB of RAM. The 128 GB figure is the instance's system (CPU) memory; `torch.cuda.get_device_properties` reports the memory of the GPU, not of the instance.
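You can confirm the two numbers separately; a small sketch (`psutil` is a third-party package, `pip install psutil`):

```python
import torch
import psutil

print(psutil.virtual_memory().total / 1e9)                     # host (CPU) RAM, ~128 GB here
print(torch.cuda.get_device_properties(0).total_memory / 1e9)  # GPU RAM, ~16 GB for a T4
```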
https://stackoverflow.com/questions/66511351/
Is pytorch 1.7 officially enabled for cuda 10.0?
I had to stay on CUDA 10.0 for personal projects. Rather than installing PyTorch with the versions appropriate for CUDA 10.0, I accidentally installed PyTorch 1.7 built for CUDA 10.1. In particular, I installed it with:

```
pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
```

Surprisingly, everything works fine so far although the CUDA versions do not match. To verify my installation, I've run the code given in `collect_env.py`, and it was fine. I am just wondering a few things: Did the PyTorch team officially state that PyTorch 1.7 is compatible with CUDA 10.0? Would there be a more rigorous way to verify my PyTorch installation?
"Surprisingly, everything works fine so far although the CUDA versions do not match." Mismatches between minor versions can work (a mismatch like this worked in my case), but there is no promise of compatibility within the 10.x releases (source); only from 11.x onward is there binary compatibility. "Did the PyTorch team officially comment that PyTorch 1.7 is compatible with CUDA 10.0?" Not that I'm aware of, and the listed wheels do not include a CUDA 10.0 build of PyTorch 1.7.0 (the latest version with 10.0 support appears to be 1.4.0). "Would there be a more rigorous way to verify my PyTorch installation?" As above, you could clone PyTorch's GitHub repo, check out the tagged release, and run its tests (the test folder is here; one of the CUDA test files is here), but for personal projects that might be excessive.
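For a quick sanity check short of the full test suite, you can at least confirm which CUDA version the wheel was built against and that a kernel actually runs on the GPU:

```python
import torch

print(torch.__version__)          # e.g. 1.7.1+cu101
print(torch.version.cuda)         # CUDA version the binaries were compiled with
print(torch.cuda.is_available())

x = torch.randn(64, 64, device="cuda")
y = x @ x.t()                     # launches a real CUDA kernel
torch.cuda.synchronize()          # surfaces any asynchronous CUDA error
print(y.sum().item())
```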
https://stackoverflow.com/questions/66514218/
AttributeError: module 'torchtext.data' has no attribute 'Field'
I want to run a Git project that uses pytorch and torchtext, but when I run it, it raises this error:

```
File "main.py", line 60, in <module>
    main()
  File "main.py", line 50, in main
    train_iters, dev_iters, test_iters, vocab = load_dataset(config)
  File "/home/esmailza/style transfer/style-transformer/data.py", line 23, in load_dataset
    TEXT = data.Field(batch_first=True, eos_token='<eos>')
AttributeError: module 'torchtext.data' has no attribute 'Field'
```

torch version = 1.8.0, torchtext version = 0.9

```python
def load_dataset(config, train_pos='train.pos', train_neg='train.neg',
                 dev_pos='dev.pos', dev_neg='dev.neg',
                 test_pos='test.pos', test_neg='test.neg'):
    root = config.data_path
    TEXT = data.Field(batch_first=True, eos_token='<eos>')

    dataset_fn = lambda name: data.TabularDataset(
        path=root + name,
        format='tsv',
        fields=[('text', TEXT)]
    )
```
From the TorchText 0.9.0 release notes: `torchtext.data.Field` -> `torchtext.legacy.data.Field`. This means all the features are still available, but they live under `torchtext.legacy` instead of `torchtext`. `torchtext.data.Field` has been moved to `torchtext.legacy.data.Field`, so the import changes this way:

```python
from torchtext.legacy import data
```
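Applied to the question's `load_dataset`, only the import changes (`TabularDataset` lives under `torchtext.legacy` as well):

```python
from torchtext.legacy import data

TEXT = data.Field(batch_first=True, eos_token='<eos>')
dataset_fn = lambda name: data.TabularDataset(
    path=root + name, format='tsv', fields=[('text', TEXT)]
)
```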
https://stackoverflow.com/questions/66516388/
Does PyTorch Lightning average metrics over the whole epoch?
I am looking at the example provided in the official PyTorch Lightning documentation: https://pytorch-lightning.readthedocs.io/en/0.9.0/lightning-module.html. Here the loss and metric are calculated on a concrete batch. But when logging, one is not interested in the accuracy for a particular batch, which can be rather small and unrepresentative, but in the average over the whole epoch. Do I understand correctly that there is some code performing the averaging over all batches passed through the epoch?

```python
import pytorch_lightning as pl
from pytorch_lightning.metrics import functional as FM

class ClassificationTask(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = F.cross_entropy(y_hat, y)
        return pl.TrainResult(loss)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = F.cross_entropy(y_hat, y)
        acc = FM.accuracy(y_hat, y)
        result = pl.EvalResult(checkpoint_on=loss)
        result.log_dict({'val_acc': acc, 'val_loss': loss})
        return result

    def test_step(self, batch, batch_idx):
        result = self.validation_step(batch, batch_idx)
        result.rename_keys({'val_acc': 'test_acc', 'val_loss': 'test_loss'})
        return result

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=0.02)
```
If you want to average metrics over the epoch, you'll need to tell the LightningModule you've subclassed to do so. There are a few different ways to do this, such as:

Calling `result.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)` as shown in the docs, with `on_epoch=True` so that the training loss is averaged across the epoch. I.e.:

```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)
    result = pl.TrainResult(loss)
    result.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
    return result
```

Alternatively, you can call the `log` method on the LightningModule itself: `self.log("train_loss", loss, on_epoch=True, sync_dist=True)` (optionally passing `sync_dist=True` to reduce across accelerators).

You'll want to do something similar in `validation_step` to get aggregated val-set metrics, or implement the aggregation yourself in the `validation_epoch_end` method.
https://stackoverflow.com/questions/66516486/
PyTorch: load weights from another model without saving
Assume that I have two models in PyTorch. How can I load the weights of model 1 from the weights of model 2 without saving the weights? Like this:

```python
model1.weights = model2.weights
```

In TensorFlow I can do this:

```python
variables1 = model1.trainable_variables
variables2 = model2.trainable_variables
for v1, v2 in zip(variables1, variables2):
    v1.assign(v2.numpy())
```
Assuming you have two instances of the same model (which must subclass `nn.Module`), you can use `nn.Module.state_dict()` and `nn.Module.load_state_dict()`. You can find a brief introduction to state dictionaries here.

```python
model1.load_state_dict(model2.state_dict())
```
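If you specifically want the TensorFlow-style loop (for example, to copy only a subset of the parameters), the direct PyTorch equivalent is:

```python
with torch.no_grad():
    for p1, p2 in zip(model1.parameters(), model2.parameters()):
        p1.copy_(p2)  # in-place copy, no file round-trip
```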
https://stackoverflow.com/questions/66517484/
Facing Broken Pipe error when trying to run next(iter(train_data_loader)) in a local Jupyter notebook
I'm facing a BrokenPipeError when trying to run sentiment analysis with Hugging Face. It returns `[Errno 32] Broken pipe`. Is there any way to rewrite the `next(iter(train_data_loader))` code? Link to the full code: https://colab.research.google.com/drive/1wBXKa-gkbSPPk-o7XdwixcGk7gSHRMas?usp=sharing

The code is:

```python
def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        reviews=df.content.to_numpy(),
        targets=df.sentiment.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )
    return DataLoader(
        ds,
        batch_size=batch_size,
        num_workers=4
    )
```

followed by:

```python
BATCH_SIZE = 16
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
```

and then:

```python
data = next(iter(train_data_loader))
data.keys()
```

The error occurs on the line `data = next(iter(train_data_loader))`:

```
BrokenPipeError: [Errno 32] Broken pipe
```
One likely reason for this issue is the OS. On Windows, spawning DataLoader worker processes from a Jupyter notebook frequently fails, and the failure surfaces as a BrokenPipeError. The simplest fix is to not use worker processes: `num_workers` defaults to 0, which loads data in the main process and works on Windows.

```python
DataLoader(
    ds,
    batch_size=batch_size,
    num_workers=0  # should be zero on Windows, especially in notebooks
)
```
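If you do want worker processes on Windows, the usual requirement in a script is that all DataLoader usage happens under a main guard, because workers are started by re-importing the module (a notebook offers no such entry point, which is why `num_workers=0` is the practical fix there). A sketch under that assumption:

```python
if __name__ == '__main__':
    train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
    data = next(iter(train_data_loader))
    print(data.keys())
```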
https://stackoverflow.com/questions/66518645/
PyTorch Tensors - vectorized slicing with given list of end indices
Suppose I have a 1D PyTorch tensor `end_index` of length L. I want to construct a 2D PyTorch tensor `T` with L rows, where `T[i,j] = 2` when `j < end_index[i]` and `T[i,j] = 1` otherwise. The following works:

```python
T = torch.ones([4, 3], dtype=torch.long)
for i, element in enumerate(end_index):
    T[i, :element] = 2
```

Is there a vectorized way to do this?
You can construct such a tensor using broadcast semantics:

```python
# sample inputs
L, C = 4, 3
end_index = torch.tensor([0, 2, 2, 1])

# Construct a tensor of shape [L, C] such that for all (i, j)
# T[i, j] = 2 if j < end_index[i] else 1
j_range = torch.arange(C, device=end_index.device)
T = (j_range[None, :] < end_index[:, None]).long() + 1
```

which results in:

```
T = tensor([[1, 1, 1],
            [2, 2, 1],
            [2, 2, 1],
            [2, 1, 1]])
```
https://stackoverflow.com/questions/66520261/
How to replace NaN values in the weights of a neural network?
I am trying to replace NaN values in the weights with some constant, but I am unsure how to do so. I know that to access the weights, I can write:

```python
for i in model.params:
    arr = i.clone()
    arr[torch.isnan(arr)] = 0
    param_list.append(nn.Parameter(arr))
model.params = param_list
```

But this breaks the computation graph. Is there a better way to do this?
How about:

```python
with torch.no_grad():
    for p in model.parameters():
        p.data[torch.isnan(p.data)] = 0
```
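On PyTorch 1.8+ the same thing can be written with the dedicated in-place op, which also lets you handle infinities in the same call:

```python
with torch.no_grad():
    for p in model.parameters():
        p.nan_to_num_(nan=0.0)  # replaces NaNs in place, outside the graph
```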
https://stackoverflow.com/questions/66524054/
AttributeError: 'str' object has no attribute 'shape' while encoding tensor using BertModel with PyTorch (Hugging Face)
I get `AttributeError: 'str' object has no attribute 'shape'` while encoding a tensor using BertModel with PyTorch (Hugging Face). Below is the code:

```python
bert_model = BertModel.from_pretrained(r'downloads\bert-pretrained-model')
input_ids
```

Output is:

```
tensor([[  101,   156, 13329,  ...,     0,     0,     0],
        [  101,   156, 13329,  ...,     0,     0,     0],
        [  101,  1302,  1251,  ...,     0,     0,     0],
        ...,
        [  101, 25456,  1200,  ...,     0,     0,     0],
        [  101,   143,  9664,  ...,     0,     0,     0],
        [  101,  2586,  7340,  ...,     0,     0,     0]])
```

Followed by the code below:

```python
last_hidden_state, pooled_output = bert_model(
    input_ids=encoding['input_ids'],
    attention_mask=encoding['attention_mask']
)
```

and then:

```python
last_hidden_state.shape
```

Output:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-70-9628339f425d> in <module>
----> 1 last_hidden_state.shape
AttributeError: 'str' object has no attribute 'shape'
```

Complete code: https://colab.research.google.com/drive/1FY4WtqCi2CQ9RjHj4slZwtdMhwaWv2-2?usp=sharing
The issue is that the return type has changed since the 3.x versions of transformers: model calls now return a model-output object rather than a tuple, so unpacking it gives you its keys (strings). You have to explicitly ask for a tuple of tensors by passing the additional kwarg `return_dict=False` when calling `bert_model()`; then `last_hidden_state` is an actual tensor:

```python
last_hidden_state, pooled_output = bert_model(
    input_ids=encoding['input_ids'],
    attention_mask=encoding['attention_mask'],
    return_dict=False  # this is needed to get a tensor as result
)
```

In case you do not like the previous approach, you can instead keep the dict-like output and index it by key:

```
In [13]: bm = bert_model(
    ...:     encoding_sample['input_ids'],
    ...:     encoding_sample['attention_mask']
    ...: )

In [14]: bm.keys()
Out[14]: odict_keys(['last_hidden_state', 'pooler_output'])

# accessing last_hidden_state
In [15]: bm['last_hidden_state']

In [16]: bm['last_hidden_state'].shape
Out[16]: torch.Size([1, 17, 768])
```
https://stackoverflow.com/questions/66524542/
PyTorch for Object detection - Image augmentation
I am using PyTorch for object detection and refining an existing model (transfer learning) as described in the following tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

While different transformations are used for image augmentation (a horizontal flip in this tutorial), the tutorial doesn't mention anything about transforming the bounding box/annotation to ensure it stays in line with the transformed image. Am I missing something basic here?
In the training phase, the transforms are indeed applied to both the images and the targets, while loading the data. In the `PennFudanDataset` class, we have these two lines:

```python
if self.transforms is not None:
    img, target = self.transforms(img, target)
```

where `target` is a dictionary containing:

```python
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
```

`self.transforms()` in the `PennFudanDataset` class is set to a list of transforms comprising `[transforms.ToTensor(), transforms.Compose()]`, the return value of `get_transform()` when instantiating the dataset with:

```python
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
```

The transforms in `transforms.Compose()` come from `T`, a custom transforms module written for the object detection task. Specifically, in the `__call__` of `RandomHorizontalFlip()`, both the image and the target (e.g., masks, keypoints) are processed. For the sake of completeness, here is the code from the GitHub repo:

```python
def __call__(self, image, target):
    if random.random() < self.prob:
        height, width = image.shape[-2:]
        image = image.flip(-1)
        bbox = target["boxes"]
        bbox[:, [0, 2]] = width - bbox[:, [2, 0]]
        target["boxes"] = bbox
        if "masks" in target:
            target["masks"] = target["masks"].flip(-1)
        if "keypoints" in target:
            keypoints = target["keypoints"]
            keypoints = _flip_coco_person_keypoints(keypoints, width)
            target["keypoints"] = keypoints
    return image, target
```

Here we can see how the flipping is performed on the boxes, masks, and keypoints in accordance with the image.
https://stackoverflow.com/questions/66525072/
Training yolov5 causes a CUDNN_STATUS_NOT_INITIALIZED error
I'm following this guide without changing anything. I'm using an AWS server with a deep learning AMI: Deep Learning AMI (Ubuntu 18.04) Version 40.0. I've tried switching from my custom dataset to the COCO dataset and to a small subset of the custom one. The batch size doesn't seem to matter, and CUDA and the other drivers seem to work. The exception is thrown when the first batch starts the training process. This is the full stack trace:

```
Logging results to runs/train/exp66
Starting training for 5 epochs...

     Epoch   gpu_mem       box       obj       cls     total   targets  img_size
  0%|          | 0/22 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 533, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 298, in train
    pred = model(imgs)  # forward
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/yolov5/models/yolo.py", line 121, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/ubuntu/yolov5/models/yolo.py", line 137, in forward_once
    x = m(x)  # run
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/yolov5/models/common.py", line 113, in forward
    return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/yolov5/models/common.py", line 38, in forward
    return self.act(self.bn(self.conv(x)))
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
```
I fixed it using conda: I cloned the PyTorch environment that came with the image, and it works perfectly. I still don't know the cause, though.
https://stackoverflow.com/questions/66528261/
Wild discrepancies between training DeepLab ResNet V3 on Google Colab versus on local machine
I am attempting to train DeepLab ResNet V3 to perform semantic segmentation on a custom dataset. I had been working on my local machine, but my GPU is just a small Quadro T1000, so I decided to move my model onto Google Colab to take advantage of their GPU instances and get better results. While I get the speed increase I was hoping for, I am getting wildly different training losses on Colab compared to my local machine. I have copied and pasted the exact same code, so the only difference I can find would be in the dataset. I am using the exact same dataset, except the one on Colab is a copy of the local dataset on Google Drive. I have noticed that Drive orders files differently from Windows, but I can't see how this is a problem since I randomly shuffle the dataset. I understand that random splitting can cause small differences in the outputs, but a difference of about 10x in the training losses doesn't make sense. I have also tried running the version on Colab with different random seeds, different batch sizes, different train_test_split parameters, and changing the optimizer from SGD to Adam; however, the model still converges very early at a loss of around 0.5. Here is my code:

```python
import torch
from torch.utils import data
from torchvision import transforms
from customdatasets import SegmentationDataSet
import pathlib
from sklearn.model_selection import train_test_split
from customtransforms import Compose, AlbuSeg2d, DenseTarget
from customtransforms import MoveAxis, Normalize01, Resize
import albumentations
import matplotlib.pyplot as plt
import time
import GPUtil


def get_filenames_of_path(path: pathlib.Path, ext: str = '*'):
    """Returns a list of files in a directory/path. Uses pathlib."""
    filenames = [file for file in path.glob(ext) if file.is_file()]
    return filenames


if __name__ == '__main__':
    root = pathlib.Path.cwd() / 'train'
    inputs = get_filenames_of_path(root / 'input')
    targets = get_filenames_of_path(root / 'target')

    # training transformations and augmentations
    transforms_training = Compose([
        Resize(input_size=(128, 128, 3), target_size=(128, 128)),
        AlbuSeg2d(albu=albumentations.HorizontalFlip(p=0.5)),
        MoveAxis(),
        Normalize01()
    ])
    # validation transformations
    transforms_validation = Compose([
        Resize(input_size=(128, 128, 3), target_size=(128, 128)),
        MoveAxis(),
        Normalize01()
    ])

    if torch.cuda.is_available():
        device = torch.device('cuda')
    else:
        device = torch.device('cpu')

    random_seed = 142
    train_size = 0.8

    inputs_train, inputs_valid = train_test_split(
        inputs, random_state=random_seed, train_size=train_size, shuffle=True)
    targets_train, targets_valid = train_test_split(
        targets, random_state=random_seed, train_size=train_size, shuffle=True)

    dataset_train = SegmentationDataSet(inputs=inputs_train, targets=targets_train,
                                        transform=transforms_training, device=device)
    dataset_valid = SegmentationDataSet(inputs=inputs_valid, targets=targets_valid,
                                        transform=transforms_validation, device=device)

    dataloader_training = data.DataLoader(dataset=dataset_train, batch_size=15,
                                          shuffle=True, num_workers=4, pin_memory=True)
    dataloader_validation = data.DataLoader(dataset=dataset_valid, batch_size=15,
                                            shuffle=True, num_workers=4, pin_memory=True)

    model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101', pretrained=False)
    criterion = torch.nn.CrossEntropyLoss()
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.99)

    epochs = 10
    steps = 0
    running_loss = 0
    print_every = 10
    train_losses, valid_losses = [], []
    start_time = time.time()
    prev_time = time.time()

    for epoch in range(epochs):
        # Training
        for inputs, labels in dataloader_training:
            steps += 1
            inputs, labels = inputs.to(device, non_blocking=True), labels.to(device, non_blocking=True)
            optimizer.zero_grad()
            logps = model(inputs)
            loss = criterion(logps['out'], labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()

            if steps % print_every == 0:
                train_losses.append(running_loss / len(dataloader_training))
                epoch_time = time.time()
                elasped_time = epoch_time - prev_time
                prev_time = epoch_time
                print(f"Epoch {epoch + 1}/{epochs}.. "
                      f"Train loss: {running_loss / print_every:.3f}.. "
                      f"Elapsed time: {elasped_time}")
                running_loss = 0
                model.train()

        # Evaluation
        valid_loss = 0
        accuracy = 0
        model.eval()
        with torch.no_grad():
            for inputs, labels in dataloader_validation:
                inputs, labels = inputs.to(device, non_blocking=True), labels.to(device, non_blocking=True)
                logps = model.forward(inputs)
                batch_loss = criterion(logps['out'], labels)
                valid_loss += batch_loss.item()
                ps = torch.exp(logps['out'])
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor)).item()

        valid_losses.append(valid_loss / len(dataloader_validation))
        print(f"Epoch {epoch + 1}/{epochs}.. "
              f"Validation loss: {valid_loss / len(dataloader_training):.3f}.. "
              f"Validation accuracy: {accuracy / len(dataloader_training):.3f} ")
        model.train()

    torch.save(model, 'model.pth')
    end_time = time.time()
    total_time = end_time - start_time
    print("Total Time: ", total_time)

    plt.plot(train_losses, label='Training loss')
    plt.plot(valid_losses, label='Validation loss')
    plt.legend(frameon=False)
    plt.show()
```

This is the output for one epoch on Colab:

```
Epoch 1/10.. Train loss: 2.080.. Elapsed time: 12.156640768051147
Epoch 1/10.. Train loss: 1.231.. Elapsed time: 8.76858925819397
Epoch 1/10.. Train loss: 1.051.. Elapsed time: 8.315532445907593
Epoch 1/10.. Train loss: 0.890.. Elapsed time: 8.249168634414673
Epoch 1/10.. Train loss: 0.839.. Elapsed time: 8.248667478561401
Epoch 1/10.. Train loss: 0.807.. Elapsed time: 8.120820999145508
Epoch 1/10.. Train loss: 0.742.. Elapsed time: 8.298616886138916
Epoch 1/10.. Train loss: 0.726.. Elapsed time: 8.170734167098999
Epoch 1/10.. Train loss: 0.677.. Elapsed time: 8.221246004104614
Epoch 1/10.. Train loss: 0.698.. Elapsed time: 8.124614000320435
Epoch 1/10.. Train loss: 0.675.. Elapsed time: 8.197462558746338
Epoch 1/10.. Train loss: 0.682.. Elapsed time: 8.263437509536743
Epoch 1/10.. Train loss: 0.626.. Elapsed time: 8.156179189682007
Epoch 1/10.. Train loss: 0.632.. Elapsed time: 8.268096446990967
Epoch 1/10.. Train loss: 0.616.. Elapsed time: 8.214547872543335
Epoch 1/10.. Train loss: 0.585.. Elapsed time: 8.31475019454956
Epoch 1/10.. Train loss: 0.598.. Elapsed time: 8.388074398040771
Epoch 1/10.. Train loss: 0.626.. Elapsed time: 8.179292440414429
Epoch 1/10.. Train loss: 0.612.. Elapsed time: 8.252359390258789
Epoch 1/10.. Train loss: 0.592.. Elapsed time: 8.284745693206787
Epoch 1/10.. Train loss: 0.597.. Elapsed time: 8.31213927268982
Epoch 1/10.. Train loss: 0.566.. Elapsed time: 8.164374113082886
Epoch 1/10.. Train loss: 0.556.. Elapsed time: 8.300082206726074
Epoch 1/10.. Train loss: 0.568.. Elapsed time: 8.26304841041565
Epoch 1/10.. Train loss: 0.572.. Elapsed time: 8.309881448745728
Epoch 1/10.. Train loss: 0.586.. Elapsed time: 8.211671352386475
Epoch 1/10.. Train loss: 0.586.. Elapsed time: 8.321797609329224
Epoch 1/10.. Train loss: 0.535.. Elapsed time: 8.318871021270752
Epoch 1/10.. Train loss: 0.543.. Elapsed time: 8.152915239334106
Epoch 1/10.. Train loss: 0.569.. Elapsed time: 8.251380205154419
Epoch 1/10.. Train loss: 0.526.. Elapsed time: 8.29153847694397
Epoch 1/10.. Train loss: 0.565.. Elapsed time: 8.15071702003479
Epoch 1/10.. Train loss: 0.542.. Elapsed time: 8.253364562988281
Epoch 1/10.. Validation loss: 0.182.. Validation accuracy: 0.271
```

And here is the output on my local machine:

```
Epoch 1/10.. Train loss: 2.932.. Elapsed time: 32.148621797561646
Epoch 1/10.. Train loss: 1.852.. Elapsed time: 14.120505809783936
Epoch 1/10.. Train loss: 0.887.. Elapsed time: 14.210048198699951
Epoch 1/10.. Train loss: 0.618.. Elapsed time: 14.23294186592102
Epoch 1/10.. Train loss: 0.549.. Elapsed time: 14.212541103363037
Epoch 1/10.. Train loss: 0.519.. Elapsed time: 14.047481775283813
Epoch 1/10.. Train loss: 0.506.. Elapsed time: 14.060708045959473
Epoch 1/10.. Train loss: 0.347.. Elapsed time: 14.301624059677124
Epoch 1/10.. Train loss: 0.399.. Elapsed time: 13.9844491481781
Epoch 1/10.. Train loss: 0.361.. Elapsed time: 13.957871913909912
Epoch 1/10.. Train loss: 0.305.. Elapsed time: 14.164010763168335
Epoch 1/10.. Train loss: 0.296.. Elapsed time: 14.001536846160889
Epoch 1/10.. Train loss: 0.298.. Elapsed time: 14.019971132278442
Epoch 1/10.. Train loss: 0.271.. Elapsed time: 13.951345443725586
Epoch 1/10.. Train loss: 0.252.. Elapsed time: 14.037938594818115
Epoch 1/10.. Train loss: 0.283.. Elapsed time: 13.944657564163208
Epoch 1/10.. Train loss: 0.299.. Elapsed time: 13.977224826812744
Epoch 1/10.. Train loss: 0.219.. Elapsed time: 13.941975355148315
Epoch 1/10.. Train loss: 0.242.. Elapsed time: 13.936140060424805
Epoch 1/10.. Train loss: 0.244.. Elapsed time: 13.942122459411621
Epoch 1/10.. Train loss: 0.216.. Elapsed time: 13.960899114608765
Epoch 1/10.. Train loss: 0.186.. Elapsed time: 13.956881523132324
Epoch 1/10.. Train loss: 0.241.. Elapsed time: 13.944581985473633
Epoch 1/10.. Train loss: 0.203.. Elapsed time: 13.934357404708862
Epoch 1/10.. Train loss: 0.189.. Elapsed time: 13.938358306884766
Epoch 1/10.. Train loss: 0.181.. Elapsed time: 13.944468021392822
Epoch 1/10.. Train loss: 0.186.. Elapsed time: 13.946297407150269
Epoch 1/10.. Train loss: 0.164.. Elapsed time: 13.940366744995117
Epoch 1/10.. Train loss: 0.165.. Elapsed time: 13.938241720199585
Epoch 1/10.. Train loss: 0.176.. Elapsed time: 14.015569925308228
Epoch 1/10.. Train loss: 0.165.. Elapsed time: 14.019208669662476
Epoch 1/10.. Train loss: 0.175.. Elapsed time: 14.149503469467163
Epoch 1/10.. Train loss: 0.159.. Elapsed time: 14.128302097320557
Epoch 1/10.. Train loss: 0.155.. Elapsed time: 13.935027837753296
Epoch 1/10.. Train loss: 0.137.. Elapsed time: 13.937382221221924
Epoch 1/10.. Train loss: 0.127.. Elapsed time: 13.929635524749756
Epoch 1/10.. Train loss: 0.133.. Elapsed time: 13.935472011566162
Epoch 1/10.. Train loss: 0.152.. Elapsed time: 13.922808647155762
Epoch 1/10.. Validation loss: 0.032.. Validation accuracy: 0.239
```

I won't paste more than this since it's long and takes a while to run, but by the end of the 3rd epoch the loss on the Colab model is still bouncing around 0.5, whereas locally it reaches 0.02. If anyone could help me resolve this issue, it would be greatly appreciated.
I fixed this problem by unzipping the training data to Google Drive and reading the files from there, instead of using the Colab command to unzip the folder into my workspace directly. I have absolutely no idea why this was causing the problem; a quick visual inspection of the images and their corresponding tensors looked fine, but I can't go through each of the 6,000 or so images to check every one. If anyone knows why this was causing a problem, please let me know!
https://stackoverflow.com/questions/66529577/
How to get confusion matrix in pytorch?
There are 15 classes. I have two tensors: one holds the target classes and the other the predicted classes. How can I get a confusion matrix?
You can try using the sklearn package, given your target classes `y_test` and predicted classes `y_pred`:

```python
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_pred))
```
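If you want to stay in pure PyTorch (e.g., to keep everything on the GPU), a bincount-based sketch works too; `target` and `pred` are assumed to be 1-D integer tensors of equal length:

```python
import torch

num_classes = 15
idx = target * num_classes + pred              # encode each (true, pred) pair as one index
cm = torch.bincount(idx, minlength=num_classes ** 2)
cm = cm.reshape(num_classes, num_classes)      # rows = true class, columns = predicted class
```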
https://stackoverflow.com/questions/66533863/
Which PyTorch modules are affected by model.eval() and model.train()?
The model.eval() method modifies certain modules (layers) which are required to behave differently during training and inference. Some examples are listed in the docs: This has [an] effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Is there an exhaustive list of which modules are affected?
In addition to the info provided by @iacob:

| Base class | Modules | Criteria |
|---|---|---|
| `RNNBase` | `RNN`, `LSTM`, `GRU` | `dropout > 0` (default: 0) |
| Transformer layers | `Transformer`, `TransformerEncoder`, `TransformerDecoder` | `dropout > 0` (Transformer default: 0.1) |
| Lazy variants | `LazyBatchNorm` (currently nightly, merged PR) | `track_running_stats=True` |
https://stackoverflow.com/questions/66534762/
Add column and row to multidimensional torch.tensor (kind of wrap-up or padding)
The tensor should be extended with an additional row of zeros (at the bottom) and column of zeros (on the right side). My solution is provided below. Is there a better (actually simpler) one?

Input («ones» are just for clarification; the figures might differ, because in my case there is a tensor of exactly the same size but with real values in it):

```
tensor([[[[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]],

         [[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]],

         [[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]]]])
```

Output:

```
tensor([[[[1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [0., 0., 0., 0.]],

         [[1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [0., 0., 0., 0.]],

         [[1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [1., 1., 1., 0.],
          [0., 0., 0., 0.]]]])
```

Possible solution:

```python
x1 = torch.ones(1, 3, 3, 3)

z2 = torch.cat((torch.cat((x1[0, 0, :], torch.zeros(1, 3)), 0), torch.zeros(4, 1)), 1)
z3 = torch.cat((torch.cat((x1[0, 1, :], torch.zeros(1, 3)), 0), torch.zeros(4, 1)), 1)
z4 = torch.cat((torch.cat((x1[0, 2, :], torch.zeros(1, 3)), 0), torch.zeros(4, 1)), 1)

output_t = torch.zeros(1, 3, 4, 4)
output_t[0, 0, :] = z2
output_t[0, 1, :] = z3
output_t[0, 2, :] = z4
output_t
```
You can do this with PyTorch's `torch.nn.ConstantPad?d` functions:

```python
from torch import nn

x1 = torch.ones(1, 3, 3, 3)
pad_value = 0

pad_func = nn.ConstantPad1d((0, 1, 0, 1), pad_value)
output_t = pad_func(x1)
```

You could also exchange `nn.ConstantPad1d` with `nn.ConstantPad2d` or `nn.ConstantPad3d`; all do what you want with the same settings. Then there is also numpy's `np.pad`:

```python
x1 = torch.ones(1, 3, 3, 3)
pad_value = 0

output_n = np.pad(x1.numpy(),
                  ((0, 0), (0, 0), (0, 1), (0, 1)),
                  "constant", constant_values=pad_value)
output_t = torch.from_numpy(output_n)
```
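Arguably the simplest variant is the functional form, which the ConstantPad modules wrap; one call pads the last dimension by (left, right) = (0, 1) and the second-to-last by (top, bottom) = (0, 1):

```python
import torch
import torch.nn.functional as F

x1 = torch.ones(1, 3, 3, 3)
output_t = F.pad(x1, (0, 1, 0, 1), mode="constant", value=0)
print(output_t.shape)  # torch.Size([1, 3, 4, 4])
```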
https://stackoverflow.com/questions/66536957/
Cross entropy IndexError Dimension out of range
I'm trying to train a GAN on some images. I followed the tutorial on PyTorch's page and arrived at the following code, but when the cross-entropy function is applied during training it raises the error shown below the code:

```python
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as torch_dataset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML

seed = 1
print("Random Seed: ", seed)
random.seed(seed)
torch.manual_seed(seed)

images_folder_path = "./images/"

batch_size = 128
image_size = 256
n_channels = 1
z_vector = 100
n_features_generator = 64
n_features_discriminator = 64
num_epochs = 5
lr = 0.0002
beta1 = 0.5

dataset = torch_dataset.ImageFolder(
    root=images_folder_path,
    transform=transforms.Compose(
        [
            transforms.Grayscale(num_output_channels=1),
            transforms.Resize(image_size),
            transforms.CenterCrop(image_size),
            transforms.ToTensor(),
            transforms.Normalize(0.5, 0.5)
        ]
    )
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=0)
device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu")

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(z_vector, n_features_generator * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(n_features_generator * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(n_features_generator * 8, n_features_generator * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_generator * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(n_features_generator * 4, n_features_generator * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_generator * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(n_features_generator * 2, n_features_generator, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_generator),
            nn.ReLU(True),
            nn.ConvTranspose2d(n_features_generator, n_channels, 4, 2, 1, bias=False),
            nn.Tanh()
        )

    def forward(self, inputs):
        return self.main(inputs)

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(n_channels, n_features_discriminator, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_features_discriminator, n_features_discriminator * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_discriminator * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_features_discriminator * 2, n_features_discriminator * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_discriminator * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_features_discriminator * 4, n_features_discriminator * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_features_discriminator * 8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_features_discriminator * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, inputs):
        return self.main(inputs)

netG = Generator().to(device)
if device.type == 'cuda':
    netG = nn.DataParallel(netG)
netG.apply(weights_init)
print(netG)

netD = Discriminator().to(device)
if device.type == 'cuda':
    netD = nn.DataParallel(netD)
netD.apply(weights_init)
print(netD)

criterion = nn.CrossEntropyLoss()

fixed_noise = torch.randn(64, z_vector, 1, 1, device=device)

real_label = 1.
fake_label = 0.

optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))

img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
for epoch in range(num_epochs):
    for i, data in enumerate(dataloader, 0):
        netD.zero_grad()
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        output = netD(real_cpu).view(-1)
        # ------------------------------------------------------------------
        errD_real = criterion(output, label)  # ERROR HAPPENS HERE
        # ------------------------------------------------------------------
        errD_real.backward()
        D_x = output.mean().item()

        noise = torch.randn(b_size, z_vector, 1, 1, device=device)
        fake = netG(noise)
        label.fill_(fake_label)
        output = netD(fake.detach()).view(-1)
        errD_fake = criterion(output, label)
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        errD = errD_real + errD_fake
        optimizerD.step()

        netG.zero_grad()
        label.fill_(real_label)
        output = netD(fake).view(-1)
        errG = criterion(output, label)
        errG.backward()
        D_G_z2 = output.mean().item()
        optimizerG.step()

        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        G_losses.append(errG.item())
        D_losses.append(errD.item())

        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
```

Error:

```
Traceback (most recent call last):
  File "G:/Pastas Estruturadas/Conhecimento/CEFET/IA/SpectroGAN/dcgan.py", line 137, in <module>
    errD_real = criterion(output, label)
  File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\modules\loss.py", line 948, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\functional.py", line 2422, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\functional.py", line 1591, in log_softmax
    ret = input.log_softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Process finished with exit code 1
```
Your model's output is not consistent with your criterion. If you want to keep the model and change the criterion: use `BCELoss` instead of `CrossEntropyLoss` (note: you will need to cast your labels to float before passing them in, which your code already does). Also consider removing the `Sigmoid()` from the model and using `BCEWithLogitsLoss` instead. If you want to keep the criterion and change the model: `CrossEntropyLoss` expects the shape `(..., num_classes)`, so for your two-class case (real & fake) you will have to predict 2 values for each image in the batch, which means altering the output channels of the last layer in your model. It also expects the raw logits, so you should remove the `Sigmoid()`.
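A minimal sketch of the first option, keeping the rest of the question's training loop unchanged (the labels are already created as floats there):

```python
criterion = nn.BCELoss()  # replaces nn.CrossEntropyLoss()

# inside the loop, unchanged:
label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
output = netD(real_cpu).view(-1)   # shape (b_size,), in (0, 1) thanks to the Sigmoid
errD_real = criterion(output, label)
```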
https://stackoverflow.com/questions/66539555/
How to include a OneHot in an ONNX coming from PyTorch
I'm using PyTorch to train neural nets and export them to ONNX. I use these models in a Vespa index, which loads the ONNX models through TensorRT. I need one-hot encoding for some features, but this is really hard to achieve within the Vespa framework. Is it possible to embed a one-hot encoding for some given features inside my ONNX net (e.g., before the network's representation)? If so, how should I achieve this based on a PyTorch model?

I already noticed two things: the ONNX format includes the OneHot operator (see the ONNX docs), while PyTorch's built-in ONNX exporting system does not support the OneHot operator (see the torch.onnx docs).

EDIT 2021/03/11: Here is my workflow: training learning-to-rank models via PyTorch, exporting them as ONNX, then importing these ONNX models into my Vespa index in order to rank any query's results with the ONNX model. Under the hood, Vespa uses TensorRT for inference (so I use Vespa's ONNX model evaluation).
So, according to my testing, PyTorch does support one-hot encoding export to ONNX. With the following model:

```python
#! /usr/bin/env python3

import torch
import torch.onnx
import torch.nn.functional as F


class MyModel(torch.nn.Module):
    def __init__(self, classes=5):
        super(MyModel, self).__init__()
        self._classes = classes
        self.linear = torch.nn.Linear(in_features=self._classes, out_features=1)
        self.logistic = torch.nn.Sigmoid()

    def forward(self, input):
        one_hot = F.one_hot(input, num_classes=self._classes).float()
        return self.logistic(self.linear(one_hot))


def main():
    model = MyModel()
    # training omitted

    data = torch.tensor([0, 4, 2])
    torch.onnx.export(model, data, "test.onnx", input_names=["input"], output_names=["output"])

    result = model.forward(data)
    print(result)


if __name__ == "__main__":
    main()
```

This model doesn't do any training; it just takes a vector of indices, one-hot encodes them using PyTorch's `one_hot`, and sends that to a simple NN layer. The weights are randomly initialised, and the output here for me was:

```
tensor([[0.5749],
        [0.5081],
        [0.5581]], grad_fn=<SigmoidBackward>)
```

This model is exported to ONNX as the file "test.onnx". Testing this model using ONNX Runtime (which is what Vespa uses in the backend, not TensorRT):

```
In [1]: import onnxruntime as ort
In [2]: m = ort.InferenceSession("test.onnx")
In [3]: m.run(input_feed={"input":[0,4,2]}, output_names=["output"])
Out[3]:
[array([[0.57486993],
        [0.5081395 ],
        [0.5580716 ]], dtype=float32)]
```

which is the same output as given by PyTorch with the same input. So PyTorch does export the OneHot ONNX operator. This was with PyTorch 1.7.1. If the input to the one-hot encoding is indexed in Vespa as integers, you can then just use these directly as inputs.
https://stackoverflow.com/questions/66544994/
AttributeError: 'dict' object has no attribute 'step'
How do I solve this problem?

```
File "train.py", line 98, in validation_epoch_end
    self.scheduler.step(avg_loss)
AttributeError: 'dict' object has no attribute 'step'
```

My source code:

```python
import os
import ast
import torch
import torchaudio
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning import Trainer
from argparse import ArgumentParser
from model import SpeechRecognition
from dataset import Data, collate_fn_padd
from pytorch_lightning.callbacks import ModelCheckpoint

nodes = int(1)
gpus = int(1)
data_workers = int(0)
train_file = 'path/to/your/train.json'
valid_file = 'path/to/your/test.json'
valid_every = int(1000)
save_model_path = 'path/where/you/want/to/save/your/model'
logdir = 'path/where/you/want/to/save/your/logs'
epochs = int(10)
batch_size = int(64)
pct_start = float(0.3)
div_factor = int(100)
dparams_override = str("{}")
hparams_override = str("{}")
load_model_from = None
resume_from_checkpoint = None


class SpeechModule(LightningModule):
    def __init__(self, model, args):
        super(SpeechModule, self).__init__()
        self.model = model
        self.criterion = nn.CTCLoss(blank=28, zero_infinity=True)
        self.args = args

    def forward(self, x, hidden):
        return self.model(x, hidden)

    def configure_optimizers(self):
        self.optimizer = optim.AdamW(self.model.parameters(), self.args.learning_rate)
        lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(
            self.optimizer, mode='min', factor=0.50, patience=6
        )
        self.scheduler = {
            'scheduler': lr_scheduler,
            'reduce_on_plateau': True,
            'monitor': 'val_checkpoint_on'
        }
        return [self.optimizer], [self.scheduler]

    def step(self, batch):
        spectrograms, labels, input_lengths, label_lengths = batch
        bs = spectrograms.shape[0]
        hidden = self.model._init_hidden(bs)
        hn, c0 = hidden[0].to(self.device), hidden[1].to(self.device)
        output, _ = self(spectrograms, (hn, c0))
        output = F.log_softmax(output, dim=2)
        loss = self.criterion(output, labels, input_lengths, label_lengths)
        return loss

    def training_step(self, batch, batch_idx):
        loss = self.step(batch)
        logs = {'loss': loss, 'lr': self.optimizer.param_groups[0]['lr']}
        return {'loss': loss, 'log': logs}

    def train_dataloader(self):
        d_params = Data.parameters
        d_params.update(dparams_override)
        train_dataset = Data(json_path=self.train_file, **d_params)
        return DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          num_workers=data_workers,
                          pin_memory=True,
                          collate_fn=collate_fn_padd)

    def validation_step(self, batch, batch_idx):
        loss = self.step(batch)
        return {'val_loss': loss}

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        self.scheduler.step(avg_loss)
        tensorboard_logs = {'val_loss': avg_loss}
        return {'val_loss': avg_loss, 'log': tensorboard_logs}

    def val_dataloader(self):
        d_params = Data.parameters
        d_params.update(dparams_override)
        test_dataset = Data(json_path=valid_file, **d_params, valid=True)
        return DataLoader(dataset=test_dataset,
                          batch_size=batch_size,
                          num_workers=data_workers,
                          collate_fn=collate_fn_padd,
                          pin_memory=True)


def checkpoint_callback():
    return ModelCheckpoint(
        filepath=save_model_path,
        save_top_k=True,
        verbose=True,
        monitor='val_loss',
        mode='min',
        prefix=''
    )


def main(args):
    h_params = SpeechRecognition.hyper_parameters
    h_params.update(hparams_override)
    model = SpeechRecognition(**h_params)

    if load_model_from:
        speech_module = SpeechModule.load_from_checkpoint(load_model_from, model=model)
    else:
        speech_module = SpeechModule(model, args)

    trainer = Trainer(
        max_epochs=epochs,
        gpus=gpus,
        num_nodes=nodes,
        distributed_backend=None,
        gradient_clip_val=1.0,
        val_check_interval=valid_every,
        checkpoint_callback=checkpoint_callback,
        resume_from_checkpoint=resume_from_checkpoint
    )

    trainer.fit(speech_module)


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument('--learning_rate', default=1e-3, type=float, help='learning rate')
    args = parser.parse_args()

    hparams_override = ast.literal_eval(hparams_override)
    dparams_override = ast.literal_eval(dparams_override)

    if save_model_path:
        if not os.path.isdir(os.path.dirname(save_model_path)):
            raise Exception("the directory for path {} does not exist".format(save_model_path))

    main(args)
```
self.scheduler here is the dict you returned from configure_optimizers, not the scheduler object itself, so self.scheduler.step(avg_loss) should be self.scheduler['scheduler'].step(avg_loss). Alternatively, keep a reference to the plain lr_scheduler object and call step on that.
https://stackoverflow.com/questions/66545871/
Confusion about the output channels of a convolutional neural network
I'm confused about the multi-channel scenario in a convolutional neural network. Say I have a 10(width) * 5(height) * 6(channels) image, and I feed it into a default 2-D convolution layer with stride=1 and padding=0, expecting the output to be 8(width) * 3(height) * 16(channels). I know the size of the kernel is 3(width) * 3(height), but I don't know how many kernels there are exactly, and how they are applied to the input data to give the final 16 channels. Can someone help me, please?
A 2D convolution layer contains one kernel per input channel, per output channel. So in your case, this will be 6*16=96 kernels. For 3x3 kernels, this corresponds to 3*3*96 = 864 parameters. >>> import torch >>> conv = torch.nn.Conv2d(6, 16, (3, 3)) >>> torch.numel(conv.weight) 864 For one image, one kernel per input channel is first applied. In your case, this results in 6 feature maps, which are summed together (+ a possible bias) to form 1 of the output channels. Then, this is repeated 15 times to form the 15 other output channels.
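As a quick shape check (PyTorch uses (N, C, H, W) ordering, so the question's 10(width) * 5(height) * 6(channels) image becomes a (1, 6, 5, 10) tensor): >>> x = torch.randn(1, 6, 5, 10) >>> conv(x).shape torch.Size([1, 16, 3, 8])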
https://stackoverflow.com/questions/66547536/
How does the output of the Discriminator of a Convolutional Generative Adversarial Network work, can it have a Fully Connected Layer?
I'm building a DCGAN, and I am having a problem with the shape of the output, it is not matching the shape of the labels when I try calculating the BCELoss. To generate the discriminator output, do I have to use convolutions all the way down or can I add a Linear layer at some point to match the shape I want? I mean, do I have to reduce the shape by adding more convolutional layers or can I add a fully connected one? I thought it should have a fully connected layer, but on every tutorial I checked the discriminator had no fully connected layer. import random import torch.nn as nn import torch.optim as optim import torch.utils.data import torchvision.datasets as torch_dataset import torchvision.transforms as transforms import torchvision.utils as vutils import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML seed = 1 print("Random Seed: ", seed) random.seed(seed) torch.manual_seed(seed) images_folder_path = "./spectrograms/" batch_size = 1 image_size = 256 n_channels = 1 z_vector = 100 n_features_generator = 32 n_features_discriminator = 32 num_epochs = 5 lr = 0.0002 beta1 = 0.5 dataset = torch_dataset.ImageFolder( root=images_folder_path, transform=transforms.Compose( [ transforms.Grayscale(num_output_channels=1), transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize(0.5, 0.5) ] ) ) dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=0) device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu") def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() self.main = nn.Sequential( nn.ConvTranspose2d(z_vector, n_features_generator * 8, 4, 1, bias=False), nn.BatchNorm2d(n_features_generator * 8), nn.ReLU(True), nn.ConvTranspose2d(n_features_generator * 8, n_features_generator * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_generator * 4), nn.ReLU(True), nn.ConvTranspose2d(n_features_generator * 4, n_features_generator * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_generator * 2), nn.ReLU(True), nn.ConvTranspose2d(n_features_generator * 2, n_features_generator, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_generator), nn.ReLU(True), nn.ConvTranspose2d(n_features_generator, n_channels, 4, 2, 1, bias=False), nn.Tanh() ) def forward(self, inputs): return self.main(inputs) # Convolutional Layer Output Shape = [(W−K+2P)/S]+1 # W is the input volume # K is the Kernel size # P is the padding # S is the stride class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() self.main = nn.Sequential( nn.Conv2d(n_channels, n_features_discriminator, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(n_features_discriminator, n_features_discriminator * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_discriminator * 2), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(n_features_discriminator * 2, n_features_discriminator * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_discriminator * 4), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(n_features_discriminator * 4, n_features_discriminator * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(n_features_discriminator * 8), nn.LeakyReLU(0.2, inplace=True), 
nn.Conv2d(n_features_discriminator * 8, 1, 4, 1, bias=False), ) def forward(self, inputs): return self.main(inputs) netG = Generator().to(device) if device.type == 'cuda': netG = nn.DataParallel(netG) netG.apply(weights_init) print(netG) netD = Discriminator().to(device) if device.type == 'cuda': netD = nn.DataParallel(netD) netD.apply(weights_init) print(netD) criterion = nn.BCEWithLogitsLoss() fixed_noise = torch.randn(64, z_vector, 1, 1, device=device) real_label = 1. fake_label = 0. optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999)) optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999)) img_list = [] G_losses = [] D_losses = [] iters = 0 print("Starting Training Loop...") for epoch in range(num_epochs): for i, data in enumerate(dataloader, 0): netD.zero_grad() real_cpu = data[0].to(device) b_size = real_cpu.size(0) label = torch.full((b_size,), real_label, dtype=torch.float, device=device) output = netD(real_cpu) print(output.shape) print(label.shape) output = output.view(-1) errD_real = criterion(output, label) errD_real.backward() D_x = output.mean().item() noise = torch.randn(b_size, z_vector, 1, 1, device=device) fake = netG(noise) label.fill_(fake_label) output = netD(fake.detach()).view(-1) errD_fake = criterion(output, label) errD_fake.backward() D_G_z1 = output.mean().item() errD = errD_real + errD_fake optimizerD.step() netG.zero_grad() label.fill_(real_label) output = netD(fake).view(-1) errG = criterion(output, label) errG.backward() D_G_z2 = output.mean().item() optimizerG.step() if i % 50 == 0: print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch, num_epochs, i, len(dataloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2)) G_losses.append(errG.item()) D_losses.append(errD.item()) if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)): with torch.no_grad(): fake = netG(fixed_noise).detach().cpu() img_list.append(vutils.make_grid(fake, padding=2, normalize=True)) iters += 1 The error I'm getting: Traceback (most recent call last): File "G:/Pastas Estruturadas/Conhecimento/CEFET/IA/SpectroGAN/dcgan.py", line 140, in <module> errD_real = criterion(output, label) File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\modules\loss.py", line 631, in forward reduction=self.reduction) File "C:\Users\Ramon\anaconda3\envs\vision\lib\site-packages\torch\nn\functional.py", line 2538, in binary_cross_entropy_with_logits raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size())) ValueError: Target size (torch.Size([1])) must be the same as input size (torch.Size([169])) The shape of output: torch.Size([1, 1, 13, 13]), and shape of label: torch.Size([1]).
The DCGAN paper describes a concrete architecture where Conv layers are used to downsample the feature maps. If you design your Conv layers carefully, you can do without a Linear layer, but that does not mean it will not work when you use a Linear layer to downsample (especially as the very last layer). The DCGAN authors simply found that downsampling with Conv layers worked better than with Linear ones. If you want to keep this architecture, you can change the kernel size, padding, or stride to give you exactly a single value in the last layer. Refer to the PyTorch documentation on Conv layers to see what the output size will be for a given input size.
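For example, with the sizes in your question (image_size = 256, so the feature map reaching the last layer is 16 x 16, which is why a kernel of 4 leaves a 13 x 13 output), two possible sketches: grow the final kernel to cover the whole map, nn.Conv2d(n_features_discriminator * 8, 1, 16, 1, 0, bias=False), or swap in a fully connected head, nn.Flatten() followed by nn.Linear(n_features_discriminator * 8 * 16 * 16, 1). Either way you get one logit per image, matching the label shape for BCEWithLogitsLoss.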
https://stackoverflow.com/questions/66548475/
Applying sliding window to torch.tensor and adjusting tensor initial size
Looking for a simpler way of modifying a torch.tensor. Probably there is a way to apply Unfold to the initial tensor directly. input: tensor([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]) output: tensor([[0., 1., 3., 4.], [1., 2., 4., 5.], [3., 4., 6., 7.], [4., 5., 7., 8.]]) possible solution: import torch t = torch.linspace(0., 8., steps=9) t1 = t.reshape(3,3) # starting point t2 = torch.flatten(t1) t3 = t2.reshape(1, 1, 1, -1) # unfold works with 4D only unfold = torch.nn.Unfold(kernel_size=(1, 5), dilation=1) t4 = unfold(t3) indices = torch.tensor([0, 1, 3, 4]) # deleting 3rd (or middle) row and 3rd (middle) column t5 = torch.index_select(torch.index_select(t4.squeeze(), 0, indices), 1, indices) t5
You can use unfold, but in a simpler manner: import torch import torch.nn.functional as nnf t1 = torch.arange(9.).reshape(3,3) # initial tensor out = nnf.unfold(t1[None, None, ...], kernel_size=2, padding=0) # that's it. done.
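Note that out here has shape (1, 4, 4), with each 2x2 patch laid out as a column; out[0].T gives exactly the rows asked for: tensor([[0., 1., 3., 4.], [1., 2., 4., 5.], [3., 4., 6., 7.], [4., 5., 7., 8.]])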
https://stackoverflow.com/questions/66553512/
How to make a list of pytorch tensors in python?
How do I make a list of tensors in PyTorch? The list should look like this: mylist = [tensor1, tensor2, tensor3] where all the tensors have different shapes.
You can instantiate each tensor using PyTorch inline, or append to a list in a loop. Inline: mylist = [torch.rand(2), torch.rand(5), torch.rand(1)] In a loop: mylist = [torch.rand(i) for i in range(1, 5)] To create a tensor with specific values, use torch.tensor([[1., -1.], [1., -1.]]), for example. https://pytorch.org/docs/stable/tensors.html
https://stackoverflow.com/questions/66553832/
Fixing Python Dependencies The Right Way
I'm just getting my first Python environment set up. All has gone well and it seems to be GPU enabled and all that good stuff. However, I have one issue and no idea how to fix it. After running the correct install command for torch, it reported this issue: Installing collected packages: torch ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torchvision 0.8.2 requires torch==1.7.1, but you have torch 1.8.0+cu111 which is incompatible. Successfully installed torch-1.8.0+cu111 As far as I can tell torchvision 0.8.2 is the latest version. The environment seems happy at the moment, as all these commands return expected things: import torch print(torch.__version__) torch.cuda.get_device_name(0) I've seen some people talking about "patching requirements files" or updating dependencies, but I'm not sure of the best way to tackle this.
You can lock the version of a package in a requirements file; the file below pins mutually compatible versions. requirements.txt: torch==1.7.1 torchvision==0.8.2 The packages are installed via pip like so: pip install -r requirements.txt You may have other dependencies for this project. In that case, you can generate a requirements.txt file with pip as well: pip freeze > requirements.txt Check out the documentation on managing dependencies with pip.
https://stackoverflow.com/questions/66555412/
Weights of pre-trained BERT model not initialized
I am using the Language Interpretability Toolkit (LIT) to load and analyze a BERT model that I pre-trained on an NER task. However, when I'm starting the LIT script with the path to my pre-trained model passed to it, it fails to initialize the weights and tells me: modeling_utils.py:648] loading weights file bert_remote/examples/token-classification/Data/Models/results_21_03_04_cleaned_annotations/04.03._8_16_5e-5_cleaned_annotations/04-03-2021 (15.22.23)/pytorch_model.bin modeling_utils.py:739] Weights of BertForTokenClassification not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias'] modeling_utils.py:745] Weights from pretrained model not used in BertForTokenClassification: ['bert.embeddings.position_ids'] It then simply uses the bert-base-german-cased version of BERT, which of course doesn't have my custom labels and thus fails to predict anything. I think it might have to do with PyTorch, but I can't find the error. If relevant, here is how I load my dataset into CoNLL 2003 format (modification of the dataloader scripts found here): def __init__(self): # Read ConLL Test Files self._examples = [] data_path = "lit_remote/lit_nlp/examples/datasets/NER_Data" with open(os.path.join(data_path, "test.txt"), "r", encoding="utf-8") as f: lines = f.readlines() for line in lines[:2000]: if line != "\n": token, label = line.split(" ") self._examples.append({ 'token': token, 'label': label, }) else: self._examples.append({ 'token': "\n", 'label': "O" }) def spec(self): return { 'token': lit_types.Tokens(), 'label': lit_types.SequenceTags(align="token"), } And this is how I initialize the model and start the LIT server (modification of the simple_pytorch_demo.py script found here): def __init__(self, model_name_or_path): self.tokenizer = transformers.AutoTokenizer.from_pretrained( model_name_or_path) model_config = transformers.AutoConfig.from_pretrained( model_name_or_path, num_labels=15, # FIXME CHANGE output_hidden_states=True, output_attentions=True, ) # This is a just a regular PyTorch model. self.model = _from_pretrained( transformers.AutoModelForTokenClassification, model_name_or_path, config=model_config) self.model.eval() ## Some omitted snippets here def input_spec(self) -> lit_types.Spec: return { "token": lit_types.Tokens(), "label": lit_types.SequenceTags(align="token") } def output_spec(self) -> lit_types.Spec: return { "tokens": lit_types.Tokens(), "probas": lit_types.MulticlassPreds(parent="label", vocab=self.LABELS), "cls_emb": lit_types.Embeddings()
This actually seems to be expected behaviour. In the documentation of the GPT models the HuggingFace team writes: This will issue a warning about some of the pretrained weights not being used and some weights being randomly initialized. That’s because we are throwing away the pretraining head of the BERT model to replace it with a classification head which is randomly initialized. So it does not seem to be a problem for fine-tuning. In my use case described above, it worked despite the warning as well.
https://stackoverflow.com/questions/66561880/
Conda - ModuleNotFoundError: No module named 'torch'
Steps to reproduce: I am using Anaconda on Windows to set up an environment for this repo. conda create --name pytorch-yolo Then I install all dependencies with conda install --file requirements.txt which returns PackagesNotFoundError: The following packages are not available from current channels: - torch[version='>=1.2'] So I install pytorch with conda install pytorch torchvision cudatoolkit=10.2 -c pytorch When I try to evaluate the model from the following repo: https://github.com/eriklindernoren/PyTorch-YOLOv3 with python3 test.py --weights_path weights/yolov3.weights inside the conda environment, it returns the following error: ModuleNotFoundError: No module named 'torch' If I run a Jupyter lab or notebook, or even use python inside the conda terminal, torch is easily imported. What I've already tried: Almost everything from this thread: No module named "Torch" Specifically, creating a new conda environment with python set to 3.8.2, installing torch to base and then to the environment. I'm also using pyenv to set global python to 3.8.2, but that doesn't help at all. conda list shows I have pytorch installed. I can't wrap my head around this issue.
You are probably using the wrong python binary. Can you try python test.py --weights_path weights/yolov3.weights? I am not familiar with the Windows terminal, but you can get the path to the binaries by using the where command (the equivalent of which on Linux): (pytorch-yolo) C:\Users\RemiChauvenne>where python3 C:\Users\RemiChauvenne\AppData\Local\Microsoft\WindowsApps\python3.exe (pytorch-yolo) C:\Users\RemiChauvenne>where python C:\Users\RemiChauvenne\miniconda3\envs\pytorch-yolo\python.exe C:\Users\RemiChauvenne\AppData\Local\Microsoft\WindowsApps\python.exe We can see that python3 does not go to the python binary inside the conda environment, whereas python is correctly linked to pytorch-yolo.
https://stackoverflow.com/questions/66565780/
PyTorch error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
I have a very simple example import torch if __name__ == "__main__": DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") m = torch.nn.Linear(20, 30).to(DEVICE) input = torch.randn(128, 20).to(DEVICE) output = m(input) print('output', output.size()) exit() and I get: Traceback (most recent call last): File "test.py", line 9, in <module> output = m(input) File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward return F.linear(input, self.weight, self.bias) File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)` I'm using PyTorch 1.7.1. Any help would be greatly appreciated. Thank you. EDIT. The update of python -m torch.utils.collect_env is: Collecting environment information... PyTorch version: 1.8.0 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 11.1.0 CMake version: version 3.18.4 Python version: 3.8 (64-bit runtime) Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: TITAN RTX GPU 1: TITAN RTX GPU 2: TITAN RTX GPU 3: TITAN RTX GPU 4: TITAN RTX GPU 5: TITAN RTX GPU 6: TITAN RTX GPU 7: TITAN RTX Nvidia driver version: 460.39 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.20.1 [pip3] torch==1.8.0 [pip3] torchaudio==0.8.0 [pip3] torchsummary==1.5.1 [conda] Could not collect
As shown in your log, PyTorch 1.8 is installed, not 1.7.1; otherwise please collect the log again using the right Python binary. I faced the exact same issue with 1.8. Downgrading to 1.7.1 solved it (as mentioned in a Hugging Face transformers GitHub issue).
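For reference, a minimal downgrade command (torchvision 0.8.2 is the matching release for torch 1.7.1; pick the appropriate +cu suffix from https://download.pytorch.org/whl/torch_stable.html if you need a specific CUDA build): pip install torch==1.7.1 torchvision==0.8.2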
https://stackoverflow.com/questions/66567324/
Optimize input instead of network in pytorch
I am trying to optimize the inputs for some task, and I don't want to update my network, as it is frozen. I have written a minimal example, but it's not working, as z has the same value in all iterations. I am sure that I am making some silly mistake in this process. Any guidance is highly appreciated. Thanks. import torch z = torch.rand((1,6)) z.requires_grad_(True) optimizer = torch.optim.SGD([z], lr= 0.1) criteria = torch.nn.MSELoss() for i in range(5): optimizer.zero_grad() print(z) loss = criteria(z, z+torch.rand(1)) #print(loss) loss.backward() optimizer.step() ##output tensor([[0.1105, 0.8152, 0.2820, 0.1122, 0.6645, 0.7211]], requires_grad=True) tensor([[0.1105, 0.8152, 0.2820, 0.1122, 0.6645, 0.7211]], requires_grad=True) tensor([[0.1105, 0.8152, 0.2820, 0.1122, 0.6645, 0.7211]], requires_grad=True) tensor([[0.1105, 0.8152, 0.2820, 0.1122, 0.6645, 0.7211]], requires_grad=True) tensor([[0.1105, 0.8152, 0.2820, 0.1122, 0.6645, 0.7211]], requires_grad=True)
The gradients are always 0. This can be seen as follows: import torch z = torch.rand((1,6)) z.requires_grad_(True) z.retain_grad() optimizer = torch.optim.SGD([z], lr= 0.1) criteria = torch.nn.MSELoss() for i in range(5): optimizer.zero_grad() # print(z) loss = criteria(z, z+torch.rand(1)) # print(loss) loss.backward() print(z.grad) optimizer.step() tensor([[0., 0., 0., 0., 0., 0.]]) tensor([[0., 0., 0., 0., 0., 0.]]) ... This comes from the definition of the target z+torch.rand(1), which directly depends on the input z: the residual z - (z + c) equals the constant -c, so the MSE loss does not depend on z at all and its gradient is exactly zero.
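A minimal sketch of what was probably intended, detaching the target so it is treated as a constant: loss = criteria(z, (z + torch.rand(1)).detach()). With the target detached, the residual no longer cancels, the gradient is non-zero, and z changes between iterations.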
https://stackoverflow.com/questions/66572604/
How to get topk's values with its indices (2D)?
I have two 3D tensors and I want to use one's top-k indices to get the other's top k. For example, for the following tensors a = torch.tensor([[[1], [2], [3]], [[4], [5], [6]]]) b = torch.tensor([[[7,1], [8,2], [9,3]], [[10,4],[11,5],[12,6]]]) PyTorch's topk function will give me the following: top_tensor, indices = torch.topk(a, 2, dim=1) # top_tensor: tensor([[[3], [2]], # [[6], [5]]]) # indices: tensor([[[2], [1]], # [[2], [1]]]) But I want to use the result from a to map into b. # use indices to do something for b, get torch.tensor([[[8,2], [9,3]], # [[11,5],[12,6]]]) In this case, I don't know the real values of b, so I can't apply topk to b. In other words, I want to get a function foo_slice as follows: top_tensor, indices = torch.topk(a, 2, dim=1) # top_tensor == foo_slice(a, indices) Is there any approach to achieve this using PyTorch? Thanks!
The solution you are looking for is here. So the code-based solution to your problem is as follows: # inputs are reordered relative to the question above a = torch.tensor([[[1], [2], [3]], [[5], [6], [4]]]) b = torch.tensor([[[7,1], [8,2], [9,3]], [[11,5],[12,6],[10,4]]]) top_tensor, indices = torch.topk(a, 2, dim=1) v = [indices.view(-1,2)[i] for i in range(0,indices.shape[1])] new_tensor = [] for i,f in enumerate(v): new_tensor.append(torch.index_select(b[i], 0, f)) print(new_tensor) #[tensor([[9, 3], # [8, 2]]), #tensor([[12, 6], # [11, 5]])]
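As a side note, a loop-free sketch with gather along dim=1 also works here (the index is expanded over the last dimension of b): new_tensor = torch.gather(b, 1, indices.expand(-1, -1, b.size(-1)))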
https://stackoverflow.com/questions/66576655/
Pip is uninstalling pytorch built from source when i use 'pip install '. Is there a way?
I needed to build PyTorch from source in order to use the latest CUDA toolkit, hence I did that. My environment is a conda environment. The package I am trying to install is allennlp. pip list is not showing torch installed; conda is showing it is. I tried building allennlp from source through: conda skeleton pypi <package> conda build <package> conda build is also causing problems due to incompatibility with pip packages etc... I am just wondering if there is a simpler way to do this? For example: tell pip that torch is already installed so it stops uninstalling the current torch (why can't it just pick it up as installed?). Appreciate the help!
You can install allennlp with pip install --no-deps allennlp, but then you have to make sure the other dependencies are there yourself.
https://stackoverflow.com/questions/66577876/
Decoder upsample size
I have a simple autoencoder which looks like this: class Autoencoder(nn.Module): def __init__(self): super(Autoencoder, self).__init__() self.channels = [3, 8, 16, 32] self.encoder = nn.Sequential( # layer 1 nn.Conv2d(self.channels[0], self.channels[1], 3, 1, 1), nn.BatchNorm2d(self.channels[1]), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=4, stride=4), # layer 2 nn.Conv2d(self.channels[1], self.channels[2], 3, 1, 1), nn.BatchNorm2d(self.channels[2]), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=4, stride=4), # layer 3 nn.Conv2d(self.channels[2], self.channels[3], 3, 1, 1), nn.BatchNorm2d(self.channels[3]), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2) ) self.decoder = nn.Sequential( nn.Conv2d(self.channels[3], self.channels[2], 3, 1, 1), nn.ReLU(inplace=True), nn.Upsample(size=(15, 20), mode='bilinear', align_corners=False), nn.Conv2d(self.channels[2], self.channels[1], 3, 1, 1), nn.ReLU(inplace=True), nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False), nn.Conv2d(self.channels[1], 1, 1), nn.ReLU(inplace=True), nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False) ) def forward(self, x): h = self.encoder(x) y = self.decoder(h) return y Can someone clarify if the nn.Upsample(size=(15, 20)... in the Decoder has something to do with restoring the images to their original dimensions? My input images (torch tensors) are of size 240*320 and the network is supposed to restore the input tensor to its original size. Also, if I add a fourth layer to my encoder with 64 channels and a 4th layer to my decoder (similar to the layers above), what should the upsampling size be in the first layer of the decoder?
As mentioned in the PyTorch documentation, you may define an upsampling layer with a scale factor or an output size. Stating size=(H, W) will make sure the output size will be (H, W), regardless of the input size. On the other hand, stating scale_factor=4 will make sure each spatial dimension of the output is 4 times the input, e.g. (30, 40) -> (120, 160). You mentioned you need to get an output of size 240 * 320. Therefore, the spatial dimensions of your output from the decoder will be (15*4*4, 20*4*4), which conveniently is (240, 320).
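Regarding the follow-up question: with a 240 * 320 input, the encoder stages give 60 * 80 -> 15 * 20 -> 7 * 10 (integer floor). A fourth stride-2 stage would shrink this further to 3 * 5, so the first decoder layer would have to undo that odd-sized reduction explicitly, e.g. nn.Upsample(size=(7, 10), mode='bilinear', align_corners=False), followed by the existing size=(15, 20) layer and the two scale_factor=4 layers to land back at 240 * 320.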
https://stackoverflow.com/questions/66579395/
How do I prevent a lack of VRAM halfway through training a Huggingface Transformers (Pegasus) model?
I'm taking a pre-trained Pegasus model through Huggingface transformers (specifically google/pegasus-cnn_dailymail; I'm using Huggingface transformers through PyTorch) and I want to finetune it on my own data. This is however quite a large dataset and I've run into the problem of running out of VRAM halfway through training, which, because of the size of the dataset, can be days after training started; this makes a trial-and-error approach very inefficient. I'm wondering how I can make sure ahead of time that it doesn't run out of memory. I would think that the memory usage of the model is in some way proportional to the size of the input, so I've passed truncation=True, padding=True, max_length=1024 to my tokenizer, which, if my understanding is correct, should make all the outputs of the tokenizer the same size per line. Considering that the batch size is also a constant, I would think that the amount of VRAM in use should be stable. So I should just be able to cut up the dataset into manageable parts, look at the RAM/VRAM use of the first run, and infer that it will run smoothly from start to finish. However, the opposite seems to be true. I've been observing the amount of VRAM used at any time and it can vary wildly, from ~12GB at one time to suddenly requiring more than 24GB and crashing (because I don't have more than 24GB). So, how do I make sure that the amount of VRAM in use will stay within reasonable bounds for the full duration of the training process, and avoid it crashing due to a lack of VRAM when I'm already days into the training process?
padding=True actually doesn't pad to max_length, but to the longest sample in the list you pass to the tokenizer. To pad to max_length you need to set padding='max_length'.
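A minimal sketch with the settings from the question: tokenizer(batch, truncation=True, padding='max_length', max_length=1024, return_tensors='pt'). Every batch then has the same (batch_size, 1024) token shape, so activation memory stops varying with the longest sample in each batch.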
https://stackoverflow.com/questions/66581492/
Initialise pytorch convolution layer with my own values
I want to know if there's a way to initialize a PyTorch convolution filter with my own values. E.g., I have the values [-0.8423, 0.3778], [-3.1070, -2.6518] and I want to initialize a 2x2 filter with these values. How do I do that? I looked up some answers, but they were mostly using torch's normal distribution and other built-in initializers; nothing that lets me specify the values myself. Let me know.
You can directly assign values to the weights: conv = nn.Conv2d(1, 1, kernel_size=2) with torch.no_grad(): conv.weight.data = torch.tensor([[-0.8423, 0.3778],[-3.1070, -2.6518]]).view(1, 1, 2, 2) # reshape to the weight's (out_channels, in_channels, kH, kW) layout
https://stackoverflow.com/questions/66582936/
How to use a custom Rank to GPU mapping in Pytorch Distributed
I'm running PyTorch distributed on a single 4-gpu node, where I am not the only one running projects. As a result, sometimes certain GPUs are too busy, while others are idle. Let's say GPU 0 and GPU 2 are busy, but GPU 1 and GPU 3 are not, so I want to use just these GPUs. It seems that by default, Rank 0 is always GPU 0 in pytorch distributed. How can I specify to PyTorch that Rank 0 should correspond to GPU 1, and Rank 1 should correspond to GPU 3?
You can tell any command which GPUs should be visible to them by setting the variable CUDA_VISIBLE_DEVICES. E.g. CUDA_VISIBLE_DEVICES=2,3 python run.py <args> This will only make GPU 2 and 3 visible to that process and thus DDP will only run on those GPUs that you specified. (Note that this is zero-indexed, i.e. 0 is the first GPU.)
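For the exact case in the question (GPUs 1 and 3 idle): CUDA_VISIBLE_DEVICES=1,3 python run.py <args>. Inside the process, physical GPU 1 then appears as cuda:0 (rank 0) and physical GPU 3 as cuda:1 (rank 1).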
https://stackoverflow.com/questions/66585606/
How to handle "Memory quota exceeded" in heroku app developed using streamlit and PyTorch?
I have been developing a style-transfer image application using Streamlit and PyTorch. The app hits "Memory quota exceeded" whenever a high-resolution image is stylized (using the app). How do I reduce the app size (it's around 200+ MB)? App code: https://github.com/pr2tik1/deep-stylize-image App deployment: https://deep-style-images.herokuapp.com
It is not about slug size, so it has nothing to do with your application size; it is about the RAM you are using. Since you are using a free dyno, you cannot process high-resolution images. In my opinion you will need at least a performance dyno, since image computation requires quite a lot of memory.
https://stackoverflow.com/questions/66586465/
Calling the forward method in PyTorch vs. calling the model instance
A lot of the PyTorch tutorials I've been viewing do something like this. Define model: class Network(nn.Module): def __init__(): super().__init__() self.conv1 = .. ... def forward(x) ... ... Once the Network has been instantiated (net = Network()), the people in the tutorials write net(input_data) instead of net.forward(input_data). I tried net.forward() and it gives the same results as net(). Why is this a common practice, and also why does this work?
You should avoid calling Module.forward directly. The difference is that all the hooks are dispatched in the __call__ function (see this), so if you call .forward and have hooks in your model, the hooks won't have any effect. In short: when you call Module.forward, PyTorch hooks won't run. A detailed answer can be found in this post.
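A minimal sketch of the difference, using a hypothetical hook: h = net.register_forward_hook(lambda module, inp, out: print('hook fired')) and then net(input_data) prints 'hook fired', while net.forward(input_data) returns the same output but never triggers the hook.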
https://stackoverflow.com/questions/66594136/
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle)` with GPU only
I'm working on a CNN with a one-dimensional signal. It works totally fine on a CPU device. However, when I train the model on GPU, a CUDA error occurs. I set os.environ['CUDA_LAUNCH_BLOCKING'] = "1" after I got RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle). After doing this, a cublasSgemm error occurred instead of the cublasCreate error. Though the NVIDIA documentation suggests a hardware problem, I can train other CNNs on images without any error. Below is my code for loading the data and feeding it to the training model. idx = np.arange(len(dataset)) # dataset & label shuffle in once np.random.shuffle(idx) dataset = dataset[idx] sdnn = np.array(sdnn)[idx.astype(int)] train_data, val_data = dataset[:int(0.8 * len(dataset))], dataset[int(0.8 * len(dataset)):] train_label, val_label = sdnn[:int(0.8 * len(sdnn))], sdnn[int(0.8 * len(sdnn)):] train_set = DataLoader(dataset=train_data, batch_size=opt.batch_size, num_workers=opt.workers) for i, data in enumerate(train_set, 0): # data.shape = [batch_size, 3000(len(signal)), 1(channel)] tensor x = data.transpose(1, 2) label = torch.Tensor(train_label[i * opt.batch_size:i * opt.batch_size + opt.batch_size]) x = x.to(device, non_blocking=True) label = label.to(device, non_blocking=True) # [batch size] label = label.view([len(label), 1]) optim.zero_grad() # Feature of signal extract y_predict = model(x) # [batch size, fc3 output] # Error occurred HERE loss = mse(y_predict, label) Below is the error message from this code. File C:/Users/Me/Desktop/Me/Study/Project/Analysis/Regression/main.py", line 217, in Processing y_predict = model(x) # [batch size, fc3 output] File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\ME\Desktop\ME\Study\Project\Analysis\Regression\cnn.py", line 104, in forward x = self.fc1(x) File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\linear.py", line 91, in forward return F.linear(input, self.weight, self.bias) File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\functional.py", line 1674, in linear ret = torch.addmm(bias, input, weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` I've tried to solve this error for weeks but can't find the solution. If you can see anything wrong here, please let me know.
After searching with partial keywords, I finally found a similar situation. For stability, I had been using the CUDA 10.2 version. The reference asked to upgrade the CUDA toolkit to a higher version (11.2 in my case), and the problem was solved! Other training processes ran fine; only this one caused the error. As CUDA errors occur for various reasons, changing the version can count as a solution.
https://stackoverflow.com/questions/66600362/
Indexing using pyTorch tensors along one specific dimension with 3 dimensional tensor
I have 2 tensors: A with shape (batch, sequence, vocab) and B with shape (batch, sequence). A = torch.tensor([[[ 1., 2., 3.], [ 5., 6., 7.]], [[ 9., 10., 11.], [13., 14., 15.]]]) B = torch.tensor([[0, 2], [1, 0]]) I want to get the following: C = torch.zeros_like(B) for i in range(B.shape[0]): for j in range(B.shape[1]): C[i,j] = A[i,j,B[i,j]] But in a vectorized way. I tried torch.gather and other stuff but I cannot make it work. Can anyone please help me?
>>> import torch >>> A = torch.tensor([[[ 1., 2., 3.], ... [ 5., 6., 7.]], ... ... [[ 9., 10., 11.], ... [13., 14., 15.]]]) >>> B = torch.tensor([[0, 2], ... [1, 0]]) >>> A.shape torch.Size([2, 2, 3]) >>> B.shape torch.Size([2, 2]) >>> C = torch.zeros_like(B) >>> for i in range(B.shape[0]): ... for j in range(B.shape[1]): ... C[i,j] = A[i,j,B[i,j]] ... >>> C tensor([[ 1, 7], [10, 13]]) >>> torch.gather(A, -1, B.unsqueeze(-1)) tensor([[[ 1.], [ 7.]], [[10.], [13.]]]) >>> torch.gather(A, -1, B.unsqueeze(-1)).shape torch.Size([2, 2, 1]) >>> torch.gather(A, -1, B.unsqueeze(-1)).squeeze(-1) tensor([[ 1., 7.], [10., 13.]]) You can use torch.gather(A, -1, B.unsqueeze(-1)).squeeze(-1). The first -1, between A and B.unsqueeze(-1), indicates the dimension along which you want to pick the elements. The second -1, in B.unsqueeze(-1), adds one dim to B so the two tensors have the same number of dims; otherwise you get RuntimeError: Index tensor must have the same number of dimensions as input tensor. The last -1 reshapes the result from torch.Size([2, 2, 1]) to torch.Size([2, 2]).
https://stackoverflow.com/questions/66604482/
How to index a tensor in a for loop?
I am getting an error that says my tensor has to be a long, byte, or bool tensor. Debugging showed me that it is not getting an index [i], but the actual row [i] of the tensor x. I need that index to grab another row of another tensor y, tho. Any ideas how to solve this? number_batches_in = int(len(dataset_in)/batch_size) number_batches_out = int(len(dataset_out)/batch_size) x = torch.empty(size=(number_batches_in, 800)) y = torch.empty(size=(number_batches_out,1200), dtype=torch.float64) for index, (x1, x2) in enumerate(dataloader_in): batch = torch.cat((x1, x2), 0) x[index] = batch for index, (y1, y2, y3) in enumerate(dataloader_out): batch = torch.cat((y1, y2, y3), 0) y[index] = batch model = Network(800,1200,3,800,200) SAVE_PATH = "trained/model.dat" epochs = 5 learning_rate = 0.001 optimizer = optim.Adam(model.parameters(),lr=learning_rate, eps=1e-08) hist_error = [] hist_loss = [] beta = 0.5 for epoch in range(epochs): epoch_error = [] epoch_loss = [] for i in x: optimizer.zero_grad() pred = model.forward(i) y_true = y[i] loss = torch.mean(torch.sum((pred - y_true)) ** 2) loss.backward() optimizer.step() error = torch.mean(torch.sqrt((pred - y[i]) ** 2)).detach().numpy() epoch_error.append(error) epoch_loss.append(loss.data.detach().numpy()) hist_error.append(np.mean(epoch_error)) hist_loss.append(np.mean(epoch_loss)) print("Epoch %d -- loss %f, RMS error %f " % (epoch+1, hist_loss[-1], hist_error[-1])) torch.save(model.state_dict(), SAVE_PATH) print("Model saved to %s" % SAVE_PATH) Error message: Traceback (most recent call last): File "/home/samim/miniconda3/envs/deep/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-54864ad18480>", line 1, in <module> runfile('/home/samim/Documents/train.py', wdir='/home/samim/Documents/') File "/home/samim/.local/share/JetBrains/PyCharm2020.3/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/home/samim/.local/share/JetBrains/PyCharm2020.3/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/samim/Documents/train.py", line 61, in <module> y_true = y[i] IndexError: tensors used as indices must be long, byte or bool tensors
Either enumerate: ... for j, row_x in enumerate(x): optimizer.zero_grad() pred = model.forward(row_x) y_true = y[j] loss = torch.mean(torch.sum((pred - y_true)) ** 2) ... or better yet, zip: ... for row_x, y_true in zip(x, y): optimizer.zero_grad() pred = model.forward(row_x) loss = torch.mean(torch.sum((pred - y_true)) ** 2) ...
https://stackoverflow.com/questions/66606405/
What is the difference between src and tgt in nn.Transformer for PyTorch?
From the docs it says to create a transformer model like this: transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12) src = torch.rand((10, 32, 512)) tgt = torch.rand((20, 32, 512)) # What is tgt?? out = transformer_model(src, tgt) What is tgt meant to be? Should tgt be the same as the src?
The transformer consists of two components: the encoder and the decoder. src is the input to the encoder and tgt is the input to the decoder. For example, in a machine translation task that translates English sentences to French, src is the English sequence ids and tgt is the French sequence ids.
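During training, a common setup (a sketch, not shown in the docs snippet) is teacher forcing with the target shifted right along the sequence dimension: tgt_in = tgt[:-1] as decoder input and tgt_out = tgt[1:] as what the loss is computed against (the docs example uses the sequence-first (S, N, E) layout).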
https://stackoverflow.com/questions/66608447/
Getting negative (inverted) image in Pytorch
I want to get the negative of an image straight from the data loader and feed it as a tensor. Are there any libraries that I can use? I have tried torch transforms and didn't find any.
As of 0.9, you can use torchvision.transforms.functional.invert() from the torchvision transforms library.
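A minimal sketch (invert works on PIL Images and tensors; for float tensors in [0, 1] it amounts to 1 - img): from torchvision.transforms.functional import invert, then negative = invert(img). There is also transforms.RandomInvert(p=1.0) if you want it inside a Compose pipeline.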
https://stackoverflow.com/questions/66624873/
Difference between Tensorflow's tf.keras.layers.Dense and PyTorch's torch.nn.Linear?
I have a quick (and possibly silly) question about how TensorFlow defines its Linear layer. Within PyTorch, a Linear (or Dense) layer is defined as y = x A^T + b, where A and b are the weight matrix and bias vector for a Linear layer (see here). However, I can't precisely find an equivalent equation for TensorFlow! Is it the same as PyTorch, or is it just y = x A + b? Thank you in advance!
tf.keras.layers.Dense is defined here in the tensorflow source code: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/layers/core.py#L1081 If you follow the references in its call function, it leads you to the definition of the operation used here, which is indeed a matrix multiplication of the inputs and weights plus a bias vector as expected: https://github.com/tensorflow/tensorflow/blob/a68c6117a1a53431e739752bd2ab8654dbe2534a/tensorflow/python/keras/layers/ops/core.py#L74 outputs = gen_math_ops.MatMul(a=inputs, b=kernel) ... outputs = nn_ops.bias_add(outputs, bias) So Dense computes y = x A + b, with the kernel A stored in shape (in_features, out_features); this is the same mapping as PyTorch's y = x A^T + b, just with the weight matrix stored transposed.
https://stackoverflow.com/questions/66626700/
Bubble up non null rows of a 3-D Tensor PyTorch
I am having some trouble trying to convert the following function into a function that only manipulates tensors. def valid_sequence_output(sequence_output, valid_mask): bs, max_len, feat_dim = sequence_output.shape valid_output = torch.zeros(bs, max_len, feat_dim, dtype=torch.float32) for i in range(bs): jj = -1 for j in range(max_len): if valid_mask[i][j].item() == 1: jj += 1 valid_output[i][jj] = sequence_output[i][j] return valid_output where the input tensors can be created as follows: size = ((2,5,2)) sequence_output = torch.randint(0, 250, size=size) valid_mask = torch.randint(0, 2, size=size[:2]) I basically aim at "bubbling up" the non-null rows of sequence_output. As an example, with sequence_output equal to: tensor([[[ 0, 0], [ 15, 47], [124, 230], [ 0, 0], [ 65, 31]], [[ 0, 0], [ 0, 0], [ 0, 0], [139, 228], [224, 205]]]) I am trying to obtain the following tensor: tensor([[[ 15, 47], [124, 230], [ 65, 31], [ 0, 0], [ 0, 0]], [[139, 228], [224, 205], [ 0, 0], [ 0, 0], [ 0, 0]]]) If somebody has a suggestion on how to do this, I would really appreciate it :D
I managed to come up with a quite nasty solution (which is probably suboptimal) by constructing a matrix A of zeros and ones that swaps rows of X when performing a matrix multiplication... def vso(seq_out, valid_mask): X = torch.where(valid_mask.unsqueeze(-1) == 1, seq_out, torch.zeros_like(seq_out)) bs, max_len, _ = X.shape tu = torch.unique(torch.nonzero(X)[:, :2], dim=0) batch_axis = tu[:, 0] rows_axis = tu[:, 1] a = torch.arange(bs).repeat(batch_axis.shape).reshape(batch_axis.shape[0], -1).T T = torch.cumsum(batch_axis == a, dim=1) - 1 cols_axis = T[batch_axis, torch.arange(batch_axis.shape[0])] A = torch.zeros((bs, max_len, max_len)) A[(batch_axis, cols_axis, rows_axis)] = 1 valid_output = torch.matmul(A, X) return valid_output Still looking for better answers, though!
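A possibly simpler alternative, assuming PyTorch >= 1.9 for the stable flag of torch.sort (a stable descending sort on the mask moves the kept rows to the top while preserving their order): def vso_sort(seq_out, valid_mask): X = seq_out * valid_mask.unsqueeze(-1) order = torch.sort(valid_mask, dim=1, descending=True, stable=True).indices return torch.gather(X, 1, order.unsqueeze(-1).expand_as(X))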
https://stackoverflow.com/questions/66626938/
TorchScript requires source access in order to carry out compilation for collections.deque
I'm trying to convert a PyTorch FOMM model to TorchScript. As soon as I started to annotate some classes with @torch.jit.script, I got an error: OSError: Can't get source for <class 'collections.deque'>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available. As I understand it, that class is implemented in C inside CPython and therefore its source cannot be read by the TorchScript compiler. I failed to find any pure-Python implementation. How can I overcome this issue? Here is the class I'm trying to annotate: import queue import collections import threading import torch @torch.jit.script class SyncMaster(object): """An abstract `SyncMaster` object. - During the replication, as the data parallel will trigger an callback of each module, all slave devices should call `register(id)` and obtain an `SlavePipe` to communicate with the master. - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, and passed to a registered callback. - After receiving the messages, the master device should gather the information and determine to message passed back to each slave devices. """ def __init__(self, master_callback): """ Args: master_callback: a callback to be invoked after having collected messages from slave devices. """ self._master_callback = master_callback self._queue = queue.Queue() self._registry = collections.OrderedDict() self._activated = False def __getstate__(self): return {'master_callback': self._master_callback} def __setstate__(self, state): self.__init__(state['master_callback']) def register_slave(self, identifier): """ Register an slave device. Args: identifier: an identifier, usually is the device id. Returns: a `SlavePipe` object which can be used to communicate with the master device. """ if self._activated: assert self._queue.empty(), 'Queue is not clean before next initialization.' self._activated = False self._registry.clear() future = FutureResult() self._registry[identifier] = _MasterRegistry(future) return SlavePipe(identifier, self._queue, future) def run_master(self, master_msg): """ Main entry for the master device in each forward pass. The messages were first collected from each devices (including the master device), and then an callback will be invoked to compute the message to be sent back to each devices (including the master device). Args: master_msg: the message that the master want to send to itself. This will be placed as the first message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. Returns: the message to be sent back to the master device. """ self._activated = True intermediates = [(0, master_msg)] for i in range(self.nr_slaves): intermediates.append(self._queue.get()) results = self._master_callback(intermediates) assert results[0][0] == 0, 'The first result should belongs to the master.' for i, res in results: if i == 0: continue self._registry[i].result.put(res) for i in range(self.nr_slaves): assert self._queue.get() is True return results[0][1] @property def nr_slaves(self): return len(self._registry)
I switched the TorchScript generation method from torch.jit.script to torch.jit.trace and it worked; no need to annotate anything. Alternatively, torch.onnx.export sometimes works.
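A minimal sketch, assuming a representative example_input tensor for the model: traced = torch.jit.trace(model, example_input) followed by traced.save('model_traced.pt'). Keep in mind that tracing records one concrete execution, so any data-dependent control flow is frozen to the branch taken by the example input.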
https://stackoverflow.com/questions/66628965/
RuntimeError('dot : expected both vectors to have same dtype, but found Double and Float
I am trying to perform this operation: L_c += 1 - torch.abs(torch.dot(nuNormalized, Nu)) where: nuNormalized -> tensor([ 0.3790, -0.4208, 0.8242], dtype=torch.float64) Nu -> tensor([-0.9961, -0.9961, -0.9961], device='cuda:0') I am getting this error: (<class 'RuntimeError'>, RuntimeError('dot : expected both vectors to have same dtype, but found Double and Float',), <traceback object at 0x7f4d276155c8>) Any suggestions please? Thanks!
Cast nuNormalized from double to float. Also, since one tensor is on the GPU and the other on the CPU, you should move both to the same device.
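A minimal sketch using the variables from the question: nuNormalized = nuNormalized.float().to(Nu.device) and then L_c += 1 - torch.abs(torch.dot(nuNormalized, Nu)) runs without the dtype error.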
https://stackoverflow.com/questions/66630508/
How to find the best value for mean and STD of Normalize in torchvision.transforms
I have started working with PyTorch and cannot figure out how I am supposed to find the mean and std input parameters of Normalize. I have seen this: transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) #https://pytorch.org/vision/stable/transforms.html# and in another example: transformation = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=[0.5,0.5,0.5],std=[0.5,0.5,0.5]) ]) #https://github.com/MicrosoftDocs/ml-basics/blob/master/challenges/05%20%20-%20Safari%20CNN%20Solution%20(PyTorch).ipynb So, how am I supposed to know or get these values if I have a set of images? Are these three parameters related to R, G, B?
Suppose you already have X_train, a numpy array of images, e.g. each of shape 32x32x3 (channels last): X_train = X_train / 255 # normalization of pixels train_mean = X_train.reshape(-1, 3).mean(axis=0) train_std = X_train.reshape(-1, 3).std(axis=0) # reshape(-1, 3) groups values by the trailing channel axis, giving per-channel statistics Then you can pass these two values to your Normalize transform: transforms.Normalize(mean=train_mean, std=train_std)
https://stackoverflow.com/questions/66632136/
Torch select index for mult dimensional vector
Suppose I have the following data: import torch torch.manual_seed(42) logits = torch.randn(2, 5, 3) idx = torch.randint(0, 3, (2, 5)) What I want to do is the following: [[logits[i,j,idx[i][j]] for j in range(len(idx[i]))] for i in range(len(idx))] However, this is clearly inefficient. The closest that I came to this was doing this, but this again looks pretty ugly: new_idx = torch.stack([idx]*logits.shape[-1], dim=-1) logits.gather(dim=-1, index=new_idx) where in the above case the desired output is replicated 3 times. In terms of practical use case, I'm thinking of a language model where the logits are of shape (batch_size, sequence_len, vocabulary), and the indices are simply (batch_size, sequence_len). Apologies if this has been asked before but I couldn't find anything.
Same as in this answer: torch.gather(logits, -1, idx.unsqueeze(-1)) >>> [[logits[i,j,idx[i][j]] for j in range(len(idx[i]))] for i in range(len(idx))] [[tensor(0.9007), tensor(0.6784), tensor(-0.0431), tensor(-1.4036), tensor(-0.7279)], [tensor(-0.2168), tensor(1.7174), tensor(-0.4245), tensor(0.9956), tensor(-1.2742)]] >>> torch.gather(logits, -1, idx.unsqueeze(-1)) tensor([[[ 0.9007], [ 0.6784], [-0.0431], [-1.4036], [-0.7279]], [[-0.2168], [ 1.7174], [-0.4245], [ 0.9956], [-1.2742]]])
https://stackoverflow.com/questions/66633213/
Sequence to Sequence Loss
I'm trying to figure out how sequence to sequence loss is calculated. I am using the huggingface transformers library in this case, but this might actually be relevant to other DL libraries. So to get the required data we can do: from transformers import EncoderDecoderModel, BertTokenizer import torch import torch.nn.functional as F torch.manual_seed(42) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') MAX_LEN = 128 tokenize = lambda x: tokenizer(x, max_length=MAX_LEN, truncation=True, padding=True, return_tensors="pt") model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints input_seq = ["Hello, my dog is cute", "my cat cute"] output_seq = ["Yes it is", "ok"] input_tokens = tokenize(input_seq) output_tokens = tokenize(output_seq) outputs = model( input_ids=input_tokens["input_ids"], attention_mask=input_tokens["attention_mask"], decoder_input_ids=output_tokens["input_ids"], decoder_attention_mask=output_tokens["attention_mask"], labels=output_tokens["input_ids"], return_dict=True) idx = output_tokens["input_ids"] logits = F.log_softmax(outputs["logits"], dim=-1) mask = output_tokens["attention_mask"] Edit 1 Thanks to @cronoik I was able to replicate the loss calculated by huggingface as being: output_logits = logits[:,:-1,:] output_mask = mask[:,:-1] label_tokens = output_tokens["input_ids"][:, 1:].unsqueeze(-1) select_logits = torch.gather(output_logits, -1, label_tokens).squeeze() huggingface_loss = -select_logits.mean() However, since the last two tokens of the second input is just padding, shouldn't we calculate the loss to be: seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True) seq_loss = -seq_loss.mean() ^This takes into account the length of the sequence of each row of outputs, and the padding by masking it out. Think this is especially useful when we have batches of varying length outputs.
OK, I found out where I was making the mistakes. This is all thanks to this thread in the HuggingFace forum. The output labels need to have -100 at the masked positions. The transformers library does not do it for you. One silly mistake I made was with the mask. It should have been output_mask = mask[:, 1:] instead of :-1. 1. Using Model We need to set the masked positions of the output to -100. It is important to use clone as shown below: labels = output_tokens["input_ids"].clone() labels[output_tokens["attention_mask"]==0] = -100 outputs = model( input_ids=input_tokens["input_ids"], attention_mask=input_tokens["attention_mask"], decoder_input_ids=output_tokens["input_ids"], decoder_attention_mask=output_tokens["attention_mask"], labels=labels, return_dict=True) 2. Calculating Loss So the final way to replicate it is as follows: idx = output_tokens["input_ids"] logits = F.log_softmax(outputs["logits"], dim=-1) mask = output_tokens["attention_mask"] # shift things output_logits = logits[:,:-1,:] label_tokens = idx[:, 1:].unsqueeze(-1) output_mask = mask[:,1:] # gather the logits and mask select_logits = torch.gather(output_logits, -1, label_tokens).squeeze() -select_logits[output_mask==1].mean(), outputs["loss"] The above however ignores the fact that this comes from two different lines. So an alternate way of calculating the loss could be: seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True) seq_loss.mean()
https://stackoverflow.com/questions/66633813/
(Differentiable Image Sampling) Custom Integer Sampling Kernel, Spatial Transformer Network
I was looking through the Spatial Transformer Network paper, and I am trying to implement a custom grid_sample function (inheriting the autograd.Function class) in PyTorch for the Integer Sampling Kernel. While defining the backward function, I have come across the following conundrum. Given that the integer sampling works as follows: I think that the gradients w.r.t. the input map and the transformed grid (x_i^s, y_i^s) should be like the following: Gradient w.r.t. input map: Gradient w.r.t. transformed grid (x_i^s): Gradient w.r.t. transformed grid (y_i^s): as the derivative of the Kronecker delta function is zero (I'm unsure about this!! - HELP) Derivative of the Kronecker delta? Thus I am reaching the conclusion that the gradient w.r.t. the input should be: a tensor of the same size as the input, filled with ones where a pixel was sampled and 0 where it wasn't, and the gradient w.r.t. the transformed grid should be a tensor full of zeros. However, if the gradient of the transformed grid is 0, then due to the chain rule, no information will be passed on to the layers before the integer sampler. Therefore I think the derivative with respect to the grid should be something else. Could anybody point out what I'm doing wrong? Many thanks in advance!
For future reference, and for those who might have had similar questions to the one I posted: I emailed Dr Jaderberg (one of the authors of 'Spatial Transformer Networks') about this question, and he confirmed "that the gradient wrt the coordinates for integer sampling is 0." So I wasn't doing anything wrong, and it was right all along! He was very kind in his response and explained that integer sampling was mentioned in the paper to introduce the bilinear sampling scheme, and he gave insights into how to possibly implement integer sampling if I really wanted to: "you could think about using some numerical differentiation techniques (e.g. look at difference of x to its neighbours). This would assume smoothness in the image wrt coordinates." So with great thanks to Dr Jaderberg, I'm happy to close this question. I guess thinking about how I'd use numerical methods to implement the integer kernel for the sampling function is another challenge for myself, but until then the bilinear sampler is my friend! :)
https://stackoverflow.com/questions/66633854/
Why are PyTorch "convolutions" implemented as cross-correlations?
PyTorch convolutions are actually implemented as cross-correlations. This shouldn't produce issues in training a convolution layer, since one is just a flipped version of the other (and hence the learned function will be equally powerful), but it does prove an issue when: trying to implement an actual convolution with the functional library trying to copy the weights of an actual convolution from another deep learning library The authors say the following in Deep Learning with PyTorch: Convolution, or more precisely, discrete convolution1... 1. There is a subtle difference between PyTorch's convolution and mathematics' convolution: one argument's sign is flipped. If we were in a pedantic mood, we could call PyTorch's convolutions discrete cross-correlations. But they don't explain why it was implemented like this. Is there a reason? Maybe something similar to how the PyTorch implementation of CrossEntropyLoss isn't actually cross entropy but an analogous function taking "logits" as inputs instead of raw probabilities (to avoid numerical instability)?
I think the reason is simpler. As you said, convolution is the flipped version of cross-correlation, but that's not a problem in the context of training a CNN. So we can just avoid doing the flipping, which simplifies the code and reduces the computation time: The advantage of cross-correlation is that it avoids the additional step of flipping the filters to perform the convolutions. Performance Evaluation of cuDNN Convolution Algorithms on NVIDIA Volta GPUs Flipping the kernel won't have any effect on the mathematical stability. The operations remain the same.
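To make the relationship concrete, here is a small sketch (the shapes are made up for illustration) showing that flipping the kernel turns PyTorch's cross-correlation into a textbook convolution:
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 5, 5)   # (batch, channels, height, width)
k = torch.randn(1, 1, 3, 3)   # (out_channels, in_channels, kH, kW)

cross_corr = F.conv2d(x, k)                           # what PyTorch calls "convolution"
true_conv = F.conv2d(x, torch.flip(k, dims=[2, 3]))   # flipped kernel -> mathematical convolution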
https://stackoverflow.com/questions/66640802/
How to define a “don't care” class in Pytorch?
I have a time series classification task in which I should output a classification of 3 classes for every time stamp t. All data is labeled per frame. In the data set are more than 3 classes [which are also imbalanced]. My net should see all samples sequentially, because it uses that for historical information. Thus, I can't just eliminate all irrelevant class samples at preprocessing time. In case of a prediction on a frame which is labeled differently than those 3 classes, I don't care about the result. How to do this correctly in Pytorch?
Following from this discussion, which was not google searchable, there are two options, both are options of the CrossEntropyLoss: Option 1 If there is only one class to ignore, use ignore_index=class_index when instantiating the loss. Option 2 If there are more classes, use weight=weights, with weights.shape==n_classes and torch.sum(weights[ignored_classes]) == 0
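As a short illustration of both options (the class count and ignored indices below are placeholders):
import torch
import torch.nn as nn

n_classes = 5        # assumed total number of classes in the labels
ignored = [3, 4]     # assumed "don't care" class indices

# Option 1: a single class to ignore
criterion = nn.CrossEntropyLoss(ignore_index=3)

# Option 2: several classes to ignore, via zero weights
weights = torch.ones(n_classes)
weights[ignored] = 0.0   # torch.sum(weights[ignored]) == 0
criterion = nn.CrossEntropyLoss(weight=weights)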
https://stackoverflow.com/questions/66641829/
Strange Cuda out of Memory behavior in Pytorch
Edit: SOLVED- Problem relied on the number of workers, lowered them, problem solved I am using a 24GB Titan RTX and I am using it for an image segmentation Unet with Pytorch, it is always throwing Cuda out of Memory at different batch sizes, plus I have more free memory than it states that I need, and by lowering batch sizes, it INCREASES the memory it tries to allocate, which doesn't make any sense. Here is what I tried: Image size = 448, batch size = 8 "RuntimeError: CUDA error: out of memory" Image size = 448, batch size = 6 "RuntimeError: CUDA out of memory. Tried to allocate 3.12 GiB (GPU 0; 24.00 GiB total capacity; 2.06 GiB already allocated; 19.66 GiB free; 2.31 GiB reserved in total by PyTorch)" it says it tried to allocate 3.12GB and I have 19GB free and it throws an error?? Image size = 224, batch size = 8 "RuntimeError: CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 24.00 GiB total capacity; 2.78 GiB already allocated; 19.15 GiB free; 2.82 GiB reserved in total by PyTorch)" Image size = 224, batch size = 6 "RuntimeError: CUDA out of memory. Tried to allocate 344.00 MiB (GPU 0; 24.00 GiB total capacity; 2.30 GiB already allocated; 19.38 GiB free; 2.59 GiB reserved in total by PyTorch)" I reduced the batch size but it tried to allocate more??? Image size = 224, batch size = 4 "RuntimeError: CUDA out of memory. Tried to allocate 482.00 MiB (GPU 0; 24.00 GiB total capacity; 2.21 GiB already allocated; 19.48 GiB free; 2.50 GiB reserved in total by PyTorch)" Image size = 224, batch size = 2 "RuntimeError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 24.00 GiB total capacity; 1.44 GiB already allocated; 19.88 GiB free; 2.10 GiB reserved in total by PyTorch)" Image size = 224, batch size = 1 "RuntimeError: CUDA out of memory. Tried to allocate 1.91 GiB (GPU 0; 24.00 GiB total capacity; 894.36 MiB already allocated; 20.94 GiB free; 1.03 GiB reserved in total by PyTorch)" Even with stupidly low image sizes and batch sizes...
SOLVED- Problem relied on the number of workers, lowered them, problem solved
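For reference, the change boils down to passing a smaller num_workers to the DataLoader; the value below is just an example:
from torch.utils.data import DataLoader

# dataset and batch_size as in your own setup; the key change is the lower num_workers
train_loader = DataLoader(dataset, batch_size=6, shuffle=True, num_workers=2)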
https://stackoverflow.com/questions/66642338/
Pytorch: test loss becoming nan after some iteration
I am trying to train a deep learning architecture; the model trains perfectly. I am testing after each epoch. For 7 epochs all the losses and accuracies seem okay, but at the 8th epoch the test loss becomes nan during testing. I have checked my data; it has no nans. Also, my test accuracy is higher than my train accuracy, which is weird. The train data size is 37646 and the test size is 18932, so it should be enough. Before becoming nan, the test loss started to become very high, around 1.6513713663602217e+30. This is really weird and I don't understand why it is happening. Any help or suggestion is much appreciated.
Assuming that a very high learning rate isn't the cause of the problem, you can clip your gradients before the update, using PyTorch's gradient clipping. Example: optimizer.zero_grad() loss, hidden = model(data, hidden, targets) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip_value) optimizer.step() This is the first thing to do when you have a NaN loss, if of course you have made sure that you don't have NaNs elsewhere, e.g. in your input features. I have made use of gradient clipping in cases where increasing the learning rate caused NaNs, but still wanted to test a higher learning rate. Decreasing the learning rate could also solve your problem, but I'm guessing that you have already tried this. Empirically, I set clip_value = 5 most of the time, and then see its (usually non-significant) impact on performance. Feel free to experiment with different values.
https://stackoverflow.com/questions/66648432/
Pixel operations in batches
I have a batch of depth images, shape -> [B, 1, H, W]. For each pixel in each image of the batch I need to perform: X = d * Kinverse @ [u, v, 1] #therefore X is in R^3 where d is float tensor[0;1] representing depth at pixel u,v; Kinverse is a constant 3X3 matrix and u, v refer to the pixel column and row respectively. Is there some way I can vectorize the operation to obtain X(u+1,v), X(u,v) and X(u,v+1) for all the images in the batch. I eventually need to take this cross product: {X(u+1,v) - X(u,v)} x {X(u, v+1) - X(u,v)} Thanks for the help!
You can use torch.meshgrid to produce the u and v tensors. Once you have them, you can use torch.einsum to do the batched matrix multiplication with Kinverse. Finally, you can use torch.cross to compute the cross product: u, v = torch.meshgrid(*[torch.arange(s_, dtype=d.dtype, device=d.device) for s_ in d.shape[2:]]) # make a single 1x1xHxW for [u v 1] per pixel: uv = torch.cat((u[None, None, ...], v[None, None, ...], torch.ones_like(u)[None, None, ...]), dim=1) # compute X X = d * torch.einsum('ij,bjhw->bihw',Kinverse,uv) # the cross product out = torch.cross(X[..., 1:, :-1] - X[..., :-1, :-1], X[..., :-1, 1:] - X[..., :-1, :-1], dim=1)
https://stackoverflow.com/questions/66650233/
PyTorch: saving both weights and model definition
During prototyping, I often perform numerous changes to a PyTorch model. For instance, suppose the first model I am experimenting with is: class Model(nn.Module): def __init__(self, **args): super().__init__() self.l1 = nn.Linear(128, 1) then I will add another layer: class Model(nn.Module): def __init__(self, **args): super().__init__() self.l1 = nn.Linear(128, 32) self.l2 = nn.Linear(32, 1) or maybe add some convolutions, and so on. The problem is that I often get disorganized the more experiments I perform, as I haven't found a straightforward way of saving both the model definition and its weights so that I can load a previous state. I know I can do: torch.save({'model': Model(), 'state': model.state_dict()}, path) # or directly torch.save(model, path) but then loading the model also requires the model class (here, Model) to exist in the current file. In Keras you can simply do: model = ... # Get model (Sequential, Functional Model, or Model subclass) model.save('path/to/location') which saves the model's architecture/config and weights, among other things. This means that you can load the model without having defined the architecture: model = keras.models.load_model('path/to/location') Referring to Keras model saving: The SavedModel and HDF5 file contains: the model's configuration (topology) the model's weights the model's optimizer's state (if any) Thus models can be reinstantiated in the exact same state, without any of the code used for model definition or training. This is what I want to achieve in PyTorch. Is there a similar approach for PyTorch? What is the best practice for these situations?
As Pytorch provides a huge amount of flexibility in the model, it will be challenging to save the architecture along with the weights in a single file. Keras models are usually built solely by stacking keras components, but pytorch models are orchestrated by the library consumer in their own way and therefore can contain any sort of logic. I think you have three choices: Come up with an organised schema for your experiments so that losing the model definition is less likely. You could go for something as simple as a file naming schema where each file defines exactly one model. I would recommend this approach, as this level of organisation would likely benefit your prototyping in other ways and the overhead is minimal. Try and save the code along with the pickle file. Although potentially possible, I think this would lead you down a rabbit-hole with a lot of potential problems. Use a different standardised way of saving the model, such as onnx. I would recommend this route if you do not want to go with option 1. Onnx does allow you to save a pytorch model's architecture along with its weights, but comes with a few drawbacks. For example, it only supports some operations, so completely custom forward methods or use of non-matrix operations may not work.
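If you go with the ONNX route, a minimal export/load sketch might look like this (the input shape is an assumption for the Model above, and the file name is arbitrary):
import torch
import onnxruntime

model = Model()                        # your trained model
dummy_input = torch.randn(1, 128)      # assumed input shape for this Model
torch.onnx.export(model, dummy_input, "model.onnx")

# later, run it without the Python class definition being available:
session = onnxruntime.InferenceSession("model.onnx")
outputs = session.run(None, {session.get_inputs()[0].name: dummy_input.numpy()})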
https://stackoverflow.com/questions/66652447/
Longformer get last_hidden_state
I am trying to follow this example in the huggingface documentation here https://huggingface.co/transformers/model_doc/longformer.html: import torch from transformers import LongformerModel, LongformerTokenizer model = LongformerModel.from_pretrained('allenai/longformer-base-4096') tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1 # Attention mask values -- 0: no attention, 1: local attention, 2: global attention attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to global attention to be deactivated for all tokens global_attention_mask[:, [1, 4, 21,]] = 1 # Set global attention to random tokens for the sake of this example # Usually, set global attention based on the task. For example, # classification: the <s> token # QA: question tokens # LM: potentially on the beginning of sentences and paragraphs outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, output_hidden_states=True) sequence_output = outputs[0].last_hidden_state pooled_output = outputs.pooler_output I suppose that this would return a document embedding for the sample text. However, I run into the following error: AttributeError: 'Tensor' object has no attribute 'last_hidden_state' Why isn't it possible to call last_hidden_state?
Do not select via index: sequence_output = outputs.last_hidden_state outputs is a LongformerBaseModelOutputWithPooling object with the following properties: print(outputs.keys()) Output: odict_keys(['last_hidden_state', 'pooler_output', 'hidden_states']) Calling outputs[0] or outputs.last_hidden_state will both give you the same tensor, but this tensor does not have a property called last_hidden_state.
https://stackoverflow.com/questions/66655023/
Is there a way to only display frames from a video that only have detected object bounding boxes
I'm using a video for testing my custom yolov5s object detection model on test data. However, not every frame in the video has a valid object to be detected. When the inference testing is run on the video, the image below shows when the model finds a frame that has an object that can be detected and classified. As you can see, frame 4401 has detected a yield traffic sign, but frames 4399 and 4400 haven't detected anything. My question is: how do I process through 4500 frames of a video and only display frames that have detected objects? I have extracted the frames as JPEGs but only want the valid JPEGs to evaluate test performance. Better still, is it possible to extract the data from relevant frames to a text file? If anyone could help me out this would be a great help and time saver, many thanks in advance.
from shutil import move import os txt_dir = "txtfiles" images_folder = "images_folder" move_des = "move_imgs_path" for file in os.listdir(txt_dir): if file.endswith('txt'): frame_name = file.split('.')[0]+ ".jpeg" print(frame_name) move(os.path.join(images_folder, frame_name), move_des)
https://stackoverflow.com/questions/66655151/
NER tagging schema for non-contiguous tokens
The most common tagging procedure for NER is IOB. But it seems that this kind of tagging is limited to cases where tokens from the same entity are contiguous. So for instance, Jane Smith is walking in the park would be tagged as: B-PER I-PER O O O O O And here my PER entity is the concatenation of [Jane, Smith] If we tweak the example: Jane and James Smith are walking in the park B-PER O B-PER I-PER O O O O O Now the issue is that the entities we would get are [Jane] and [James, Smith] because the IOB tagging does not allow to link Jane to Smith. Is there any tagging schema that would allow to mark as entities both [Jane, Smith] and [James, Smith]?
First, about doing this without a new data format: There is a paper and repo about doing this using TextAE: paper repo However, looking at their examples and yours, it seems like you could improve on what they did by using dependency parsing. If you look at the dependency parse of "Jane and James Smith are walking in the park", you can see that spaCy understands that Jane is conjoined with Smith. So after running entity extraction, you could do a dependency parse step, then edit your entities based on that. Now, to answer the real question. I have seen multi-dimensional labels that work in the following way (assume you have a maximum of ten entities per sentence): empty = [0,0,0,0,0,0,0,0,0] tokens = ["Jane", "and", "James", "Smith", "are", "walking", "in", "the", "park"] labels = [ [1, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0, 0], ] labels = labels + [empty] * (10-len(labels)) If you have more than one entity type, you can use those instead of just 1. This format works better with BERT anyway, since the BIO format is a pain when you have to split up tokens into BPE.
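As a quick illustration of that dependency-parse step (the model name below is just the standard small English model):
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jane and James Smith are walking in the park")
for token in doc:
    print(token.text, token.dep_, token.head.text)

# In a typical parse, "Smith" hangs off "Jane" through a "conj" arc,
# so after NER you can propagate the shared surname back onto "Jane".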
https://stackoverflow.com/questions/66655836/
Understanding pos_weight argument in BCEWithLogitsLoss
I am trying to understand how the pos_weight argument is being used in BCEWithLogitsLoss in order to be able to correctly define the pos_weight Tensor. The documentation only mentions: "a weight of positive examples. Must be a vector with length equal to the number of classes.". As I was not able to get a sufficient understanding by looking at the code (the actual code is hidden behind multiple function loaders and I didn't even manage to reach a point where pos_weight is used), I have several questions regarding the pos_weight argument: Is the weight for negative samples always 1? If the weight for negative samples is always 1 and assuming I want every sample to contribute equally to the loss I would do something like the following. Let l = [100, 10, 5, 15] where l[0] is the amount of negative samples and l[1:] is the amount of positive samples from each label. In pseudo code this looks like: l = [100, 10, 5, 15] lcm = LCM(l) # 300 weights = lcm / l # weights = [3, 30, 60, 20] weights = weights / l[0] # weights = [1, 10, 20, 6.6667] positive_weights = weights[1:] # [10, 20, 6.66667] criterion = nn.BCEWithLogitsLoss(pos_weight=positive_weights) Can someone please confirm if my understanding of how pos_weight should be used is correct?
I didn't manage to find a definitive answer but from my experiments it seems I was right, i.e. negative sample weights are considered to be 1 and increasing the weights past 1 for positive samples makes them more important. Additionally, the LCM approach for figuring out the weights is actually both not good and not needed. First it is bad because if you have a lot of classes with different number of samples the LCM can be an incredibly large number so very often computing it exactly is not even feasible. Additionally, it is not needed because you achieve the same weights by simply dividing the amount of negative samples with the amount of positive samples for every label and obtain the weights in that manner. Here is a short code snippet on how you can obtain the weights if you have your data in pandas DataFrame. Obtaining the weights in this way results in a 'balanced' dataset where ideally you optimize for both precision and recall. def get_train_dataset_pos_weights(self): """Returns weights for positive samples used for unbalanced datasets.""" num_pos_samples = self.train_df[self.label_columns].sum().sum() num_neg_samples = len(self.train_df) - num_pos_samples assert len(self.train_df) == (num_neg_samples + num_pos_samples) num_pos_samples_per_label = np.array( list(self.train_df[self.label_columns].sum())) weights = num_neg_samples / num_pos_samples_per_label return torch.from_numpy(weights)
https://stackoverflow.com/questions/66660354/
Dot product along a dimension
I have two tensors of shape [B, 3, 240, 320] where B represents the batch size, 3 represents the channels, 240 the height (H), 320 the width (W). I need to find the dot product along the channels dimension (3 channels), thus the resulting tensor would be of shape [B, 1, 240, 320]. My tensors have float32 elements on GPU (CUDA, needed to backprop). Can you all please suggest how I can do that? Thanks! More clarification: Let's say we have B=10, H=100, W=200. The above would be common for both the first and second tensors. If we keep B, H, W constant we get a 1D vector as the resultant tensor (with 3 elements). I need to take the dot product of these two vectors. Thus the resultant tensor is of dimension [B, 1, 240, 320]
Dot product is the summation of multiplication of values in two vectors: So I am guessing you want to multiply all values along the channel dimension and need to find the summation of the result, please correct me if my understanding is wrong. import torch t1 = torch.rand(10, 3, 240, 320) t2 = torch.rand(10, 3, 240, 320) # Multiply two tensors and sum along the channel dimension multp = t1 * t2 summed = multp.sum(dim = 1, keepdim = True) print(summed.shape) # torch.Size([10, 1, 240, 320])
https://stackoverflow.com/questions/66663531/
Pytorch Conv1d on simple 1d signal
1d-convolution is pretty simple when it is done by hand. However, I want to implement what is done here using nn.Conv1d and it is not simple for me to do it. In this example h=[1,2,-1], x=[4,1,2,5] and the output is going to be y=[4,9,0,8,8,-5]. To do it using Pytorch we need to define h=nn.Conv1d(in, out, k) and x=torch.tensor(*) and y=h(x) should be the result. Note: please do not use nn.Conv2d to implement it.
First, you should be aware that the term "convolution" used in basically all literature related to convolutional neural networks (CNNs) actually corresponds to the correlation operation not the convolution operation. The only difference (for real-valued inputs) between correlation and convolution is that in convolution the kernel is flipped/mirrored before sliding it across the signal, whereas in correlation no such flipping occurs. There are also some extra operations that convolution layers in CNNs perform that are not part of the definition of convolution. They apply an offset (a.k.a. bias), they operate on mini-batches, and they map multi-channel inputs to multi-channel outputs. Therefore, in order to recreate a convolution operation using a convolution layer we should (i) disable bias, (ii) flip the kernel, and (iii) set batch-size, input channels, and output channels to one. For example, a PyTorch implementation of the convolution operation using nn.Conv1d looks like this: import torch from torch import nn x = torch.tensor([4, 1, 2, 5], dtype=torch.float) k = torch.tensor([1, 2, -1], dtype=torch.float) # Define these constants to differentiate the various usages of "1". BATCH_SIZE, IN_CH, OUT_CH = 1, 1, 1 # Pad with len(k)-1 zeros to ensure all non-zero outputs are computed. h = nn.Conv1d(IN_CH, OUT_CH, kernel_size=len(k), padding=len(k) - 1, bias=False) # Copy flipped k into h.weight. # h.weight is shape (OUT_CH, IN_CH, kernel_size), reshape k accordingly. # Perform copy inside no_grad context to avoid autograd issues. with torch.no_grad(): h.weight.copy_(torch.flip(k, dims=[0]).reshape(OUT_CH, IN_CH, -1)) # Input shape to h is assumed to be (BATCH_SIZE, IN_CH, SIGNAL_LENGTH), reshape x accordingly. # Output shape of h is (BATCH_SIZE, OUT_CH, OUTPUT_LENGTH), reshape output to 1D signal. y = h(x.reshape(BATCH_SIZE, IN_CH, -1)).reshape(-1) which results in >>> print(y) tensor([ 4., 9., 0., 8., 8., -5.], grad_fn=<ViewBackward>)
https://stackoverflow.com/questions/66663657/
pytorch question about tensor shape. And how to reshape a tensor
When I do print("action shape: ", action.shape) for my tensor action I get (64,). Is it the same as (1, 64)? And how do I reshape it to size (64, 1)?
Technically it is not the same shape, and in pytorch you will get an error if you have things that need a shape of (64,) but you give them (1, 64), though it is easy to change (1, 64) to (64,) by squeezing it. To reshape it to a size of (64, 1) you can do this: action = action.unsqueeze(1) # or action = action.view(-1, 1) Either will work, but I would recommend the first one.
https://stackoverflow.com/questions/66664229/
when setting .eval() my model performs worse than when I set .train()
During the training phase, I select the model parameters with the best performance metric. if performance_metric.item()>max_performance: max_performance= performance_metric.item() torch.save(neural_net.state_dict(), PATH+'/best_model.pt') This is the neural network model used: class Neural_Net(nn.Module): def __init__(self, M,shape_input,batch_size): super(Neural_Net, self).__init__() self.lstm = nn.LSTM(shape_input,M) #self.dense1 = nn.Linear(shape_input,M) self.dense1 = nn.Linear(M,M) #Used with the LSTM torch.nn.init.xavier_uniform_(self.dense1.weight) self.dense2 = nn.Linear(M,M) torch.nn.init.xavier_uniform_(self.dense2.weight) self.dense3 = nn.Linear(M,1) torch.nn.init.xavier_uniform_(self.dense3.weight) self.drop = nn.Dropout(0.7) self.bachnorm1 = nn.BatchNorm1d(M) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() self.hidden_cell = (torch.zeros(1,batch_size,M),torch.zeros(1,batch_size,M)) def forward(self, x): lstm_out, self.hidden_cell = self.lstm(x.view(1 ,len(x), -1), self.hidden_cell) x = self.drop(self.relu(self.dense1(self.bachnorm1(lstm_out.view(len(x), -1))))) x = self.drop(self.relu(self.dense2(x))) x = self.relu(self.dense3(x)) return x After that I load the model with the best parameters and set the evaluation mode: neural_net.load_state_dict(torch.load(PATH+'/best_model.pt')) neural_net.eval() The results are completely random. When I set train() the performance is similar to that of the selected best model parameters. Is there an important aspect of eval() that I am forgetting? Is the batch normalization used correctly? For the test phase I am using a batch of the same size as in the training phase.
I can't say for certain without knowing your batch size, training/test dataset sizes, or the training/test dataset discrepancies, but this issue has been discussed on the pytorch forums previously here. In my experience, it sounds very much like the latent training data representation in your model is significantly different from your validation data representation. The main advice I can provide is to try reducing the momentum of your batchnorm layer. It might be worth substituting a layernorm layer instead (which doesn't track a running mean/standard deviation), OR setting track_running_stats=False in the BatchNorm1d layer and seeing if the problem persists.
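Concretely, the two suggestions would look something like this inside the model's __init__ (the momentum value is only an example):
import torch.nn as nn

# lower momentum so the running statistics drift more slowly (default is 0.1)
self.bachnorm1 = nn.BatchNorm1d(M, momentum=0.01)

# or disable running statistics entirely, so batch statistics are used in eval() too
self.bachnorm1 = nn.BatchNorm1d(M, track_running_stats=False)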
https://stackoverflow.com/questions/66664400/
Visualizing CNN
Hi, I'm trying to visualize a CNN. I've been going through https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html to study CNNs by visualizing the structure. What I couldn't understand is the dimensions. import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 3x3 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() print(net) So the code part is supposed to be the CNN model structure from the image. What I don't get is this. Convolutions happen from the input to C1 and a 3*3 kernel was used. In this case, shouldn't the dimension of C1 be 30 X 30 instead of 28 X 28? According to the image, the dimension of the input to layer F5 is 16 X 5 X 5, but the code says otherwise. It seems like layer F5 is taking an input of dimension 16 X 6 X 6. I'm not sure whether I'm reading it incorrectly or the image is wrong.
I'm pretty sure the image is wrong. If you check the documentation of Conv2d and use the equation there, the first convolution layer should output (batch_size, 6, 30, 30). Running the model also confirms my conclusion. The image should be modified to: INPUT: 1 x 32 x 32 C1: 6 x 30 x 30 S2: 6 x 15 x 15 C3: 16 x 13 x 13 S4: 16 x 6 x 6
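You can double-check this by pushing a dummy tensor through the layers from the question and printing the shapes (the layer names in the comments follow the corrected figure):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 32, 32)
conv1, conv2 = nn.Conv2d(1, 6, 3), nn.Conv2d(6, 16, 3)

x = conv1(x)            # torch.Size([1, 6, 30, 30])  -> C1
x = F.max_pool2d(x, 2)  # torch.Size([1, 6, 15, 15])  -> S2
x = conv2(x)            # torch.Size([1, 16, 13, 13]) -> C3
x = F.max_pool2d(x, 2)  # torch.Size([1, 16, 6, 6])   -> S4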
https://stackoverflow.com/questions/66666163/
Why does Pytorch EmbeddingBag with mode "max" not accept `per_sample_weights`?
Pytorch's EmbeddingBag allows for efficient lookup + reduce operations on varying length collections of embedding indices. There are 3 modes: "sum", "average" and "max" for the reduce operation. With "sum", you can also provide per_sample_weights giving you a weighted sum. Why is per_sample_weights not allowed for the "max" operation? Looking at how it's implemented, I can only assume there is an issue with performing a "ReduceMean" or "ReduceMax" operation after a "Mul" operation. Could that be something to do with calculating gradients?? p.s: It's easy enough to turn a weighted sum into a weighted average by dividing by the sum of the weights, but for "max" you can't get a weighted equivalent like that.
The argument per_sample_weights was only implemented for mode='sum', not due to technical limitations, but because the developers found no use cases for a "weighted max": I haven't been able to find use cases for "weighted mean" (which can be emulated via weighted sum) and "weighted max". Feature request: Weighted average for EmbeddingBag
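For completeness, here is how the "weighted mean" emulation mentioned above can look (the shapes are illustrative; normalising the weights so they sum to 1 turns the weighted sum into a weighted mean):
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=100, embedding_dim=8, mode='sum')

indices = torch.tensor([3, 7, 42])       # one bag containing three embeddings
offsets = torch.tensor([0])
weights = torch.tensor([0.5, 0.3, 0.2])  # sums to 1 -> this is a weighted mean

out = bag(indices, offsets, per_sample_weights=weights)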
https://stackoverflow.com/questions/66666246/
Which format is preferable for tflite model NCHW or NHWC?
For GPUs, NCHW mode is preferable due to speed benefits, but what is the preferable mode in terms of mobile performance for a tflite model? Converting from pytorch to tflite currently yields a working NCHW model, but is this mode optimal?
TensorFlow Lite's matrix multiplication library, used for convolution 2d and so on, prefers NHWC inputs. To perform well on mobile, the TFLite converter tries to automatically transform the given NCHW weights to the corresponding NHWC weights if the weights are constant. If the given weights are not constants, the converter will add a transpose operator after the NCHW weights so that the NHWC-based convolution 2d algorithm can be chosen for mobile.
https://stackoverflow.com/questions/66671358/
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed
I get the error in the title....and I found some answers, so I tried retain_graph=True, but it doesn't work. Maybe my code has other problems (the error occurs at loss_actor.backward(retain_graph=...)). q = torch.zeros(len(reward)) q_target = torch.zeros(len(reward)) for j, r in enumerate(reward): q_target[j] = self.critic_network(torch.transpose(next_state[j], 0, 1), self.actor_network(torch.transpose(next_state[j], 0, 1)).view(1, 1)) q_target[j] = r + (done[j] * gamma * q_target[j]).detach() q[j] = self.critic_network(torch.transpose(state[j], 0, 1), action[j].view(1, 1)) loss_critic = F.mse_loss(q, q_target) self.critic_optimizer.zero_grad() loss_critic.backward() self.critic_optimizer.step() b = torch.zeros(len(reward)) for j, r in enumerate(reward): b[j] = self.critic_network(torch.transpose(state[j], 0, 1), self.actor_network(torch.transpose(state[j], 0, 1)).view(1, 1)) loss_actor = -torch.mean(b) self.actor_optimizer.zero_grad() loss_actor.backward(retain_graph=True) self.actor_optimizer.step()
Based on the provided info about part of your computational graph, I assume that loss_actor and loss_critic share some parts of it; I think it is state (not sure): state -> q --> loss_critic <-- backward 1 | -------> b --> loss_actor <--- backward 2 To reproduce your example: # Some computations that produce state state = torch.ones((2, 2), requires_grad=True) ** 2 # Compute the first loss q = torch.zeros((1)) q[0] = state[0, 0] l1 = torch.sum(2 * q) l1.backward() # Compute the second loss b = torch.zeros((1)) b[0] = state[1, 1] l2 = torch.mean(2 * b) l2.backward() RuntimeError Traceback (most recent call last) <ipython-input-28-2ab509bedf7a> in <module> 10 b[0] = state[1, 1] 11 l2 = torch.mean(2 * b) ---> 12 l2.backward() RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time. Trying l2.backward(retain_graph=True) doesn't help, because you have to "Specify retain_graph=True when calling backward the first time", i.e. here, on the first backward call (for l1): l1.backward(retain_graph=True)
https://stackoverflow.com/questions/66673231/
Pytorch -> [Onnx -> tensorflow] -> tflite generates lots of redundant conv2d operators
I am converting efficientnet from onnx to tensorflow for further conversion to tflite. The conversion from onnx to tensorflow yields strange results. Onnx has 1 conv2d operator, but after converting to tensorflow and later to a tflite model I get this stack of convs (all of them couldn't fit in the picture; there are 32 of them in total, matching the number of kernels in onnx). I suspect the issue is with the onnx-tf conversion. It seems like the conv2d outputs are split into 32 separate outputs, then they are processed individually on separate conv2d operators and then concatenated together. There are 2 connected issues on their github. The first has a fix for depthwise convs, but it can't be applied in my case: https://github.com/onnx/onnx-tensorflow/issues/473 https://github.com/onnx/onnx-tensorflow/issues/754 Looking at the source of https://github.com/onnx/onnx-tensorflow/blob/master/onnx_tf/handlers/backend/conv_mixin.py, they do indeed split the output, produce separate convolutions and later concatenate them. Can this split into multiple convolutions be avoided?
After some additional digging I've found the following My convs were depthwise(conv2d is depthwise in pytorch and onnx if it has groups parameter > 1) This bunch of convs is an inefficient way of doing a depthwise conv. To do it efficiently we need to use tf.depthwiseconv To fix this in onnx-tf v1.7.0 you should apply a patch to onnx-tf source code posted here https://github.com/onnx/onnx-tensorflow/issues/473#issuecomment-533522917 In current master branch there is an attempt to fix the issue by detecting depthwise convs but it currently contains a bug. To fix the bug in master branch you can apply a fix I posted here https://github.com/onnx/onnx-tensorflow/issues/754#issuecomment-801775203 I've used the master branch and the fix above resulting in a small graph with depthwise conv I've also created a fork with the fix above, so you can do pip install git+https://github.com/Vozf/onnx-tensorflow to apply it instead of patching on your own It seems like the issue should be fixed in the next major release(probably 1.8.0) Also consider using https://github.com/nerox8664/onnx2keras as it supports depthwise convolutions and moreover supports full nhwc conversion which allows removing all the transpose ops.
https://stackoverflow.com/questions/66675299/
Installing ROCm 4.0.1
With the latest update, Pytorch offers support for ROCm 4.0.1. I've tried to install the latest version of ROCm using the guide on the official AMD site, but doing that I've only managed to install version 4.0.0. As I understand at this step wget -q -O - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add - echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list I should change the repository, but I can't figure out how. Could someone help me with the exact lines that I should run in order to install 4.0.1?
It appears that the maintainers of the apt repository at https://repo.radeon.com/rocm/apt/debian/, have not yet uploaded a package for rocm 4.0.1 there. If you need this exact version, you can use the repository at http://repo.radeon.com/rocm/apt/4.0.1/ instead as recommended in the guide you linked to. Note that this will pin your rocm version to exactly this version without any automatic updates.
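Under the assumption that nothing else in the guide changes, the modified commands would look like this (the final install package follows the standard guide's install step):
wget -q -O - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/4.0.1/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update && sudo apt install rocm-dkms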
https://stackoverflow.com/questions/66677087/
Pytorch crashes cuda on wrong line
How to see which python line causes a cuda crash down the line in Pytorch, which executes asynchronous code outside of the GIL? Here is a case where I had Pytorch crash cuda, running this code on this dataset and every run would crash with the debugger on a different python line, making it very difficult to debug.
I found an answer in a completely unrelated thread in the forums. Couldn't find a Googleable answer, so posting here for future users' sake. Since CUDA calls are executed asynchronously, you should run your code with CUDA_LAUNCH_BLOCKING=1 python script.py This makes sure the right line of code will throw the error message.
https://stackoverflow.com/questions/66677500/
Pytorch Siamese NN with BERT for sentence matching
I'm trying to build a Siamese neural network using pytorch in which I feed BERT word embeddings and try to find whether two sentences are similar or not (imagine duplicate posts matching, product matching etc). Here's the model: class SiameseNetwork(torch.nn.Module): def __init__(self): super(SiameseNetwork, self).__init__() self.brothers = torch.nn.Sequential( torch.nn.Linear(512 * 768, 512), torch.nn.BatchNorm1d(512), torch.nn.ReLU(inplace=True), torch.nn.Linear(512, 256), torch.nn.BatchNorm1d(256), torch.nn.ReLU(inplace=True), torch.nn.Linear(256, 32), ) self.final = torch.nn.Sequential( torch.nn.Linear(32, 16), torch.nn.ReLU(inplace=True), torch.nn.Linear(16, 2), ) def forward(self, left, right): outputLeft = self.brothers(left) outputRight = self.brothers(right) output = self.final((outputLeft - outputRight) ** 2) return output bros = SiameseNetwork() bros = bros.to(device) Criterion and optimizer: criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(params=bros.parameters(), lr=0.001) The training loop: for batch in tqdm(tLoader, desc=f"Train epoch: {epoch+1}"): a = batch[0].to(device) b = batch[1].to(device) y = torch.unsqueeze(batch[2].type(torch.FloatTensor), 1).to(device) optimizer.zero_grad() output = bros(a,b) loss = criterion(output, y) loss.backward() trainingLoss += loss.item() optimizer.step() Now, this seems to be working, as it produces results that are reasonable, but the validation error stops dropping at 0.13 after just a few epochs. I can't find a lot of material on this kind of NN in Pytorch. Are there ways to optimize it? Am I doing something wrong?
Your first layer is severely overparameterized and prone to overfitting (counts a total of 201 million parameters). I assume the shape 512 * 768 reflects the number of tokens times their dimensionality; if that's the case, you need to rethink your architecture. You need some sort of weight sharing or pooling strategy to reduce the num_words * dim input to a fixed representation (that's exactly why recurrent networks replaced the fully-connected varieties for sentence encoding). Specifically in transformer-based architectures, the [CLS] token (token number 0, prefixing the input) is typically used as the "summary" token for sequence- and bisequence-level tasks.
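A minimal sketch of that CLS-pooling idea (768 is BERT-base's hidden size; the layer sizes are assumptions carried over from the question):
import torch

class SiameseNetwork(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.brothers = torch.nn.Sequential(
            torch.nn.Linear(768, 256),   # operate on the [CLS] vector, not on 512 * 768 inputs
            torch.nn.ReLU(inplace=True),
            torch.nn.Linear(256, 32),
        )
        self.final = torch.nn.Sequential(
            torch.nn.Linear(32, 16),
            torch.nn.ReLU(inplace=True),
            torch.nn.Linear(16, 2),
        )

    def forward(self, left, right):
        # left / right: (batch, seq_len, 768) token embeddings from BERT
        left_cls, right_cls = left[:, 0, :], right[:, 0, :]   # token 0 is [CLS]
        diff = (self.brothers(left_cls) - self.brothers(right_cls)) ** 2
        return self.final(diff)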
https://stackoverflow.com/questions/66678360/
Sorting a list of tensors by their length in Pytorch
I have a list of tensors in the form list = [tensor([1,2]), tensor([3, 4, 5])] and would like to order it in descending order based on the length of the tensors. This means the sorted list should look like list = [tensor([3, 4, 5]), tensor([1, 2])]. Using .sort(key=length) does not work, and I have also tried using .sort(key=lambda x: len(x)) unsuccessfully.
You should avoid using python built-ins (list) for your variable names. You can sort as follows: list_tensors = [torch.tensor([1,2]), torch.tensor([3, 4, 5])] print(sorted(list_tensors, key=lambda x: x.size()[0])) which will output: [tensor([1, 2]), tensor([3, 4, 5])] Or in descending order: list_tensors = [torch.tensor([1,2]), torch.tensor([3, 4, 5])] print(sorted(list_tensors, key=lambda x: x.size()[0], reverse=True)) output: [tensor([3, 4, 5]), tensor([1, 2])]
https://stackoverflow.com/questions/66681859/
Why is this neural network performing poorly on MNIST?
Hi, I am building a neural network in pytorch to classify MNIST and for the life of me, I can't seem to find out why this network won't perform above 7% accuracy. Any guidance would be nice. import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from keras.datasets import mnist from keras.utils.np_utils import to_categorical import numpy as np from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, confusion_matrix (X_train, Y_train), (X_test, Y_test) = mnist.load_data() X_train = X_train.astype("float32")/255 X_test = X_test.astype("float32")/255 X_train = X_train.reshape(X_train.shape[0],(X_train.shape[1] * X_train.shape[2])); X_test = X_test.reshape(X_test.shape[0],(X_test.shape[1] * X_test.shape[2])); class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.lin_1 = nn.Linear(784, 128) self.lin_2 = nn.Linear(128, 64) self.lin_3 = nn.Linear(64, 10) def forward(self,x) : x = self.lin_1(x) x = torch.relu(x) x = self.lin_2(x) x = torch.relu(x) x = self.lin_3(x) x = torch.softmax(x, dim=0) return x net = Net(); loss = torch.nn.CrossEntropyLoss(); optimizer = torch.optim.SGD(net.parameters(),lr = 0.01); X_train = torch.from_numpy(X_train); X_test = torch.from_numpy(X_test); y_train = torch.from_numpy(Y_train); y_test = torch.from_numpy(Y_test) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu"); X_train.to(device); X_test.to(device); y_train.to(device); y_test.to(device); net.to(device); loss.to(device); y_train = y_train.type(torch.long) y_test = y_test.type(torch.long) net.train() for epoch in range(10): #pred = torch.max(net(X_train),1); pred = net(X_train.to(device)); train_loss = loss(pred,y_train.to(device)); optimizer.zero_grad() train_loss.backward() optimizer.step() net.eval() pred = torch.max(net(X_test.to(device)),1)[1]; print('The accuracy for pytorch is ' , accuracy_score(y_test.cpu().numpy(),pred.cpu().numpy())); I feel like I have to transform the data somehow. That's why I am dividing the training and testing data by 255, and the network expects a float for the input and a long for the output. Here is a numpy version I made without pytorch from keras.datasets import mnist from keras.utils.np_utils import to_categorical (X_train, Y_train), (X_test, Y_test) = mnist.load_data() X_train = X_train.astype("float32")/255 X_test = X_test.astype("float32")/255 X_train = X_train.reshape(X_train.shape[0],(X_train.shape[1] * X_train.shape[2])); X_test = X_test.reshape(X_test.shape[0],(X_test.shape[1] * X_test.shape[2])); Y_train = to_categorical(Y_train); Y_test = to_categorical(Y_test) import numpy as np print(Y_test.shape) class DNN(): def __init__(self, sizes, epochs=10, lr = 0.01): self.sizes = sizes self.epochs = epochs self.lr = lr self.params = self.initialization(); def ReLu(self, x, derivative=False): if derivative: return 1. * (x > 0) else: return x * (x > 0) def softmax(self, x, derivative=False): # Numerically stable with large exponentials exps = np.exp(x - x.max()) if derivative: return exps / np.sum(exps, axis=0) * (1 - exps / np.sum(exps, axis=0)) return exps / np.sum(exps, axis=0) def initialization(self): # number of nodes in each layer input_layer=self.sizes[0] hidden_1=self.sizes[1] hidden_2=self.sizes[2] output_layer=self.sizes[3] params = { "W1":np.random.randn(hidden_1, input_layer) * np.sqrt(1. / hidden_1), "W2":np.random.randn(hidden_2, hidden_1) * np.sqrt(1. / hidden_2), "W3":np.random.randn(output_layer, hidden_2) * np.sqrt(1. / output_layer) } return params def forward (self,X_train): self.params["X0"] = X_train; self.params["Z1"] = np.dot(self.params["W1"], self.params["X0"]) self.params['X1'] = self.ReLu(self.params["Z1"]) self.params['Z2'] = np.dot(self.params["W2"], self.params["X1"]) self.params["X2"] = self.ReLu(self.params["Z2"]) self.params["Z3"] = np.dot(self.params["W3"], self.params["X2"]) self.params["X3"] = self.softmax(self.params["Z3"]) return self.params["X3"] def backpropagation (self, Y_train, output): update = {}; error = 2 * (output - Y_train) / output.shape[0] * self.softmax(self.params["Z3"], derivative=True) update["W3"] = np.outer(error, self.params["X2"]) error = np.dot(self.params["W3"].T, error) * self.ReLu(self.params["Z2"], derivative=True) update["W2"] = np.outer(error, self.params["X1"]) error = np.dot(self.params["W2"].T, error) * self.ReLu(self.params["Z1"], derivative=True) update["W1"] = np.outer(error, self.params["X0"]) return update def updateParams (self,update): for key, value in update.items(): #print(key) self.params[key] -= self.lr * value def test_accuracy(self, X_test, Y_train): predictions = [] for i in range(len(X_test)): output = self.forward(X_test[i]) pred = np.argmax(output) predictions.append(pred == np.argmax(Y_train[i])) return np.mean(predictions) def train(self, X_train, Y_train): for epoch in range(self.epochs): print("epoch ", epoch) for i in range(len(X_train)): output = self.forward(X_train[i]) update = self.backpropagation(Y_train[i], output) self.updateParams(update) dnn = DNN(sizes=[784, 200, 50, 10],epochs=10) dnn.train(X_train, Y_train) print("The accuracy of the numpy network on the test dataset is ", dnn.test_accuracy(X_test,Y_test))
Well, I can immediately tell there are a couple of problems with the code you provided: Please check the documentation for PyTorch's cross entropy loss function. If you read it, you'll notice that torch.nn.CrossEntropyLoss performs the softmax function internally. This means that you shouldn't really be using another torch.softmax as the output activation if you're using nn.CrossEntropyLoss. If for some reason you want the output layer to produce probabilities, you should consider using torch.log_softmax together with nn.NLLLoss instead. If you look at the image I posted below, simply removing x = torch.softmax(x, dim=0) causes the loss to fall, whereas using it causes the loss to stay the same (hence, bad). You are training with way too few epochs. I tried running your code with 3,000 epochs rather than 10, and the end performance is 0.9028 rather than the original 0.1038. You can also see that the loss value drops much more compared to the original implementation (second picture). Edit After taking a look at your NumPy code the problem became clearer. My second point still holds in essence: you're not training your model nearly enough. I somewhat incorrectly used the term "epoch" above, but what I really meant is "steps." If you look at your NumPy code, you have two for loops: the outer one is the number of epochs and the inner one loops through the training data. You're apparently using single-batch training for ten epochs. This implies that you're updating your model's parameters a total of 600,000 times (60,000 training samples * 10 epochs) for the entire process. For your PyTorch code you're feeding the entire training data in one batch and training for ten epochs. This means you're updating your parameters only ten times. If you modify your PyTorch code to be: for epoch in range(10): net.train() for idx, _ in enumerate(X_train): prediction = net(X_train[idx].to(device)) train_loss = loss(prediction.unsqueeze(0), y_train[idx].unsqueeze(0).to(device)) optimizer.zero_grad() train_loss.backward() optimizer.step() net.eval() prediction = torch.max(net(X_test.to(device)), 1)[1] accuracy = accuracy_score(y_test.cpu().numpy(), prediction.cpu().numpy()) print(f"Epoch {epoch + 1} test accuracy is {accuracy}.") then you'll notice that it only takes two epochs for the model to reach 96% accuracy.
https://stackoverflow.com/questions/66683381/
Numpy/Torch : Selecting elements using indices over a dimension
Given an array like below: np.arange(12).reshape(4,3) Out[119]: array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) I want to select a single element from each of the rows using a list of indices [0, 2, 1, 2] to create a 4x1 array of [0, 5, 7, 11]. Is there any easy way to do this indexing? The closest I could find was the gather method in pytorch.
arr[[0,1,2,3], [0,2,1,2]] or if you prefer np.arange(4) for the 1st indexing array.
https://stackoverflow.com/questions/66684378/
Fill tensor with another tensor where mask is true
I need to insert elements of tensor new into a tensor old with a certain probability, let's say that it is 0.8 for simplicity. Substantially this is what masked_fill would do, but it only works with a monodimensional tensor. Actually I am doing prob = torch.rand(trgs.shape, dtype=torch.float32).to(trgs.device) mask = prob < 0.8 dim1, dim2, dim3, dim4 = new.shape for a in range(dim1): for b in range(dim2): for c in range(dim3): for d in range(dim4): old[a][b][c][d] = old[a][b][c][d] if mask[a][b][c][d] else new[a][b][c][d] which is awful. I would like something like prob = torch.rand(trgs.shape, dtype=torch.float32).to(trgs.device) mask = prob < 0.8 old = trgs.multidimensional_masked_fill(mask, new)
I am not sure what some of your objects are, but this should get you to do what you need in short order: old is your existing data. mask is the mask you generated with probability p. new is the new tensor that has elements you want to insert. # torch.where result = old.where(mask, new)
https://stackoverflow.com/questions/66688647/
How to save a list of pytorch models
This is a newbie question. I have trained 8 pytorch convolutional models and put them in a list called models. I can use them for prediction so they are working. I would like to save them. I can't even work out how to save one however. I tried: torch.save(models[0].save_dict(), "test.pth") but this gives me: ModuleAttributeError: 'Net' object has no attribute 'save_dict' type(models[0]) gives me __main__.Net How can I save this list of models?
To save the model weights for one model (for example the 1st), you should do torch.save(models[0].state_dict(), "test0.pth") And to load it back you should instantiate a new object and load the weights: model0 = Net() # or however you instantiate the model in the first place model0.load_state_dict(torch.load("test0.pth")) You can save your other models this way. And don't forget to call model.eval() if you are using it for inference. If you want to save all the models' weights, use a for loop: for i, m in enumerate(models): torch.save(m.state_dict(), f"test{i}.pth") And to load them all, supposing your model instantiation is something like the following: models = [Net() for i in range(8)] # Note that you should adapt this to whatever method you use to create your models in the first place for i, m in enumerate(models): m.load_state_dict(torch.load(f"test{i}.pth")) # if you are using it for inference uncomment the line below # m.eval() and that's it: your variable models is ready for use. Last but not least, you can save the whole list at once (but this approach is discouraged): torch.save(models, "all_models.pth") # to load back models = torch.load("all_models.pth") However, this will cause an error if you try to load the models, for example from another python script where the definition of your class Net is not available.
https://stackoverflow.com/questions/66689639/
How to load a Pytorch model when the parameters are saved as numpy arrays?
On this GitHub repo, I've downloaded the pretrained model senet50_ft. I load it like so: import pickle f = open('pretrained_models/senet50_ft_weight.pkl', 'rb') state_dict = pickle.load(f, encoding='latin1') f.close() The state is loaded, the Github repos also provides the SENet model Class here. So I managed to instanciate that model: model = senet.senet50() Then I Tried to load the state, but I got an error: model.load_state_dict(state_dict) Traceback (most recent call last): File "...\module.py", line 982, in _load_from_state_dict param.copy_(input_param) TypeError: copy_(): argument 'other' (position 1) must be Tensor, not numpy.ndarray During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "...\module.py", line 1037, in load_state_dict load(self) File "...\module.py", line 1035, in load load(child, prefix + name + '.') File "...\module.py", line 1032, in load state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs) File "...\module.py", line 988, in _load_from_state_dict .format(key, param.size(), input_param.size(), ex.args)) TypeError: 'int' object is not callable I tried to convert ndarray to Tensor by doing the following: for key in state_dict.keys(): state_dict[key] = torch.from_numpy(state_dict[key]) But I got an another error and I think I'm not going anywhere. I'm new to PyTorch but I suspect that this model was serialized with an old version of PyTorch. Do you know if a solution exists?
The repository you linked provides its own load_state_dict utility function that does what you want.
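If you would rather do it by hand, a minimal sketch (my own; it assumes the pickle keys match the model's parameter names and shapes) is to copy each numpy array into the model's state dict in place:
import pickle
import torch

with open('pretrained_models/senet50_ft_weight.pkl', 'rb') as f:
    weights = pickle.load(f, encoding='latin1')

own_state = model.state_dict()
for name, param in weights.items():
    if name in own_state:
        # copy_ is in-place, so convert the numpy array to a tensor first
        own_state[name].copy_(torch.from_numpy(param))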
https://stackoverflow.com/questions/66691620/
How can I emulate/run PyTorch model that uses ATen STFT implementation on arm based cpu?
I am trying to run my PyTorch model for ASR on an arm based device without gpu. As far as I know, arm does not support MKL which ATen uses. Naturally, I am getting the following error when I try to make inference: RuntimeError: fft: ATen not compiled with MKL support How can I solve this problem? Are there any alternatives that I can use?
I solved this issue by bypassing PyTorch's stft implementation. This may not be feasible for everyone, but in my case it allowed me to make predictions using my model with no issues on an arm device. The problem stemmed from the _VF.stft call in packages/torch/functional.py. I replaced the line return _VF.stft(input, n_fft, hop_length, win_length, window, normalized, onesided, return_complex) with: librosa_stft = librosa.stft(input.cpu().detach().numpy().reshape(-1), n_fft, hop_length, win_length, window="hann", center=True, pad_mode=pad_mode) librosa_stft = np.array([[a.real, a.imag] for a in librosa_stft]) librosa_stft = np.transpose(librosa_stft, axes=[0, 2, 1]) librosa_stft = np.expand_dims(librosa_stft, 0) librosa_stft = torch.from_numpy(librosa_stft) return librosa_stft This code may be optimized further. I just tried to replicate what PyTorch did by using Librosa. The resulting output is the same in both versions in my case. But you should check your outputs to be sure if you decide to use this method.
https://stackoverflow.com/questions/66693151/
while exporting T5 model to onnx using fastT5 getting "RuntimeError:output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]"
i'm trying to convert T5 model to onnx using the fastT5 library, but getting an error while running the following code from fastT5 import export_and_get_onnx_model from transformers import AutoTokenizer model_name = 't5-small' model = export_and_get_onnx_model(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) t_input = "translate English to French: The universe is a dark forest." token = tokenizer(t_input, return_tensors='pt') tokens = model.generate(input_ids=token['input_ids'], attention_mask=token['attention_mask'], num_beams=2) output = tokenizer.decode(tokens.squeeze(), skip_special_tokens=True) print(output) the error: /usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:244: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-16-80094b7c4f6f> in <module>() 7 input_names=decoder_input_names, 8 output_names=decoder_output_names, ----> 9 dynamic_axes=dyn_axis_params, 10 ) 24 frames /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions) 497 position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) 498 --> 499 scores += position_bias 500 attn_weights = F.softmax(scores.float(), dim=-1).type_as( 501 scores RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2] can someone please help me solve the issue? thank you.
I've checked the repository, it looks like a known issue as reported here : https://github.com/Ki6an/fastT5/issues/1 Developer of the library has posted a solution and created a notebook file here: https://colab.research.google.com/drive/1HuH1Ui3pCBS22hW4djIOyUBP5UW93705?usp=sharing Solution is to modify modeling_t5.py file, at line 494 : # Define this at line 426: int_seq_length = int(seq_length) # Change this at line 494: position_bias = position_bias[:, :, -seq_length:, :] position_bias = position_bias[:, :, -int_seq_length:, :] # Updated version If you don't want to modify the file yourself, you will need to wait until this pull request to be merged into Transformers library.
https://stackoverflow.com/questions/66693724/
I can iterate through my first iteration of PyTorch successfully, but am unable to do a second iteration. Please see code and error I get
I can iterate through my first iteration of PyTorch successfully, but am unable to do a second iteration. Please see my code and error I get if I were to run my code a second time through. current_grad = 0 l_rate = 10**-4 x=torch.tensor([[1.0,2.,4.],[2.,3.,2.]]) y=torch.tensor([1.0,0.]) w=torch.tensor([.5,2.,1.], requires_grad=True) # forward propagate output = x @ w.T y_pred = 1/(1+ math.e**-(output)) # objective function loss = sum(y*(y_pred**.5) + ((1-y) * (1-y_pred)**.5)) / len(y_pred) # now, get gradient over these layers x.requires_grad =True y.requires_grad =True w.requires_grad =True loss.backward() # update only 1 set of weights here. with torch.no_grad(): w = w + (w.grad * l_rate) I get an error at my loss.backward line: TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' How can I fix this so that my w.grad is not NoneType on the second time around?
The main issue with your code is that w = w + (w.grad * l_rate) is assigning a new variable to w. Instead you need to update w in place. with torch.no_grad(): w.copy_(w + w.grad * l_rate) Some other issues Learning rate is much too small for this problem You need to zero the gradients of w after each step since backwards accumulates gradients. Setting requires_grad for x and y is unnecessary since you don't need the gradient of loss w.r.t. these tensors. In your code, the value of "loss" is actually something you want to maximize, since your objective function is maximal when y = y_pred. Generally we wouldn't call this loss, because that would imply you want to minimize it (only a masochist wants to maximize their loss :P). Correcting for these issues import torch l_rate = 0.1 x = torch.tensor([[1.0, 2., 4.], [2., 3., 2.]]) y = torch.tensor([1.0, 0.]) w = torch.tensor([.5, 2., 1.], requires_grad=True) # training loop ... # forward propagate output = x @ w y_pred = torch.sigmoid(output) # objective function objective = torch.mean(y * (y_pred**.5) + ((1 - y) * (1 - y_pred)**.5)) # compute gradient of objective w.r.t. w objective.backward() with torch.no_grad(): # gradient ascent w.copy_(w + (w.grad * l_rate)) # zero the gradients w.grad.zero_()
https://stackoverflow.com/questions/66695858/
Transfer Learning Segmentation Model Performing Significantly Worse on Test Data
I am quite new to the field of semantic segmentation and have recently tried to run the code provided in this paper: Transfer Learning for Brain Tumor Segmentation, which was made available on GitHub. It is a semantic segmentation task that uses the BraTS2020 dataset, comprising 4 modalities: T1, T1ce, T2 and FLAIR. The author utilised a transfer learning approach using ResNet34 weights.
Due to hardware constraints, I had to halve the batch size from 24 to 12. However, after training the model, I noticed a significant drop in performance, with the Dice score (higher is better) of the 3 classes being only around 5-19-11, as opposed to the reported result of 78-87-82 in the paper. The training and validation accuracies, however, seem to behave normally; it is just that the model does not perform well on test data. I selected the model that was produced before overfitting (validation loss starts increasing while training loss is still decreasing), but it yielded equally bad results.
So far I have tried:
Decreasing the learning rate from 1e-3 to 1e-4, which yielded similar results.
Increasing the number of batches fed to the model per training epoch to 200 batches per epoch, to match the number of iterations run in the paper, since I effectively halved the batch size (the paper used 100 batches per epoch with a batch size of 24).
I noticed that image augmentations were applied to the training and validation datasets to increase the robustness of the model during training. Do these augmentations need to be performed on the test set in order to make predictions? There are no resizing transforms; the transforms that are present are Gaussian blur and noise, change in brightness intensity, rotations, elastic deformation, and mirroring, all implemented using the example here.
I'd greatly appreciate help with these questions:
By doubling the number of batches per epoch, it effectively matches the number of iterations performed as in the original paper, since the batch size is halved. Is this the correct approach?
Does the test set data need to be augmented similarly to the training data in order to perform predictions? (Note: no resizing transformations were performed.)
Technically, for a smaller batch size the number of iterations should be higher for convergence. So your approach of doubling the batches per epoch is going to help, but it probably won't give the same performance as the original larger batch size.
Usually, we don't use augmentation on test data. But note the distinction: if a deterministic transformation (e.g. normalization, resizing) applied to training and validation is not applied to the test data, the test performance will be poor, no doubt. Random augmentations like the blur, noise, and rotations you list do not need to be repeated at test time.
You can try test-time augmentation though, even though it's not very common for segmentation tasks: https://github.com/qubvel/ttach
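To make the test-time augmentation suggestion concrete, here is a minimal sketch using the ttach library linked above. Here model and images are placeholders for your trained segmentation network and a test batch, and the particular transform set is my choice for illustration, not something from the paper:
import torch
import ttach as tta

# Wrap a trained segmentation model so each test batch is predicted under
# flips and 90-degree rotations, with the per-view predictions averaged.
tta_model = tta.SegmentationTTAWrapper(
    model,                       # placeholder: your trained segmentation model
    tta.aliases.d4_transform(),  # 8 views: identity, flips, 90/180/270 rotations
    merge_mode="mean",
)

with torch.no_grad():
    masks = tta_model(images)    # placeholder: images is an (N, C, H, W) test batch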
https://stackoverflow.com/questions/66697928/
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
I am doing training and put the dataset inside the data folder. The structure looks like this:
--data
-----mars
---------bbox_train
---------bbox_test
---------info
Many developers said that this is a label problem, but I am not sure, because the labels are in the right place.
Args: Namespace(arch='resnet50graphpoolparthyper', concat=False, dataset='mars', dropout=0.1, eval_step=100, evaluate=False, gamma=0.1, gpu_devices='0', height=256, htri_only=False, lr=0.0003, margin=0.3, max_epoch=800, nheads=8, nhid=512, num_instances=4, part1=4, part2=8, part3=2, pool='avg', pretrained_model='/home/jiyang/Workspace/Works/video-person-reid/3dconv-person-reid/pretrained_models/resnet-50-kinetics.pth', print_freq=80, save_dir='log_hypergraphsagepart', seed=1, seq_len=8, start_epoch=0, stepsize=200, test_batch=1, train_batch=32, use_cpu=False, warmup=True, weight_decay=0.0005, width=128, workers=4, xent_only=False)
==========
Currently using GPU 0
Initializing dataset mars
=> MARS loaded
Dataset statistics:
------------------------------
subset   | # ids | # tracklets
------------------------------
train    |   625 |        8298
query    |   626 |        1980
gallery  |   622 |        9330
------------------------------
total    |  1251 |       19608
number of images per tracklet: 2 ~ 920, average 59.5
------------------------------
Initializing model: resnet50graphpoolparthyper
Model size: 44.17957M
==> Epoch 1/800 lr:1.785e-05
Traceback (most recent call last):
  File "main_video_person_reid_hypergraphsage_part.py", line 357, in <module>
    main()
  File "main_video_person_reid_hypergraphsage_part.py", line 220, in main
    train(model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)
  File "main_video_person_reid_hypergraphsage_part.py", line 257, in train
    outputs, features = model(imgs)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/models/ResNet_hypergraphsage_part.py", line 621, in forward
    x = self.base(x)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/models/resnet.py", line 213, in forward
    x = self.conv1(x)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 396, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
Installing torch with CUDA 11.1 using the following command fixed the initial issue with torch 1.8:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
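As a quick sanity check after reinstalling, a minimal sketch like the one below exercises the cuDNN convolution path that originally raised CUDNN_STATUS_NOT_INITIALIZED:
import torch

print(torch.__version__)          # should report a +cu111 build
print(torch.cuda.is_available())  # should be True
# A tiny conv forward pass goes through cuDNN; if this runs, the error is gone.
x = torch.randn(8, 3, 32, 32, device="cuda")
conv = torch.nn.Conv2d(3, 8, kernel_size=3).cuda()
print(conv(x).shape)              # torch.Size([8, 8, 30, 30])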
https://stackoverflow.com/questions/66701442/
Maximum Input Length of words/sentences of the Pegasus Model in the Transformers library
In the Transformers library, what is the maximum input length of words and/or sentences of the Pegasus model? I read in the Pegasus research paper that the max was 512 tokens, but how many words and/or sentences is that? Also, can you increase the maximum number of 512 tokens?
In the Transformers library, what is the maximum input length of words and/or sentences of the Pegasus model?
It actually depends on your pretraining. You can create a pegasus model that supports a length of 100 tokens or 10000 tokens.
For example, the model google/pegasus-cnn_dailymail supports 1024 tokens, while google/pegasus-xsum supports 512:
from transformers import PegasusTokenizerFast

t = PegasusTokenizerFast.from_pretrained("google/pegasus-xsum")
t2 = PegasusTokenizerFast.from_pretrained("google/pegasus-cnn_dailymail")
print(t.max_len_single_sentence)
print(t2.max_len_single_sentence)
Output:
511
1023
The numbers are reduced by one because of the special token that is added to each sequence.
I read in the Pegasus research paper that the max was 512 tokens, but how many words and/or sentences is that?
That depends on your vocabulary.
from transformers import PegasusTokenizerFast

t = PegasusTokenizerFast.from_pretrained("google/pegasus-xsum")
print(t.tokenize('This is a test sentence'))
print("I know {} tokens".format(len(t)))
Output:
['▁This', '▁is', '▁a', '▁test', '▁sentence']
I know 96103 tokens
A word can be a token, but it can also be split into several tokens:
print(t.tokenize('neuropsychiatric conditions'))
Output:
['▁neuro', 'psych', 'i', 'atric', '▁conditions']
Also, can you increase the maximum number of 512 tokens?
Yes, you can train a model with a pegasus architecture for a different input length, but this is costly.
https://stackoverflow.com/questions/66703229/
How to use fine-tuned BERT model for sentence encoding?
I fine-tuned the BERT base model on my own dataset following the script here: https://github.com/cedrickchee/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning
I saved the model as a .pt file and I want to use it now for a sentence similarity task. Unfortunately, it is not clear to me how to load the fine-tuned model. I tried the following:
model = BertModel.from_pretrained('trained_model.pt')
model.eval()
This doesn't work. It says:
ReadError: not a gzip file
So apparently, loading a .pt file with the from_pretrained method is not possible. Can anyone help me out here? Thanks a lot! :)
Edit: I saved the model in an s3 bucket as follows:
# Convert model to buffer
buffer = io.BytesIO()
torch.save(model, buffer)
# Save in s3 bucket
output_model_file = output_folder + "trained_model.pt"
s3_.put_object(Bucket="power-plant-embeddings", Key=output_model_file, Body=buffer.getvalue())
To load a model with BertModel.from_pretrained() you need to have saved it using save_pretrained() (link). Any other storage method would require the corresponding load. I am not familiar with S3, but I assume you can use get_object (link) to retrieve the model, and then save it using the Hugging Face API. From then on, you should be able to use from_pretrained() normally.
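A minimal sketch of that flow, assuming boto3 for the S3 side. The bucket name is taken from the question; the key names and local paths are hypothetical, and note that save_pretrained() also writes a config.json that must travel with the weights:
import boto3
from transformers import BertModel

# 1) Save with the Hugging Face API instead of torch.save on the raw model.
#    This writes pytorch_model.bin + config.json into the directory.
model.save_pretrained("trained_model/")   # model: your fine-tuned BertModel

# 2) After uploading both files to S3, pull them back down with get_object:
s3 = boto3.client("s3")
for name in ("pytorch_model.bin", "config.json"):
    obj = s3.get_object(Bucket="power-plant-embeddings", Key="trained_model/" + name)
    with open("trained_model/" + name, "wb") as f:
        f.write(obj["Body"].read())

# 3) Now from_pretrained works on the directory:
model = BertModel.from_pretrained("trained_model/")
model.eval()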
https://stackoverflow.com/questions/66707770/
My OpenCV Live Webcam Demo Doesn't Show Accurate Emotions
I developed code by following an incomplete tutorial for emotion recognition using PyTorch. I had so many errors, but I fixed them thanks to the other questions here. But I am simply stuck at this one. I am running my code, then my webcam opens, but I see the "Neutral" emotion all the time, even though I perform different emotions. On the console, however, I am seeing other emotions recorded, like:
tensor(0, device='cuda:0')
tensor(6, device='cuda:0')
tensor(4, device='cuda:0')
tensor(4, device='cuda:0')
Do you have any idea? Here's my code (torch, F, net and device are defined earlier in my script by the training part of the tutorial):
import cv2  # pip install opencv-python
# pip install opencv-contrib-python (full package)
# from deepface import DeepFace  # pip install deepface
import numpy as np

path = "haarcascade_frontalface_default.xml"
font_scale = 1.5
font = cv2.FONT_HERSHEY_PLAIN

# set the rectangle background to white
rectangle_bgr = (255, 255, 255)
# make a black image
img = np.zeros((500, 500))
# set some text
text = "VİDGA Projesi"
# get the width and height of the text box
(text_width, text_height) = cv2.getTextSize(text, font, fontScale=font_scale, thickness=1)[0]
# set the text start position
text_offset_x = 10
text_offset_y = img.shape[0] - 25
# make the coords of the box with a small padding of two pixels
box_coords = ((text_offset_x, text_offset_y), (text_offset_x + text_width + 2, text_offset_y - text_height - 2))
cv2.rectangle(img, box_coords[0], box_coords[1], rectangle_bgr, cv2.FILLED)
cv2.putText(img, text, (text_offset_x, text_offset_y), font, fontScale=font_scale, color=(0, 0, 0), thickness=1)

cap = cv2.VideoCapture(1)
# Check if the webcam is opened correctly
if not cap.isOpened():
    cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise IOError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    # eye_Cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
    faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    if ret == True:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # print(faceCascade.empty())
        faces = faceCascade.detectMultiScale(gray, 1.1, 4)
        for x, y, w, h in faces:
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = frame[y:y+h, x:x+w]
            cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
            facess = faceCascade.detectMultiScale(roi_gray)
            if len(facess) == 0:
                print("Face not detected")
            else:
                for (ex, ey, ew, eh) in facess:
                    face_roi = roi_color[ey: ey+eh, ex: ex+ew]  # cropping the face

        graytemp = cv2.cvtColor(face_roi, cv2.COLOR_BGR2GRAY)
        final_image = cv2.resize(graytemp, (48, 48))
        final_image = np.expand_dims(final_image, axis=0)  # add third dimension
        final_image = np.expand_dims(final_image, axis=0)  # add fourth dimension
        final_image = final_image / 255.0  # normalization
        dataa = torch.from_numpy(final_image)
        dataa = dataa.type(torch.FloatTensor)
        dataa = dataa.to(device)
        outputs = net(dataa)
        Pred = F.softmax(outputs, dim=1)
        Predictions = torch.argmax(Pred)
        print(Predictions)

        font = cv2.FONT_HERSHEY_SIMPLEX
        font_scale = 1.5
        font = cv2.FONT_HERSHEY_PLAIN

        if ((Predictions) == 0):
            status = "Angry"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 1):
            status = "Disgust"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 2):
            status = "Fear"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 3):
            status = "Happy"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 4):
            status = "Sad"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 5):
            status = "Surprised"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))
        elif ((Predictions) == 6):
            status = "Neutral"
            x1, y1, w1, h1 = 0, 0, 175, 5
            # Draw black background rectangle
            cv2.rectangle(frame, (x1, x1), (x1 + w1, y1 + h1), (0, 0, 0), -1)
            # Add text
            cv2.putText(frame, status, (x1 + int(w1/10), y1 + int(h1/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            cv2.putText(frame, status, (100, 150), font, 3, (0, 0, 255), 2, cv2.LINE_4)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255))

    if ret == True:
        cv2.imshow('VIDGA Emotion Recognition', frame)
        if cv2.waitKey(2) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows
Predictions is a tensor, so you need its value and not the tensor itself. Change this line:
Predictions = torch.argmax(Pred)
to this:
Predictions = torch.argmax(Pred).item()
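As a side note on the design, once Predictions is a plain int you could also replace the long if/elif chain with a lookup table. A minimal sketch; the label order is taken from the question's chain and is otherwise an assumption:
# Index -> label mapping matching the order used in the question's if/elif chain.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprised", "Neutral"]

pred_idx = torch.argmax(Pred).item()  # plain Python int
status = EMOTIONS[pred_idx]
# ... then draw `status` on the frame once, instead of one branch per emotion.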
https://stackoverflow.com/questions/66708076/
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. GPU not detected by pytorch
Having trouble with CUDA + Pytorch this is the error. I reinstalled CUDA and cudnn multiple times. Conda env is detecting GPU but its giving errors with pytorch and certain cuda libraries. I tried with Cuda 10.1 and 10.0, and cudnn version 8 and 7.6.5, Added cuda to path and everything. However anaconda is showing cuda tool kit 9.0 is installed, whilst I clearly installed 10.0, so I am not entirely sure what's the deal with that. => loading model from models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth Traceback (most recent call last): File "hydroman2.py", line 580, in <module> pose_model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False) File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 773, in _legacy_load result = unpickler.load() File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 729, in persistent_load deserialized_objects[root_key] = restore_location(obj, location) File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 178, in default_restore_location result = fn(storage, location) File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 154, in _cuda_deserialize device = validate_cuda_device(location) File "C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py", line 138, in validate_cuda_device raise RuntimeError('Attempting to deserialize object on a CUDA ' RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. System info System info: -------------------------------------------------------------------------------- __Time Stamp__ Report started (local time) : 2021-03-19 19:59:06.957967 UTC start time : 2021-03-19 15:59:06.957967 Running time (s) : 4.003899 __Hardware Information__ Machine : AMD64 CPU Name : znver1 CPU Count : 12 Number of accessible CPUs : 12 List of accessible CPUs cores : 0 1 2 3 4 5 6 7 8 9 10 11 CFS Restrictions (CPUs worth of runtime) : None CPU Features : 64bit adx aes avx avx2 bmi bmi2 clflushopt clzero cmov cx16 cx8 f16c fma fsgsbase fxsr lzcnt mmx movbe mwaitx pclmul popcnt prfchw rdrnd rdseed sahf sha sse sse2 sse3 sse4.1 sse4.2 sse4a ssse3 xsave xsavec xsaveopt xsaves Memory Total (MB) : 16334 Memory Available (MB) : 8787 __OS Information__ Platform Name : Windows-10-10.0.19041-SP0 Platform Release : 10 OS Name : Windows OS Version : 10.0.19041 OS Specific Version : 10 10.0.19041 SP0 Multiprocessor Free Libc Version : ? __Python Information__ Python Compiler : MSC v.1916 64 bit (AMD64) Python Implementation : CPython Python Version : 3.8.5 Python Locale : en_US.cp1252 __LLVM Information__ LLVM Version : 10.0.1 __CUDA Information__ CUDA Device Initialized : True CUDA Driver Version : 11020 CUDA Detect Output: Found 1 CUDA devices id 0 b'GeForce GTX 1070' [SUPPORTED] compute capability: 6.1 pci device id: 0 pci bus id: 6 Summary: 1/1 devices are supported CUDA Librairies Test Output: Finding cublas from <unknown> named cublas.dll trying to open library... ERROR: failed to open cublas: Could not find module 'cublas.dll' (or one of its dependencies). Try using the full path with constructor syntax. 
Finding cusparse from <unknown> named cusparse.dll trying to open library... ERROR: failed to open cusparse: Could not find module 'cusparse.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding cufft from <unknown> named cufft.dll trying to open library... ERROR: failed to open cufft: Could not find module 'cufft.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding curand from <unknown> named curand.dll trying to open library... ERROR: failed to open curand: Could not find module 'curand.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding nvvm from <unknown> named nvvm.dll trying to open library... ERROR: failed to open nvvm: Could not find module 'nvvm.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding cudart from <unknown> named cudart.dll trying to open library... ERROR: failed to open cudart: Could not find module 'cudart.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding libdevice from <unknown> searching for compute_20... ERROR: can't open libdevice for compute_20 searching for compute_30... ERROR: can't open libdevice for compute_30 searching for compute_35... ERROR: can't open libdevice for compute_35 searching for compute_50... ERROR: can't open libdevice for compute_50 __ROC information__ ROC Available : False ROC Toolchains : None HSA Agents Count : 0 HSA Agents: None HSA Discrete GPUs Count : 0 HSA Discrete GPUs : None __SVML Information__ SVML State, config.USING_SVML : True SVML Library Loaded : True llvmlite Using SVML Patched LLVM : True SVML Operational : True __Threading Layer Information__ TBB Threading Layer Available : False +--> Disabled due to Unknown import problem. OpenMP Threading Layer Available : True +-->Vendor: MS Workqueue Threading Layer Available : True +-->Workqueue imported successfully. __Numba Environment Variable Information__ None found. 
__Conda Information__ Conda Build : 3.20.5 Conda Env : 4.9.2 Conda Platform : win-64 Conda Python Version : 3.8.5.final.0 Conda Root Writable : True __Installed Packages__ _pytorch_select 1.1.0 cpu anaconda _tflow_select 2.3.0 mkl anaconda absl-py 0.12.0 pypi_0 pypi alabaster 0.7.12 pypi_0 pypi appdirs 1.4.3 py36h28b3542_0 anaconda argparse 1.4.0 pypi_0 pypi asn1crypto 1.3.0 py36_0 anaconda astor 0.8.1 pyh9f0ad1d_0 conda-forge astunparse 1.6.3 pypi_0 pypi atomicwrites 1.4.0 py_0 anaconda attrs 19.3.0 py_0 anaconda babel 2.9.0 pypi_0 pypi backcall 0.2.0 py_0 anaconda backports 1.0 py_2 anaconda backports.weakref 1.0.post1 py36h9f0ad1d_1001 conda-forge blas 1.0 mkl anaconda bleach 1.5.0 py36_0 conda-forge blinker 1.4 py_1 conda-forge brotlipy 0.7.0 py36he774522_1000 anaconda bzip2 1.0.8 he774522_0 anaconda ca-certificates 2020.10.14 0 anaconda cachetools 4.1.1 py_0 anaconda certifi 2020.6.20 py36_0 anaconda cffi 1.14.0 py36h7a1dbc1_0 anaconda chardet 3.0.4 py36_1003 anaconda click 7.1.2 pyh9f0ad1d_0 conda-forge cloudpickle 1.4.1 py_0 anaconda colorama 0.4.3 py_0 anaconda contextlib2 0.6.0.post1 py_0 anaconda cpuonly 1.0 0 pytorch cryptography 2.9.2 py36h7a1dbc1_0 anaconda cudatoolkit 9.0 1 anaconda cudnn 7.6.5 cuda9.0_0 anaconda curl 7.71.0 h2a8f88b_0 anaconda cycler 0.10.0 py36h009560c_0 anaconda cython 0.29.22 pypi_0 pypi cytoolz 0.10.1 py36he774522_0 anaconda dask-core 2.19.0 py_0 anaconda decorator 4.4.2 py_0 anaconda defusedxml 0.6.0 py_0 anaconda dlib 19.20 py36h5653133_1 conda-forge docker-py 4.2.1 py36h9f0ad1d_0 conda-forge docker-pycreds 0.4.0 py_0 anaconda docutils 0.16 pypi_0 pypi easydict 1.7 pypi_0 pypi entrypoints 0.3 py36_0 anaconda ffmpeg 2.7.0 0 menpo flake8 3.8.3 py_0 anaconda flake8-polyfill 1.0.2 py36_0 anaconda flake8-quotes 3.0.0 pyh9f0ad1d_0 conda-forge flatbuffers 1.12 pypi_0 pypi freetype 2.10.2 hd328e21_0 anaconda gast 0.2.2 pypi_0 pypi geos 3.8.1 h33f27b4_0 anaconda gettext 0.19.8.1 hb01d8f6_1002 conda-forge git 2.23.0 h6bb4b03_0 anaconda glib 2.58.3 py36h04c7ab9_1004 conda-forge google-auth 1.28.0 pypi_0 pypi google-auth-oauthlib 0.4.3 pypi_0 pypi google-pasta 0.2.0 pyh8c360ce_0 conda-forge grpcio 1.32.0 pypi_0 pypi h5py 2.10.0 py36h5e291fa_0 anaconda hdf5 1.10.4 h7ebc959_0 anaconda html5lib 0.9999999 py36_0 conda-forge icc_rt 2019.0.0 h0cc432a_1 anaconda icu 58.2 ha925a31_3 anaconda idna 2.10 py_0 anaconda imageio 2.8.0 py_0 anaconda imageio-ffmpeg 0.4.2 py_0 conda-forge imagesize 1.2.0 pypi_0 pypi imgaug 0.4.0 pypi_0 pypi importlib-metadata 1.7.0 py36_0 anaconda importlib_metadata 1.7.0 0 anaconda intel-openmp 2019.4 245 anaconda ipykernel 5.3.0 py36h5ca1d4c_0 anaconda ipyparallel 6.3.0 pypi_0 pypi ipython 7.16.1 py36h5ca1d4c_0 anaconda ipython_genutils 0.2.0 py36_0 anaconda ipywidgets 7.5.1 py_0 anaconda jedi 0.17.1 py36_0 anaconda jinja2 2.11.2 py_0 anaconda joblib 0.15.1 py_0 anaconda jpeg 9d he774522_0 conda-forge json-tricks 3.15.5 pypi_0 pypi jsonschema 3.2.0 py36_0 anaconda jupyter 1.0.0 py36_7 anaconda jupyter_client 6.1.3 py_0 anaconda jupyter_console 6.1.0 py_0 anaconda jupyter_core 4.6.3 py36_0 anaconda keras-applications 1.0.8 py_1 anaconda keras-preprocessing 1.1.2 pypi_0 pypi kiwisolver 1.2.0 py36h74a9793_0 anaconda krb5 1.18.2 hc04afaa_0 anaconda leptonica 1.78.0 h919f142_2 conda-forge libarchive 3.3.3 h0643e63_5 anaconda libcurl 7.71.0 h2a8f88b_0 anaconda libffi 3.2.1 h6538335_1007 conda-forge libgpuarray 0.7.6 hfa6e2cd_1003 conda-forge libiconv 1.15 vc14h29686d3_5 [vc14] anaconda libmklml 2019.0.5 0 anaconda libpng 1.6.37 h2a8f88b_0 anaconda 
libprotobuf 3.12.3 h7bd577a_0 anaconda libsodium 1.0.18 h62dcd97_0 anaconda libssh2 1.9.0 h7a1dbc1_1 anaconda libtiff 4.1.0 h56a325e_0 anaconda libwebp 1.0.2 hfa6e2cd_5 conda-forge libxml2 2.9.10 h464c3ec_1 anaconda libxslt 1.1.34 he774522_0 anaconda lxml 4.5.0 py36h1350720_0 anaconda lz4-c 1.8.1.2 h2fa13f4_0 anaconda lzo 2.10 he774522_2 anaconda m2w64-gcc-libgfortran 5.3.0 6 conda-forge m2w64-gcc-libs 5.3.0 7 conda-forge m2w64-gcc-libs-core 5.3.0 7 conda-forge m2w64-gmp 6.1.0 2 conda-forge m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge mako 1.1.0 py_0 anaconda markdown 3.3.4 pypi_0 pypi markupsafe 1.1.1 py36he774522_0 anaconda matplotlib 3.1.3 py36_0 anaconda matplotlib-base 3.1.3 py36h64f37c6_0 anaconda mccabe 0.6.1 py36_1 anaconda mistune 0.8.4 py36he774522_0 anaconda mkl 2018.0.3 1 anaconda mkl_fft 1.0.6 py36hdbbee80_0 anaconda mkl_random 1.0.1 py36h77b88f5_1 anaconda mock 4.0.3 pypi_0 pypi more-itertools 8.4.0 py_0 anaconda moviepy 1.0.1 py_0 conda-forge msys2-conda-epoch 20160418 1 conda-forge nbconvert 5.6.1 py36_0 anaconda nbformat 5.0.7 py_0 anaconda networkx 2.4 py_0 anaconda ninja 1.9.0 py36h74a9793_0 anaconda nose 1.3.7 pypi_0 pypi notebook 6.0.3 py36_0 anaconda numpy 1.19.5 pypi_0 pypi oauthlib 3.1.0 py_0 anaconda olefile 0.46 py36_0 anaconda opencv-python 3.4.1.15 pypi_0 pypi openjpeg 2.3.1 h57dd2e7_3 conda-forge openssl 1.1.1h he774522_0 anaconda opt-einsum 3.3.0 pypi_0 pypi packaging 20.4 py_0 anaconda pandas 1.0.3 py36h47e9c7a_0 anaconda pandoc 2.9.2.1 0 anaconda pandocfilters 1.4.2 py36_1 anaconda parso 0.7.0 py_0 anaconda pcre 8.44 ha925a31_0 anaconda pep8-naming 0.8.2 py36_0 anaconda pickleshare 0.7.5 py36_0 anaconda pillow 7.1.2 py36hcc1f983_0 anaconda pip 20.2.4 py36_0 anaconda pluggy 0.13.1 py36_0 anaconda poppler 0.87.0 hdbe765f_0 conda-forge poppler-data 0.4.9 1 conda-forge proglog 0.1.9 py_0 conda-forge prometheus_client 0.8.0 py_0 anaconda prompt-toolkit 3.0.5 py_0 anaconda prompt_toolkit 3.0.5 0 anaconda protobuf 3.12.3 py36h33f27b4_0 anaconda psutil 5.8.0 pypi_0 pypi py 1.9.0 py_0 anaconda pyasn1 0.4.8 py_0 anaconda pyasn1-modules 0.2.8 pypi_0 pypi pycocotools 2.0 pypi_0 pypi pycodestyle 2.6.0 py_0 anaconda pycparser 2.20 py_0 anaconda pyflakes 2.2.0 py_0 anaconda pygments 2.6.1 py_0 anaconda pygpu 0.7.6 py36h7725771_1001 conda-forge pyjwt 1.7.1 py_0 conda-forge pyopenssl 19.1.0 py36_0 anaconda pyparsing 2.4.7 py_0 anaconda pyqt 5.9.2 py36h6538335_2 anaconda pyreadline 2.1 py36_1001 conda-forge pyrsistent 0.16.0 py36he774522_0 anaconda pysocks 1.7.1 py36_0 anaconda pytesseract 0.3.3 pyh8c360ce_0 conda-forge pytest 5.4.3 py36_0 anaconda python 3.6.10 h9f7ef89_1 anaconda python-dateutil 2.8.1 py_0 anaconda python_abi 3.6 1_cp36m conda-forge pytorch 1.5.1 py3.6_cpu_0 [cpuonly] pytorch pytz 2020.1 py_0 anaconda pywavelets 1.1.1 py36he774522_0 anaconda pywin32 223 py36hfa6e2cd_1 anaconda pywinpty 0.5.7 py36_0 anaconda pyyaml 5.3.1 py36he774522_0 anaconda pyzmq 19.0.1 py36ha925a31_1 anaconda qt 5.9.7 vc14h73c81de_0 [vc14] anaconda qtconsole 4.7.5 py_0 anaconda qtpy 1.9.0 py_0 anaconda requests 2.24.0 py_0 anaconda requests-oauthlib 1.3.0 pyh9f0ad1d_0 conda-forge rsa 4.6 pyh9f0ad1d_0 conda-forge scikit-image 0.16.2 py36h47e9c7a_0 anaconda scikit-learn 0.20.1 py36hb854c30_0 anaconda scipy 1.4.1 pypi_0 pypi send2trash 1.5.0 py36_0 anaconda setuptools 50.3.0 py36h9490d1a_1 anaconda shapely 1.6.4 pypi_0 pypi simplejson 3.17.0 py36he774522_0 anaconda sip 4.19.8 py36h6538335_0 anaconda six 1.15.0 py_0 anaconda sklearn 0.0 pypi_0 pypi slidingwindow 0.0.14 pypi_0 
pypi snowballstemmer 2.1.0 pypi_0 pypi sphinx 3.5.2 pypi_0 pypi sphinxcontrib-applehelp 1.0.2 pypi_0 pypi sphinxcontrib-devhelp 1.0.2 pypi_0 pypi sphinxcontrib-htmlhelp 1.0.3 pypi_0 pypi sphinxcontrib-jsmath 1.0.1 pypi_0 pypi sphinxcontrib-qthelp 1.0.3 pypi_0 pypi sphinxcontrib-serializinghtml 1.1.4 pypi_0 pypi sqlite 3.32.3 h2a8f88b_0 anaconda swig 3.0.12 h047fa9f_3 anaconda tbb 2020.0 h74a9793_0 anaconda tbb4py 2020.0 py36h74a9793_0 anaconda tensorboard 1.13.1 pypi_0 pypi tensorboard-plugin-wit 1.8.0 pypi_0 pypi tensorboardx 1.6 py_0 conda-forge tensorflow 2.4.1 pypi_0 pypi tensorflow-estimator 1.13.0 pypi_0 pypi tensorflow-gpu 1.13.1 pypi_0 pypi tensorflow-gpu-estimator 2.1.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi terminado 0.8.3 py36_0 anaconda testpath 0.4.4 py_0 anaconda theano 1.0.4 py36h003fed8_1002 conda-forge threadpoolctl 2.1.0 pyh5ca1d4c_0 anaconda tk 8.6.10 he774522_0 anaconda toolz 0.10.0 py_0 anaconda torchfile 0.1.0 py_0 conda-forge torchvision 0.6.1 py36_cpu [cpuonly] pytorch tornado 6.0.4 py36he774522_1 anaconda tqdm 4.47.0 py_0 anaconda traitlets 4.3.3 py36_0 anaconda typing-extensions 3.7.4.3 pypi_0 pypi urllib3 1.25.11 py_0 anaconda vc 14.1 h0510ff6_4 anaconda visdom 0.1.8.9 0 conda-forge vs2015_runtime 14.16.27012 hf0eaf9b_3 anaconda vs2017_win-64 19.16.27038 h2e3bad8_2 conda-forge vswhere 2.7.1 h21ff451_0 anaconda wcwidth 0.2.5 py_0 anaconda webencodings 0.5.1 py36_1 anaconda websocket-client 0.57.0 py36_1 anaconda werkzeug 1.0.1 pyh9f0ad1d_0 conda-forge wget 1.16.3 0 menpo wheel 0.35.1 py_0 anaconda widgetsnbextension 3.5.1 py36_0 anaconda win_inet_pton 1.1.0 py36_0 anaconda wincertstore 0.2 py36h7fe50ca_0 anaconda winpty 0.4.3 4 anaconda wrapt 1.12.1 py36h68a101e_1 conda-forge xz 5.2.5 h62dcd97_0 anaconda yacs 0.1.8 pypi_0 pypi yaml 0.1.7 hc54c509_2 anaconda zeromq 4.3.2 ha925a31_2 anaconda zipp 3.3.1 py_0 anaconda zlib 1.2.11 h62dcd97_4 anaconda zstd 1.3.7 h508b16e_0 anaconda No errors reported.
Solved. PyTorch had installed the CPU-only version for some reason, and reinstalling PyTorch alone didn't help.
Uninstall PyTorch:
conda uninstall pytorch
Then uninstall the cpuonly package:
conda uninstall cpuonly
Installing PyTorch again after that solved it.
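For anyone following along, a plausible reinstall-and-verify sequence looks like the sketch below. The cudatoolkit pin depends on your driver, so treat the 10.2 here as a placeholder rather than a recommendation from the answer:
# Reinstall a CUDA build from the pytorch channel (version pin is a placeholder):
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

# Verify the CUDA build is active:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"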
https://stackoverflow.com/questions/66711799/
Batch size and Training time
Thank you for @Prune's critical comments on my questions.
I am trying to find the relationship between batch size and training time by using the MNIST dataset. By reading numerous questions on Stack Overflow, such as this one: How does batch size impact time execution in neural networks?, people said that the training time will decrease when I use a small batch size.
However, by trying out these two, I found that training with batch size == 1 takes way more time than batch size == 60,000. I set the number of epochs to 10. I split my MNIST dataset into 60k for training and 10k for testing.
Below are my code and results.
mnist_trainset = torchvision.datasets.MNIST(root=root_dir, train=True, download=True, transform=transforms.Compose([transforms.ToTensor()]))
mnist_testset = torchvision.datasets.MNIST(root=root_dir, train=False, download=True, transform=transforms.Compose([transforms.ToTensor()]))

train_dataloader = torch.utils.data.DataLoader(mnist_trainset, batch_size=1, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(mnist_testset, batch_size=50, shuffle=False)

# Define the model
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear_1 = torch.nn.Linear(784, 256)
        self.linear_2 = torch.nn.Linear(256, 10)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = x.reshape(x.size(0), -1)
        x = self.linear_1(x)
        x = self.sigmoid(x)
        pred = self.linear_2(x)
        return pred

# trainer
no_epochs = 10

def my_trainer(optimizer, model):
    criterion = torch.nn.CrossEntropyLoss()
    train_loss = list()
    test_loss = list()
    test_acc = list()
    best_test_loss = 1

    for epoch in range(no_epochs):
        # timer starts
        start = timer()
        total_train_loss = 0
        total_test_loss = 0

        # training
        # set up training mode
        model.train()
        for itr, (image, label) in enumerate(train_dataloader):
            optimizer.zero_grad()
            pred = model(image)
            loss = criterion(pred, label)
            total_train_loss += loss.item()
            loss.backward()
            optimizer.step()

        total_train_loss = total_train_loss / (itr + 1)
        train_loss.append(total_train_loss)

        # testing
        # change to evaluation mode
        model.eval()
        total = 0
        for itr, (image, label) in enumerate(test_dataloader):
            pred = model(image)
            loss = criterion(pred, label)
            total_test_loss += loss.item()

            # we now need softmax because we are testing.
            pred = torch.nn.functional.softmax(pred, dim=1)
            for i, p in enumerate(pred):
                if label[i] == torch.max(p.data, 0)[1]:
                    total = total + 1

        # calculate accuracy
        accuracy = total / len(mnist_testset)

        # append accuracy here
        test_acc.append(accuracy)

        # append test loss here
        total_test_loss = total_test_loss / (itr + 1)
        test_loss.append(total_test_loss)

        print('\nEpoch: {}/{}, Train Loss: {:.8f}, Test Loss: {:.8f}, Test Accuracy: {:.8f}'.format(epoch + 1, no_epochs, total_train_loss, total_test_loss, accuracy))

        if total_test_loss < best_test_loss:
            best_test_loss = total_test_loss
            print("Saving the model state dictionary for Epoch: {} with Test loss: {:.8f}".format(epoch + 1, total_test_loss))
            torch.save(model.state_dict(), "model.dth")

        # timer finishes
        end = timer()
        print(end - start)

    return no_epochs, test_acc, test_loss

model_sgd = Model()
optimizer_SGD = torch.optim.SGD(model_sgd.parameters(), lr=0.1)
sgd_no_epochs, sgd_test_acc, sgd_test_loss = my_trainer(optimizer=optimizer_SGD, model=model_sgd)
I calculated how much time it took for each epoch, and below is the result.
Epoch: 1/10, Train Loss: 0.23193890, Test Loss: 0.12670580, Test Accuracy: 0.96230000
63.98903721500005 seconds
Epoch: 2/10, Train Loss: 0.10275097, Test Loss: 0.10111042, Test Accuracy: 0.96730000
63.97179028100004 seconds
Epoch: 3/10, Train Loss: 0.07269370, Test Loss: 0.09668248, Test Accuracy: 0.97150000
63.969843954 seconds
Epoch: 4/10, Train Loss: 0.05658571, Test Loss: 0.09841745, Test Accuracy: 0.97070000
64.24135530400008 seconds
Epoch: 5/10, Train Loss: 0.04183391, Test Loss: 0.09828428, Test Accuracy: 0.97230000
64.19695308500013 seconds
Epoch: 6/10, Train Loss: 0.03393899, Test Loss: 0.08982467, Test Accuracy: 0.97530000
63.96944059600014 seconds
Epoch: 7/10, Train Loss: 0.02808819, Test Loss: 0.08597597, Test Accuracy: 0.97700000
63.59837343000004 seconds
Epoch: 8/10, Train Loss: 0.01859330, Test Loss: 0.07529452, Test Accuracy: 0.97950000
63.591578820999985 seconds
Epoch: 9/10, Train Loss: 0.01383720, Test Loss: 0.08568452, Test Accuracy: 0.97820000
63.66664020100029 seconds
Epoch: 10/10, Train Loss: 0.00911216, Test Loss: 0.07377760, Test Accuracy: 0.98060000
63.92636473799985 seconds
After this I changed the batch size to 60000 and ran the same program again.
train_dataloader = torch.utils.data.DataLoader(mnist_trainset, batch_size=60000, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(mnist_testset, batch_size=50, shuffle=False)

print("\n===== Entering SGD optimizer =====\n")
model_sgd = Model()
optimizer_SGD = torch.optim.SGD(model_sgd.parameters(), lr=0.1)
sgd_no_epochs, sgd_test_acc, sgd_test_loss = my_trainer(optimizer=optimizer_SGD, model=model_sgd)
I got this result for batch size == 60000:
Epoch: 1/10, Train Loss: 2.32325006, Test Loss: 2.30074144, Test Accuracy: 0.11740000
6.54154992299982 seconds
Epoch: 2/10, Train Loss: 2.30010080, Test Loss: 2.29524792, Test Accuracy: 0.11790000
6.341824101999919 seconds
Epoch: 3/10, Train Loss: 2.29514933, Test Loss: 2.29183527, Test Accuracy: 0.11410000
6.161918789000083 seconds
Epoch: 4/10, Train Loss: 2.29196787, Test Loss: 2.28874513, Test Accuracy: 0.11450000
6.180891567999879 seconds
Epoch: 5/10, Train Loss: 2.28899717, Test Loss: 2.28571669, Test Accuracy: 0.11570000
6.1449509030003355 seconds
Epoch: 6/10, Train Loss: 2.28604794, Test Loss: 2.28270152, Test Accuracy: 0.11780000
6.311743144000047 seconds
Epoch: 7/10, Train Loss: 2.28307867, Test Loss: 2.27968731, Test Accuracy: 0.12250000
6.060618773999977 seconds
Epoch: 8/10, Train Loss: 2.28014660, Test Loss: 2.27666961, Test Accuracy: 0.12890000
6.171511712999745 seconds
Epoch: 9/10, Train Loss: 2.27718973, Test Loss: 2.27364607, Test Accuracy: 0.13930000
6.164125173999764 seconds
Epoch: 10/10, Train Loss: 2.27423453, Test Loss: 2.27061504, Test Accuracy: 0.15350000
6.077817454000069 seconds
As you can see, it clearly took more time per epoch when batch_size == 1, which is different from what I had read. Maybe I am confusing the training time per epoch with the training time until convergence? Seems like my intuition is correct by looking at this webpage: https://medium.com/deep-learning-experiments/effect-of-batch-size-on-neural-net-training-c5ae8516e57
Can someone please explain what is happening?
This is a borderline question; you should still be able to extract this understanding from the basic literature... eventually.
Your insight is exactly correct: you are measuring execution time per epoch, rather than total Time-to-Train (TTT). You have also carried the generic "smaller batches" advice ad absurdum: a batch size of 1 is almost guaranteed to be sub-optimal.
The mechanics are very simple at a macro level.
With a batch size of 60k (the entire training set), you run all 60k images through the model, average their results, and then do one back-propagation for that average result. This tends to lose the learning you can get from focusing on little-seen features.
With a batch size of 1, you run each image individually through the model, average the one result (a very simple operation :-) ), and do a back propagation. This tends to over-emphasize individual effects, especially retaining superstitious effects from each single image. It also gives too much weight to the initial assumptions of the first few images.
The most obvious effect of the tiny batch size is that you're doing 60k back-props instead of 1, so each epoch takes much longer.
Either of these approaches is an extreme case, usually absurd in application. You need to experiment to find the "sweet spot" that gives you the fastest convergence to acceptable (near-optimal) accuracy. There are a few considerations in choosing your experimental design:
Memory size: you want to be able to ingest the entire batch into memory at once. This allows your model to pipeline reading and processing. If you exceed available memory, you will lose a lot of time to swapping. If you under-use the memory, you leave some potential performance untapped.
Processors: if you're on a multi-processor chip, you want to keep them all busy. If you care to assign processors through your OS controls, you'll also want to play with how many to assign to model computation, and how many to assign to I/O and system use. For instance, in one project I did, our group found that our 32 cores were best used with 28 allocated to computation, 4 reserved for I/O and other system functions.
Scaling: some characteristics work best in powers of 2. You may find that a batch size that is 2^n or 3 * 2^n for some n works best, simply because of block sizes and other system allocations.
The experimental design that has worked best for me over the years is to start with a power of 2 that is roughly the square root of the training set size. For you, there's an obvious starting guess of 256. Thus, you'd run experiments at perhaps 64, 128, 256, 512, and 1024. See which ones give you the fastest convergence (a minimal sketch of such a sweep follows below).
Then do one step of refinement, using that factor of 3. For instance, if you find that the best performance comes at 128, also try 96 and 192. You will likely see very little difference between your "sweet spot" and the adjacent batch sizes; this is the nature of most complex information systems.
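A minimal sketch of the suggested sweep, reusing the questioner's Model and my_trainer from above (note my_trainer reads the global train_dataloader, so reassigning it before each run works; the candidate sizes follow the power-of-two heuristic described here):
import torch
from timeit import default_timer as timer

# Sweep candidate batch sizes around sqrt(60000) ~ 245, per the heuristic above.
for bs in (64, 128, 256, 512, 1024):
    train_dataloader = torch.utils.data.DataLoader(mnist_trainset, batch_size=bs, shuffle=True)
    model = Model()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    start = timer()
    _, test_acc, _ = my_trainer(optimizer=optimizer, model=model)
    print(f"batch_size={bs}: {timer() - start:.1f}s total, final acc={test_acc[-1]:.4f}")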
https://stackoverflow.com/questions/66716370/
Splitting the dataset using SubsetRandomSampler not working
I used SubsetRandomSampler to split the training data into a train set (80%) and a validation set (20%). But it is showing the same number of images (4996) for both after the split:
>>> print('len(train_data): ', len(train_loader.dataset))
>>> print('len(valid_data): ', len(validation_loader.dataset))
len(train_data):  4996
len(valid_data):  4996
Full code:
import numpy as np
import torch
from torchvision import datasets, transforms  # datasets is needed for ImageFolder below
from torch.utils.data.sampler import SubsetRandomSampler

train_transforms = transforms.Compose([transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

dataset = datasets.ImageFolder('/data/images/train', transform=train_transforms)

validation_split = .2
shuffle_dataset = True
random_seed = 42
batch_size = 20

dataset_size = len(dataset)  # 4996
indices = list(range(dataset_size))
split = int(np.floor(validation_split * dataset_size))
if shuffle_dataset:
    np.random.seed(random_seed)
    np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]

train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)

train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
validation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=valid_sampler)
train_loader.dataset and validation_loader.dataset are attributes that expose the underlying original dataset that the loaders sample from (i.e. the original dataset of size 4996). If you iterate through the loaders themselves, however, you will see they only return as many samples (accounting for batching) as you have included in the index for each sampler.
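For instance, a quick sketch of how you could verify the actual split sizes (variable names follow the question's code):
# The samplers hold the per-split indices, so their lengths give the true sizes:
print('train samples: ', len(train_loader.sampler))       # 3997
print('valid samples: ', len(validation_loader.sampler))  # 999

# Equivalently, count what the train loader actually yields per epoch:
print(sum(images.size(0) for images, _ in train_loader))  # 3997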
https://stackoverflow.com/questions/66718763/
Pytorch 1D tensors expected but got 2D tensors
I have been working on making a neural network from scratch in Python. The input tensor is of shape [400, 3] and target_tensor has the shape [400]. I am getting an error while taking the derivative of the weights. Below are the functions:
def sigmoid(z):
    return 1 / (1 + torch.exp(-z))

def nueral_net(data, weights, bias):
    return sigmoid((data @ weights) + bias)

def loss_function(prediction, actual, m):
    return (-1/m) * (torch.sum(actual * torch.log(prediction) + (1-actual) * torch.log(1 - prediction)))

w = torch.randn(input_tensor.shape[1], 1)
b = torch.randn(1, 1)

predictions = nueral_net(input_tensor.float(), w, b)  # Applying model
loss = loss_function(predictions, target_tensor.unsqueeze(1), 400)
dw = (1/400) * torch.dot(input_tensor, (predictions - target_tensor).T)
Running this throws an error:
RuntimeError                              Traceback (most recent call last)
<ipython-input-26-632338d8fd16> in <module>
      1 predictions = nueral_net(input_tensor.float() , w, b) #Applying model
      2 loss = loss_function(predictions,target_tensor.unsqueeze(1),400)
----> 3 dw = (1/400) * torch.dot(input_tensor,(predictions - target_tensor).T)
      4 db = (1/400) * torch.sum(predictions - target_tensor)
      5 #m = input_tensor.shape[0]

RuntimeError: 1D tensors expected, but got 2D and 2D tensor
If we look at the docs of torch.dot:
torch.dot(input, other, *, out=None) → Tensor
Computes the dot product of two 1D tensors.
NOTE: Unlike NumPy's dot, torch.dot intentionally only supports computing the dot product of two 1D tensors with the same number of elements.
Parameters
input (Tensor) – first tensor in the dot product, must be 1D.
other (Tensor) – second tensor in the dot product, must be 1D.
Coming to your question: both input_tensor and (predictions - target_tensor).T are 2D. Please make them 1D.
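Since input_tensor is [400, 3] rather than something you can flatten, the usual fix for this kind of gradient is a matrix product instead of torch.dot. A sketch under the shapes stated in the question (this is my suggested replacement, not part of the original answer):
# input_tensor: [400, 3], predictions: [400, 1], target_tensor: [400]
error = predictions - target_tensor.unsqueeze(1)     # [400, 1], matches the loss call
dw = (1/400) * (input_tensor.float().T @ error)      # [3, 1], same shape as w
db = (1/400) * torch.sum(error)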
https://stackoverflow.com/questions/66720543/
Loading PyTorch model in multiprocessing Process flashes cmd
Problem: When I open a PyTorch model (read: load a state_dict from disk) in a subprocess, a cmd window pops up for a couple of ms, which causes other programs to lose focus; annoying when working on something else, etc.
I have traced the cause to 2 lines, both causing it in some circumstances, and managed to reproduce it for one (the other one is when doing model.to(device)).
main.py
model_path = 'testing\\agent\\model_test.pth'

# create model
from testing.agent.torch_model import Net
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # I have cuda available
m = Net()
m.to(device)

# save it
torch.save(m.state_dict(), model_path)

# open it in subprocess
from testing.agent.AgentOpenSim_Process import Open_Agent_Sim
p = Open_Agent_Sim(p=model_path, msgLogger=None)
p.start()
torch_model.py (source pydocs: https://pytorch.org/tutorials/recipes/recipes/save_load_across_devices.html)
import torch.nn as nn
import torch.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
AgentOpenSim_Process.py
from multiprocessing import Queue, Process
import os, time, torch
from testing.agent.torch_model import Net

class Open_Agent_Sim(Process):
    def __init__(self, p: str, **kwargs):
        super(Process, self).__init__(daemon=True)
        self.path = p
        self._msgLogger = kwargs['msgLogger'] if kwargs['msgLogger'] is not None else Queue()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.device_cpu = torch.device("cpu")

    def __print(self, msg: str, verbosity):
        # personal tool for debugging multiprocessing (it sends messages to a Queue and the main
        # process reads them inside a thread and prints them to the console... and yes, I know
        # about the multiprocessing logger; it's used as well)
        self._msgLogger.put(('Open_Agent_Sim: ' + msg, verbosity))

    def run(self):
        self.__pid = os.getpid()
        try:
            self.__print('opening model', 0)
            self.init_agent()
            self.__print('opening model - done', 0)
        except Exception as e:
            # solved by custom exception wrapper
            pass
        else:
            self.__print('has ended', 0)
            return

    def init_agent(self):
        # init instance
        self.__print('0a', 0)
        m = Net()
        self.__print('0b', 0)
        time.sleep(2)
        # load state dict
        self.__print('1a', 0)
        l = torch.load(self.path, map_location=self.device_cpu)
        self.__print('1b', 0)
        time.sleep(2)
        self.__print('2a', 0)
        m.load_state_dict(l)
        # set to device
        self.__print('2b', 0)
        time.sleep(2)
        try:
            self.__print('3a', 0)
            m.to(self.device)  # ----> This line pops up cmd
            self.__print('3b', 0)
        except RuntimeError as e:
            self.__print(str(e), 0)
When debugging visually, the cmd pops up, always in step 1 (m.load_state_dict(torch.load(self.path, map_location=self.device))).
I have tried something like disabling console output, which didn't work:
import contextlib
with contextlib.redirect_stdout(None):
    ...
The if __name__ == '__main__': guard makes no difference, and also this is all part of heavy multiprocessing in some lower subprocess.
Update
I traced the problem to switching device: if I use torch.load(self.path, map_location=self.device_cpu) and later .to(self.device_gpu), it pops up a cmd on the line with .to(...), but if I use torch.load(self.path, map_location=self.device_gpu), it pops up on that line instead.
Another thing to note is that it does not matter on which device the model was saved. I am open to any workaround.
Updating the PyTorch version with the install command from their website solved the issue.
https://stackoverflow.com/questions/66722935/
Delete duplicated rows in torch.tensor
I have a torch.tensor of shape (n, m) and I want to remove the duplicated rows (or at least find them). For example:
t1 = torch.tensor([[1, 2, 3],
                   [4, 5, 6],
                   [1, 2, 3],
                   [4, 5, 6]])
t2 = remove_duplicates(t1)
t2 should now be equal to tensor([[1, 2, 3], [4, 5, 6]]), that is, the duplicated rows are removed. Do you know a way to perform this operation?
I was thinking of doing something with torch.unique but I cannot figure out what to do.
You can simply exploit the parameter dim of torch.unique.
t1 = torch.tensor([[1, 2, 3],
                   [4, 5, 6],
                   [1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])

torch.unique(t1, dim=0)
In this way you obtain the result you want:
tensor([[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]])
Here you can read the meaning of that parameter.
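Since the question also asks how to find the duplicates, note that torch.unique can return extra information; a small sketch:
# return_inverse/return_counts expose which rows were duplicated:
uniq, inverse, counts = torch.unique(t1, dim=0, return_inverse=True, return_counts=True)
print(counts > 1)   # mask over the unique rows that occur more than once
print(inverse)      # for each original row, the index of its unique row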
https://stackoverflow.com/questions/66722985/
Manually assign weights using PyTorch
I am using Python 3.8 and PyTorch 1.7 to manually assign and change the weights and biases of a neural network. As an example, I have defined a LeNet-300-100 fully-connected neural network to train on the MNIST dataset. The code for the class definition is:
class LeNet300(nn.Module):
    def __init__(self):
        super(LeNet300, self).__init__()

        # Define layers-
        self.fc1 = nn.Linear(in_features=input_size, out_features=300)
        self.fc2 = nn.Linear(in_features=300, out_features=100)
        self.output = nn.Linear(in_features=100, out_features=10)

        self.weights_initialization()

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        return self.output(out)

    def weights_initialization(self):
        '''
        When we define all the modules such as the layers in '__init__()'
        method above, these are all stored in 'self.modules()'.
        We go through each module one by one. This is the entire network,
        basically.
        '''
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.xavier_normal_(m.weight)
                nn.init.constant_(m.bias, 0)
To experiment with trying to change the weights for this model:
# Instantiate model-
mask_model = LeNet300()
To assign all of the weights in each of the layers to one (1), I use the code:
with torch.no_grad():
    for layer in mask_model.state_dict():
        mask_model.state_dict()[layer] = nn.parameter.Parameter(torch.ones_like(mask_model.state_dict()[layer]))

# Sanity check-
mask_model.state_dict()['fc1.weight']
This output shows that the weights are not equal to 1. I also tried the code:
for param in mask_model.parameters():
    # print(param.shape)
    param = nn.parameter.Parameter(torch.ones_like(param))
But this does not work either. Help?
Assign to param.data instead of rebinding the loop variable: rebinding param only changes the local name, while writing to .data mutates the tensor the model actually holds.
for param in mask_model.parameters():
    param.data = nn.parameter.Parameter(torch.ones_like(param))
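An equivalent sketch that avoids touching .data directly, a pattern some prefer since .data bypasses autograd bookkeeping (this is my addition, not part of the original answer):
with torch.no_grad():
    for param in mask_model.parameters():
        param.fill_(1.0)  # in-place write to the same tensor the model holds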
https://stackoverflow.com/questions/66724071/
Linear Regression with Pytorch : constant loss
I'm working on a linear regression problem with PyTorch (y = A*x, where the dimensions of A are 2x2). I wrote the following code. I don't know why the loss doesn't change... Can someone help me?
Thanks,
Thomas
import torch
import numpy as np
from scipy.integrate import odeint
from matplotlib import pyplot as plt
from torch.autograd import Variable

def EDP(X, t):
    X_0 = -2*X[0]
    X_1 = -2*X[1]
    grad = np.array([X_0, X_1])
    return grad

T = np.arange(0, 10, 0.1)
X_train = odeint(EDP, [10, 20], T)

Y_train = np.zeros_like(X_train)
for i in range(Y_train.shape[0]):
    Y_train[i, :] = np.dot(np.array([[2, 0], [0, 2]]), X_train[i, :])
print(X_train, Y_train)

X_train = torch.Tensor(X_train)
torch.transpose(X_train, 0, 1)
Y_train = torch.Tensor(Y_train)
print(X_train.shape)

import torch.nn as nn

class LinearRegression(torch.nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = torch.nn.Linear(2, 2, bias=False)  # bias is default True

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(our_model.parameters(), lr=0.0001)

our_model = LinearRegression()

x_train = X_train
y_train = Y_train
# x_train.requires_grad = True

print(x_train.shape)
print(y_train.shape)

ntrain = 10

for t in range(ntrain):
    y_pred = our_model(x_train)
    loss = criterion(y_train, y_pred)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(t, loss)

print(our_model.linear.weight)
On my laptop it worked. The thing is you are running it for just 10 epochs, and with lr = 0.0001 you won't see the loss move in so few epochs. I did this:
optimizer = torch.optim.SGD(our_model.parameters(), lr=0.01)
Increasing the learning rate actually decreased the loss within just 10 epochs.
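For completeness, a quick sketch of the same training loop with the larger learning rate and more epochs, so you can watch the loss actually converge (the 200 epochs here is my choice for illustration; x_train and y_train come from the question's code):
our_model = LinearRegression()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(our_model.parameters(), lr=0.01)  # larger lr, as suggested

for t in range(200):  # more epochs than the original 10
    y_pred = our_model(x_train)
    loss = criterion(y_pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(our_model.linear.weight)  # should approach [[2, 0], [0, 2]]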
https://stackoverflow.com/questions/66724605/