2-D Vectorized Function with pytorch
I have a function that I need to compute at every entry of a 2-D torch tensor, and it depends on the index values of both axes. Right now I have only been able to implement this as a nested for loop, iterating over both axes. This is quite slow (and needs to be executed >10^5 times), so I want to speed it up for better scaling.

vs = 200
nt = 12
b = torch.ones(vs) / vs
n_kw = torch.rand((nt, vs))
n_k = torch.rand((nt,))

def estimate_p(nt, vs, n_kw, n_k):
    p = torch.zeros((nt, vs))
    for i in range(0, nt):
        for j in range(0, vs):
            p[i, j] = (n_kw[i, j] + b[j]) / (n_k[i] + torch.sum(b))
    return p

Is there a way to vectorize this/map based on the i, j index?
Try playing with broadcasting:

def estimate_p(nt, vs, n_kw, n_k):
    return (n_kw + b) / (n_k + b.sum()).unsqueeze(-1)
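As a quick sanity check (a sketch using the question's variables; estimate_p_loop is an illustrative rename of the original nested-loop version), the broadcast form matches the loop to floating-point tolerance:

import torch

vs, nt = 200, 12
b = torch.ones(vs) / vs
n_kw = torch.rand((nt, vs))
n_k = torch.rand((nt,))

def estimate_p_loop(nt, vs, n_kw, n_k):
    p = torch.zeros((nt, vs))
    for i in range(nt):
        for j in range(vs):
            p[i, j] = (n_kw[i, j] + b[j]) / (n_k[i] + torch.sum(b))
    return p

def estimate_p_broadcast(nt, vs, n_kw, n_k):
    # (nt, vs) + (vs,) broadcasts row-wise; dividing by (nt, 1) broadcasts column-wise
    return (n_kw + b) / (n_k + b.sum()).unsqueeze(-1)

assert torch.allclose(estimate_p_loop(nt, vs, n_kw, n_k),
                      estimate_p_broadcast(nt, vs, n_kw, n_k))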
https://stackoverflow.com/questions/65925629/
pytorch OSError: [Errno 28] No space left on device
I'm using an Ubuntu 18.04 docker container.

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"

When I try training a resnext101 model from torchvision, I get the following error.

Downloading: "https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth" to /home/vmuser/.cache/torch/hub/checkpoints/resnext101_32x8d-8ba56ff5.pth
  0%| | 0.00/340M [00:00<?, ?B/s]
Traceback (most recent call last):
  File "train_attn_best_config.py", line 377, in <module>
    tabct = TabCT(cnn = model, fc_dim = fd, attn_filters = af, n_attn_layers = nal).to(gpu)
  File "train_attn_best_config.py", line 219, in __init__
    self.ct_cnn = cnn_dict[cnn](pretrained = True)
  File "/home/vmuser/anaconda3/envs/pulmo/lib/python3.7/site-packages/torchvision/models/resnet.py", line 317, in resnext101_32x8d
    pretrained, progress, **kwargs)
  File "/home/vmuser/anaconda3/envs/pulmo/lib/python3.7/site-packages/torchvision/models/resnet.py", line 227, in _resnet
    progress=progress)
  File "/home/vmuser/anaconda3/envs/pulmo/lib/python3.7/site-packages/torch/hub.py", line 481, in load_state_dict_from_url
    download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/home/vmuser/anaconda3/envs/pulmo/lib/python3.7/site-packages/torch/hub.py", line 404, in download_url_to_file
    f.write(buffer)
  File "/home/vmuser/anaconda3/envs/pulmo/lib/python3.7/tempfile.py", line 481, in func_wrapper
    return func(*args, **kwargs)
OSError: [Errno 28] No space left on device

When I run df, I get this; one of my tmpfs mounts is only 65 MB. I tried running export TMPDIR=/var/tmp and export TMPDIR=~/Data/tmp.

$ df
Filesystem     1K-blocks       Used   Available Use% Mounted on
overlay       1797272568 1705953392           0 100% /
tmpfs              65536          0       65536   0% /dev
tmpfs           98346264          0    98346264   0% /sys/fs/cgroup
/dev/sda6     1797272568 1705953392           0 100% /etc/hosts
shm                65536          0       65536   0% /dev/shm
/dev/sdb1     1845816492 1362932848   389098592  78% /home/vmuser/Data
tmpfs           98346264         12    98346252   1% /proc/driver/nvidia
tmpfs           19669256      93256    19576000   1% /run/nvidia-persistenced/socket
udev            98318592          0    98318592   0% /dev/nvidia1
tmpfs           98346264          0    98346264   0% /proc/acpi
tmpfs           98346264          0    98346264   0% /proc/scsi
tmpfs           98346264          0    98346264   0% /sys/firmware

But the error is still there.
This seems like a shm (shared memory) issue. Try running docker with the --ipc=host flag, i.e. add --ipc=host to your docker run command. For more details, see this thread.
https://stackoverflow.com/questions/65926311/
Is partial 3D convolution or transpose+2D convolution faster?
I have some data of shape B*C*T*H*W. I want to apply 2D convolutions on the H*W dimensions. There are two options (that I see):

Apply a partial 3D convolution with kernel shape (1, 3, 3). 3D convolution accepts data with shape B*C*T*H*W, which is exactly what I have. This is, however, a pseudo-3D conv that might be under-optimized (despite its heavy use in P3D networks).

Transpose the data, apply 2D convolutions, and transpose the result back. This requires the overhead of data reshaping, but it makes use of the heavily optimized 2D convolutions.

data = raw_data.transpose(1, 2).reshape(b*t, c, h, w).detach()
out = conv2d(data)
out = out.view(b, t, c, h, w).transpose(1, 2).contiguous()

Which one is faster?

(Note: I have a self-answer below. This aims to be a quick note for people who are googling, aka me 20 minutes ago.)
Environment: PyTorch 1.7.1, CUDA 11.0, RTX 2080 TI.

TL;DR: Transpose + 2D conv is faster (in this environment, and for the tested data shapes).

Code (modified from here):

import torch
import torch.nn as nn
import time

b = 4
c = 64
t = 4
h = 256
w = 256

raw_data = torch.randn(b, c, t, h, w).cuda()

def time2D():
    conv2d = nn.Conv2d(c, c, kernel_size=3, padding=1).cuda()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        data = raw_data.transpose(1, 2).reshape(b*t, c, h, w).detach()
        out = conv2d(data)
        out = out.view(b, t, c, h, w).transpose(1, 2).contiguous()
        out.mean().backward()
    torch.cuda.synchronize()
    end = time.time()
    print(" --- %s --- " % (end - start))

def time3D():
    conv3d = nn.Conv3d(c, c, kernel_size=(1, 3, 3), padding=(0, 1, 1)).cuda()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        out = conv3d(raw_data.detach())
        out.mean().backward()
    torch.cuda.synchronize()
    end = time.time()
    print(" --- %s --- " % (end - start))

print("Initializing cuda state")
time2D()
print("going to time2D")
time2D()
print("going to time3D")
time3D()

For shape = 4*64*4*256*256:
2D: 1.8675172328948975
3D: 4.384545087814331

For shape = 8*512*16*64*64:
2D: 37.95961904525757
3D: 49.730860471725464

For shape = 4*128*128*16*16:
2D: 0.6455907821655273
3D: 1.8380646705627441
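As a side note (my addition, not part of the original answer), torch.cuda.Event timers can replace the time.time() plus synchronize pattern and measure GPU time directly; a minimal sketch:

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
# ... run the 100 iterations of either variant here ...
end.record()
torch.cuda.synchronize()        # wait for all queued kernels to finish
print(start.elapsed_time(end))  # elapsed GPU time in milliseconds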
https://stackoverflow.com/questions/65930768/
while importing torch- shows - [WinError 126] The specified module could not be found
I have tried to install PyTorch by using !pip install torch, but I got the error OSError: [WinError 126] The specified module could not be found. Then I tried pip install torch -f https://download.pytorch.org/whl/torch_stable.html. Then I installed the CPU version of torch with the command below:

pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

Then, when I try to test it with

import torch
print(torch.__version__)

Spyder still does not recognize it; however, python.exe does recognize it and produces the correct output.
OK, the problem is that you were trying to install the CUDA version of PyTorch on a non-CUDA-enabled computer. You can find a similar problem here. You have to download the CPU version of PyTorch, like this:

pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
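After installing, you can confirm that the CPU build is the one being used (a quick check):

import torch

print(torch.__version__)          # should print 1.7.1+cpu
print(torch.cuda.is_available())  # False on a CPU-only build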
https://stackoverflow.com/questions/65931555/
PyTorch: while loading batched data using Dataloader, how to transfer the data to GPU automatically
If we use a combination of the Dataset and Dataloader classes (as shown below), I have to explicitly load the data onto the GPU using .to() or .cuda(). Is there a way to instruct the dataloader to do it automatically/implicitly?

Code to understand/reproduce the scenario:

from torch.utils.data import Dataset, DataLoader
import numpy as np

class DemoData(Dataset):
    def __init__(self, limit):
        super(DemoData, self).__init__()
        self.data = np.arange(limit)

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        return (self.data[idx], self.data[idx]*100)

demo = DemoData(100)

loader = DataLoader(demo, batch_size=50, shuffle=True)

for i, (i1, i2) in enumerate(loader):
    print('Batch Index: {}'.format(i))
    print('Shape of data item 1: {}; shape of data item 2: {}'.format(i1.shape, i2.shape))
    # i1, i2 = i1.to('cuda:0'), i2.to('cuda:0')
    print('Device of data item 1: {}; device of data item 2: {}\n'.format(i1.device, i2.device))

Which will output the following; note that without an explicit device transfer instruction, the data is loaded onto the CPU:

Batch Index: 0
Shape of data item 1: torch.Size([50]); shape of data item 2: torch.Size([50])
Device of data item 1: cpu; device of data item 2: cpu

Batch Index: 1
Shape of data item 1: torch.Size([50]); shape of data item 2: torch.Size([50])
Device of data item 1: cpu; device of data item 2: cpu

A possible solution is in this PyTorch GitHub issue (still open at the time this question was posted), but I am unable to make it work when the dataloader has to return multiple data items!
You can modify the collate_fn to handle several items at once:

from torch.utils.data.dataloader import default_collate

device = torch.device('cuda:0')  # or whatever device/cpu you like

# the new collate function is quite generic
loader = DataLoader(demo, batch_size=50, shuffle=True,
                    collate_fn=lambda x: tuple(x_.to(device) for x_ in default_collate(x)))

Note that if you want to have multiple workers for the dataloader, you'll need to add torch.multiprocessing.set_start_method('spawn') after your if __name__ == '__main__' (see this issue).

Having said that, it seems like using pin_memory=True in your DataLoader would be much more efficient. Have you tried this option? See memory pinning for more information.

Update (Feb 8th, 2021)

This post made me look at my "data-to-model" time spent during training. I compared three alternatives:

1. The DataLoader works on the CPU, and only after the batch is retrieved is the data moved to the GPU.
2. Same as (1), but with pin_memory=True in the DataLoader.
3. The proposed method of using collate_fn to move the data to the GPU.

From my limited experimentation it seems like the second option performs best (but not by a big margin). The third option required fussing about the start_method of the data loader processes, and it seems to incur an overhead at the beginning of each epoch.
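For reference, a minimal sketch of the second alternative (pin_memory plus an explicit asynchronous transfer in the loop, reusing the question's variable names):

loader = DataLoader(demo, batch_size=50, shuffle=True, pin_memory=True)

for i1, i2 in loader:
    # non_blocking=True lets the copy overlap with computation,
    # since pinned (page-locked) memory allows asynchronous transfers
    i1 = i1.to('cuda:0', non_blocking=True)
    i2 = i2.to('cuda:0', non_blocking=True)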
https://stackoverflow.com/questions/65932328/
PyTorch: How to insert before a certain element
Currently I have a 2D tensor. For each row, I want to insert a new element e before the first occurrence of a specified value v. Additional information: I cannot guarantee that each row has such a value; if there isn't one, just append the element.

Example: suppose e is 0 and v is 10. Given a tensor

[[9, 6, 5, 4, 10],
 [8, 7, 3, 5, 5],
 [4, 9, 10, 10, 10]]

I want to get

[[9, 6, 5, 4, 0, 10],
 [8, 7, 3, 5, 5, 0],
 [4, 9, 0, 10, 10, 10]]

Are there some Torch-style ways to do this? In the worst case I can treat this as a trivial Python problem, but I think that solution would be a little time-consuming.
I haven't yet found a full PyTorch solution. I'll keep looking, but here is somewhere to start: >>> v, e = 10, 0 >>> v, e = torch.tensor([v]), torch.tensor([e]) >>> x = torch.tensor([[ 9, 6, 5, 4, 10], [ 8, 7, 3, 5, 5], [ 4, 9, 10, 10, 10], [10, 9, 7, 10, 2]]) To deal with the edge case where v is not found in one of the rows you can add a temporary column to x. This will ensure every row has a value v in it. We will use x_ as a helper tensor: >>> x_ = torch.cat([x, v.repeat(x.size(0))[:, None]], axis=1) tensor([[ 9, 6, 5, 4, 10, 10], [ 8, 7, 3, 5, 5, 10], [ 4, 9, 10, 10, 10, 10], [10, 9, 7, 10, 2, 10]]) Find the indices of the first value v on each row: >>> bp = (x_ == v).int().argmax(axis=1) tensor([4, 5, 2, 0]) Finally, the easiest way to insert values at different positions in each row is with a list comprehension: >>> torch.stack([torch.cat([xi[:bpi], e, xi[bpi:]]) for xi, bpi in zip(x, bp)]) tensor([[ 9, 6, 5, 4, 0, 10], [ 8, 7, 3, 5, 5, 0], [ 4, 9, 0, 10, 10, 10], [ 0, 10, 9, 7, 10, 2]]) Edit - If v cannot occur in the first position, then no need for x_: >>> x tensor([[ 9, 6, 5, 4, 10], [ 8, 7, 3, 5, 5], [ 4, 9, 10, 10, 10]]) >>> bp = (x == v).int().argmax(axis=1) - 1 >>> torch.stack([torch.cat([xi[:bpi], e, xi[bpi:]]) for xi, bpi in zip(x, bp)]) tensor([[ 9, 6, 5, 0, 4, 10], [ 8, 7, 3, 5, 0, 5], [ 4, 0, 9, 10, 10, 10]])
https://stackoverflow.com/questions/65932919/
How to stack matrices with different sizes
I have a list of matrices with size (63, 32, 1, 600, 600). When I try to stack it with torch.stack(matrices).cpu().detach().numpy(), it raises the error: "stack expects each tensor to be equal size, but got [32, 1, 600, 600] at entry 0 and [16, 1, 600, 600] at entry 62". I tried resizing, but it did not work. I appreciate any recommendations.
If I understand correctly, what you're trying to do is stack the outputted mini-batches together into a single batch. My bet is that your last batch is partially filled (it only has 16 elements instead of 32). Instead of using torch.stack (which creates a new axis), I would simply concatenate with torch.cat on the batch axis (axis=0). Assuming matrices is a list of torch.Tensors:

torch.cat(matrices).cpu().detach().numpy()

torch.cat concatenates on axis=0 by default.
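To illustrate the difference, a small sketch with the shapes from the error message:

import torch

a = torch.ones(32, 1, 600, 600)  # a full batch
b = torch.ones(16, 1, 600, 600)  # the partially filled last batch

torch.cat([a, b]).shape  # torch.Size([48, 1, 600, 600])
torch.stack([a, b])      # raises: stack expects each tensor to be equal size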
https://stackoverflow.com/questions/65940793/
1D CNN on Pytorch: mat1 and mat2 shapes cannot be multiplied (10x3 and 10x2)
I have a time series with samples of size 500 and 2 types of labels, and I want to construct a 1D CNN on them with PyTorch:

class Simple1DCNN(torch.nn.Module):
    def __init__(self):
        super(Simple1DCNN, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=50, out_channels=20, kernel_size=5, stride=2)
        self.act1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
        self.fc1 = nn.Linear(10 * 1 * 1, 2)

    def forward(self, x):
        x = x.view(1, 50, -1)
        x = self.layer1(x)
        x = self.act1(x)
        x = self.layer2(x)
        x = self.fc1(x)
        return x

model = Simple1DCNN()
model(torch.tensor(np.random.uniform(-10, 10, 500)).float())

But I got this error message:

Traceback (most recent call last):
  File "so_pytorch.py", line 28, in <module>
    model(torch.tensor(np.random.uniform(-10, 10, 500)).float())
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "so_pytorch.py", line 23, in forward
    x = self.fc1(x)
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 93, in forward
    return F.linear(input, self.weight, self.bias)
  File "/Users/lib/python3.8/site-packages/torch/nn/functional.py", line 1692, in linear
    output = input.matmul(weight.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x3 and 10x2)

What am I doing wrong?
The shape of the output of the line x = self.layer2(x) (which is also the input of the next line x = self.fc1(x)) is torch.Size([1, 10, 3]).

Now, from the definition of self.fc1, it expects the last dimension of its input to be 10 * 1 * 1, which is 10, whereas your input has 3, hence the error.

I don't know what it is you're trying to do, but assuming what you want to do is:

1. Label the entire 500-length sequence with one of two labels; then you do this:

# replace self.fc1 = nn.Linear(10 * 1 * 1, 2) with
self.fc1 = nn.Linear(10 * 3, 2)

# replace x = self.fc1(x) with
x = x.view(1, -1)
x = self.fc1(x)

2. Label each of the 10 channel timesteps with one of two labels; then you do this:

# replace self.fc1 = nn.Linear(10 * 1 * 1, 2) with
self.fc1 = nn.Linear(3, 2)  # the last dimension of the input to fc1 is 3

The output shape for 1 will be (batch size, 2), and for 2 will be (batch size, 10, 2).
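Putting option 1 together, a sketch of the full module under that assumption (the shape comments follow the arithmetic above):

class Simple1DCNN(torch.nn.Module):
    def __init__(self):
        super(Simple1DCNN, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=50, out_channels=20, kernel_size=5, stride=2)
        self.act1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
        self.fc1 = nn.Linear(10 * 3, 2)   # 10 channels x 3 remaining timesteps

    def forward(self, x):
        x = x.view(1, 50, -1)             # (1, 50, 10)
        x = self.act1(self.layer1(x))     # (1, 20, 3)
        x = self.layer2(x)                # (1, 10, 3)
        x = x.view(1, -1)                 # flatten to (1, 30)
        return self.fc1(x)                # (1, 2)

model = Simple1DCNN()
out = model(torch.tensor(np.random.uniform(-10, 10, 500)).float())
print(out.shape)  # torch.Size([1, 2])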
https://stackoverflow.com/questions/65945996/
Loss with custom backward function in PyTorch - exploding loss in simple MSE example
Before working on something more complex, where I knew I would have to implement my own backward pass, I wanted to try something nice and simple. So, I tried to do linear regression with mean squared error loss using PyTorch. This went wrong (see the third implementation option below) when I defined my own backward method, and I suspect it's because I'm not thinking very clearly about what I need to send PyTorch as gradients. So, I suspect what I need is some explanation/clarification/advice on what PyTorch expects me to provide here, and in what form. I am using PyTorch 1.7.0, so a bunch of old examples no longer work (there is a different way of working with user-defined autograd functions, as described in the documentation).

First approach (standard PyTorch MSE loss function)

Let's first do it the standard way without a custom loss function:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Let's generate some fake data
torch.manual_seed(42)
resid = torch.rand(100)
inputs = torch.tensor([[xx] for xx in range(100)], dtype=torch.float32)
labels = torch.tensor([(2 + 0.5*yy + resid[yy]) for yy in range(100)], dtype=torch.float32)

# Now we define a linear regression model
class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.bn = torch.nn.BatchNorm1d(num_features=1)
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, inx):
        x = self.bn(inx)  # Adding BN to standardize input helps us use a higher learning rate
        x = self.linear(x)
        return x

model = linearRegression(1, 1)

# Using the standard mse_loss of PyTorch
epochs = 25
mseloss = F.mse_loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
    print(f'epoch {epoch}, loss {loss}')

This trains just fine; I get to a loss of about 0.0824, and a plot of the fit looks fine.

Second approach (custom loss function, but relying on PyTorch's automatic gradient calculation)

So, now I replace the loss function with my own implementation of the MSE loss, but I still rely on PyTorch autograd. The only things I change here are defining the custom loss function, correspondingly defining the loss based on that, and a minor detail for how I hand over the predictions and true labels to the loss function.

class MyMSELoss(nn.Module):
    def __init__(self):
        super(MyMSELoss, self).__init__()

    def forward(self, inputs, targets):
        tmp = (inputs - targets)**2
        loss = torch.mean(tmp)
        return loss

model = linearRegression(1, 1)

mseloss = MyMSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    print(f'epoch {epoch}, loss {loss}')

This gives completely identical results as using the standard MSE loss function.
Loss over epochs looks like this:

epoch 0, loss 884.2006225585938
epoch 1, loss 821.930908203125
epoch 2, loss 718.7732543945312
epoch 3, loss 538.1835327148438
epoch 4, loss 274.50909423828125
epoch 5, loss 55.115299224853516
epoch 6, loss 2.405021905899048
epoch 7, loss 0.47621214389801025
epoch 8, loss 0.1584305614233017
epoch 9, loss 0.09725229442119598
epoch 10, loss 0.0853077694773674
epoch 11, loss 0.08297089487314224
epoch 12, loss 0.08251354098320007
epoch 13, loss 0.08242412656545639
epoch 14, loss 0.08240655809640884
epoch 15, loss 0.08240310847759247
epoch 16, loss 0.08240246027708054
epoch 17, loss 0.08240233361721039
epoch 18, loss 0.08240240067243576
epoch 19, loss 0.08240223675966263
epoch 20, loss 0.08240225911140442
epoch 21, loss 0.08240220695734024
epoch 22, loss 0.08240220695734024
epoch 23, loss 0.08240220695734024
epoch 24, loss 0.08240220695734024

Third approach (custom loss function with my own backward method)

Now, the final version, where I implement my own gradients for the MSE. For that I define my own backward method in the loss function class, and apparently need to do mseloss = MyMSELoss.apply.

from torch.autograd import Function

class MyMSELoss(Function):
    @staticmethod
    def forward(ctx, y_pred, y):
        ctx.save_for_backward(y_pred, y)
        return ((y - y_pred)**2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        y_pred, y = ctx.saved_tensors
        grad_input = torch.mean(-2.0 * (y - y_pred)).repeat(y_pred.shape[0])
        # This fails, as does
        # grad_input = -2.0 * (y - y_pred)
        # I've also messed around with the sign, and that's not the sole problem, either.
        return grad_input, None

model = linearRegression(1, 1)

mseloss = MyMSELoss.apply
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    print(f'epoch {epoch}, loss {loss}')

This is where things go wrong: instead of decreasing, the training loss explodes. Now it looks like this:

epoch 0, loss 884.2006225585938
epoch 1, loss 3471.384033203125
epoch 2, loss 47768555520.0
epoch 3, loss 1.7422577779621402e+33
epoch 4, loss inf
epoch 5, loss nan
(epochs 6 through 24 likewise show loss nan)
The gradient of the MSE with respect to each prediction is elementwise: the derivative of mean((y - y_pred)^2) with respect to y_pred[i] is 2 * (y_pred[i] - y[i]) / N, where N = y_pred.shape[0]. Your version averages the gradient over all elements and repeats that single value, which destroys the per-element information. (The 2 is a constant scale factor and could even be neglected.) So change your backward function to this:

@staticmethod
def backward(ctx, grad_output):
    y_pred, y = ctx.saved_tensors
    grad_input = 2 * (y_pred - y) / y_pred.shape[0]
    return grad_input, None
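As a side check (my addition, not part of the original answer), torch.autograd.gradcheck can compare a custom backward against numerical gradients; it requires double-precision inputs:

y_pred = torch.randn(100, dtype=torch.double, requires_grad=True)
y = torch.randn(100, dtype=torch.double)
print(torch.autograd.gradcheck(MyMSELoss.apply, (y_pred, y)))  # True if the gradients match

Strictly speaking, grad_input should also be scaled by grad_output so the function composes correctly when the loss is not the final node of the graph; with loss.backward() the incoming grad_output is 1, so the shortcut above works.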
https://stackoverflow.com/questions/65947284/
using ModuleList, still getting ValueError: optimizer got an empty parameter list
With Pytorch I am attempting to use ModuleList to ensure model parameters are detected, and can be optimized. When calling the SGD optimizer I get the following error:

ValueError: optimizer got an empty parameter list

Can you please review the code below and advise?

class LR(nn.Module):
    def ___init___(self):
        super(LR, self).___init___()
        self.linear = nn.ModuleList()
        self.linear.append(nn.Linear(in_features=28*28, out_features=128, bias=True))

    def forward(self, x):
        y_p = torch.sigmoid(self.linear(x))
        return y_p

LR_model = LR()
optimizer = torch.optim.SGD(params = LR_model.parameters(), lr=learn_rate)
This seems to be a copy-paste issue: your __init__ has 3 underscores instead of 2, both at __init__(self) and super(LR, self).__init__(). Thus the init itself never ran, and no parameters were registered. Delete the extra underscores and try again, or try the code below. (Note also that a ModuleList is not callable, so the forward pass should index into it.)

class LR(nn.Module):
    def __init__(self):
        super(LR, self).__init__()
        self.linear = nn.ModuleList()
        self.linear.append(nn.Linear(in_features=28*28, out_features=128, bias=True))

    def forward(self, x):
        y_p = torch.sigmoid(self.linear[0](x))  # index into the ModuleList; it cannot be called directly
        return y_p

LR_model = LR()
optimizer = torch.optim.SGD(params = list(LR_model.parameters()), lr=learn_rate)
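Once the init actually runs, a quick check confirms the optimizer will see the parameters:

LR_model = LR()
print(sum(p.numel() for p in LR_model.parameters()))  # 28*28*128 weights + 128 biases = 100480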
https://stackoverflow.com/questions/65949258/
Pytorch: 1D target tensor expected, multi-target not supported
I want to train a 1D CNN on time series. I get the following error message 1D target tensor expected, multi-target not supported Here is the code with simulated data corresponding to the structures of my data as well as the error message import torch from torch.utils.data import DataLoader import torch.utils.data as data import torch.nn as nn import numpy as np import random from tqdm.notebook import tqdm device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(device) train_dataset = [] n_item = 20 for i in range(0,n_item): train_data = np.random.uniform(-10, 10, 500) train_dataset.append(train_data) train_dataset = np.asarray(train_dataset) train_dataset.shape ecg_train = torch.from_numpy(train_dataset).float() labels_train = np.random.randint(2, size=n_item) labels_train = torch.from_numpy(labels_train).long() val_dataset = [] n_item = 10 for i in range(0,n_item): val_data = np.random.uniform(-10, 10, 500) val_dataset.append(val_data) val_dataset = np.asarray(val_dataset) val_dataset.shape ecg_validation = torch.from_numpy(val_dataset).float() labels_validation = np.random.randint(2, size=n_item) labels_validation = torch.from_numpy(labels_validation).long() class ECGNet(data.Dataset): """ImageNet Limited dataset.""" def __init__(self, ecgs, labls, transform=None): self.ecg = ecgs self.target = labls self.transform = transform def __getitem__(self, idx): ecgVec = self.ecg[idx] #.reshape(10, -1) labelID = self.target[idx].reshape(1) return ecgVec,labelID def __len__(self): return len(self.ecg) train_data = ECGNet(ecg_train, labels_train, ) print("size of Training dataset: {}".format(len(train_data))) validation_data = ECGNet(ecg_validation, labels_validation, ) print("size of Training dataset: {}".format(len(validation_data))) batch_size = 1 train_dataloader = DataLoader(dataset = train_data, batch_size=batch_size, shuffle = True, num_workers = 0) val_dataloader = DataLoader(dataset = validation_data, batch_size=batch_size, shuffle = True, num_workers = 0) def train_epoch(model, train_dataloader, optimizer, loss_fn): losses = [] correct_predictions = 0 # Iterate mini batches over training dataset for images, labels in tqdm(train_dataloader): images = images.to(device) #labels = labels.squeeze_() labels = labels.to(device) #labels = labels.to(device=device, dtype=torch.int64) # Run predictions output = model(images) # Set gradients to zero optimizer.zero_grad() # Compute loss loss = loss_fn(output, labels) # Backpropagate (compute gradients) loss.backward() # Make an optimization step (update parameters) optimizer.step() # Log metrics losses.append(loss.item()) predicted_labels = output.argmax(dim=1) correct_predictions += (predicted_labels == labels).sum().item() accuracy = 100.0 * correct_predictions / len(train_dataloader.dataset) # Return loss values for each iteration and accuracy mean_loss = np.array(losses).mean() return mean_loss, accuracy def evaluate(model, dataloader, loss_fn): losses = [] correct_predictions = 0 with torch.no_grad(): for images, labels in dataloader: images = images.to(device) #labels = labels.squeeze_() labels = labels.to(device=device, dtype=torch.int64) # Run predictions output = model(images) # Compute loss loss = loss_fn(output, labels) # Save metrics predicted_labels = output.argmax(dim=1) correct_predictions += (predicted_labels == labels).sum().item() losses.append(loss.item()) mean_loss = np.array(losses).mean() accuracy = 100.0 * correct_predictions / len(dataloader.dataset) # Return mean loss and accuracy return mean_loss, 
accuracy def train(model, train_dataloader, val_dataloader, optimizer, n_epochs, loss_function): # We will monitor loss functions as the training progresses train_losses = [] val_losses = [] train_accuracies = [] val_accuracies = [] for epoch in range(n_epochs): model.train() train_loss, train_accuracy = train_epoch(model, train_dataloader, optimizer, loss_fn) model.eval() val_loss, val_accuracy = evaluate(model, val_dataloader, loss_fn) train_losses.append(train_loss) val_losses.append(val_loss) train_accuracies.append(train_accuracy) val_accuracies.append(val_accuracy) print('Epoch {}/{}: train_loss: {:.4f}, train_accuracy: {:.4f}, val_loss: {:.4f}, val_accuracy: {:.4f}'.format(epoch+1, n_epochs, train_losses[-1], train_accuracies[-1], val_losses[-1], val_accuracies[-1])) return train_losses, val_losses, train_accuracies, val_accuracies class Simple1DCNN(torch.nn.Module): def __init__(self): super(Simple1DCNN, self).__init__() self.layer1 = torch.nn.Conv1d(in_channels=50, out_channels=20, kernel_size=5, stride=2) self.act1 = torch.nn.ReLU() self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1) self.fc1 = nn.Linear(10* 3, 2) def forward(self, x): print(x.shape) x = x.view(1, 50,-1) print(x.shape) x = self.layer1(x) print(x.shape) x = self.act1(x) print(x.shape) x = self.layer2(x) print(x.shape) x = x.view(1,-1) print(x.shape) x = self.fc1(x) print(x.shape) print(x) return x model_a = Simple1DCNN() model_a = model_a.to(device) criterion = nn.CrossEntropyLoss() loss_fn = torch.nn.CrossEntropyLoss() n_epochs_a = 50 learning_rate_a = 0.01 alpha_a = 1e-5 momentum_a = 0.9 optimizer = torch.optim.SGD(model_a.parameters(), momentum = momentum_a, nesterov = True, weight_decay = alpha_a, lr=learning_rate_a) train_losses_a, val_losses_a, train_acc_a, val_acc_a = train(model_a, train_dataloader, val_dataloader, optimizer, n_epochs_a, loss_fn ) Error message: cpu size of Training dataset: 20 size of Training dataset: 10 0%| | 0/20 [00:00<?, ?it/s] torch.Size([1, 500]) torch.Size([1, 50, 10]) torch.Size([1, 20, 3]) torch.Size([1, 20, 3]) torch.Size([1, 10, 3]) torch.Size([1, 30]) torch.Size([1, 2]) tensor([[ 0.5785, -1.0169]], grad_fn=<AddmmBackward>) Traceback (most recent call last): File "SO_question.py", line 219, in <module> train_losses_a, val_losses_a, train_acc_a, val_acc_a = train(model_a, File "SO_question.py", line 137, in train train_loss, train_accuracy = train_epoch(model, train_dataloader, optimizer, loss_fn) File "SO_question.py", line 93, in train_epoch loss = loss_fn(output, labels) File "/Users/mymac/Documents/programming/python/mainenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/Users/mymac/Documents/programming/python/mainenv/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward return F.cross_entropy(input, target, weight=self.weight, File "/Users/mymac/Documents/programming/python/mainenv/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/Users/mymac/Documents/programming/python/mainenv/lib/python3.8/site-packages/torch/nn/functional.py", line 2264, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: 1D target tensor expected, multi-target not supported What am I doing wrong?
You are using nn.CrossEntropyLoss as the criterion for your training. You correctly passed the labels as indices of the ground truth class: 0s and 1s. However, as the error message suggests, it needs to be a 1D tensor! Simply remove the reshape in ECGNet's __getitem__: def __getitem__(self, idx): ecgVec = self.ecg[idx] labelID = self.target[idx] return ecgVec,labelID Edit I want to increase the batch_size to 8. But now I get the error [...] You are doing a lot of broadcasting (flattening) which surely will affect the batch size. As a general rule of thumb never fiddle with axis=0. For instance, if you have an input shape of (8, 500), straight off you have a problem when doing x.view(1, 50, -1). Since the resulting tensor will be (1, 50, 80) (the desired shape would have been (8, 50, 10)). Instead, you could broadcast with x.view(x.size(0), 50, -1). Same with x.view(1, -1) later down forward. You are looking to flatten the tensor, but you should not flatten it along with the batches, they need to stay separated! It's safer to use torch.flatten, yet I prefer nn.Flatten which flattens from axis=1 to axis=-1 by default. My personal advice is to start with a simple setup (without train loops etc...) to verify the architecture and intermediate output shapes. Then, add the necessary logic to handle the training. class ECGNet(data.Dataset): """ImageNet Limited dataset.""" def __init__(self, ecgs, labls, transform=None): self.ecg = ecgs self.target = labls self.transform = transform def __getitem__(self, idx): ecgVec = self.ecg[idx] labelID = self.target[idx] return ecgVec, labelID def __len__(self): return len(self.ecg) class Simple1DCNN(nn.Module): def __init__(self): super(Simple1DCNN, self).__init__() self.layer1 = nn.Conv1d(in_channels=50, out_channels=20, kernel_size=5, stride=2) self.act1 = nn.ReLU() self.layer2 = nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1) self.fc1 = nn.Linear(10*3, 2) self.flatten = nn.Flatten() def forward(self, x): x = x.view(x.size(0), 50, -1) x = self.layer1(x) x = self.act1(x) x = self.layer2(x) x = self.flatten(x) x = self.fc1(x) return x batch_size = 8 train_data = ECGNet(ecg_train, labels_train) train_dl = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True, num_workers=0) model = Simple1DCNN() criterion = nn.CrossEntropyLoss() Then >>> x, y = next(iter(train_dl)) >>> y_hat = model(x) >>> y_hat.shape torch.Size([8, 2]) Also, make sure your loss works: >>> criterion(y_hat, y) tensor(..., grad_fn=<NllLossBackward>)
https://stackoverflow.com/questions/65951522/
Pip does not recognize torchaudio library
When I try the command pip install torchaudio, I get this error:

ERROR: Could not find a version that satisfies the requirement torchaudio
ERROR: No matching distribution found for torchaudio

I use Windows 10.
You could try doing this:

pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html

Found on the GitHub repo: https://github.com/pytorch/audio
https://stackoverflow.com/questions/65960256/
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15____
I am facing the error RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15. My input is a binary vector of length 340, and my target is a binary vector of length 8. For loss = criterion(outputs, stat_batch), I got outputs.shape = [64, 8] and stat_batch.shape = [64, 8].

Here is the model:

class MMP(nn.Module):
    def __init__(self, M=1):
        super(MMP, self).__init__()
        # input layer
        self.layer1 = nn.Sequential(
            nn.Conv1d(340, 256, kernel_size=1, stride=1, padding=0),
            nn.ReLU())
        self.layer2 = nn.Sequential(
            nn.Conv1d(256, 128, kernel_size=1, stride=1, padding=0),
            nn.ReLU())
        self.layer3 = nn.Sequential(
            nn.Conv1d(128, 64, kernel_size=1, stride=1, padding=0),
            nn.ReLU())
        self.drop1 = nn.Sequential(nn.Dropout())
        self.batch1 = nn.BatchNorm1d(128)
        # LSTM
        self.lstm1 = nn.Sequential(nn.LSTM(
            input_size=64,
            hidden_size=128,
            num_layers=2,
            bidirectional=True,
            batch_first=True))
        self.fc1 = nn.Linear(128*2, 8)
        self.sof = nn.Softmax(dim=-1)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.drop1(out)
        out = out.squeeze()
        out = out.unsqueeze(0)
        #out = out.batch1(out)
        out, _ = self.lstm1(out)
        print("lstm", out.shape)
        out = self.fc1(out)
        out = out.squeeze()
        #out = out.squeeze()
        out = self.sof(out)
        return out

# train model
criterion = nn.CrossEntropyLoss()
if CUDA:
    criterion = criterion.cuda()
optimizer = optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=0.9)

for epoch in range(N_EPOCHES):
    tot_loss = 0
    # Training
    for i, (seq_batch, stat_batch) in enumerate(training_generator):
        # Transfer to GPU
        seq_batch, stat_batch = seq_batch.to(device), stat_batch.to(device)
        print(i)
        print(seq_batch)
        print(stat_batch)
        optimizer.zero_grad()
        # Model computation
        seq_batch = seq_batch.unsqueeze(-1)
        outputs = model(seq_batch)
        if CUDA:
            loss = criterion(outputs, stat_batch).float().cuda()
        else:
            loss = criterion(outputs.view(-1), stat_batch.view(-1))
        print(f"Epoch: {epoch}, number: {i}, loss: {loss.item()}...\n\n")
        tot_loss += loss.item()
        print(f"Epoch: {epoch}, file_number: {i}, loss: {loss.item()}...\n\n")
        loss.backward()
        optimizer.step()
Your target stat_batch must have a shape of (64,) because nn.CrossEntropyLoss takes in class indices, not one-hot-encoding. Either construct your label tensor appropriately or use stat_batch.argmax(axis=1) instead.
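For example, a small sketch with the shapes from the question (reusing its imports):

outputs = torch.randn(64, 8)     # model outputs
stat_batch = torch.zeros(64, 8)  # one-hot targets
stat_batch[torch.arange(64), torch.randint(0, 8, (64,))] = 1

criterion = nn.CrossEntropyLoss()
loss = criterion(outputs, stat_batch.argmax(axis=1))  # target is now shape (64,)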
https://stackoverflow.com/questions/65960597/
Python: problem re-writing a Jupyter notebook as a Python script
I have a very simple RNN model which I can execute on Jupyter notebook without any problem, now I want to embed this model in my Django project, so I downloaded the .ipynb file then saved it as .py file, then I just put it in my Django project. However, the first problem is VSCode says I have unresolved import 'MLutils, the MLutils is my utility file in the same folder. The second problem is if I just run this file, I'll get this error RuntimeError: cannot perform reduction function argmax on a tensor with no elements because the operation does not have an identity but if I use the Run Cell Above button of the VSCode, I'll get the correct result. The code I want to run is this, called MLTrain.py #!/usr/bin/env python # coding: utf-8 # In[1]: import torch import torch.nn as nn import matplotlib.pyplot as plt from MLutils import ALL_LETTERS, N_LETTERS from MLutils import load_data, letter_to_tensor, line_to_tensor, random_training_example class RNN(nn.Module): # implement RNN from scratch rather than using nn.RNN # # number of possible letters, hidden size, categories number def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size #Define 2 different liner layers self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input_tensor, hidden_tensor): # combine input and hidden tensor combined = torch.cat((input_tensor, hidden_tensor), 1) # apply the Linear layers and the softmax hidden = self.i2h(combined) output = self.i2o(combined) output = self.softmax(output) # return 2 different tensors return output, hidden # need some initial hidden states in the begining. def init_hidden(self): return torch.zeros(1, self.hidden_size) # dictionary with the country as the key and names as values category_lines, all_categories = load_data() # number of categories n_categories = len(all_categories) # a Hyperparameter n_hidden = 128 # number of possible letters, hidden size, output size rnn = RNN(N_LETTERS, n_hidden, n_categories) # one step input_tensor = letter_to_tensor('A') hidden_tensor = rnn.init_hidden() output, next_hidden = rnn(input_tensor, hidden_tensor) #print(output.size()) #>>> size: [1,18] #print(next_hidden.size()) #>>> size: [1,128] # whole sequence/name input_tensor = line_to_tensor('if') hidden_tensor = rnn.init_hidden() output, next_hidden = rnn(input_tensor[0], hidden_tensor) print(output.size()) print(next_hidden.size()) # apply softmax in the end. # this is the likelyhood of each character of each category def category_from_output(output): # return index of the greatest value category_idx = torch.argmax(output).item() return all_categories[category_idx] print(category_from_output(output)) criterion = nn.NLLLoss() # hyperparameter learning_rate = 0.005 optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate) # whole name as tensor, def train(line_tensor, category_tensor): hidden = rnn.init_hidden() #line_tensor.size()[0]: the length of the name for i in range(line_tensor.size()[0]): # apply the current character and the previous hidden state. 
output, hidden = rnn(line_tensor[i], hidden) loss = criterion(output, category_tensor) optimizer.zero_grad() loss.backward() optimizer.step() return output, loss.item() current_loss = 0 all_losses = [] plot_steps, print_steps = 100, 500 n_iters = 2000 for i in range(n_iters): category, line, category_tensor, line_tensor = random_training_example(category_lines, all_categories) output, loss = train(line_tensor, category_tensor) current_loss += loss if (i+1) % plot_steps == 0: all_losses.append(current_loss / plot_steps) current_loss = 0 if (i+1) % print_steps == 0: guess = category_from_output(output) correct = "CORRECT" if guess == category else f"WRONG ({category})" print(f"{i+1} {(i+1)/n_iters*100} {loss:.4f} {line} / {guess} {correct}") plt.figure() plt.plot(all_losses) plt.show() # model can be saved def predict(input_line): print(f"\n> {input_line}") with torch.no_grad(): line_tensor = line_to_tensor(input_line) hidden = rnn.init_hidden() for i in range(line_tensor.size()[0]): output, hidden = rnn(line_tensor[i], hidden) guess = category_from_output(output) print(guess) if __name__ == "__main__": predict("abcde 1 ifelse") # In[ ]: The MLutils.py is this import io import os import unicodedata import string import glob import torch import random # alphabet small + capital letters + " .,;'" ALL_LETTERS = string.ascii_letters + " .,;'" N_LETTERS = len(ALL_LETTERS) # Turn a Unicode string to plain ASCII, thanks to https://stackoverflow.com/a/518232/2809427 def unicode_to_ascii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' and c in ALL_LETTERS ) def load_data(): # Build the category_lines dictionary, a list of names per language category_lines = {} all_categories = [] def find_files(path): return glob.glob(path) # Read a file and split into lines def read_lines(filename): lines = io.open(filename, encoding='utf-8').read().strip().split('\n') return [unicode_to_ascii(line) for line in lines] for filename in find_files('data/categories/*.txt'): category = os.path.splitext(os.path.basename(filename))[0] all_categories.append(category) lines = read_lines(filename) category_lines[category] = lines return category_lines, all_categories # Find letter index from all_letters, e.g. 
"a" = 0 def letter_to_index(letter): return ALL_LETTERS.find(letter) # Just for demonstration, turn a letter into a <1 x n_letters> Tensor def letter_to_tensor(letter): tensor = torch.zeros(1, N_LETTERS) tensor[0][letter_to_index(letter)] = 1 return tensor # Turn a line into a <line_length x 1 x n_letters>, # or an array of one-hot letter vectors def line_to_tensor(line): tensor = torch.zeros(len(line), 1, N_LETTERS) for i, letter in enumerate(line): tensor[i][0][letter_to_index(letter)] = 1 return tensor def random_training_example(category_lines, all_categories): def random_choice(a): random_idx = random.randint(0, len(a) - 1) return a[random_idx] category = random_choice(all_categories) line = random_choice(category_lines[category]) category_tensor = torch.tensor([all_categories.index(category)], dtype=torch.long) line_tensor = line_to_tensor(line) return category, line, category_tensor, line_tensor if __name__ == '__main__': print(ALL_LETTERS) print(unicode_to_ascii('Ślusàrski')) # category_lines, all_categories = load_data() # print(category_lines['Flow'][:5]) # print(letter_to_tensor('J')) # [1, 57] # print(line_to_tensor('Jones').size()) # [5, 1, 57] The Run Above Button is this If I comment out the print here, I'll get this error Error Full-Text Traceback (most recent call last): File "c:\Users\Leslie\Desktop\todo_drf\MLModel\MLTrain.py", line 106, in <module> category, line, category_tensor, line_tensor = random_training_example(category_lines, all_categories) File "c:\Users\Leslie\Desktop\todo_drf\MLModel\MLutils.py", line 84, in random_training_example category = random_choice(all_categories) File "c:\Users\Leslie\Desktop\todo_drf\MLModel\MLutils.py", line 81, in random_choice random_idx = random.randint(0, len(a) - 1) File "C:\Users\Leslie\AppData\Local\Programs\Python\Python39\lib\random.py", line 338, in randint return self.randrange(a, b+1) File "C:\Users\Leslie\AppData\Local\Programs\Python\Python39\lib\random.py", line 316, in randrange raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width)) ValueError: empty range for randrange() (0, 0, 0) That's really wired or I am really stupid on somewhere. Someone please help me! Many thanks.
I figured it out. If I use this file structure:

MLModel
    data
        categories
            Dataset.txt
    MLTrain.py
    MLutils.py

then for the file path I should use MLModel\data\categories\*.txt in .py scripts, but data\categories\*.txt in the Jupyter notebook. Anyway, pay attention to file paths when you want to rewrite a .ipynb as a .py.

----From a young man who just lost his whole night to this.
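A more robust alternative (my own sketch, not part of the original answer) is to resolve the data directory relative to the script file itself, so the result no longer depends on the working directory the script is launched from:

import glob
import os

# Directory that contains MLutils.py
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def find_files(pattern):
    # Resolve the glob pattern relative to the script's own location
    return glob.glob(os.path.join(BASE_DIR, pattern))

find_files('data/categories/*.txt')

Note that __file__ is not defined inside a Jupyter notebook, so this applies to the .py version only.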
https://stackoverflow.com/questions/65963949/
How to convert output to 1/0
I am relatively new to building deep learning models, and I seem to be completely confused and stuck with errors related to shape and size. Here's the LSTM model and relevant code:

class LSTMTagger(nn.Module):
    def __init__(self):
        super(LSTMTagger, self).__init__()
        self.embedding = nn.Embedding(wv.vectors.shape[0], 512)  # embedding_matrix.shape[1]
        self.lstm1 = nn.LSTM(input_size=512, hidden_size=64, dropout=0.1, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(p=0.25)
        self.linear1 = nn.Linear(in_features=128, out_features=64)
        self.dropout = nn.Dropout(p=0.25)
        self.linear2 = nn.Linear(in_features=64, out_features=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, X):
        X_embed = self.embedding(X)
        outr1, _ = self.lstm1(X_embed)
        xr = self.dropout(outr1)
        xr = self.linear1(xr)
        xr = self.dropout(xr)
        xr = self.linear2(xr)
        outr4 = self.sigmoid(xr)
        outr4 = outr4.view(1, -1)
        return outr4

model = LSTMTagger()
torch.multiprocessing.set_sharing_strategy('file_system')
if torch.cuda.device_count() > 1:
    print("Using ", torch.cuda.device_count(), " GPUs")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    # model = model.load_state_dict(torch.load('best_model_state.bin'))
    model = nn.DataParallel(model, device_ids=[0])

torch.cuda.empty_cache()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples):
    model = model.train()
    losses = []
    correct_predictions = 0
    # Iterate mini batches over training dataset
    for d in data_loader:
        print(f"Input ids: {np.shape(d['input_ids'])}\n len: {len(d['input_ids'][0])}")
        input_ids = d["input_ids"].to(device)
        targets = d["targets"].to(device)
        # Run predictions
        output = model(input_ids)
        _, preds = torch.max(output, dim=1)
        print(f"outputs is {np.shape(output)}")
        print(f"targets is {targets}")
        # Compute loss
        loss = criterion(output.squeeze(), targets)
        # loss = loss_fn(output, targets)
        # Log metrics
        correct_predictions += torch.sum(preds == targets)
        losses.append(loss.item())
        # Backpropagate (compute gradients)
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        # Make an optimization step (update parameters)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    return correct_predictions.double() / n_examples, np.mean(losses)

EPOCHS = 6
optimizer = optim.Adam(model.parameters(), lr=2e-5)
total_steps = len(data_train) * EPOCHS
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
loss_fn = nn.CrossEntropyLoss().to(device)
history = defaultdict(list)
best_accuracy = 0
criterion = nn.BCELoss()
print('starting training')
# exit()

for epoch in range(EPOCHS):
    print(f'Epoch {epoch + 1}/{EPOCHS}')
    print('-' * 10)
    train_acc, train_loss = train_epoch(model, data_train, loss_fn, optimizer, device, scheduler, len(df_train))

In this instance the sample input is a tensor of size torch.Size([1, 512]) that looks like this (a handful of token ids followed by zero padding out to length 512; the full printout is abbreviated here):

tensor([[44561,   972,  7891,    94,  2191,   131,     0,     0,     0,  ...,     0]], device='cuda:0')

and the output label (targets from the train_epoch function) is just a simple 1 or 0 label in tensor form, such as tensor([1], device='cuda:0').

I have been facing issues consistently with this approach. Initially the output was 1x512x1, so I added outr4 = outr4.view(1,-1) after the sigmoid layer. Then the output shape was reduced to 1x512, and I used the squeeze function, but I still face errors such as this one:

ValueError: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([512])) is deprecated. Please ensure they have the same size.

I have spent a lot of time trying to figure out what was going on, but to no avail. Isn't the output supposed to be either 1 or 0, instead of being a 1x512 shaped tensor? I am relatively new to building models, so please excuse my lack of knowledge.
I suggest you track the shapes of your tensors as they pass through your forward function, e.g. by printing X.shape after every operation. Then you'll be less confused as you can understand the transformations. In your case I think it goes as follows: Input: [1, 512, 1] Embedding: [1, 512, 512] LSTM: [1, 512, 64x2] Lin1: [1, 512, 64] Lin2: [1, 512, 1] Then your activation function doesn't reshape the tensor, it just squashes the values in the last dimension to fit between 0 and 1. Logically, you run into an issue as you have 512 outputs (one for every word/token) instead of 1 (for the sentence). You have never reduced the word dimension to 1. To fix this, you have to flatten/pool the 512 dimension at some point in your model. For example, you could average over the word dimension after you run it through your LSTM: LSTM: [1, 512, 64x2] Avg: [1, 64x2] Lin1: [1, 64] Lin2: [1, 1] Or you can take only the last hidden state of your LSTM, which would also be of shape [1, 1, 128]. EDIT: Also, be careful of using so much padding. This might have undesired influence on your outcomes. You should try to work with a mask that remembers which inputs were actual words and which were padding spots. For example, averaging over so much padding will greatly lower the results; you should only average over the actual tokens. PyTorch has some functionality for the LSTM as well for this, in the form of pack_padded_sequence ( https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html ) and pad_packed_sequence. An example of their usage : https://suzyahyah.github.io/pytorch/2019/07/01/DataLoader-Pad-Pack-Sequence.html
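Ignoring the padding caveat above for a moment, a minimal sketch of the averaging option (reusing the layer names from the question; the mean over the word dimension is my illustrative choice, not the only one):

def forward(self, X):
    X_embed = self.embedding(X)     # (1, 512, 512)
    outr1, _ = self.lstm1(X_embed)  # (1, 512, 128): hidden_size=64, bidirectional
    xr = outr1.mean(dim=1)          # average over the word dimension -> (1, 128)
    xr = self.dropout(xr)
    xr = self.linear1(xr)           # (1, 64)
    xr = self.dropout(xr)
    xr = self.linear2(xr)           # (1, 1)
    return self.sigmoid(xr)         # one probability per sequence

The target of shape (1,) then matches the output of shape (1, 1) after a squeeze.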
https://stackoverflow.com/questions/65964512/
Scale down image represented in a tensor
I use the MNIST dataset to learn PyTorch. This is from the documentation to get a picture.

import torch.nn.functional as F
import torch
from torchvision import datasets, transforms

The tensor comes from the torchvision dataset.

# Create prediction
images, labels = next(iter(trainloader))
images[0].shape

This is the tensor (a 28x28 image of normalized pixel values; the full printout is abbreviated here, most entries are -1.0000 background with the digit's pixel values in the interior rows):

tensor([[-1.0000, -1.0000, -1.0000,  ..., -1.0000, -1.0000, -1.0000],
        [-1.0000, -1.0000, -1.0000,  ..., -1.0000, -1.0000, -1.0000],
        ...,
        [-1.0000, -1.0000, -1.0000,  ..., -1.0000, -1.0000, -1.0000]])
Out[181]: torch.Size([1, 28, 28])

I want to scale the image down to a 14x14 picture, so I guess I need a torch.Size([1, 14, 14]).

I tried this, but it results in a different format:

F.interpolate(images[0], 14).shape
Out[184]: torch.Size([1, 28, 14])

I expected this to work, but it results in an error:

F.interpolate(images[0], (14, 14))
ValueError: size shape must match input shape. Input is 1D, size is 2

Does anyone know how to get my desired result?
From the docs: The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. Currently the first 28 in your shape, and the 14 in the output size, are interpreted as the number of channels/colors, not the image height. Therefore the scaling does not happen in that dimension. Unsqueeze your input to be of shape (1, 1, 28, 28) to get the correct behavior.
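A minimal sketch of that fix, assuming images comes from the trainloader shown in the question:

import torch.nn.functional as F

img = images[0].unsqueeze(0)   # (1, 28, 28) -> (1, 1, 28, 28): add a batch dimension
small = F.interpolate(img, size=(14, 14))
print(small.shape)             # torch.Size([1, 1, 14, 14])
print(small.squeeze(0).shape)  # torch.Size([1, 14, 14]), the desired result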
https://stackoverflow.com/questions/65967259/
Create a custom dataset from a folder with separate files
I am using PyTorch's custom dataset feature to create a custom dataset from separate files in one folder. Each file contains 123 rows and 123 columns, and all data points are integers. My issue is that the resources I've come across cater to data stored in a single .csv file, whereas mine is split across files. Moreover, opening the data as an image after it has been transformed doesn't work either. I'm not sure how to proceed, as my code gives:

AttributeError: 'Image' object has no attribute 'read'

import os
from torch.utils.data import DataLoader, Dataset
from numpy import genfromtxt

# Custom dataset
class CONCEPTDataset(Dataset):
    """ Concept Dataset """

    def __init__(self, file_dir, transforms=None):
        """
        Args:
            file_dir (string): Directory with all the images.
            transforms (optional): Changes on the data.
        """
        self.file_dir = file_dir
        self.transforms = transforms
        self.concepts = os.listdir(file_dir)
        self.concepts.sort()
        self.concepts = [os.path.join(file_dir, concept) for concept in self.concepts]

    def __len__(self):
        return len(self.concepts)

    def __getitem__(self, idx):
        image = self.concepts[idx]
        # csv file to a numpy array using genfromtxt
        data = genfromtxt(image, delimiter=',')
        data = self.transforms(data.unsqueeze(0))
        return data
PIL.Image.fromarray is used to convert an array to a PIL Image while Image.open is used to load an image file from the file system. You don't need either of those two since you already have a NumPy array representing your image and are looking to return it. PyTorch will convert it to torch.Tensor automatically if you plug your dataset to a torch.data.utils.DataLoader.
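As a hedged sketch, the __getitem__ from the question could convert the NumPy array directly; note that unsqueeze must come after the conversion to a tensor, since NumPy arrays don't have that method (assumes import torch at the top of the file):

def __getitem__(self, idx):
    # csv file to a numpy array using genfromtxt
    data = genfromtxt(self.concepts[idx], delimiter=',')
    data = torch.from_numpy(data).float().unsqueeze(0)  # (123, 123) -> (1, 123, 123)
    if self.transforms is not None:
        data = self.transforms(data)
    return data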
https://stackoverflow.com/questions/65972043/
High loss in neural network sequence classification
I am using neural network to classify sequence of length 340 to 8 classes, I am using cross entropy as loss. I am getting very high number for the loss . I am wondering if I did mistake in calculating the loss for each epoch. Or should i use other loss function . criterion = nn.CrossEntropyLoss() if CUDA: criterion = criterion.cuda() optimizer = optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=0.9) loss_list = [] for epoch in range(N_EPOCHES): tot_loss=0 running_loss =0 model.train() loss_values = [] acc_list = [] acc_list = torch.FloatTensor(acc_list) sum_acc = 0 # Training for i, (seq_batch, stat_batch) in enumerate(training_generator): # Transfer to GPU seq_batch, stat_batch = seq_batch.to(device), stat_batch.to(device) optimizer.zero_grad() # Model computation seq_batch = seq_batch.unsqueeze(-1) outputs = model(seq_batch) loss = criterion(outputs.argmax(1), stat_batch.argmax(1)) loss.backward() optimizer.step() # print statistics running_loss += loss.item()*seq_batch.size(0) loss_values.append(running_loss/len(training_set)) if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 50000),"acc",(outputs.argmax(1) == stat_batch.argmax(1)).float().mean()) running_loss = 0.0 sum_acc += (outputs.argmax(1) == stat_batch.argmax(1)).float().sum() print("epoch" , epoch, "acc", sum_acc/len(training_generator)) print('Finished Training') [1, 2000] loss: 14.205 acc tensor(0.5312, device='cuda:0') [1, 4000] loss: 13.377 acc tensor(0.4922, device='cuda:0') [1, 6000] loss: 13.159 acc tensor(0.5508, device='cuda:0') [1, 8000] loss: 13.050 acc tensor(0.5547, device='cuda:0') [1, 10000] loss: 12.974 acc tensor(0.4883, device='cuda:0') epoch 1 acc tensor(133.6352, device='cuda:0') [2, 2000] loss: 12.833 acc tensor(0.5781, device='cuda:0') [2, 4000] loss: 12.834 acc tensor(0.5391, device='cuda:0') [2, 6000] loss: 12.782 acc tensor(0.5195, device='cuda:0') [2, 8000] loss: 12.774 acc tensor(0.5508, device='cuda:0') [2, 10000] loss: 12.762 acc tensor(0.5156, device='cuda:0') epoch 2 acc tensor(139.2496, device='cuda:0') [3, 2000] loss: 12.636 acc tensor(0.5469, device='cuda:0') [3, 4000] loss: 12.640 acc tensor(0.5469, device='cuda:0') [3, 6000] loss: 12.648 acc tensor(0.5508, device='cuda:0') [3, 8000] loss: 12.637 acc tensor(0.5586, device='cuda:0') [3, 10000] loss: 12.620 acc tensor(0.6016, device='cuda:0') epoch 3 acc tensor(140.6962, device='cuda:0') [4, 2000] loss: 12.520 acc tensor(0.5547, device='cuda:0') [4, 4000] loss: 12.541 acc tensor(0.5664, device='cuda:0') [4, 6000] loss: 12.538 acc tensor(0.5430, device='cuda:0') [4, 8000] loss: 12.535 acc tensor(0.5547, device='cuda:0') [4, 10000] loss: 12.548 acc tensor(0.5820, device='cuda:0') epoch 4 acc tensor(141.6522, device='cuda:0')
I am getting very high number for the loss What makes you think this is high? What do you compare this to? Yes, you should use nn.CrossEntropyLoss for multi-class classification tasks. And your training loss seems perfectly fine to me. At initialization, you should have loss = -log(1/8) = ~2.
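A quick way to check that claim: a model that outputs uniform logits over the 8 classes yields exactly this cross-entropy at initialization.

import torch
import torch.nn as nn

logits = torch.zeros(16, 8)                    # uniform logits for a batch of 16
targets = torch.randint(0, 8, (16,))
print(nn.CrossEntropyLoss()(logits, targets))  # tensor(2.0794) == -log(1/8)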
https://stackoverflow.com/questions/65975711/
Applying a simple transformation to get a binary image using pytorch
I'd like to binarize the image before passing it to the dataloader. I have created a dataset class which works well, but in the __getitem__() method I'd like to threshold the image:

def __getitem__(self, idx):
    # Open image, apply transforms and return with label
    img_path = os.path.join(self.dir, self.filelist[idx])
    image = Image.open(img_path)
    label = self.x_data.iloc[idx]["label"]

    # Applying transformation to the image
    if self.transforms is not None:
        image = self.transforms(image)

    # applying threshold here:
    my_threshold = 240
    image = image.point(lambda p: p < my_threshold and 255)
    image = torch.tensor(image)

    return image, label

And then I tried to invoke the dataset:

data_transformer = transforms.Compose([
    transforms.Resize((10, 10)),
    transforms.Grayscale()
    # transforms.ToTensor()
])
train_set = MyNewDataset(data_path, data_transformer, rows_train)

Since I have applied the threshold on a PIL object, I need to apply a conversion to a tensor object afterwards, but for some reason it crashes. Can somebody please assist me?
Why not apply the binarization after the conversion from PIL.Image to torch.Tensor? class ThresholdTransform(object): def __init__(self, thr_255): self.thr = thr_255 / 255. # input threshold for [0..255] gray level, convert to [0..1] def __call__(self, x): return (x > self.thr).to(x.dtype) # do not change the data type Once you have this transformation, you simply add it: data_transformer = transforms.Compose([ transforms.Resize((10, 10)), transforms.Grayscale(), transforms.ToTensor(), ThresholdTransform(thr_255=240) ])
https://stackoverflow.com/questions/65979207/
Customized loss function in PyTorch which uses DNN outputs and additional variables
(I am sorry if my English is not good) I can create my own loss function in PyTorch if the function requires only DNN output vector(predicted) and DNN output vector(ground truth). I want to use additional variables to calculate the loss. I make my training and test data like below; DNN input: Data_A -> processing 1 -> Data_X DNN output: Data_A -> processing 1 -> Data_X Data_B -> processing 1 -> Data_P Data_X , Data_P -> processing 2 -> Data_Y and I divide Data_X and Data_Y into train data and test data. x_train, x_test, y_train, y_test = train_test_split(Data_X,Data_Y,test_size=0.2, random_state=0) I want to use Data_A, Data_B, Data_Y(predicted), and Data_Y(ground truth) to calculate the loss. I saw many examples for customized loss function which only use Data_Y(predicted) and Data_Y(ground truth). I could use such a customized loss function before. However, I don't know what to do when I want to use another additional variables. Is there a good way? Thank you for your help!
You have no restrictions over the structure of your loss function (as long as the gradients make sense). For instance, you can have:

class MyLossLayer(nn.Module):
    def __init__(self):
        super(MyLossLayer, self).__init__()

    def forward(self, pred_a, pred_b, gt_target):
        # I'm just guessing here - do whatever you want as long as you do not screw the gradients.
        loss = pred_a * (pred_b - gt_target)
        return loss.mean()
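A quick usage sketch; pred_a and pred_b stand in for whatever you derive from Data_A and Data_B, and gt_target for the ground-truth Data_Y:

criterion = MyLossLayer()
loss = criterion(pred_a, pred_b, gt_target)  # extra variables go in like any other argument
loss.backward()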
https://stackoverflow.com/questions/65979838/
Cuda 10.2 not recognised on Pip installed Pytorch 1.7.1
Pip installing Pytorch 1.7.1 here on Gtx1660(latest drivers installed) doesn't recognise the installed Cuda toolkit 10.2 on my machine(Windows10). As this is a personal project, I don't wish to use Anaconda. How to resolve this? #INSTALLATION (10.2 CUDA SUPPORT) pip install torch===1.7.1 torchvision===0.8.2 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html #CODE import torch device = torch.device("cuda") # device = torch.device("cuda:0") #ALSO NOT WORKING cuda_available = torch.cuda.is_available() cuda_init = torch.cuda.is_initialized() print(f'device_current = {device},cuda_available = {cuda_available} and cuda_init = {cuda_init}\n') torch.cuda.init() print(f'device_current = {device} and cuda_init = {cuda_init}') #TERMINAL device_current = cuda:0,cuda_available = False and cuda_init = False Traceback (most recent call last): File "f:\Script\Ai\Pytorch\mod_test\test1.py", line 8, in <module> torch.cuda.init() File "C:\Users\User0\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\cuda\__init__.py", line 137, in init _lazy_init() File "C:\Users\User0\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init raise AssertionError("Torch not compiled with CUDA enabled") AssertionError: Torch not compiled with CUDA enabled
I could be wrong, but here is what I found while searching on their package registry. For a PyTorch 1.7.1 pip install. The instructions on pytorch.org are: ## cu101 pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html ## cu110 pip install torch===1.7.1+cu110 torchvision===0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html Yet the command for CUDA 10.2 support is: ## cu102 pip install torch===1.7.1 torchvision===0.8.2 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html This seems off because the CUDA version is not stated anywhere in the command. Yet download.pytorch.org provides support for this particular PyTorch version with cu92, cu101, cu102, and cu110. You can either use: pip install torch==1.7.0 torchvision==0.8.1 -f https://download.pytorch.org/whl/cu102/torch_stable.html or try this instead from the main Torch stable directory: pip install torch==1.7.1+cu102 torchvision==0.8.2+cu102 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
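Whichever command you use, a quick way to verify the installed wheel actually ships CUDA support:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# expect the CUDA version (e.g. 10.2) and True for a working build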
https://stackoverflow.com/questions/65980206/
PyTorch CNN linear layer shape after conv2d
I was trying to learn PyTorch and came across a tutorial where a CNN is defined like below, class Net(Module): def __init__(self): super(Net, self).__init__() self.cnn_layers = Sequential( # Defining a 2D convolution layer Conv2d(1, 4, kernel_size=3, stride=1, padding=1), BatchNorm2d(4), ReLU(inplace=True), MaxPool2d(kernel_size=2, stride=2), # Defining another 2D convolution layer Conv2d(4, 4, kernel_size=3, stride=1, padding=1), BatchNorm2d(4), ReLU(inplace=True), MaxPool2d(kernel_size=2, stride=2), ) self.linear_layers = Sequential( Linear(4 * 7 * 7, 10) ) # Defining the forward pass def forward(self, x): x = self.cnn_layers(x) x = x.view(x.size(0), -1) x = self.linear_layers(x) return x I understood how the cnn_layers are made. After the cnn_layers, the data should be flattened and given to linear_layers. I don't understand how the number of features to Linear is 4*7*7. I understand that 4 is the output dimension from the last Conv2d layer. How is 7*7 coming in to picture? Does stride or padding got any role in that? Input image shape is [1, 28, 28]
The Conv2d layers have a kernel size of 3 with stride and padding of 1, which means they don't change the spatial size of the image. There are two MaxPool2d layers which each reduce the spatial dimensions from (H, W) to (H/2, W/2). So, for each batch, the output of the last convolution with 4 output channels has a shape of (batch_size, 4, H/4, W/4). In the forward pass the feature tensor is flattened by x = x.view(x.size(0), -1), which gives it the shape (batch_size, 4*(H/4)*(W/4)) = (batch_size, H*W/4). I assume H and W are 28, for which the linear layer would take inputs of shape (batch_size, 196).
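A small check of that arithmetic, using the Net class from the question:

import torch

net = Net()
x = torch.randn(1, 1, 28, 28)               # a single 28x28 grayscale image
feats = net.cnn_layers(x)
print(feats.shape)                          # torch.Size([1, 4, 7, 7])
print(feats.view(feats.size(0), -1).shape)  # torch.Size([1, 196]) == 4*7*7, matching Linear(4*7*7, 10)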
https://stackoverflow.com/questions/65982152/
Pretrained model or training from scratch for object detection?
I have a dataset composed of 10k-15k pictures for supervised object detection, which is very different from ImageNet or COCO (the pictures are much darker and represent completely different, industry-related things). The model currently used is a Faster R-CNN which extracts features with a ResNet used as a backbone. Could training the backbone of the model from scratch in one stage and then training the whole network in another stage be beneficial for the task, instead of loading the network pretrained on COCO and then retraining all the layers of the whole network in a single stage?
From my experience, here are some important points: your train set is not big enough to train the detector from scratch (though it depends on the network configuration; fasterrcnn+resnet18 can work). It is better to use a network pre-trained on ImageNet; the domain the network was pre-trained on is not really that important. The network, especially a big one, needs to learn all those arches, circles, and other primitive figures in order to use that knowledge for detecting more complex objects; the brightness of your train images can be important but is not something to stop you from using a pre-trained network; training from scratch requires many more epochs and much more data. The longer the training is, the more complex your LR control algorithm should be. At a minimum, it should not be constant and should change the LR based on the cumulative loss. The initial settings depend on multiple factors, such as network size, augmentations, and the number of epochs; I played a lot with fasterrcnn+resnet (various numbers of layers) and the other networks. I recommend you use maskrcnn instead of fasterrcnn. Just tell it not to use the masks and not to do the segmentation. I don't know why, but it gives much better results. Don't spend your time on mobilenet; with your train set size you will not be able to train it to reasonable AP and AR. Start with a maskrcnn+resnet18 backbone, as in the sketch below.
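A hedged sketch of that last recommendation (Mask R-CNN with an ImageNet-pretrained ResNet-18 FPN backbone via torchvision; num_classes is a placeholder for your dataset):

from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

backbone = resnet_fpn_backbone('resnet18', pretrained=True)  # ImageNet weights
model = MaskRCNN(backbone, num_classes=2)  # background + your object class(es)
# At inference you can simply ignore the 'masks' key in the output dicts and
# keep 'boxes', 'labels' and 'scores' to use it as a plain detector.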
https://stackoverflow.com/questions/65982245/
how to save the generated images from this code separated
I have run the StarGAN code from GitHub; this code saves all the generated images in one picture. How can I save each generated image separately to a single folder? I do not want to save all the images in one picture. I want to save the generated images from the trained model, not as samples with all the images in one picture, but as individual files containing the generated images. This is the part of the code I want to change:

# Translate fixed images for debugging.
if (i+1) % self.sample_step == 0:
    with torch.no_grad():
        x_fake_list = [x_fixed]
        for c_fixed in c_fixed_list:
            x_fake_list.append(self.G(x_fixed, c_fixed))
        x_concat = torch.cat(x_fake_list, dim=3)
        sample_path = os.path.join(self.sample_dir, '{}-images.jpg'.format(i+1))
        save_image(self.denorm(x_concat.data.cpu()), sample_path, nrow=1, padding=0)
        print('Saved real and fake images into {}...'.format(sample_path))
The generator self.G is called on each element of c_fixed_list to generate images. All results are concatenated, then saved using torchvision.utils.save_image. I don't see what's holding you from saving the images inside the loop. Something that would resemble: for j, c_fixed in enumerate(c_fixed_list): x_fake = self.G(x_fixed, c_fixed) for k in range(len(x_fake)): sample_path = os.path.join(self.sample_dir, f'{i+1}-{k}-feat{j}-image.jpg') save_image(self.denorm(x_fake.data[k].cpu()), sample_path, nrow=1, padding=0)
https://stackoverflow.com/questions/65983925/
AllenNLP 2.0: Can't get FBetaMultiLabelMeasure to run
I would like to compute the f1-score for a classifier trained with allen-nlp. I used the working code from a allen-nlp guide, which computed accuracy, not F1, so I tried to adjust the metric in the code. According to the documentation, CategoricalAccuracy and FBetaMultiLabelMeasure take the same inputs. (predictions: torch.Tensor of shape [batch_size, ..., num_classes], gold_labels: torch.Tensor of shape [batch_size, ...]) But for some reason the input that worked perfectly well for the accuracy results in a RuntimeError when given to the f1-multi-label metric. I condensed the problem to the following code snippet: >>> from allennlp.training.metrics import CategoricalAccuracy, FBetaMultiLabelMeasure >>> import torch >>> labels = torch.LongTensor([0, 0, 2, 1, 0]) >>> logits = torch.FloatTensor([[ 0.0063, -0.0118, 0.1857], [ 0.0013, -0.0217, 0.0356], [-0.0028, -0.0512, 0.0253], [-0.0460, -0.0347, 0.0400], [-0.0418, 0.0254, 0.1001]]) >>> labels.shape torch.Size([5]) >>> logits.shape torch.Size([5, 3]) >>> ca = CategoricalAccuracy() >>> f1 = FBetaMultiLabelMeasure() >>> ca(logits, labels) >>> f1(logits, labels) Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../lib/python3.8/site-packages/allennlp/training/metrics/fbeta_multi_label_measure.py", line 130, in __call__ true_positives = (gold_labels * threshold_predictions).bool() & mask & pred_mask RuntimeError: The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension 1 Why is this error happening? What am I missing here?
You want to use FBetaMeasure, not FBetaMultiLabelMeasure. "Multilabel" means you can specify more than one correct answer, but "Categorical Accuracy" only allows one correct answer. That means you have to specify another dimension in your labels. I suspect the documentation of FBetaMultiLabelMeasure is misleading. I'll look into fixing it.
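For example, with the tensors from the question, a sketch of the fix would be (FBetaMeasure accepts the same (predictions, gold_labels) shapes as CategoricalAccuracy):

from allennlp.training.metrics import FBetaMeasure

f1 = FBetaMeasure()
f1(logits, labels)
print(f1.get_metric())  # a dict with per-class 'precision', 'recall' and 'fscore' lists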
https://stackoverflow.com/questions/65984950/
Pytorch: Applying a batch of filters (kernels) on one single picture using conv2d
I have a batch of filters, i.e., w, whose size is torch.Size([64, 3, 7, 7]) as follows: Also, I have a picture p from Imagenet as follows: How can I apply the filters to the picture and get a grid of 64x64 where each cell contains the same picture on which a different filter has been applied? I would like to make the grid using torchvision.utils.make_grid but do not know how? My try y = F.conv2d(p, w) The size of y is torch.Size([1, 64, 250, 250]) which does not make sense to me.
Each of your filters has size [3, 7, 7], so they would take an RGB image and produce a single channel output which is stacked in the channel dimension so your output [1, 64, H, W] makes perfect sense. To visualize these filters: import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision import transforms from PIL import Image import matplotlib.pyplot as plt torch.random.manual_seed(42) transform = transforms.Compose([transforms.ToTensor()]) img = transform(Image.open('dog.jpg')).unsqueeze(0) print('Image size: ', img.shape) filters = torch.randn(64, 3, 7, 7) out = F.conv2d(img, filters) print('Output size: ', out.shape) list_of_images = [out[:,i] for i in range(64)] grid = torchvision.utils.make_grid(list_of_images, normalize=True) plt.imshow(grid.numpy().transpose(1,2,0)) plt.show() This is a more accurate representation of the output. It is however not very attractive -- we can obtain the colored version by processing each color channel independently. (The grayscale version can be obtained by summing over the color channels) color_out = [] for i in range(3): color_out.append(F.conv2d(img[:,i:i+1], filters[:,i:i+1])) out = torch.stack(color_out, 2) print('Output size: ', out.shape) list_of_images = [out[0,i] for i in range(64)] print(list_of_images[0].shape) grid = torchvision.utils.make_grid(list_of_images, normalize=True) plt.imshow(grid.numpy().transpose(1,2,0)) plt.show()
https://stackoverflow.com/questions/65986446/
Modifying the Learning Rate in the middle of the Model Training in Deep Learning
Below is the code to configure TrainingArguments consumed from the HuggingFace transformers library to finetune the GPT2 language model. training_args = TrainingArguments( output_dir="./gpt2-language-model", #The output directory num_train_epochs=100, # number of training epochs per_device_train_batch_size=8, # batch size for training #32, 10 per_device_eval_batch_size=8, # batch size for evaluation #64, 10 save_steps=100, # after # steps model is saved warmup_steps=500,# number of warmup steps for learning rate scheduler prediction_loss_only=True, metric_for_best_model = "eval_loss", load_best_model_at_end = True, evaluation_strategy="epoch", learning_rate=0.00004, # learning rate ) early_stop_callback = EarlyStoppingCallback(early_stopping_patience = 3) trainer = Trainer( model=gpt2_model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=test_dataset, callbacks = [early_stop_callback], ) The number of epochs as 100 and learning_rate as 0.00004 and also the early_stopping is configured with the patience value as 3. The model ran for 5/100 epochs and noticed that the difference in loss_value is negligible. The latest checkpoint is saved as checkpoint-latest. Now Can I modify the learning_rate may be to 0.01 from 0.00004 and resume the training from the latest saved checkpoint - checkpoint-latest? Doing that will be efficient? Or to train with the new learning_rate value should I start the training from the beginning?
No, you don't have to restart your training. Changing the learning rate is like changing how big a step your model take in the direction determined by your loss function. You can also think of it as transfer learning where the model has some experience (no matter how little or irrelevant) and the weights are in a state most likely better than a randomly initialised one. As a matter of fact, changing the learning rate mid-training is considered an art in deep learning and you should change it if you have a very very good reason to do it. You would probably want to write down when (why, what, etc) you did it if you or someone else wants to "reproduce" the result of your model.
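If you do decide to change it, here is a hedged sketch of resuming from the question's checkpoint-latest directory with a new learning rate (recent transformers versions accept resume_from_checkpoint in Trainer.train; the path is assumed from the question's output_dir, and the restored optimizer/scheduler state may still influence the effective LR, so verify it in the training logs):

training_args = TrainingArguments(
    output_dir="./gpt2-language-model",
    num_train_epochs=100,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    save_steps=100,
    warmup_steps=500,
    prediction_loss_only=True,
    metric_for_best_model="eval_loss",
    load_best_model_at_end=True,
    evaluation_strategy="epoch",
    learning_rate=0.01,  # the new learning rate
)
trainer = Trainer(
    model=gpt2_model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    callbacks=[early_stop_callback],
)
trainer.train(resume_from_checkpoint="./gpt2-language-model/checkpoint-latest")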
https://stackoverflow.com/questions/65987683/
Pytorch: why does torch.where method does not work like numpy.where?
In order to replace positive values with a certain number and negative ones with another number in a random vector using NumPy, one can do the following:

npy_p = np.random.randn(4,6)
quant = np.where(npy_p>0, c_plus , np.where(npy_p<0, c_minus , npy_p))

However, the where method in PyTorch throws the following error:

expected scalar type double but found float

Can you help me with that?
I can't reproduce this error, maybe it will be better if you could share a specific example where it failed (it might be the values you try to fill the tensor with): import torch x = torch.rand(4,6) res = torch.where(x > 0.3,torch.tensor(0.), torch.where(x < 0.1, torch.tensor(-1.), x)) Where x is and it's of dtype float32: tensor([[0.1391, 0.4491, 0.2363, 0.3215, 0.7740, 0.4879], [0.3051, 0.0870, 0.2869, 0.2575, 0.8825, 0.8201], [0.4419, 0.1138, 0.0825, 0.9489, 0.1553, 0.6505], [0.8376, 0.7639, 0.9291, 0.0865, 0.5984, 0.3953]]) And the res is: tensor([[ 0.1391, 0.0000, 0.2363, 0.0000, 0.0000, 0.0000], [ 0.0000, -1.0000, 0.2869, 0.2575, 0.0000, 0.0000], [ 0.0000, 0.1138, -1.0000, 0.0000, 0.1553, 0.0000], [ 0.0000, 0.0000, 0.0000, -1.0000, 0.0000, 0.0000]]) The problem is caused because you mix data types in the torch.where, if you explicitly use the same datatype as the tensor in your constants it works fine.
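For instance, the question's error is reproducible by mixing float64 NumPy data with default float32 constants; matching the dtypes explicitly avoids it (a sketch with placeholder fill values):

import numpy as np
import torch

t = torch.from_numpy(np.random.randn(4, 6))  # dtype is torch.float64 ("double")
c_plus, c_minus = 1.0, -1.0                  # placeholder fill values
quant = torch.where(t > 0, torch.tensor(c_plus, dtype=t.dtype),
                    torch.where(t < 0, torch.tensor(c_minus, dtype=t.dtype), t))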
https://stackoverflow.com/questions/65988166/
Difference between versions 9.2,10.1,10.2,11.0 of cuda for PyTorch 1.7
Recently I needed to install PyTorch. When I checked out the website: install pytorch website It shows four different versions 9.2, 10.1, 10.2, 11.0 to choose from. I have CUDA version 10.0 and driver version 450 installed on my computer, so I thought it would fail to enable the GPU when using PyTorch. After I chose 10.1 and tried torch.cuda.is_available(), it returned True. I have two questions: Why does everything turn out to be working even though my CUDA version is not the same as any of the ones I mentioned? What's the difference between choosing CUDA version 9.2, 10.1, 10.2, 11.0?
PyTorch doesn't use the system's CUDA installation when installed from a package manager (either conda or pip). Instead, it comes with a copy of the CUDA runtime and will work as long as your system is compatible with that version of PyTorch. By compatible I mean that the GPU supports the particular version of CUDA and the GPU's compute capability is one that the PyTorch binaries (for the selected version) are compiled with support for. Therefore the version reported by nvcc (the version installed on the system) is basically irrelevant. The version you should be looking at is import torch # print the version of CUDA being used by pytorch print(torch.version.cuda) The only time the system's version of CUDA should matter is if you compiled PyTorch from source. As for which version of CUDA to select. You will probably want the newest version of CUDA that your system is compatible with. This is because newer versions generally include performance improvements compared to older versions.
https://stackoverflow.com/questions/65988678/
Pytorch object detection model optimization
I want to reduce the object detection model size. For that, I tried optimising the Faster R-CNN model for object detection using the pytorch-mobile optimiser, but the .pt zip file generated is of the same size as the original model. I used the code mentioned below:

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
script_model = torch.jit.script(model)

from torch.utils.mobile_optimizer import optimize_for_mobile
script_model_vulkan = optimize_for_mobile(script_model, backend='Vulkan')
torch.jit.save(script_model_vulkan, "frcnn.pth")
You have to quantize your model first, following the PyTorch static quantization steps, and then use these methods:

from torch.utils.mobile_optimizer import optimize_for_mobile
script_model_vulkan = optimize_for_mobile(script_model, backend='Vulkan')
torch.jit.save(script_model_vulkan, "frcnn.pth")

EDIT: Quantization process for a resnet50 model:

import torchvision
model = torchvision.models.resnet50(pretrained=True)

import os
import torch

def print_model_size(mdl):
    torch.save(mdl.state_dict(), "tmp.pt")
    print("%.2f MB" % (os.path.getsize("tmp.pt")/1e6))
    os.remove('tmp.pt')

print_model_size(model)  # will print original model size

backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)

print_model_size(model_static_quantized)  ## will print quantized model size
https://stackoverflow.com/questions/65992364/
Difference between torch.flatten() and nn.Flatten()
What are the differences between torch.flatten() and torch.nn.Flatten()?
Flattening is available in three forms in PyTorch As a tensor method (oop style) torch.Tensor.flatten applied directly on a tensor: x.flatten(). As a function (functional form) torch.flatten applied as: torch.flatten(x). As a module (layer nn.Module) nn.Flatten(). Generally used in a model definition. All three are identical and share the same implementation, the only difference being nn.Flatten has start_dim set to 1 by default to avoid flattening the first axis (usually the batch axis). While the other two flatten from axis=0 to axis=-1 - i.e. the entire tensor - if no arguments are given.
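A small demonstration of the default behaviours:

import torch
import torch.nn as nn

x = torch.randn(32, 3, 28, 28)       # e.g. a batch of 32 RGB images
print(torch.flatten(x).shape)        # torch.Size([75264]): flattens every axis
print(x.flatten(start_dim=1).shape)  # torch.Size([32, 2352])
print(nn.Flatten()(x).shape)         # torch.Size([32, 2352]): batch axis kept by default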
https://stackoverflow.com/questions/65993494/
IndexError: Dimension out of range - PyTorch dimension expected to be in range of [-1, 0], but got 1
Despite already numerous answers on this very topic, failing to see in the example below (extract from https://gist.github.com/lirnli/c16ef186c75588e705d9864fb816a13c on Variational Recurrent Networks) which input and output dimensions trigger the error. Having tried to change dimensions in torch.cat and also suppress the call to squeeze(), the error persists, <ipython-input-51-cdc928891ad7> in generate(self, hidden, temperature) 56 x_sample = x = x_out.div(temperature).exp().multinomial(1).squeeze() 57 x = self.phi_x(x) ---> 58 tc = torch.cat([x,z], dim=1) 59 60 hidden_next = self.rnn(tc,hidden) IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) Thus how to shape the dimensions in x and z in tc = torch.cat([x,z], dim=1)? Note the code as follows, import torch from torch import nn, optim from torch.autograd import Variable class VRNNCell(nn.Module): def __init__(self): super(VRNNCell,self).__init__() self.phi_x = nn.Sequential(nn.Embedding(128,64), nn.Linear(64,64), nn.ELU()) self.encoder = nn.Linear(128,64*2) # output hyperparameters self.phi_z = nn.Sequential(nn.Linear(64,64), nn.ELU()) self.decoder = nn.Linear(128,128) # logits self.prior = nn.Linear(64,64*2) # output hyperparameters self.rnn = nn.GRUCell(128,64) def forward(self, x, hidden): x = self.phi_x(x) # 1. h => z z_prior = self.prior(hidden) # 2. x + h => z z_infer = self.encoder(torch.cat([x,hidden], dim=1)) # sampling z = Variable(torch.randn(x.size(0),64))*z_infer[:,64:].exp()+z_infer[:,:64] z = self.phi_z(z) # 3. h + z => x x_out = self.decoder(torch.cat([hidden, z], dim=1)) # 4. x + z => h hidden_next = self.rnn(torch.cat([x,z], dim=1),hidden) return x_out, hidden_next, z_prior, z_infer def calculate_loss(self, x, hidden): x_out, hidden_next, z_prior, z_infer = self.forward(x, hidden) # 1. logistic regression loss loss1 = nn.functional.cross_entropy(x_out, x) # 2. KL Divergence between Multivariate Gaussian mu_infer, log_sigma_infer = z_infer[:,:64], z_infer[:,64:] mu_prior, log_sigma_prior = z_prior[:,:64], z_prior[:,64:] loss2 = (2*(log_sigma_infer-log_sigma_prior)).exp() \ + ((mu_infer-mu_prior)/log_sigma_prior.exp())**2 \ - 2*(log_sigma_infer-log_sigma_prior) - 1 loss2 = 0.5*loss2.sum(dim=1).mean() return loss1, loss2, hidden_next def generate(self, hidden=None, temperature=None): if hidden is None: hidden=Variable(torch.zeros(1,64)) if temperature is None: temperature = 0.8 # 1. h => z z_prior = self.prior(hidden) # sampling z = Variable(torch.randn(z_prior.size(0),64))*z_prior[:,64:].exp()+z_prior[:,:64] z = self.phi_z(z) # 2. h + z => x x_out = self.decoder(torch.cat([hidden, z], dim=1)) # sampling x_sample = x = x_out.div(temperature).exp().multinomial(1).squeeze() x = self.phi_x(x) # 3. x + z => h # hidden_next = self.rnn(torch.cat([x,z], dim=1),hidden) tc = torch.cat([x,z], dim=1) hidden_next = self.rnn(tc,hidden) return x_sample, hidden_next def generate_text(self, hidden=None,temperature=None, n=100): res = [] hidden = None for _ in range(n): x_sample, hidden = self.generate(hidden,temperature) res.append(chr(x_sample.data[0])) return "".join(res) # Test net = VRNNCell() x = Variable(torch.LongTensor([12,13,14])) hidden = Variable(torch.rand(3,64)) output, hidden_next, z_infer, z_prior = net(x, hidden) loss1, loss2, _ = net.calculate_loss(x, hidden) loss1, loss2 hidden = Variable(torch.zeros(1,64)) net.generate_text()
The error IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) means that you're trying to access an index that doesn't exist in the tensor. For instance, the following code would cause the same IndexError you're experiencing. # sample input tensors In [210]: x = torch.arange(4) In [211]: z = torch.arange(6) # trying to concatenate along the second dimension # but the tensors have only one dimension (i.e., `0`). In [212]: torch.cat([x, z], dim=1) So, one way to overcome this is to promote the tensors to higher dimensions before concatenation, if that is what you need. # promoting tensors to 2D before concatenation In [216]: torch.cat([x[None, :], z[None, :]], dim=1) Out[216]: tensor([[0, 1, 2, 3, 0, 1, 2, 3, 4, 5]]) Thus, in your case, you've to analyze and understand what shape you need for x so that it can be concatenated with z along dimension 1 and then the tc passed as input to self.rnn() along with hidden. As far as I can see, x[None, :] , z[None, :] should work. Debugging for successful training The code you posted has been written for PyTorch v0.4.1. A lot has changed in the PyTorch Python API since then, but the code was not updated. Below are the changes you need to make the code run and train successfully. Copy the below functions and paste it at appropriate places in your code. def generate(self, hidden=None, temperature=None): if hidden is None: hidden=Variable(torch.zeros(1,64)) if temperature is None: temperature = 0.8 # 1. h => z z_prior = self.prior(hidden) # sampling z = Variable(torch.randn(z_prior.size(0),64))*z_prior[:,64:].exp()+z_prior[:,:64] z = self.phi_z(z) # 2. h + z => x x_out = self.decoder(torch.cat([hidden, z], dim=1)) # sampling x_sample = x = x_out.div(temperature).exp().multinomial(1).squeeze() x = self.phi_x(x) # 3. x + z => h x = x[None, ...] # changed here xz = torch.cat([x,z], dim=1) # changed here hidden_next = self.rnn(xz,hidden) # changed here return x_sample, hidden_next def generate_text(self, hidden=None,temperature=None, n=100): res = [] hidden = None for _ in range(n): x_sample, hidden = self.generate(hidden,temperature) res.append(chr(x_sample.data)) # changed here return "".join(res) for epoch in range(max_epoch): batch = next(g) loss_seq = 0 loss1_seq, loss2_seq = 0, 0 optimizer.zero_grad() for x in batch: loss1, loss2, hidden = net.calculate_loss(Variable(x),hidden) loss1_seq += loss1.data # changed here loss2_seq += loss2.data # changed here loss_seq = loss_seq + loss1+loss2 loss_seq.backward() optimizer.step() hidden.detach_() if epoch%100==0: print('>> epoch {}, loss {:12.4f}, decoder loss {:12.4f}, latent loss {:12.4f}'.format(epoch, loss_seq.data, loss1_seq, loss2_seq)) # changed here print(net.generate_text()) print() Note: After these changes, the training loop at my end proceeds without any errors on PyTorch v1.7.1. Have a look at the comments with # changed here to understand the changes.
https://stackoverflow.com/questions/65993928/
How to add a L1 or L2 regularization to weights in pytorch
In tensorflow, we can add a L1 or L2 regularizations in the sequential model. I couldn't find equivalent approach in pytorch. How can we add regularizations to weights in pytorch in the definition of the net: class Net(torch.nn.Module): def __init__(self, n_feature, n_hidden, n_output): super(Net, self).__init__() self.hidden = torch.nn.Linear(n_feature, n_hidden) # hidden layer """ How to add a L1 regularization after a certain hidden layer?? """ """ OR How to add a L1 regularization after a certain hidden layer?? """ self.predict = torch.nn.Linear(n_hidden, n_output) # output layer def forward(self, x): x = F.relu(self.hidden(x)) # activation function for hidden layer x = self.predict(x) # linear output return x net = Net(n_feature=1, n_hidden=10, n_output=1) # define the network # print(net) # net architecture optimizer = torch.optim.SGD(net.parameters(), lr=0.2) loss_func = torch.nn.MSELoss() # this is for regression mean squared loss
Generally L2 regularization is handled through the weight_decay argument for the optimizer in PyTorch (you can assign different arguments for different layers too). This mechanism, however, doesn't allow for L1 regularization without extending the existing optimizers or writing a custom optimizer. According to the tensorflow docs they use a reduce_sum(abs(x)) penalty for L1 regularization and a reduce_sum(square(x)) penalty for L2 regularization. Probably the easiest way to achieve this is to just directly add these penalty terms to the loss function used for gradient computation during training. # set l1_weight and l2_weight to non-zero values to enable penalties # inside the training loop (given input x and target y) ... pred = net(x) loss = loss_func(pred, y) # compute penalty only for net.hidden parameters l1_penalty = l1_weight * sum([p.abs().sum() for p in net.hidden.parameters()]) l2_penalty = l2_weight * sum([(p**2).sum() for p in net.hidden.parameters()]) loss_with_penalty = loss + l1_penalty + l2_penalty optimizer.zero_grad() loss_with_penalty.backward() optimizer.step() # The pre-penalty loss is the one we ultimately care about print('loss:', loss.item())
https://stackoverflow.com/questions/65998695/
Why PyTorch fails to instantiate this neural network from Torch Hub?
I am on Ubuntu, running PyTorch 1.7.1 and torchvision 0.8.2: print(torch.__version__) print(torchvision.__version__) >> 1.7.1 >> 0.8.2 When I attempt to instantiate resnet18 with: model_resnet18 = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True) it fails with: Using cache found in /home/stark/.cache/torch/hub/pytorch_vision_master Traceback (most recent call last): File "/home/stark/Work/test/binary_classifier.py", line 253, in <module> build_and_train() File "/home/stark/Work/test/binary_classifier.py", line 174, in build_and_train model_resnet18 = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/hub.py", line 370, in load model = _load_local(repo_or_dir, model, *args, **kwargs) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/hub.py", line 396, in _load_local hub_module = import_module(MODULE_HUBCONF, hubconf_path) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/hub.py", line 71, in import_module spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/stark/.cache/torch/hub/pytorch_vision_master/hubconf.py", line 14, in <module> from torchvision.models.mobilenetv2 import mobilenet_v2 ModuleNotFoundError: No module named 'torchvision.models.mobilenetv2' I am on Python 3.8 and already on the latest PyTorch version, so what else can I do to fix this? Any pointers are greatly appreciated!
You could use torchvision's API instead: model_resnet18 = torchvision.models.resnet18(pretrained=True)
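If you specifically want to go through torch.hub, the traceback hints that the cached master copy of pytorch/vision expects a newer torchvision than the installed 0.8.2; pinning the release tag that matches your installed torchvision may also work (a hedged suggestion, not tested against your setup):

model_resnet18 = torch.hub.load('pytorch/vision:v0.8.2', 'resnet18', pretrained=True)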
https://stackoverflow.com/questions/66002549/
Get a 10x10 patch from a 100x100 pytorch tensor with torus style wrap around the boundries
How can I get a 10x10 patch from a 100x100 pytorch tensor, with the added constraint that if a patch would go outside the boundaries of the array, then it wraps around the edges (as if the array was a torus, with the top joined to the bottom, and the left joined to the right)? I wrote this code that will do the job, I'm looking for something more elegant, efficient and clear: def shift_matrix(a, distances) -> Tensor: x, y = distances a = torch.cat((a[x:], a[0:x]), dim=0) a = torch.cat((a[:, y:], a[:, :y]), dim=1) return a def randomly_shift_matrix(a) -> Tensor: return shift_matrix(a, np.random.randint(low = 0, high = a.size())) def random_patch(a, size) -> Tensor: full_shifted_matrix = randomly_shift_matrix(a) return full_shifted_matrix[0:size[0], 0:size[1]] I feel like something with negative index slices should work. I haven't found it though. You can see the code in google colab here.
You are looking for torch.roll:

def random_patch(a, size) -> Tensor:
    # tuple of ints, since torch.roll expects an int or a tuple of ints for shifts
    shifts = tuple(np.random.randint(low=0, high=a.size()))
    return torch.roll(a, shifts=shifts, dims=(0, 1))[:size[0], :size[1]]
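A quick check of the wrap-around behaviour on a small tensor:

import torch

a = torch.arange(16).reshape(4, 4)
patch = torch.roll(a, shifts=(1, 1), dims=(0, 1))[:2, :2]
print(patch)
# tensor([[15, 12],
#         [ 3,  0]])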
https://stackoverflow.com/questions/66006277/
Add a layer to model state dictionary in PyTorch
I have a neural network in PyTorch like, class Net1(nn.Module): def __init__(self): super(Net, self).__init__() self.conv_g = ResBlock(2,81) self.conv_low1 = ResBlock(2,90) self.conv_low2 = ResBlock(2,90) self.conv_low3 = ResBlock(2,90) self.conv_low4 = ResBlock(2,90) self.conv_low5 = ResBlock(2,90) self.conv_low6 = ResBlock(2,90) self.conv_low7 = ResBlock(2,90) self.conv_low8 = ResBlock(2,90) self.conv_low9 = ResBlock(2,90) self.stagIII = stageIII() def forward(self,x): ... return out To this network I have added a new layer m whose weights I know. class Net2(nn.Module): def __init__(self): super(Net, self).__init__() self.m = nn.Linear(5,1) self.conv_g = ResBlock(2,81) self.conv_low1 = ResBlock(2,90) self.conv_low2 = ResBlock(2,90) self.conv_low3 = ResBlock(2,90) self.conv_low4 = ResBlock(2,90) self.conv_low5 = ResBlock(2,90) self.conv_low6 = ResBlock(2,90) self.conv_low7 = ResBlock(2,90) self.conv_low8 = ResBlock(2,90) self.conv_low9 = ResBlock(2,90) self.stagIII = stageIII() def forward(self,x): ... return out The state dictionary of m.state_dict() is OrderedDict([('weight', tensor([[ 1.0000, 2.0000, 3.0000, 4.0000, 5.0000]])), ('bias', tensor([0.]))]) Now given that I have Net1.state_dict() and m.state_dict(), how can I append m dictionary to Net1 dictionary to obtain state dictionary of Net2?
You can manually add them: net2 = Net2() net2.load_state_dict(net1_state_dict,strict=False) # load what you can from the state_dict of Net1 net2.m.load_state_dict(m_state_dict) # load sub module # save the entire one for future use torch.save(net2.state_dict(), 'merged_net2.pth.tar')
https://stackoverflow.com/questions/66011168/
Using self in init part of a class in Python
Is there any difference between the following two codes related to initializing a class in Python? class summation: def __init__(self, f, s): self.first = f self.second = s self.summ = self.first + self.second . . . class summation: def __init__(self, f, s): self.first = f self.second = s self.summ = f + s . . . If there exists any difference, what is that, and which code is preferable? Edit: I am going to write an artificial neural network with Python (and Pytorch). In fact, the above two codes are just some examples. In the actual case, I have seen in various resources that when there exists self.input = input in the initialization of a class, in other parts it is used as self.input, not input. My questions: What are the differences between these two approaches? Why is the use of self.input preferable, in my case? Example: (from https://docs.dgl.ai/en/latest/tutorials/models/1_gnn/4_rgcn.html#sphx-glr-tutorials-models-1-gnn-4-rgcn-py) import torch import torch.nn as nn import torch.nn.functional as F from dgl import DGLGraph import dgl.function as fn from functools import partial class RGCNLayer(nn.Module): def __init__(self, in_feat, out_feat, num_rels, num_bases=-1, bias=None, activation=None, is_input_layer=False): super(RGCNLayer, self).__init__() self.in_feat = in_feat self.out_feat = out_feat self.num_rels = num_rels self.num_bases = num_bases self.bias = bias self.activation = activation self.is_input_layer = is_input_layer # sanity check if self.num_bases <= 0 or self.num_bases > self.num_rels: self.num_bases = self.num_rels # weight bases in equation (3) self.weight = nn.Parameter(torch.Tensor(self.num_bases, self.in_feat, self.out_feat)) if self.num_bases < self.num_rels: # linear combination coefficients in equation (3) self.w_comp = nn.Parameter(torch.Tensor(self.num_rels, self.num_bases)) # add bias if self.bias: self.bias = nn.Parameter(torch.Tensor(out_feat)) # init trainable parameters nn.init.xavier_uniform_(self.weight, gain=nn.init.calculate_gain('relu')) if self.num_bases < self.num_rels: nn.init.xavier_uniform_(self.w_comp, gain=nn.init.calculate_gain('relu')) if self.bias: nn.init.xavier_uniform_(self.bias, gain=nn.init.calculate_gain('relu')) def forward(self, g): if self.num_bases < self.num_rels: # generate all weights from bases (equation (3)) weight = self.weight.view(self.in_feat, self.num_bases, self.out_feat) weight = torch.matmul(self.w_comp, weight).view(self.num_rels, self.in_feat, self.out_feat) else: weight = self.weight if self.is_input_layer: def message_func(edges): # for input layer, matrix multiply can be converted to be # an embedding lookup using source node id embed = weight.view(-1, self.out_feat) index = edges.data['rel_type'] * self.in_feat + edges.src['id'] return {'msg': embed[index] * edges.data['norm']} else: def message_func(edges): w = weight[edges.data['rel_type']] msg = torch.bmm(edges.src['h'].unsqueeze(1), w).squeeze() msg = msg * edges.data['norm'] return {'msg': msg} def apply_func(nodes): h = nodes.data['h'] if self.bias: h = h + self.bias if self.activation: h = self.activation(h) return {'h': h} g.update_all(message_func, fn.sum(msg='msg', out='h'), apply_func)
No. there is no difference between these two approaches in your case with this level of information. but could they? Yes. they could. if they have some modifications in their setters or getters. later in my answer I'll show you how. First of all, I prefer using this one: class summation: def __init__(self, f, s): self.first = f self.second = s @property def summ(self): return self.first+self.second the above implementation calculates the summation on demand. so when you change self.first or self.second, summ will be calculated automatically. you can access the sum as you did before. s = summation(1,9) print(s.summ) # 10 s.first = 2 s.second = 3 print(s.summ) # 5 So, How could they be different? let's implements them as follows. in setters I doubled the inputs to show you how setters can affect the results. it's just an imaginary example and is not exactly what you wrote. class summation1: def __init__(self, f, s): self.first = f self.second = s self.summ = self.first + self.second @property def first(self): return self.__first @first.setter def first(self,f): self.__first = f*2 @property def second(self): return self.__second @second.setter def second(self,s): self.__second = s*2 class summation2: def __init__(self, f, s): self.first = f self.second = s self.summ = f + s @property def first(self): return self.__first @first.setter def first(self,f): self.__first = f*2 @property def second(self): return self.__second @second.setter def second(self,s): self.__second = s*2 now let's take a look at the outputs: a = 3 b = 2 s1 = summation1(a,b) s2 = summation2(a,b) print(s1.summ) # 10 print(s2.summ) # 5 so, if you are not sure what to choose between those two, maybe the first approach is what you need.
https://stackoverflow.com/questions/66012667/
problem with nn.ModuleDict ('method' object is not subscriptable)
I am trying to use nn.ModuleDict following this documentation page: I have this PyTorch network: class Net(nn.Module): def __init__(self, kernel_size): super(Net, self).__init__() modules = {} modules["layer1"] = nn.Conv2d(3, 16, kernel_size=kernel_size, stride=1, padding=2) self.modules = nn.ModuleDict(modules) def forward(self, x): x = self.modules["layer1"](x) when I use the forward method, I get the following error: 'method' object is not subscriptable when I change the forward method to: def forward(self, x): x = self.modules()["layer1"](x) I get the following error: TypeError: 'generator' object is not subscriptable
The key "modules" is already used by nn.Module. The property is used to retrieve all modules of the model: see nn.Module.modules. You need to use another property name. For example: class Net(nn.Module): def __init__(self, kernel_size): super(Net, self).__init__() modules = {} modules["layer1"] = nn.Conv2d(3, 16, kernel_size=kernel_size, stride=1, padding=2) self.layers = nn.ModuleDict(modules) def forward(self, x): x = self.layers["layer1"](x) return x
https://stackoverflow.com/questions/66015472/
PyTorch model + HTTP API = very slow execution
I have: ML model (PyTorch) that vectorizes data and makes a prediction in ~3.5ms (median ≈ mean) HTTP API (FastAPI + uvicorn) that serves simple requests in ~2ms But when I combine them, the median response time becomes almost 200ms. What can be the reason for such degradation? Note that: I also tried aiohttp alone, aiohttp + gunicorn and Flask development server for serving - same result I tried to send 2, 20 and 100 requests per second - same result I do realize that parallel requests can lead to decreased latency, but not 30 times! CPU load is only ~7% Here's how I measured model performance (I measured the median time separately, it's nearly the same as the mean time): def predict_all(predictor, data): for i in range(len(data)): predictor(data[i]) data = load_random_data() predictor = load_predictor() %timeit predict_all(predictor, data) # manually divide total time by number of records in data Here's FastAPI version: from fastapi import FastAPI from starlette.requests import Request from my_code import load_predictor app = FastAPI() app.predictor = load_predictor() @app.post("/") async def root(request: Request): predictor = request.app.predictor data = await request.json() return predictor(data) HTTP performance test: wrk2 -t2 -c50 -d30s -R100 --latency -s post.lua http://localhost:8000/ EDIT. Here's a slightly modified version which I tried with and without async: @app.post("/") # async def root(request: Request, user_dict: dict): def root(request: Request, user_dict: dict): predictor = request.app.predictor start_time = time.time() y = predictor(user_dict) finish_time = time.time() logging.info(f"user {user_dict['user_id']}: " "prediction made in {:.2f}ms".format((finish_time - start_time) * 1000)) return y So I just added logging of prediction time. Log for async version: 2021-02-03 11:14:31,822: user 12345678-1234-1234-1234-123456789123: prediction made in 2.87ms INFO: 127.0.0.1:49284 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,329: user 12345678-1234-1234-1234-123456789123: prediction made in 3.93ms INFO: 127.0.0.1:49286 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,345: user 12345678-1234-1234-1234-123456789123: prediction made in 15.06ms INFO: 127.0.0.1:49287 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,351: user 12345678-1234-1234-1234-123456789123: prediction made in 4.78ms INFO: 127.0.0.1:49288 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,358: user 12345678-1234-1234-1234-123456789123: prediction made in 6.85ms INFO: 127.0.0.1:49289 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,363: user 12345678-1234-1234-1234-123456789123: prediction made in 3.71ms INFO: 127.0.0.1:49290 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,369: user 12345678-1234-1234-1234-123456789123: prediction made in 5.49ms INFO: 127.0.0.1:49291 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:14:56,374: user 12345678-1234-1234-1234-123456789123: prediction made in 5.00ms So prediction is fast, less than 10ms on average, but whole request takes 200ms. 
Log for sync version: 2021-02-03 11:17:58,332: user 12345678-1234-1234-1234-123456789123: prediction made in 65.49ms 2021-02-03 11:17:58,334: user 12345678-1234-1234-1234-123456789123: prediction made in 23.05ms INFO: 127.0.0.1:49481 - "POST / HTTP/1.1" 200 OK INFO: 127.0.0.1:49482 - "POST / HTTP/1.1" 200 OK 2021-02-03 11:17:58,338: user 12345678-1234-1234-1234-123456789123: prediction made in 72.39ms 2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 78.66ms 2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 85.74ms Now prediction takes long! For whatever reason, exactly the same call, but made in synchronous context, started to take ~30 times longer. But the whole request takes approximately the same time - 160-200ms.
In endpoints that do highly intensive calculations, which presumably take longer compared to the other endpoints, use a non-coroutine handler. When you use def instead of async def, by default FastAPI will use run_in_threadpool from Starlette, which also uses loop.run_in_executor underneath. run_in_executor will execute the function in the default loop's executor; it executes the function in a separate thread. Also, you might want to check options like ProcessPoolExecutor and ThreadPoolExecutor if you are doing highly CPU-intensive work. This simple rule of thumb helps a lot when working with coroutines:

if function_takes >= 500ms: use `def`
else: use `async def`

Making your function a non-coroutine should help:

@app.post("/")
def root(request: Request, data: dict):
    predictor = request.app.predictor
    return predictor(data)

Note that in a plain def endpoint you can't await request.json(); declaring a data: dict parameter lets FastAPI parse the JSON body for you.
https://stackoverflow.com/questions/66018663/
Pytorch:RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
I set my model and data to the same device,

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

and I also do this:

inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)

but the error still exists. When it reaches 5000 iterations or more, the error takes place.

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

The following is the whole training code. I hope you can answer it. Thanks!

import torch
import os
import torchvision.transforms as transforms
from PIL import Image
from torch import nn
from torch.utils.data import Dataset, DataLoader

captcha_list = list('0123456789abcdefghijklmnopqrstuvwxyz_')
captcha_length = 6

# Convert captcha text to a vector
def text2vec(text):
    vector = torch.zeros((captcha_length, len(captcha_list)))
    text_len = len(text)
    if text_len > captcha_length:
        raise ValueError("Captcha is longer than 6 characters!")
    for i in range(text_len):
        vector[i, captcha_list.index(text[i])] = 1
    return vector

# Convert a captcha vector back to text
def vec2text(vec):
    label = torch.nn.functional.softmax(vec, dim=1)
    vec = torch.argmax(label, dim=1)
    for v in vec:
        text_list = [captcha_list[v] for v in vec]
    return ''.join(text_list)

# Load all images and vectorize the captcha labels
def make_dataset(data_path):
    img_names = os.listdir(data_path)
    samples = []
    for img_name in img_names:
        img_path = data_path + img_name
        target_str = img_name.split('_')[0].lower()
        samples.append((img_path, target_str))
    return samples

class CaptchaData(Dataset):
    def __init__(self, data_path, transform=None):
        super(Dataset, self).__init__()
        self.transform = transform
        self.samples = make_dataset(data_path)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        img_path, target = self.samples[index]
        target = text2vec(target)
        target = target.view(1, -1)[0]
        img = Image.open(img_path)
        img = img.resize((140, 44))
        img = img.convert('RGB')
        # convert img to a tensor
        if self.transform is not None:
            img = self.transform(img)
        return img, target

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # First layer
        # nn.Sequential: adds the inner modules to the network in order
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 channels -> 16 channels, image: 44*140
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2)  # image: 22*70
        )
        # Second layer
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 64, kernel_size=3),  # 16 channels -> 64 channels, image: 20*68
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2)  # image: 10*34
        )
        # Third layer
        self.layer3 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3),  # 64 channels -> 128 channels, image: 8*32
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(2)  # image: 4*16
        )
        # Fourth layer
        self.fc1 = nn.Sequential(
            nn.Linear(4*16*128, 1024),
            nn.Dropout(0.2),  # drop 20% of the neurons
            nn.ReLU()
        )
        # Fifth layer
        self.fc2 = nn.Linear(1024, 6*37)  # 6: captcha length, 37: length of the character list

    # forward pass
    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = self.fc2(x)
        return x

net = Net()

def calculat_acc(output, target):
    output, target = output.view(-1, len(captcha_list)), target.view(-1, len(captcha_list))  # every 37 values are one character
    output = nn.functional.softmax(output, dim=1)
    output = torch.argmax(output, dim=1)
    target = torch.argmax(target, dim=1)
    output, target = output.view(-1, captcha_length), target.view(-1, captcha_length)  # every 6 characters are one captcha
    c = 0
    for i, j in zip(target, output):
        if torch.equal(i, j):
            c += 1
    acc = c / output.size()[0] * 100
    return acc

def train(epoch_nums):
    # data preparation
    transform = transforms.Compose([transforms.ToTensor()])  # no data augmentation or normalization
    train_dataset = CaptchaData('./sougou_com_Trains/', transform=transform)
    train_data_loader = DataLoader(train_dataset, batch_size=32, num_workers=0, shuffle=True, drop_last=True)
    test_data = CaptchaData('./sougou_com_Trains/', transform=transform)
    test_data_loader = DataLoader(test_data, batch_size=128, num_workers=0, shuffle=True, drop_last=True)
    # select the device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print('Current device:', device)
    net.to(device)
    criterion = nn.MultiLabelSoftMarginLoss()  # loss function
    optimizer = torch.optim.Adam(net.parameters(), lr=0.001)  # optimizer
    # load the model
    model_path = './module_build/model.pth'
    if os.path.exists(model_path):
        print('Loading model...')
        checkpoint = torch.load(model_path)
        net.load_state_dict(checkpoint['model_state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    # start training
    i = 1
    for epoch in range(epoch_nums):
        running_loss = 0.0
        net.train()  # training mode
        for data in train_data_loader:
            if i % 100 == 0:
                print(i)
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)  # send data to the chosen device
            # zero the gradients on every iteration
            optimizer.zero_grad()  # key step
            # forward pass
            outputs = net(inputs)
            # compute the loss
            loss = criterion(outputs, labels)
            # backward pass
            loss.backward()
            # update the parameters
            optimizer.step()
            running_loss += loss.item()
            if i % 2000 == 0:
                acc = calculat_acc(outputs, labels)
                print('Iteration %s accuracy: %.3f %%, loss: %.3f' % (i, acc, running_loss/2000))
                running_loss = 0
                # save the model
                torch.save({
                    'model_state_dict': net.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict(),
                }, model_path)
            i += 1
        # at the end of each epoch, compute accuracy on the test set
        net.eval()  # eval mode
        with torch.no_grad():
            for inputs, labels in test_data_loader:
                outputs = net(inputs)
                acc = calculat_acc(outputs, labels)
                print('Test set accuracy: %.3f %%' % (acc))
                break  # only test one batch
        # update the learning rate every 5 epochs
        if epoch % 5 == 4:
            for p in optimizer.param_groups:
                p['lr'] *= 0.9

train(10)
In the evaluation part, do this: net.eval() # eval mode with torch.no_grad(): for inputs, labels in test_data_loader: inputs, labels = inputs.to(device), labels.to(device) outputs = net(inputs) acc = calculat_acc(outputs, labels) print('Test accuracy: %.3f %%' % (acc)) break # only test one batch This will work, since you didn't move your data from the test loader to the device.
https://stackoverflow.com/questions/66021391/
What do the charts in the System Panels signify in Wandb (PyTorch)
I recently started using the wandb module with my PyTorch script, to ensure that the GPU's are operating efficiently. However, I am unsure as to what exactly the charts indicate. I have been following the tutorial in this link, https://lambdalabs.com/blog/weights-and-bias-gpu-cpu-utilization/ , and was confused by this plot: I am uncertain about the GPU % and the GPU Memory Access % charts. The descriptions in the blog are as following: GPU %: This graph is probably the most important one. It tracks the percent of the time over the past sample period during which one or more kernels was executing on the GPU. Basically, you want this to be close to 100%, which means GPU is busy all the time doing data crunching. The above diagram has two curves. This is because there are two GPUs and only of them (blue) is used for the experiment. The Blue GPU is about 90% busy, which means it is not too bad but still has some room for improvement. The reason for this suboptimal utilization is due to the small batch size (4) we used in this experiment. The GPU fetches a small amount of data from its memory very often, and can not saturate the memory bus nor the CUDA cores. Later we will see it is possible to bump up this number by merely increasing the batch size. GPU Memory Access %: This is an interesting one. It measures the percent of the time over the past sample period during which GPU memory was being read or written. We should keep this percent low because you want GPU to spend most of the time on computing instead of fetching data from its memory. In the above figure, the busy GPU has around 85% uptime accessing memory. This is very high and caused some performance problem. One way to lower the percent here is to increase the batch size, so data fetching becomes more efficient. I had the following questions: The aforementioned values do not sum to 100%. It seems as though our GPU can either be spending time on computation or spending time on reading/writing memory. How can the sum of these two values be greater than 100%? Why does increasing batch size decrease the time spent accessing GPU Memory?
The claim that GPU utilization % and GPU memory access % should add up to 100% would only be true if the hardware performed the two processes sequentially, but modern hardware doesn't operate like that: the GPU will be busy computing numbers at the same time as it is accessing memory. GPU % is actually GPU utilization %. We want this to be 100%, so that the GPU does the desired computation 100% of the time. GPU memory access % is the fraction of time the GPU spends either reading from or writing to GPU memory. We want this number to be low: if the GPU memory access % is high, there can be some delay before the GPU can use the data to compute on. That still doesn't mean it's a sequential process. W&B allows you to monitor both metrics and make decisions based on them. Recently I implemented a data pipeline using tf.data.Dataset. The GPU utilization was close to 0%, and memory access was close to 0% as well: I was reading three different image files and stacking them, so the CPU was the bottleneck. To counter this, I created a dataset of pre-stacked images, and the ETA went from 1 h per epoch to 3 min. From the plot you could infer that the GPU's memory access increased while GPU utilization got close to 100%, and CPU utilization (the former bottleneck) decreased. Here's a nice article by Lukas answering this question.
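For reference, these System panel metrics come for free once a run is initialized; no extra instrumentation is needed. A minimal sketch (the project name here is just a placeholder):

import wandb

run = wandb.init(project="gpu-monitoring-demo")  # hypothetical project name
for step in range(100):
    # ... your training step here ...
    run.log({"loss": 1.0 / (step + 1)})  # your own metrics; GPU %, memory access %, etc. are sampled automatically in the background
run.finish()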
https://stackoverflow.com/questions/66022730/
print the value of variables after each line of code
I'm trying to understand how the following PyTorch code works. To know how each function works and what they output & to know the outputted variables value and size, I'm using print() after each line of code. s = pc() for _ in trange(max_length): if self.onnx: outputs = torch.tensor(self.decoder_with_lm_head.run(None, {"input_ids": generated.cpu().numpy(), "encoder_hidden_states": encoder_outputs_prompt})[0][0]) print(f'decoder output -- {outputs}') else: outputs = self.decoder_with_lm_head(input_ids=generated, encoder_hidden_states=encoder_outputs_prompt)[0] next_token_logits = outputs[-1, :] / (temperature if temperature > 0 else 1.0) print(f'next_token_logits -- {next_token_logits}') if int(next_token_logits.argmax()) == 1: print(f'next token logits argmax -- {int(next_token_logits.argmax())}') break new_logits.append(next_token_logits) print(f'new_logits { i } -- {new_logits}') print(f'generated -- {generated}') print(f'generated view list -- {set(generated.view(-1).tolist())}') for _ in set(generated.view(-1).tolist()): next_token_logits[_] /= repetition_penalty print(f'ext_token_logits[_] -- {next_token_logits[_]}') if temperature == 0: # greedy sampling: next_token = torch.argmax(next_token_logits).unsqueeze(0) print(f'next_token -- {next_token}') else: filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) print(f'generated end -- {generated}') new_tokens = torch.cat((new_tokens, next_token), 0) print(f'new_tokens end -- {new_tokens}') i += 1 print(f'--------------------------\n') e = pc() ap = e-s print(ap) print(timedelta(ap)) return self.tokenizer.decode(new_tokens), new_logit my question, is there a more efficient way of tracking these values & their shape, or are there any libraries that handle this task.
After a lot of searching, I found a library that fits my requirements: pysnooper. Instead of adding a print() call after each line of code, I can just use pysnooper's decorator @pysnooper.snoop() def greedy_search(input_text, num_beam, max_length, max_context_length=512): ... and it will print all the variables' values along with their corresponding execution times. For more info, refer to its GitHub page. I find debuggers a bit complicated, and I also want something that runs in notebooks; this is the simplest option I found. Any suggestions are welcome.
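To illustrate, here is a minimal self-contained sketch of what the decorator does; the function below is made up purely for demonstration:

import pysnooper
import torch

@pysnooper.snoop()  # logs each executed line and every variable's new value
def greedy_step(logits):
    probs = torch.softmax(logits, dim=-1)
    next_token = torch.argmax(probs)
    return next_token

greedy_step(torch.tensor([0.1, 2.0, 0.3]))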
https://stackoverflow.com/questions/66025825/
How to handle odd resolutions in Unet architecture PyTorch
I'm implementing a U-Net based architecture in PyTorch. At train time I have patches of size 256x256, which don't cause any problems. At test time, however, I have full-HD images (1920x1080), and this causes a problem at the skip connections: downsampling 1920x1080 three times gives 240x135. If I downsample one more time, the resolution becomes 120x68, which when upsampled gives 240x136, and now I cannot concatenate these two feature maps. How can I solve this? PS: I thought this was a fairly common problem, but I couldn't find any solution, or even a mention of this problem, anywhere on the web. Am I missing something?
It is a very common problem in segmentation networks where skip-connections are often involved in the decoding process. Networks usually (depending on the actual architecture) require input size that has side lengths as integer multiples of the largest stride (8, 16, 32, etc.). There are two main ways: Resize input to the nearest feasible size. Pad the input to the next larger feasible size. I prefer (2) because (1) can cause small changes in the pixel level for all the pixels, leading to unnecessary blurriness. Note that we usually need to recover the original shape afterward in both methods. My favorite code snippet for this task (symmetric padding for height/width): import torch import torch.nn.functional as F def pad_to(x, stride): h, w = x.shape[-2:] if h % stride > 0: new_h = h + stride - h % stride else: new_h = h if w % stride > 0: new_w = w + stride - w % stride else: new_w = w lh, uh = int((new_h-h) / 2), int(new_h-h) - int((new_h-h) / 2) lw, uw = int((new_w-w) / 2), int(new_w-w) - int((new_w-w) / 2) pads = (lw, uw, lh, uh) # zero-padding by default. # See others at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad out = F.pad(x, pads, "constant", 0) return out, pads def unpad(x, pad): if pad[2]+pad[3] > 0: x = x[:,:,pad[2]:-pad[3],:] if pad[0]+pad[1] > 0: x = x[:,:,:,pad[0]:-pad[1]] return x A test snippet: x = torch.zeros(4, 3, 1080, 1920) # Raw data x_pad, pads = pad_to(x, 16) # Padded data, feed this to your network x_unpad = unpad(x_pad, pads) # Un-pad the network output to recover the original shape print('Original: ', x.shape) print('Padded: ', x_pad.shape) print('Recovered: ', x_unpad.shape) Output: Original: torch.Size([4, 3, 1080, 1920]) Padded: torch.Size([4, 3, 1088, 1920]) Recovered: torch.Size([4, 3, 1080, 1920]) Reference: https://github.com/seoungwugoh/STM/blob/905f11492a6692dd0d0fa395881a8ec09b211a36/helpers.py#L33
https://stackoverflow.com/questions/66028743/
How to check whether tensor values in a different tensor pytorch?
I have 2 tensors of unequal size a = torch.tensor([[1,2], [2,3],[3,4]]) b = torch.tensor([[4,5],[2,3]]) I want a boolean array of whether each value exists in the other tensor without iterating. something like a in b and the result should be [False, True, False] as only the value of a[1] is in b
I think it's impossible without using at least some type of iteration. The most succinct way I can manage is using a list comprehension: [True if i in b else False for i in a] This checks, for each row of a, whether it appears in b, and gives [False, True, False]. It can also be reversed to check which elements of b are in a: [False, True].
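That said, if a fully vectorized check is acceptable, broadcasting can do the row-wise comparison without an explicit Python loop. A minimal sketch:

import torch

a = torch.tensor([[1, 2], [2, 3], [3, 4]])
b = torch.tensor([[4, 5], [2, 3]])

# (3, 1, 2) == (2, 2) broadcasts to (3, 2, 2): every row of a vs every row of b
matches = (a.unsqueeze(1) == b).all(dim=2).any(dim=1)
print(matches)  # tensor([False,  True, False])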
https://stackoverflow.com/questions/66036375/
How to save a model using DefaultTrainer in Detectron2?
How can I save a checkpoint in Detectron2, using a DefaultTrainer? This is my setup: cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.DATASETS.TRAIN = (DatasetLabels.TRAIN,) cfg.DATASETS.TEST = () cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.ROI_HEADS.NUM_CLASSES = 273 # Number of output classes cfg.OUTPUT_DIR = "outputs" os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.00025#0.00025 # Learning Rate cfg.SOLVER.MAX_ITER = 10000 # 20000 MAx Iterations cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # Batch Size trainer = DefaultTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() # Save the model from detectron2.checkpoint import DetectionCheckpointer, Checkpointer checkpointer = DetectionCheckpointer(trainer, save_dir=cfg.OUTPUT_DIR) checkpointer.save("mymodel_0") I get the error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-94-c1116902655a> in <module>() 4 checkpointer = DetectionCheckpointer(trainer, save_dir=cfg.OUTPUT_DIR) ----> 5 checkpointer.save("mymodel_0") /usr/local/lib/python3.6/dist-packages/fvcore/common/checkpoint.py in save(self, name, **kwargs) 102 103 data = {} --> 104 data["model"] = self.model.state_dict() 105 for key, obj in self.checkpointables.items(): 106 data[key] = obj.state_dict() AttributeError: 'DefaultTrainer' object has no attribute 'state_dict' Docs: https://detectron2.readthedocs.io/en/latest/modules/checkpoint.html
checkpointer = DetectionCheckpointer(trainer.model, save_dir=cfg.OUTPUT_DIR) is the way to go. Alternatively: torch.save(trainer.model.state_dict(), os.path.join(cfg.OUTPUT_DIR, "mymodel.pth"))
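Putting the pieces together, the corrected save step from the question would presumably look like this (the file name is arbitrary):

from detectron2.checkpoint import DetectionCheckpointer

# pass the model itself, not the DefaultTrainer wrapper
checkpointer = DetectionCheckpointer(trainer.model, save_dir=cfg.OUTPUT_DIR)
checkpointer.save("mymodel_0")  # writes outputs/mymodel_0.pth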
https://stackoverflow.com/questions/66037566/
PyTorch: Predicting future values with LSTM
I'm currently working on building an LSTM model to forecast time-series data using PyTorch. I used lag features to pass the previous n steps as inputs to train the network. I split the data into three sets, i.e., train-validation-test split, and used the first two to train the model. My validation function takes the data from the validation data set and calculates the predicted valued by passing it to the LSTM model using DataLoaders and TensorDataset classes. Initially, I've got pretty good results with R2 values in the region of 0.85-0.95. However, I have an uneasy feeling about whether this validation function is also suitable for testing my model's performance. Because the function now takes the actual X values, i.e., time-lag features, from the DataLoader to predict y^ values, i.e., predicted target values, instead of using the predicted y^ values as features in the next prediction. This situation seems far from reality where the model has no clue of the real values of the previous time steps, especially if you forecast time-series data for longer time periods, say 3-6 months. I'm currently a bit puzzled about tackling this issue and defining a function to predict future values relying on the model's values rather than the actual values in the test set. I have the following function predict, which makes a one-step prediction, but I haven't really figured out how to predict the whole test dataset using DataLoader. def predict(self, x): # convert row to data x = x.to(device) # make prediction yhat = self.model(x) # retrieve numpy array yhat = yhat.to(device).detach().numpy() return yhat You can find how I split and load my datasets, my constructor for the LSTM model, and the validation function below. If you need more information, please do not hesitate to reach out to me. 
Splitting and Loading Datasets def create_tensor_datasets(X_train_arr, X_val_arr, X_test_arr, y_train_arr, y_val_arr, y_test_arr): train_features = torch.Tensor(X_train_arr) train_targets = torch.Tensor(y_train_arr) val_features = torch.Tensor(X_val_arr) val_targets = torch.Tensor(y_val_arr) test_features = torch.Tensor(X_test_arr) test_targets = torch.Tensor(y_test_arr) train = TensorDataset(train_features, train_targets) val = TensorDataset(val_features, val_targets) test = TensorDataset(test_features, test_targets) return train, val, test def load_tensor_datasets(train, val, test, batch_size=64, shuffle=False, drop_last=True): train_loader = DataLoader(train, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last) val_loader = DataLoader(val, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last) test_loader = DataLoader(test, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last) return train_loader, val_loader, test_loader Class LSTM class LSTMModel(nn.Module): def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, dropout_prob): super(LSTMModel, self).__init__() self.hidden_dim = hidden_dim self.layer_dim = layer_dim self.lstm = nn.LSTM( input_dim, hidden_dim, layer_dim, batch_first=True, dropout=dropout_prob ) self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x, future=False): h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) out = out[:, -1, :] out = self.fc(out) return out Validation (defined within a trainer class) def validation(self, val_loader, batch_size, n_features): with torch.no_grad(): predictions = [] values = [] for x_val, y_val in val_loader: x_val = x_val.view([batch_size, -1, n_features]).to(device) y_val = y_val.to(device) self.model.eval() yhat = self.model(x_val) predictions.append(yhat.cpu().detach().numpy()) values.append(y_val.cpu().detach().numpy()) return predictions, values
I've finally found a way to forecast values based on predicted values from the earlier observations. As expected, the predictions were rather accurate in the short-term, slightly becoming worse in the long term. It is not so surprising that the future predictions digress over time, as they no longer depend on the actual values. Reflecting on my results and the discussions I had on the topic, here are my take-aways: In real-life cases, the real values can be retrieved and fed into the model at each step of the prediction -be it weekly, daily, or hourly- so that the next step can be predicted with the actual values from the previous step. So, testing the performance based on the actual values from the test set may somewhat reflect the real performance of the model that is maintained regularly. However, for predicting future values in the long term, forecasting, if you will, you need to make either multiple one-step predictions or multi-step predictions that span over the time period you wish to forecast. Making multiple one-step predictions based on the values predicted the model yields plausible results in the short term. As the forecasting period increases, the predictions become less accurate and therefore less fit for the purpose of forecasting. To make multiple one-step predictions and update the input after each prediction, we have to work our way through the dataset one by one, as if we are going through a for-loop over the test set. Not surprisingly, this makes us lose all the computational advantages that matrix operations and mini-batch training provide us. An alternative could be predicting sequences of values, instead of predicting the next value only, say using RNNs with multi-dimensional output with many-to-many or seq-to-seq structure. They are likely to be more difficult to train and less flexible to make predictions for different time periods. An encoder-decoder structure may prove useful for solving this, though I have not implemented it by myself. You can find the code for my function that forecasts the next n_steps based on the last row of the dataset X (time-lag features) and y (target value). To iterate over each row in my dataset, I would set batch_size to 1 and n_features to the number of lagged observations. def forecast(self, X, y, batch_size=1, n_features=1, n_steps=100): predictions = [] X = torch.roll(X, shifts=1, dims=2) X[..., -1, 0] = y.item(0) with torch.no_grad(): self.model.eval() for _ in range(n_steps): X = X.view([batch_size, -1, n_features]).to(device) yhat = self.model(X) yhat = yhat.to(device).detach().numpy() X = torch.roll(X, shifts=1, dims=2) X[..., -1, 0] = yhat.item(0) predictions.append(yhat) return predictions The following line shifts values in the second dimension of the tensor by one so that a tensor [[[x1, x2, x3, ... , xn ]]] becomes [[[xn, x1, x2, ... , x(n-1)]]]. X = torch.roll(X, shifts=1, dims=2) And, the line below selects the first element from the last dimension of the 3d tensor and sets that item to the predicted value stored in the NumPy ndarray (yhat), [[xn+1]]. Then, the new input tensor becomes [[[x(n+1), x1, x2, ... , x(n-1)]]] X[..., -1, 0] = yhat.item(0) Recently, I've decided to put together the things I had learned and the things I would have liked to know earlier. If you'd like to have a look, you can find the links down below. I hope you'll find it useful. Feel free to comment or reach out to me if you agree or disagree with any of the remarks I made above. 
Building RNN, LSTM, and GRU for time series using PyTorch Predicting future values with RNN, LSTM, and GRU using PyTorch
https://stackoverflow.com/questions/66048406/
Binary classification with PyTorch
Below is code I've written for binary classification in PyTorch: %reset -f import torch import torch.nn as nn import torchvision import torchvision.transforms as transforms import numpy as np import matplotlib.pyplot as plt import torch.utils.data as data_utils import torch.nn as nn import torch.nn.functional as F device = 'cpu' num_epochs = 10 hidden_size = 500 num_classes = 2 learning_rate = .001 torch.manual_seed(24) x1 = np.array([0,0]) x2 = np.array([0,1]) x3 = np.array([1,0]) x4 = np.array([1,1]) x = torch.tensor([x1,x2,x3,x4]).float() y = torch.tensor([1,0,1,1]).long() train = data_utils.TensorDataset(x,y) train_loader = data_utils.DataLoader(train , batch_size=2 , shuffle=True) input_size = len(x[0]) def weights_init(m): if type(m) == nn.Linear: m.weight.data.normal_(0.0, 1) class NeuralNet(nn.Module) : def __init__(self, input_size, hidden_size, num_classes) : super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size , hidden_size) self.fc2 = nn.Linear(hidden_size , 100) self.fc3 = nn.Linear(100 , num_classes) self.sigmoid = nn.Sigmoid() def forward(self, x) : out = self.fc1(x) out = self.sigmoid(out) out = self.fc2(out) out = self.sigmoid(out) out = self.fc3(out) out = self.sigmoid(out) return out model = NeuralNet(input_size, hidden_size, num_classes).to(device) model.apply(weights_init) criterionCE = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) loss_values = [] for i in range(0 , 140) : print('in i' , i) total_step = len(train_loader) for epoch in range(num_epochs) : for i,(images , labels) in enumerate(train_loader) : images = images.to(device) labels = labels.to(device) outputs = model(images) print('outputs' , outputs) loss = criterionCE(outputs , labels) loss_values.append(loss) optimizer.zero_grad() loss.backward() optimizer.step() outputs = model(x) print(outputs.data.max(1)[1]) This code throws an error : ~/opt/miniconda3/envs/ds1/lib/python3.8/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2067 stacklevel=2) 2068 if input.numel() != target.numel(): -> 2069 raise ValueError("Target and input must have the same number of elements. target nelement ({}) " 2070 "!= input nelement ({})".format(target.numel(), input.numel())) 2071 ValueError: Target and input must have the same number of elements. target nelement (2) != input nelement (4) I printed where I think is the cause of the error: print('outputs' , outputs) renders outputs tensor([[9.9988e-01, 1.4011e-05], [9.9325e-01, 1.2087e-05]], grad_fn=<SigmoidBackward>) So outputs should be a 2x1 instead of a 2x2 result. Have I not defined the setup of the model correctly? As I'm using sigmoid shouldn't a 2x1 output be created instead of 2x2?
If you are working on a binary classification task your model should only output one logit. Since you've set self.fc3 to have 2 neurons, you will get 2 logits as the output. Therefore, you should set self.fc3 as nn.Linear(100 , 1).
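A minimal standalone sketch of the fix: with a single output unit plus a sigmoid, the element counts of input and target match and BCELoss works (shapes shown in comments; the values are arbitrary):

import torch
import torch.nn as nn

fc3 = nn.Linear(100, 1)                        # one output for binary classification
out = torch.sigmoid(fc3(torch.randn(2, 100)))  # shape (2, 1)
labels = torch.tensor([1, 0]).float().unsqueeze(1)  # targets reshaped to (2, 1) floats
loss = nn.BCELoss()(out, labels)               # target and input element counts now match
print(loss)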
https://stackoverflow.com/questions/66052668/
Why is the timm visual transformer position embedding initializing to zeros?
I'm looking at the timm implementation of visual transformers and for the positional embedding, he is initializing his position embedding with zeros as follows: self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) See here: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L309 I'm not sure how this actually embeds anything about the position when it is later added to the patch? x = x + self.pos_embed Any feedback is appreciated.
The positional embedding is a parameter that gets included in the computational graph and gets updated during training. So, it doesn't matter if you initialize with zeros; they are learned during training.
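A tiny sketch showing that a zero-initialized nn.Parameter still receives gradients, which is why the zeros are only a starting point:

import torch
import torch.nn as nn

pos_embed = nn.Parameter(torch.zeros(1, 4, 8))  # same idea as the ViT position embedding, just smaller
x = torch.randn(1, 4, 8)
loss = (x + pos_embed).pow(2).sum()
loss.backward()
print(pos_embed.grad.abs().sum() > 0)  # tensor(True): nonzero gradient, so training moves it away from zero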
https://stackoverflow.com/questions/66054042/
How to implement randomised log space search of learning rate in PyTorch?
I am looking to fine tune a GNN and my supervisor suggested exploring different learning rates. I came across this tutorial video where he mentions that a randomised log space search of hyper parameters is typically done in practice. For sake of the introductory tutorial this was not covered. Any help or pointers on how to achieve this in PyTorch is greatly appreciated. Thank you!
Setting the scale in logarithm terms let you take into account more desirable values of the learning rate, usually values lower than 0.1 Imagine you want to take learning rate values between 0.1 (1e-1) and 0.001 (1e-4). Then you can set this lower and upper bound on a logarithm scale by applying a logarithm base 10 on it, log10(0.1) = -1 and log10(0.001) = -4. Andrew Ng provides a clearer explanation in this video. In Python you can use np.random.uniform() for this searchable_learning_rates = [10**(-4 * np.random.uniform(0.5, 1)) for _ in range(10)] searchable_learning_rates >>> [0.004890650359810075, 0.007894672127828331, 0.008698831627963768, 0.00022779163472045743, 0.0012046829055603172, 0.00071395500159473, 0.005690032483124896, 0.000343368839731761, 0.0002819402550629178, 0.0006399571804618883] as you can see you're able to try learning rate values from 0.0002819402550629178 up to 0.008698831627963768 which is close to the upper bound. The longer the array the more values you will try. Following the example code in the video you provided you can implement the randomized log search for the learning rate by replacing learning_rates for searchable learning_rates for batch_size in batch_sizes: for learning_rate in searchable_learning_rates: ... ...
https://stackoverflow.com/questions/66055798/
why torch.Tensor subtract works well when tensor size is different?
This example will make it easier to understand. The following fails: A = torch.tensor([[1, 2, 3], [4, 5, 6]]) # shape : (2, 3) B = torch.tensor([[1, 2], [3, 4], [5, 6]]) # shape : (3, 2) print((A - B).shape) # RuntimeError: The size of tensor A (3) must match the size of tensor B (2) at non-singleton dimension 1 # ================================================================== A = torch.tensor([[1, 2], [3, 4], [5, 6]]) # shape : (3, 2) B = torch.tensor([[1, 2], [3, 4],]) # shape : (2, 2) print((A - B).shape) # RuntimeError: The size of tensor A (3) must match the size of tensor B (2) at non-singleton dimension 0 But the following works well: a = torch.ones(8).unsqueeze(0).unsqueeze(-1).expand(4, 8, 7) a_temp = a.unsqueeze(2) # shape : ( 4, 8, 1, 7 ) b_temp = torch.transpose(a_temp, 1, 2) # shape : ( 4, 1, 8, 7 ) print(a_temp-b_temp) # shape : ( 4, 8, 8, 7 ) Why does the latter work, but not the former? How/why has the result shape been expanded?
This is well explained by the broadcasting semantics. The important part is: Two tensors are “broadcastable” if the following rules hold: Each tensor has at least one dimension. When iterating over the dimension sizes, starting at the trailing dimension, the dimension sizes must either be equal, one of them is 1, or one of them does not exist. In your case, (3,2) and (2,3) cannot be broadcast to a common shape (3 != 2 and neither is equal to 1), but (4,8,1,7), (4,1,8,7) and (4,8,8,7) are broadcast compatible. This is basically what the error states: all dimensions must be either equal ("match") or singletons (i.e. equal to 1). What happens when the shapes are broadcast is basically a tensor expansion to make the shapes match (expand to [4,8,8,7]), after which the subtraction is performed as usual. Expansion duplicates your data (in a smart way) to reach the required shape.
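A quick sketch verifying both cases from the question:

import torch

a = torch.ones(4, 8, 1, 7)
b = torch.ones(4, 1, 8, 7)
print((a - b).shape)  # torch.Size([4, 8, 8, 7]): each singleton dim expands to match the other tensor

A = torch.ones(2, 3)
B = torch.ones(3, 2)
try:
    A - B
except RuntimeError as e:
    print(e)  # trailing dims 3 and 2 differ and neither is 1, so broadcasting fails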
https://stackoverflow.com/questions/66059474/
How to actually apply a Conv2d filter in Pytorch
I’m new to Python and trying to do some manipulations with filters in PyTorch, but I’m struggling with how to apply a Conv2d. I’ve got the following code, which creates a 3x3 moving-average filter: resized_image4D = np.reshape(image_noisy, (1, 1, image_noisy.shape[0], image_noisy.shape[1])) t = torch.from_numpy(resized_image4D) conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1, bias=False) conv.weight = torch.nn.Parameter(torch.ones((1,1,3, 3))/9.0) Normally in NumPy I’d just call filtered_image = convolve2d(image, kernel), but I’ve not been able to figure out what the PyTorch equivalent is after days of searching.
I think you are looking for torch.nn.functional.conv2d. Unlike the torch.nn.Conv2d module, the functional form takes the input and the kernel weights directly; it has no in_channels, out_channels, or kernel_size arguments. Hence, your snippet becomes: import torch.nn.functional as F resized_image4D = np.reshape(image_noisy, (1, 1, image_noisy.shape[0], image_noisy.shape[1])) t = torch.from_numpy(resized_image4D) kernel = torch.ones(1, 1, 3, 3, dtype=t.dtype) / 9.0 filtered_image = F.conv2d(t, kernel, padding=1)
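Alternatively, the nn.Conv2d module from your original snippet also works; a module is applied simply by calling it on the input. A sketch under the same setup (t is the (1, 1, H, W) tensor from the question):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.ones(1, 1, 3, 3) / 9.0)  # set the 3x3 moving-average weights in place

filtered_image = conv(t.float())  # cast in case the NumPy image was not float32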
https://stackoverflow.com/questions/66061320/
How to train faster-rcnn on dataset including negative data in pytorch
I am trying to train the torchvision Faster R-CNN model for object detection on my custom data. I used the code in torchvision object detection fine-tuning tutorial. But getting this error: Expected target boxes to be a tensor of shape [N, 4], got torch.Size([0]) This has to do with negative data (empty training images / no bounding boxes) in my custom dataset. How can we change the below Dataset class to enable training faster-rcnn on dataset including negative data? class MyCustomDataset(Dataset): def __init__(self, root, transforms): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __len__(self): return len(self.imgs) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) # convert the PIL Image into a numpy array mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) # convert everything into a torch.Tensor boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["image_id"] = torch.tensor([idx]) target["area"] = area target["iscrowd"] = iscrowd return img, target
We need to make two changes to the Dataset Class. 1- Empty boxes are fed as: if num_objs == 0: boxes = torch.zeros((0, 4), dtype=torch.float32) else: boxes = torch.as_tensor(boxes, dtype=torch.float32) 2- Assign area=0 for empty bounding box case, change the code used for calculating area, and make it a torch tensor: area = 0 for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) area += (xmax-xmin)*(ymax-ymin) area = torch.as_tensor(area, dtype=torch.float32) We will incorporate step#2 in existing for loop. So, the modified Dataset Class will look like: class MyCustomDataset(Dataset): def __init__(self, root, transforms): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __len__(self): return len(self.imgs) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) # convert the PIL Image into a numpy array mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] area = 0 for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) area += (xmax-xmin)*(ymax-ymin) area = torch.as_tensor(area, dtype=torch.float32) # Handle empty bounding boxes if num_objs == 0: boxes = torch.zeros((0, 4), dtype=torch.float32) else: boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) image_id = torch.tensor([idx]) #area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["image_id"] = torch.tensor([idx]) target["area"] = area target["iscrowd"] = iscrowd return img, target
https://stackoverflow.com/questions/66063046/
Customizing the batch with specific elements
I am a fresh starter with PyTorch. Strangely I cannot find anything related to this, although it seems rather simple. I want to structure my batch with specific examples, like all examples per batch having the same label, or just fill the batch with examples of just 2 classes. How would I do that? For me, it seems the right place within the data loader and not in the dataset? As the data loader is responsible for the batches and not the dataset? Is there a simple minimal example?
TLDR; Default DataLoader only uses a sampler, not a batch sampler. You can define a sampler, plus a batch sampler, a batch sampler will override the sampler. The sampler only yields the sequence of dataset elements, not the actual batches (this is handled by the data loader, depending on batch_size). To answer your initial question: Working with a sampler on an iterable dataset doesn't seem to be possible cf. Github issue (still open). Also, read the following note on pytorch/dataloader.py. Samplers (for map-style datasets): That aside, if you are switching to a map-style dataset, here are some details on how samplers and batch samplers work. You have access to a dataset's underlying data using indices, just like you would with a list (since torch.utils.data.Dataset implements __getitem__). In other words, your dataset elements are all dataset[i], for i in [0, len(dataset) - 1]. Here is a toy dataset: class DS(Dataset): def __getitem__(self, index): return index def __len__(self): return 10 In a general use case you would just give torch.utils.data.DataLoader the arguments batch_size and shuffle. By default, shuffle is set to false, which means it will use torch.utils.data.SequentialSampler. Else (if shuffle is true) torch.utils.data.RandomSampler will be used. The sampler defines how the data loader accesses the dataset (in which order it accesses it). The above dataset (DS) has 10 elements. The indices are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. They map to elements 0, 10, 20, 30, 40, 50, 60, 70, 80, and 90. So with a batch size of 2: SequentialSampler: DataLoader(ds, batch_size=2) (implictly shuffle=False), identical to DataLoader(ds, batch_size=2, sampler=SequentialSampler(ds)). The dataloader will deliver tensor([0, 10]), tensor([20, 30]), tensor([40, 50]), tensor([60, 70]), and tensor([80, 90]). RandomSampler: DataLoader(ds, batch_size=2, shuffle=True), identical to DataLoader(ds, batch_size=2, sampler=RandomSampler(ds)). The dataloader will sample randomly each time you iterate through it. For instance: tensor([50, 40]), tensor([90, 80]), tensor([0, 60]), tensor([10, 20]), and tensor([30, 70]). But the sequence will be different if you iterate through the dataloader a second time! Batch sampler Providing batch_sampler will override batch_size, shuffle, sampler, and drop_last altogether. It is meant to define exactly the batch elements and their content. For instance: >>> DataLoader(ds, batch_sampler=[[1,2,3], [6,5,4], [7,8], [0,9]])` Will yield tensor([10, 20, 30]), tensor([60, 50, 40]), tensor([70, 80]), and tensor([ 0, 90]). Batch sampling on the class Let's say I just want to have 2 elements (different or not) of each class in my batch and have to exclude more examples of each class. So ensuring that not 3 examples are inside of the batch. Let's say you have a dataset with four classes. Here is how I would do it. First, keep track of dataset indices for each class. 
class DS(Dataset): def __init__(self, data): super(DS, self).__init__() self.data = data self.indices = [[] for _ in range(4)] for i, x in enumerate(data): if x > 0 and x % 2: self.indices[0].append(i) if x > 0 and not x % 2: self.indices[1].append(i) if x < 0 and x % 2: self.indices[2].append(i) if x < 0 and not x % 2: self.indices[3].append(i) def classes(self): return self.indices def __getitem__(self, index): return self.data[index] For example: >>> ds = DS([1, 6, 7, -5, 10, -6, 8, 6, 1, -3, 9, -21, -13, 11, -2, -4, -21, 4]) Will give: >>> ds.classes() [[0, 2, 8, 10, 13], [1, 4, 6, 7, 17], [3, 9, 11, 12, 16], [5, 14, 15]] Then for the batch sampler, the easiest way is to create a list of class indices that are available, and have as many class index as there are dataset element. In the dataset defined above, we have 5 items from class 0, 5 from class 1, 5 from class 2, and 3 from class 3. Therefore we want to construct [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]. We will shuffle it. Then, from this list and the dataset classes content (ds.classes()) we will be able to construct the batches. class Sampler(): def __init__(self, classes): self.classes = classes def __iter__(self): classes = copy.deepcopy(self.classes) indices = flatten([[i for _ in range(len(klass))] for i, klass in enumerate(classes)]) random.shuffle(indices) grouped = zip(*[iter(indices)]*2) res = [] for a, b in grouped: res.append((classes[a].pop(), classes[b].pop())) return iter(res) Note - deep copying the list is required since we're popping elements from it. A possible output of this sampler would be: [(15, 14), (16, 17), (7, 12), (11, 6), (13, 10), (5, 4), (9, 8), (2, 0), (3, 1)] At this point we can simply use torch.data.utils.DataLoader: >>> dl = DataLoader(ds, batch_sampler=sampler(ds.classes())) Which could yield something like: [tensor([ 4, -4]), tensor([-21, 11]), tensor([-13, 6]), tensor([9, 1]), tensor([ 8, -21]), tensor([-3, 10]), tensor([ 6, -2]), tensor([-5, 7]), tensor([-6, 1])] An easier approach Here is another - easier - approach that will not guarantee to return all elements from the dataset, on average it will... For each batch, first sample class_per_batch classes, then sample batch_size elements from these selected classes (by first sampling a class from that class subset, then sampling from a data point from that class). class Sampler(): def __init__(self, classes, class_per_batch, batch_size): self.classes = classes self.n_batches = sum([len(x) for x in classes]) // batch_size self.class_per_batch = class_per_batch self.batch_size = batch_size def __iter__(self): classes = random.sample(range(len(self.classes)), self.class_per_batch) batches = [] for _ in range(self.n_batches): batch = [] for i in range(self.batch_size): klass = random.choice(classes) batch.append(random.choice(self.classes[klass])) batches.append(batch) return iter(batches) You can try it this way: >>> s = Sampler(ds.classes(), class_per_batch=2, batch_size=4) >>> list(s) [[16, 0, 0, 9], [10, 8, 11, 2], [16, 9, 16, 8], [2, 9, 2, 3]] >>> dl = DataLoader(ds, batch_sampler=s) >>> list(iter(dl)) [tensor([ -5, -6, -21, -13]), tensor([ -4, -4, -13, -13]), tensor([ -3, -21, -2, -5]), tensor([-3, -5, -4, -6])]
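Note: the snippets above use a few names that aren't defined in them. A minimal set of definitions that would make the Sampler self-contained:

import copy
import random
from itertools import chain

def flatten(list_of_lists):
    # e.g. [[0, 0], [1], [2, 2]] -> [0, 0, 1, 2, 2]
    return list(chain.from_iterable(list_of_lists))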
https://stackoverflow.com/questions/66065272/
Why doesn't this input pass through this simple PyTorch model?
I have an input/tensor whose shape is: torch.Size([256, 3, 28, 28]) (batch size here is 256, 3 channels, 28x28 image) And a model like so: class Model(nn.Module): def __init__(self): super().__init__() self.network = nn.Sequential( nn.Conv2d(3, 28, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(28, 56, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 56 x 16 x 16 nn.Conv2d(56, 112, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(112, 112, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 112 x 8 x 8 nn.Conv2d(112, 224, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(224, 224, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 224 x 4 x 4 nn.Flatten(), nn.Linear(224 * 4 * 4, 896), nn.ReLU(), nn.Linear(896, 512), nn.ReLU(), nn.Linear(512, 2)) def forward(self, xb): return self.network(xb) When I attempt to pass data forward, it fails with: ... return self.network(xb) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward input = module(input) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "/home/stark/anaconda3/envs/torch-env/lib/python3.8/site-packages/torch/nn/functional.py", line 1690, in linear ret = torch.addmm(bias, input, weight.t()) RuntimeError: mat1 dim 1 must match mat2 dim 0 What am I missing? Thanks!
nn.MaxPool2d(2, 2), # output: 56 x 16 x 16 This is wrong. The original input is of size (256, 3, 28, 28). The convolutional and ReLU layers you're using do not change the batch, height, or width dimensions; they only change the "channel" dimension. Prior to the max pooling layer, the tensor size is (256, 56, 28, 28). The max pooling layer has a kernel size of 2 and a stride of 2, so it will cut both the height and width in half. So the output of this max pooling layer is of size (256, 56, 14, 14). For the same reason, the output of the next max pooling layer is of size (256, 112, 7, 7), and the output of the last max pooling layer is of size (256, 224, 3, 3). So you can either fix this by changing your input size to (256, 3, 32, 32), if that's an option, or changing the first linear layer to nn.Linear(224 * 3 * 3, 896)
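You can sanity-check the pooling arithmetic directly:

import torch
import torch.nn as nn

x = torch.randn(256, 56, 28, 28)    # the tensor entering the first max pool
print(nn.MaxPool2d(2, 2)(x).shape)  # torch.Size([256, 56, 14, 14]), not 16x16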
https://stackoverflow.com/questions/66072748/
BERT Convert 'SpanAnnotation' to answers using scores from hugging face models
I'm following along with the documentation for importing a pretrained model question and answer model from huggingface from transformers import BertTokenizer, BertForQuestionAnswering import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForQuestionAnswering.from_pretrained('bert-base-uncased') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits this returns start and end scores, but how can I get a meaningful text answer from here?
So I did a little digging around and it looks like scores can be converted to tokens which can be used to build the answer. Here is a short example: answer_start = torch.argmax(start_scores) answer_end = torch.argmax(end_scores) + 1 tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0][answer_start:answer_end]))
https://stackoverflow.com/questions/66073395/
"RuntimeError: expected scalar type Double but found Float" in Pytorch CNN training
I just begin to learn Pytorch and create my first CNN. The dataset contains 3360 RGB images and I converted them to a [3360, 3, 224, 224] tensor. The data and label are in the dataset(torch.utils.data.TensorDataset). Below is the training code. def train_net(): dataset = ld.load() data_iter = Data.DataLoader(dataset, batch_size=168, shuffle=True) net = model.VGG_19() summary(net, (3, 224, 224), device="cpu") loss_func = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9, dampening=0.1) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1) for epoch in range(5): print("epoch:", epoch + 1) train_loss = 0 for i, data in enumerate(data_iter, 0): x, y = data print(x.dtype) optimizer.zero_grad() out = net(x) loss = loss_func(out, y) loss.backward() optimizer.step() train_loss += loss.item() if i % 100 == 99: print("loss:", train_loss / 100) train_loss = 0.0 print("finish train") Then I have this error: Traceback (most recent call last): File "D:/python/DeepLearning/VGG/train.py", line 52, in <module> train_net() File "D:/python/DeepLearning/VGG/train.py", line 29, in train_net out = net(x) File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "D:\python\DeepLearning\VGG\model.py", line 37, in forward out = self.conv3_64(x) File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "D:\python\lib\site-packages\torch\nn\modules\container.py", line 117, in forward input = module(input) File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "D:\python\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward return self._conv_forward(input, self.weight) File "D:\python\lib\site-packages\torch\nn\modules\conv.py", line 419, in _conv_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: expected scalar type Double but found Float I think there is something wrong with x and I print its type by print(x.dtype): torch.float64 which is double instead of float. Do you know what`s wrong? Thanks for your help!
That error is actually referring to the weights of the conv layer, which are float32 by default, at the point where the matrix multiplication is called: your input is double (float64 in PyTorch) while the weights in the conv layer are float32. So the solution in your case is: def train_net(): dataset = ld.load() data_iter = Data.DataLoader(dataset, batch_size=168, shuffle=True) net = model.VGG_19() summary(net, (3, 224, 224), device="cpu") loss_func = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9, dampening=0.1) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1) for epoch in range(5): print("epoch:", epoch + 1) train_loss = 0 for i, data in enumerate(data_iter, 0): x, y = data # //_______________ x = x.float() # HERE IS THE CHANGE \\ print(x.dtype) optimizer.zero_grad() out = net(x) loss = loss_func(out, y) loss.backward() optimizer.step() train_loss += loss.item() if i % 100 == 99: print("loss:", train_loss / 100) train_loss = 0.0 print("finish train") This will work for sure
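The opposite conversion also works if you actually need float64 precision, at a cost in speed and memory: cast the model once instead of the data, e.g.

net = net.double()  # promote all model weights to float64 so they match float64 inputs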
https://stackoverflow.com/questions/66074684/
Pytorch nn.Linear RuntimeError: mat1 dim 1 must match mat2 dim 0
class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 16, 3, padding=1) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) self.max_pool = nn.MaxPool2d(2) self.fc1 = nn.Linear(7*7*32, 128) self.fc1 = nn.Linear(128, 10) def forward(self, x): x = self.max_pool(F.relu(self.conv1(x))) x = self.max_pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.softmax(x, dim=1) return x The above is the model. The shape of input image is 1×28×28 with batch_size 16. I tried to print x.shape after flatten, which is (16, 1568), completely corresponding to the input size of self.fc1(). Does anyone have any idea?
You have assigned nn.Linear(128, 10) to self.fc1, i.e. self.fc1 = nn.Linear(128, 10), which overwrites the nn.Linear(7*7*32, 128) layer defined on the previous line. As a result, forward feeds the flattened (16, 1568) tensor into a layer expecting 128 inputs, which raises the size-mismatch error. Please change it to self.fc2 = nn.Linear(128, 10)
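The corrected constructor lines would then read:

self.fc1 = nn.Linear(7 * 7 * 32, 128)  # takes the 1568 flattened features
self.fc2 = nn.Linear(128, 10)          # previously overwrote self.fc1 by mistake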
https://stackoverflow.com/questions/66074696/
Best input size for the first layer of a 1D CNN
I'm trying to build a 1D CNN with time series. The input is of length 500. There are (only) 2 labels. The architecture which I built so far is the following: there are 3 convolution layers each, of them followed by an activation layer. The first convolution layer takes 50 channels as input. import torch import torch.nn as nn import numpy as np import random class Simple1DCNN3(torch.nn.Module): def __init__(self): super(Simple1DCNN5, self).__init__() self.sequence = nn.Sequential( torch.nn.Conv1d(in_channels=50, out_channels=64, kernel_size=5, stride=2), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=128, out_channels=256, kernel_size=1), torch.nn.ReLU(), ) self.fc1 = nn.Linear(256, 2) def forward(self, x): x = x.view(1, 50,-1) for layer in self.sequence: x = layer(x) print(x.size()) x = x.view(1,-1) #print(x.size()) x = self.fc1(x) #print(x.size()) return x net = Simple1DCNN3() input_try = np.random.uniform(-10, 10, 500) input_try = torch.from_numpy(input_try).float() net(input_try) print("input successfull passed to net") input_try_modif = input_try.view(1, 50,-1) print(input_try.shape) print(input_try_modif.shape) As far as I understood, that forced me to segment the input in 10 segments of 50 timepoints. Am I understanding it wrong ? Wouldn't it be wiser to construct the first layer with 500 channels as inputs and have a sliding window kernel? I tried it in the following other script but got the following error message import torch import torch.nn as nn import numpy as np import random class Simple1DCNN4(torch.nn.Module): def __init__(self): super(Simple1DCNN5, self).__init__() self.sequence = nn.Sequential( torch.nn.Conv1d(in_channels=500, out_channels=64, kernel_size=5, stride=2), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=128, out_channels=256, kernel_size=1), torch.nn.ReLU(), ) self.fc1 = nn.Linear(256, 2) def forward(self, x): x = x.view(1, 50,-1) for layer in self.sequence: x = layer(x) print(x.size()) x = x.view(1,-1) #print(x.size()) x = self.fc1(x) #print(x.size()) return x net = Simple1DCNN4() input_try = np.random.uniform(-10, 10, 500) input_try = torch.from_numpy(input_try).float() net(input_try) print("input successfull passed to net") input_try_modif = input_try.view(1, 50,-1) print(input_try.shape) print(input_try_modif.shape) Error message: RuntimeError: Given groups=1, weight of size [64, 500, 5], expected input[1, 50, 10] to have 500 channels, but got 50 channels instead EDIT Thanks to the answer of @ghchoi, here is the code of the working kernel. For this, I also had to change the kernel size to all the convolutional layers to 1. class Simple1DCNN5(torch.nn.Module): def __init__(self): super(Simple1DCNN5, self).__init__() self.sequence = nn.Sequential( torch.nn.Conv1d(in_channels=500, out_channels=64, kernel_size=1, stride=2), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=64, out_channels=128, kernel_size=1), torch.nn.ReLU(), torch.nn.Conv1d(in_channels=128, out_channels=256, kernel_size=1), torch.nn.ReLU(), ) self.fc1 = nn.Linear(256, 2) def forward(self, x): x = x.view(1, 500,-1) for layer in self.sequence: x = layer(x) #print(x.size()) x = x.view(1,-1) #print(x.size()) x = self.fc1(x) #print(x.size()) return x The kind of data I have is mono-derivation ECG (electrocardiogram) signal of 2 seconds. This is a recording of the electrical signal of the hear. 
Here is the idea of what a sample could look like (plotted on a 2D graph): time on the x-axis and voltage/amplitude on the y-axis (figure omitted)
Try def forward(self, x): x = x.view(1, 500, -1) ... net = Simple1DCNN4() input_try = np.random.uniform(-10, 10, 5000) In this way, the input for the first Conv1d will have the 500 channels.
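For completeness, the sliding-window variant the question asks about (one input channel, with the kernel moving along the 500 time steps) would look like this instead; this is a sketch, not a claim about which layout works better for ECG data:

import torch

x = torch.randn(500).view(1, 1, 500)  # (batch, channels=1, length=500): one mono-lead trace
conv = torch.nn.Conv1d(in_channels=1, out_channels=64, kernel_size=5, stride=2)
print(conv(x).shape)  # torch.Size([1, 64, 248]): the kernel slides over time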
https://stackoverflow.com/questions/66082592/
Get some layers in a pytorch model that is not defined by nn.Sequential
I have a network defined below. class model_dnn_2(nn.Module): def __init__(self): super(model_dnn_2, self).__init__() self.flatten = Flatten() self.fc1 = nn.Linear(784, 200) self.fc2 = nn.Linear(200, 100) self.fc3 = nn.Linear(100, 100) self.fc4 = nn.Linear(100, 10) def forward(self, x): x = self.flatten(x) x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = F.relu(x) x = self.fc4(x) I would like to take the last two layers along with the relu functions. Using children method I get the following >>> new_model = nn.Sequential(*list(model.children())[-2:]) >>> new_model Sequential( (0): Linear(in_features=100, out_features=100, bias=True) (1): Linear(in_features=100, out_features=10, bias=True) ) But I would like to have the Relu function present in between the layers-just like the original model, i.e the new model should be like: >>> new_model Sequential( (0): Linear(in_features=100, out_features=100, bias=True) (1): Relu() (2): Linear(in_features=100, out_features=10, bias=True) ) I think the children method of the model is using the class initialization to create the model and thus the problem arises. How can I obtain the model?
The way you implemented your model, the ReLU activations are not layers, but rather functions. When listing sub-layers (aka "children") of your module you do not see the ReLUs. You can change your implementation: class model_dnn_2(nn.Module): def __init__(self): super(model_dnn_2, self).__init__() self.layers = nn.Sequential( nn.Flatten(), nn.Linear(784, 200), nn.ReLU(), # now you are using a ReLU _layer_ nn.Linear(200, 100), nn.ReLU(), # this is a different ReLU _layer_ nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 10) ) def forward(self, x): y = self.layers(x) return y More on the difference between layers and functions can be found here.
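With this layout, the slicing from the question now picks up the activations too, since each ReLU is a real sub-module. Using the refactored model above (note the children now live under model.layers):

new_model = nn.Sequential(*list(model.layers.children())[-3:])
# Sequential(
#   (0): Linear(in_features=100, out_features=100, bias=True)
#   (1): ReLU()
#   (2): Linear(in_features=100, out_features=10, bias=True)
# )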
https://stackoverflow.com/questions/66085134/
Calculate padding for 3D CNN in Pytorch
I'm currently trying to apply a 3D CNN to a set of images with the dimensions of 193 x 229 x 193 and would like to retain the same image dimensions through each convolutional layer (similar to tensorflow's padding=SAME). I know that the padding can be calculated as follows: S=Stride P=Padding W=Width K=Kernel size P = ((S-1)*W-S+K)/2 Which yields a padding of 1 for the first layer: P = ((1-1)*193-1+3)/2 P = 1.0 Although I also get a result of 1.0 for each of the subsequent layers. Anyone have any suggestions? Sorry, beginner here! Reproducible example: import torch import torch.nn as nn x = torch.randn(1, 1, 193, 229, 193) padding = ((1-1)*96-1+3)/2 print(padding) x = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)(x) print("shape after conv1: " + str(x.shape)) x = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3,padding=1)(x) x = nn.BatchNorm3d(8)(x) print("shape after conv2 + batch norm: " + str(x.shape)) x = nn.ReLU()(x) print("shape after reLU:" + str(x.shape)) x = nn.MaxPool3d(kernel_size=2, stride=2)(x) print("shape after max pool" + str(x.shape)) x = nn.Conv3d(in_channels=8, out_channels=16, kernel_size=3,padding=1)(x) print("shape after conv3: " + str(x.shape)) x = nn.Conv3d(in_channels=16, out_channels=16, kernel_size=3,padding=1)(x) print("shape after conv4: " + str(x.shape)) Current output: shape after conv1: torch.Size([1, 8, 193, 229, 193]) shape after conv2 + batch norm: torch.Size([1, 8, 193, 229, 193]) shape after reLU:torch.Size([1, 8, 193, 229, 193]) shape after max pooltorch.Size([1, 8, 96, 114, 96]) shape after conv3: torch.Size([1, 16, 96, 114, 96]) shape after conv4: torch.Size([1, 16, 96, 114, 96]) Desired output: shape after conv1: torch.Size([1, 8, 193, 229, 193]) shape after conv2 + batch norm: torch.Size([1, 8, 193, 229, 193]) ... shape after conv3: torch.Size([1, 16, 193, 229, 193]) shape after conv4: torch.Size([1, 16, 193, 229, 193])
TLDR; your formula also applies to nn.MaxPool3d You are using a max pool layer of kernel size 2 (implicitly (2,2,2)) with a stride of 2 (implicitly (2,2,2)). This means for every 2x2x2 block you're only getting a single value. In other words - as the name implies: only the maximum value from every 2x2x2 block is pooled to the output array. That's why you're going from (1, 8, 193, 229, 193) to (1, 8, 96, 114, 96) (notice the division by 2). Of course, if you set kernel_size=3 and stride=1 on nn.MaxPool3d, you will preserve the shape of your blocks. Let #x be the input shape, and #w the kernel shape. If we want the output to have the same size, then #x = floor((#x + 2p - #w)/s + 1) needs to be true. That's 2p = s(#x - 1) - #x + #w = #x(s - 1) + #w - s (your formula) Since s = 2 and #w = 2, then 2p = #x which is not possible.
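A quick sanity check of both cases (a sketch using your tensor shape):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 8, 193, 229, 193)
    print(nn.MaxPool3d(kernel_size=2, stride=2)(x).shape)             # torch.Size([1, 8, 96, 114, 96])
    print(nn.MaxPool3d(kernel_size=3, stride=1, padding=1)(x).shape)  # torch.Size([1, 8, 193, 229, 193])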
https://stackoverflow.com/questions/66088309/
Back-Propagation of y = x / sum(x, dim=0) where size of tensor x is (H,W)
Q1. I'm trying to make my custom autograd function with pytorch. But I had a problem with making analytical back-propagation for y = x / sum(x, dim=0), where the size of tensor x is (Height, Width) (x is 2-dimensional). Here's my code class MyFunc(torch.autograd.Function): @staticmethod def forward(ctx, input): ctx.save_for_backward(input) input = input / torch.sum(input, dim=0) return input @staticmethod def backward(ctx, grad_output): input = ctx.saved_tensors[0] H, W = input.size() sum = torch.sum(input, dim=0) grad_input = grad_output * (1/sum - input*1/sum**2) return grad_input I used gradcheck (from torch.autograd) to compare the Jacobian matrices, from torch.autograd import gradcheck func = MyFunc.apply input = (torch.randn(3,3,dtype=torch.double,requires_grad=True)) test = gradcheck(func, input) and the result showed that the Jacobians did not match (the screenshot of the gradcheck output is not reproduced here). Please help me get the correct back-propagation result. Thanks! Q2. Thanks for the answers! Because of your help, I could implement back-propagation in the case of an (H,W) tensor. However, while implementing back-propagation for an (N,H,W) tensor, I ran into a problem. I think the problem is how I initialize the new tensor. Here's my new code import torch import torch.nn as nn import torch.nn.functional as F class MyFunc(torch.autograd.Function): @staticmethod def forward(ctx, input): ctx.save_for_backward(input) N = input.size(0) for n in range(N): input[n] /= torch.sum(input[n], dim=0) return input @staticmethod def backward(ctx, grad_output): input = ctx.saved_tensors[0] N, H, W = input.size() I = torch.eye(H).unsqueeze(-1) sum = input.sum(1) grad_input = torch.zeros((N,H,W), dtype = torch.double, requires_grad=True) for n in range(N): grad_input[n] = ((sum[n] * I - input[n]) * grad_output[n] / sum[n]**2).sum(1) return grad_input The gradcheck code is from torch.autograd import gradcheck func = MyFunc.apply input = (torch.rand(2,2,2,dtype=torch.double,requires_grad=True)) test = gradcheck(func, input) print(test) and the result is an error (the screenshot is not reproduced here). I don't know why the error occurs... Your help will be very helpful for me to implement my own convolutional network. Thanks! Have a nice day.
Let's look at an example with a single column, for instance: [[x1], [x2], [x3]]. Let sum be x1 + x2 + x3, then normalizing x will give y = [[y1], [y2], [y3]] = [[x1/sum], [x2/sum], [x3/sum]]. You're looking for dL/dx1, dL/dx2, and dL/dx3 - we'll just write them as: dx1, dx2, and dx3. Same for all dL/dyi. So dx1 is equal to dL/dy1*dy1/dx1 + dL/dy2*dy2/dx1 + dL/dy3*dy3/dx1. That's because x1 contributes to all output elements on the corresponding column: y1, y2, and y3. We have: dy1/dx1 = d(x1/sum)/dx1 = (sum - x1)/sum² dy2/dx1 = d(x2/sum)/dx1 = -x2/sum² similarly, dy3/dx1 = d(x3/sum)/dx1 = -x3/sum² Therefore dx1 = (sum - x1)/sum²*dy1 - x2/sum²*dy2 - x3/sum²*dy3. Same for dx2 and dx3. As a result, the Jacobian is [dxi]_i = (sum - xi)/sum² and [dxi]_j = -xj/sum² (for all j different from i). In your implementation, you seem to be missing all non-diagonal components. Keeping the same one-column example, with x1=2, x2=3, and x3=5: >>> x = torch.tensor([[2.], [3.], [5.]]) >>> sum = x.sum(0) tensor([10]) The Jacobian will be: >>> J = (sum*torch.eye(x.size(0)) - x)/sum**2 tensor([[ 0.0800, -0.0200, -0.0200], [-0.0300, 0.0700, -0.0300], [-0.0500, -0.0500, 0.0500]]) For an implementation with multiple columns, it's a bit trickier, more specifically for the shape of the diagonal matrix. It's easier to keep the column axis last so we don't have to bother with broadcasting: >>> x = torch.tensor([[2., 1], [3., 3], [5., 5]]) >>> sum = x.sum(0) tensor([10., 9.]) >>> diag = sum*torch.eye(3).unsqueeze(-1).repeat(1, 1, len(sum)) tensor([[[10., 9.], [ 0., 0.], [ 0., 0.]], [[ 0., 0.], [10., 9.], [ 0., 0.]], [[ 0., 0.], [ 0., 0.], [10., 9.]]]) Above diag has a shape of (3, 3, 2) where the two columns are on the last axis. Notice how we didn't need to broadcast sum. What I wouldn't have done is: torch.eye(3).unsqueeze(0).repeat(len(sum), 1, 1). Since with this kind of shape - (2, 3, 3) - you will have to use sum[:, None, None], and will need further broadcasting down the road... The Jacobian is simply: >>> J = (diag - x)/sum**2 tensor([[[ 0.0800, 0.0988], [-0.0300, -0.0370], [-0.0500, -0.0617]], [[-0.0200, -0.0123], [ 0.0700, 0.0741], [-0.0500, -0.0617]], [[-0.0200, -0.0123], [-0.0300, -0.0370], [ 0.0500, 0.0494]]]) You can check the results by backpropagating through the operation using an arbitrary dy vector (not with torch.ones though, you'll get 0s because of J!). After backpropagating, x.grad should be equal to torch.einsum('abc,bc->ac', J, dy).
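Putting the pieces together, a minimal sketch of the corrected autograd function for the 2-D case (assuming x has non-zero column sums, as in your test):

    import torch

    class MyFunc(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x / x.sum(0)

        @staticmethod
        def backward(ctx, grad_output):
            x, = ctx.saved_tensors
            H = x.size(0)
            s = x.sum(0)                                                       # shape (W,)
            diag = s * torch.eye(H, dtype=x.dtype, device=x.device).unsqueeze(-1)  # (H, H, W)
            J = (diag - x) / s**2                                              # full Jacobian per column
            # contract the Jacobian with the incoming gradient, column by column
            return torch.einsum('abc,bc->ac', J, grad_output)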
https://stackoverflow.com/questions/66089318/
AllenNLP DatasetReader.read returns generator instead of AllennlpDataset
While studying the AllenNLP framework (version 2.0.1), I tried to implement the example code from https://guide.allennlp.org/training-and-prediction#1. While reading the data from a Parquet file I got: TypeError: unsupported operand type(s) for +: 'generator' and 'generator' for the next line: vocab = build_vocab(train_data + dev_data) I suspect the return value should be AllennlpDataset but maybe I got it mixed up. What did I do wrong? Full code: train_path = <some_path> test_path = <some_other_path> class ClassificationJobReader(DatasetReader): def __init__(self, lazy: bool = False, tokenizer: Tokenizer = None, token_indexers: Dict[str, TokenIndexer] = None, max_tokens: int = None): super().__init__(lazy) self.tokenizer = tokenizer or WhitespaceTokenizer() self.token_indexers = token_indexers or {'tokens': SingleIdTokenIndexer()} self.max_tokens = max_tokens def _read(self, file_path: str) -> Iterable[Instance]: df = pd.read_parquet(file_path) for idx in df.index: text = df['title'][idx] + ' ' + df['description'][idx] print(f'text : {text}') label = df['class_id'][idx] print(f'label : {label}') tokens = self.tokenizer.tokenize(text) if self.max_tokens: tokens = tokens[:self.max_tokens] text_field = TextField(tokens, self.token_indexers) label_field = LabelField(label) fields = {'text': text_field, 'label': label_field} yield Instance(fields) def build_dataset_reader() -> DatasetReader: return ClassificationJobReader() def read_data(reader: DatasetReader) -> Tuple[Iterable[Instance], Iterable[Instance]]: print("Reading data") training_data = reader.read(train_path) validation_data = reader.read(test_path) return training_data, validation_data def build_vocab(instances: Iterable[Instance]) -> Vocabulary: print("Building the vocabulary") return Vocabulary.from_instances(instances) dataset_reader = build_dataset_reader() train_data, dev_data = read_data(dataset_reader) vocab = build_vocab(train_data + dev_data) Thanks for your help
Please find below the code fix first and the explanation afterwards. Code Fix # the extend_from_instances expands your vocabulary with the instances passed as an arg # and is therefore equivalent to Vocabulary.from_instances(train_data + dev_data) # previously vocabulary.extend_from_instances(train_data) vocabulary.extend_from_instances(dev_data) Explanation This is because the AllenNLP API has had a couple of breaking changes in allennlp==2.0.1. You can find the changelog here and the upgrade guide here. The guide is outdated as per my understanding (it reflects allennlp<=1.4). The DatasetReader returns a generator now, as opposed to a List previously. DatasetReader used to have a parameter called "lazy" which was for lazy loading of data. It was False by default, and therefore dataset_reader.read would return a List previously. However, as of v2.0 (if I remember correctly), lazy loading is applied by default, and it therefore returns a generator by default. As you know, the "+" operator has not been overridden for generator objects, and therefore you cannot simply add two generators. So, you can simply use vocab.extend_from_instances to achieve the same behavior as before. Hope this helped you. If you need a full code snippet, please leave a comment below; I could post a related gist and share it with you. Good day!
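Alternatively, since Vocabulary.from_instances only needs an iterable, you could chain the two generators (a sketch; remember that generators are single-use, so you would have to call reader.read(...) again before training):

    from itertools import chain

    vocab = Vocabulary.from_instances(chain(train_data, dev_data))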
https://stackoverflow.com/questions/66091092/
how to access value of a learned parameter of an activation function in a sequential network
I am implementing my custom activation function with learnable parameters. For example, this can be similar to PReLU https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html which has a learnable parameter a. How can I access/view the a parameter value after training? Solution: the issue was related to the fact that it was a sequential network, and this helped solve it: https://discuss.pytorch.org/t/access-weights-of-a-specific-module-in-nn-sequential/3627
The nn.PReLU layer is a nn.Module, like most other layers you can access the weights directly using the weight property. >>> act = nn.PReLU() >>> act.weight Parameter containing: tensor([0.2500], requires_grad=True)
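Inside a sequential network, you can reach the same parameter by index or by scanning the sub-modules (a small sketch; the layer sizes here are arbitrary):

    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 10), nn.PReLU(), nn.Linear(10, 1))

    # index into the Sequential directly...
    print(model[1].weight)

    # ...or scan the sub-modules for PReLU layers after training:
    for name, module in model.named_modules():
        if isinstance(module, nn.PReLU):
            print(name, module.weight.data)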
https://stackoverflow.com/questions/66100598/
Using Pytorch how to define a tensor with indices and corresponding values
Problem I have a list of indices and a list of values like so: i = torch.tensor([[2, 2, 1], [2, 0, 2]]) v = torch.tensor([1, 2, 3]) I want to define a (3x3 for the example) matrix which contains the values v at the indices i (1 at position (2,2), 2 at position (2, 0) and 3 at position (1,2)): tensor([[0, 0, 0], [0, 0, 3], [2, 0, 1]]) What I have tried I can do it using a trick, with torch.sparse and .to_dense(), but I feel that it's not the "pytorchic" way to do it, nor the most efficient: f = torch.sparse.FloatTensor(indices, values, torch.Size([3, 3])) print(f.to_dense()) Any idea for a better solution? Ideally I would appreciate a solution at least as fast as the one provided above. Of course this was just an example; no particular structure in the tensors i and v is assumed (nor in the dimensions).
There is an alternative, as below: import torch i = torch.tensor([[2, 2, 1], [2, 0, 2]]) v = torch.tensor([1, 2, 3], dtype=torch.float) # enforcing same data-type target = torch.zeros([3,3], dtype=torch.float) # enforcing same data-type target.index_put_(tuple([k for k in i]), v) print(target) The target tensor will be as follows: tensor([[0., 0., 0.], [0., 0., 3.], [2., 0., 1.]]) This medium.com blog article provides a comprehensive list of all index functions for PyTorch Tensors.
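If your index list can contain duplicates and you want the corresponding values summed instead of overwritten, index_put_ also accepts an accumulate flag:

    i = torch.tensor([[2, 2, 2], [2, 0, 2]])           # (2, 2) appears twice
    v = torch.tensor([1., 2., 3.])
    target = torch.zeros(3, 3)
    target.index_put_(tuple(i), v, accumulate=True)    # target[2, 2] == 1 + 3 == 4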
https://stackoverflow.com/questions/66103930/
Memory leak when using lambda in Python class
I have detected a memory leak in Python if I use a lambda function inside a class. Here's the code to reproduce the leak: import torch # import numpy as np class Class1(object): def __init__(self, x0): self.x0 = x0 self.obj2 = Class2() self._leak_fcn = lambda: self.obj2.fcn() # source of memory leak! class Class2(object): def fcn(self): pass def fcn(x0): obj1 = Class1(x0) return x0 def test_fcn(): shape = (50000000, 3) y0 = torch.randn(shape).to(torch.double) # y0 = np.random.randn(*shape) y = fcn(y0) return y for i in range(1000): print(i) test_fcn() The memory leak happens even if I change it to numpy (without using pytorch). No memory leak is detected if I comment out the line containing self._leak_fcn, or if I write _leak_fcn as a method instead of a lambda. What is happening here? I am not sure if this leak is from Python, or if both PyTorch and NumPy are suffering the same leak. FYI: I am using Python 3.8.5. EDIT: I know there is a memory leak here because if I run it for a long period, my memory fills up (observed with htop) and the process is killed when it runs out of memory.
There is no leak here but simply a race as to whether the instances of Class1 get garbage collected soon enough to allow the torch or numpy buffers indirectly anchored by those Class1 instances to be freed before the process no longer has enough memory to allocate another torch or numpy buffer. Garbage collection is needed to break the PyObject reference cycle mentioned by @ThierryLathuille. The fact that a delay in garbage collection is the issue is provable by simply changing the source from the example to import gc and to add a call to gc.collect() in the last loop of the program, because this makes it so that garbage collection will always happen soon enough (assuming you have enough memory available on the system that the program can make it even one time through the loop). To prove this, add "import gc" to the top of the program and also make the last loop look like this: for i in range(1000): print(i) test_fcn() gc.collect() You will then see that the program can run to completion (again, assuming you have enough memory to make it through the loop at least once). The second thing one might want to confirm is that the garbage collection is simply not happening soon enough for the particular configuration but would happen eventually. This is certainly the case, and provably so. The way to do this is to reduce the memory used per numpy or torch buffer enough that the program does not run out of memory before garbage collection starts allowing some of those buffers to be freed, but not reduce it so much that the program could run to completion without any of those numpy or torch buffers being garbage collected at all. To understand exactly what these numbers are, one needs to understand how large the program will be allowed to grow. On Linux, one limit is the total available memory to be used as "committed memory", but the system may be configured to allow some over-commit to occur. This can be roughly checked by looking at /proc/meminfo. On my system, subtracting Committed_AS from CommitLimit would suggest that if I run the program I will have less than 10 GB available to that program (assuming that other programs don't start or stop or change how much committed memory they use in the meantime). $ grep Commit /proc/meminfo CommitLimit: 9325344 kB Committed_AS: 573964 kB As already reported, the space used per torch or numpy buffer is roughly 1.2 GB (50,000,000 * 3 * 8) so even with some over-commit of memory allowed I would expect my program to store around 8 of those numpy or torch buffers before it crashes.
In fact, on my system (using numpy rather than torch by starting with the original program from the question and commenting out the torch lines and removing the # from the numpy lines) it crashes at around 10 times through the loop: $ python3 junk.py 0 1 2 3 4 5 6 7 8 9 10 Traceback (most recent call last): File "junk.py", line 29, in <module> test_fcn() File "junk.py", line 23, in test_fcn y0 = np.random.randn(*shape) File "mtrand.pyx", line 1233, in numpy.random.mtrand.RandomState.randn File "mtrand.pyx", line 1390, in numpy.random.mtrand.RandomState.standard_normal File "_common.pyx", line 577, in numpy.random._common.cont numpy.core._exceptions.MemoryError: Unable to allocate 1.12 GiB for an array with shape (50000000, 3) and data type float64 So suppose I make the following change to the shape in the program, to reduce the size of the numpy buffer by a factor of 25: # shape = (50000000, 3) shape = (2000000, 3) Now I expect the numpy buffer to take at least 2,000,000 * 3 * 8 = 48,000,000 bytes. There is no way on my small system that the program could have 1,000 of those allocated but not yet freed (because that would take at least 48,000,000,000 bytes). However, the program runs to completion with the modified size, showing that garbage collection must be working. The next question one might ask is how someone who was not able to spot the python reference cycle from the source alone (as Thierry apparently did) could figure this out by analysis. One way to do this analysis is to use chap (an open source tool that runs on Linux and for which source is available at https://github.com/vmware/chap). The input required by chap is a core from the program to be analyzed. As seen above, on my system the program crashed after around 10 times through the loop so I chose to gather a live core for the program (by running "gcore <pid>") after the program had run 8 or so times through the loop. Here is the analysis using chap, starting from the point where chap has reached the chap prompt. We know the numpy buffers are large, so we can find them just by using a command that finds any buffers that are at least 0x1000000 bytes. Running that command shows that there were 7 such allocations at the time the core was gathered: chap> describe used /minsize 1000000 Anchored allocation at 7fc6255b6010 of size 47868ff0 Anchored allocation at 7fc66ce1f010 of size 47868ff0 Anchored allocation at 7fc6b4688010 of size 47868ff0 Anchored allocation at 7fc6fbef1010 of size 47868ff0 Anchored allocation at 7fc74375a010 of size 47868ff0 Anchored allocation at 7fc78afc3010 of size 47868ff0 Anchored allocation at 7fc7d282c010 of size 47868ff0 7 allocations use 0x1f4adef90 (8,400,007,056) bytes. If we pick one of those large allocations, we can see how it is referenced, to see why it is still in memory. One way to do this is to use the following command, which specifies that we should start from the given allocation, extend to any allocations that contain a pointer to the start (offset 0) of that allocation, then stop: chap> describe allocation 7fc7d282c010 /extend @0<-=>StopHere Anchored allocation at 7fc7d282c010 of size 47868ff0 Anchored allocation at 7fc82242c620 of size 50 This allocation matches pattern SimplePythonObject. This has reference count 1 and python type 0x7fc8220a88e0 (numpy.ndarray) 2 allocations use 0x47869040 (1,200,001,088) bytes.
The result shows that there is just one such allocation and, not surprisingly given the source code, it is of type numpy.ndarray. It matches the chap pattern SimplePythonObject because allocations of type numpy.ndarray cannot reference other python objects (the big buffer is not one) and don't have garbage collection headers. Such an object will only be freed when the reference count transitions to 0. The key thing to observe here is that the reference count is 1, meaning we are just looking for one reference to that numpy.ndarray to understand why it is in memory. Continuing from the numpy.ndarray we see that 7 things reference the start of that allocation. 6 of them are instances of python type frame, and are likely uninteresting because there are 6 of them and we also only want to explain one reference to the numpy.ndarray. There is also a single allocation that matches pattern PyDictValuesArray (because it holds the values for a split python dict) and this one is of more interest (remember to scroll through the output below): chap> describe allocation 7fc82242c620 /extend @0<-=>StopHere Anchored allocation at 7fc82242c620 of size 50 This allocation matches pattern SimplePythonObject. This has reference count 1 and python type 0x7fc8220a88e0 (numpy.ndarray) Anchored allocation at 562868cdae20 of size 268 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 562868e44ba0 of size 218 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 562868e48830 of size 238 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 562868e65b10 of size 238 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 7fc81a1f5030 of size 1f8 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 7fc81e5a3200 of size 1d0 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc82393aa00 (frame) Anchored allocation at 7fc8223f9968 of size 28 This allocation matches pattern PyDictValuesArray. It contains values for a split python dict. 8 allocations use 0xd30 (3,376) bytes. An allocation that matches pattern PyDictValuesArray must be referenced by the ma_values field of a python dict.
The allocation holding the values is not itself reference counted, but depends on being freed either when the dict is freed or when the dict no longer needs that allocation. Continuing from that allocation we can see the dict: chap> describe allocation 7fc8223f9968 /extend @0<-=>StopHere Anchored allocation at 7fc8223f9968 of size 28 This allocation matches pattern PyDictValuesArray. It contains values for a split python dict. Anchored allocation at 7fc823a646a8 of size 48 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc823939780 (dict) 2 allocations use 0x70 (112) bytes. It is worth noting here that the dict has reference count 1 (so again only one reference needs to be explained), that the garbage collector is tracking this object and that the header for the actual dict, as opposed to the preceding garbage collection header starts at offset 0x18 to the dict. This means that for references to the dict we need to look for things that point to offset 0x18 of the allocation. (Links to offset 0 of the allocation would generally be used by garbage collection). Using this information we can continue to see who references the dict and see that the dict is referenced by an instance of Class1. chap> describe allocation 7fc823a646a8 /extend @18<-=>StopHere Anchored allocation at 7fc823a646a8 of size 48 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc823939780 (dict) Anchored allocation at 7fc822428d50 of size 38 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x562868afee88 (Class1) 2 allocations use 0x80 (128) bytes. This is not surprising, given the program. The dict must be being used to hold the various fields of the instance of Class1. Again, note that the instance of Class1 has a reference count of 1, meaning we need to explain just one reference to understand why that instance of Class1 could still be in memory. Continuing, again noticing that the allocation has a garbage collector, we can see that the Class1 instance is referenced by an instance of the python cell type, which again is being tracked for garbage collection and has reference count 1: chap> describe allocation 7fc822428d50 /extend @18<-=>StopHere Anchored allocation at 7fc822428d50 of size 38 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x562868afee88 (Class1) Anchored allocation at 7fc823a1a870 of size 30 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc82393d6c0 (cell) 2 allocations use 0x68 (104) bytes. 
Continuing, we can see that the cell is referenced by a tuple, which again has reference count 1: chap> describe allocation 7fc823a1a870 /extend @18<-=>StopHere Anchored allocation at 7fc823a1a870 of size 30 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc82393d6c0 (cell) Anchored allocation at 7fc822428fb8 of size 38 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc823936320 (tuple) 2 allocations use 0x68 (104) bytes. When checking for the tuple, it appears to be referenced by both a tuple and a function but the referencing tuple in this case does not appear to be of interest for our purposes because the garbage collector is not tracking it and also the reference count is 0. The function is definitely interesting because it is being tracked, because it has reference count 1, and because from the source we know that we are looking for such a thing: chap> describe allocation 7fc822428fb8 /extend @18<-=>StopHere Anchored allocation at 7fc822428fb8 of size 38 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc823936320 (tuple) Anchored allocation at 562868aff290 of size 2b8 This allocation matches pattern ContainerPythonObject. This allocation is not currently tracked by the garbage collector. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 0 and python type 0x7fc823936320 (tuple) Anchored allocation at 7fc81a114938 of size 88 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc82393a860 (function) 3 allocations use 0x378 (888) bytes. Continuing from the function, we complete the cycle because the allocation at 0x7fc8223f9968 that matches pattern PyDictValuesArray has already been seen earlier in our traversal. chap> describe allocation 7fc81a114938 /extend @18<-=>StopHere Anchored allocation at 7fc81a114938 of size 88 This allocation matches pattern ContainerPythonObject. The garbage collector considers this allocation to be reachable. This has a PyGC_Head at the start so the real PyObject is at offset 0x18. This has reference count 1 and python type 0x7fc82393a860 (function) Anchored allocation at 7fc8223f9968 of size 28 This allocation matches pattern PyDictValuesArray. It contains values for a split python dict. 2 allocations use 0xb0 (176) bytes. So we have a cycle that looks like this: Class1 -> dict -> %PyDictValuesArray -> function -> tuple -> cell -> Class1 (back to the start). The allocations in the cycle itself do not take all that much memory, but it does require garbage collection to free it, and the %PyDictValuesArray in that cycle also holds the numpy.ndarray which holds the big buffer.
People including the author of the question and several people in the comments have discussed valid solutions to avoid the growth, so I will avoid any discussion of the fix itself and limit this answer to the above analysis of the reason for the growth.
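As a closing aside that is not part of the analysis above: one simple way to avoid creating the cycle in the first place is to store a bound method of obj2 instead of a lambda that closes over self, e.g.:

    class Class1(object):
        def __init__(self, x0):
            self.x0 = x0
            self.obj2 = Class2()
            # the bound method references obj2 only, not self, so no
            # Class1 -> function -> cell -> Class1 cycle is created
            self._leak_fcn = self.obj2.fcn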
https://stackoverflow.com/questions/66106830/
Installing YOLOv5 dependencies - torchvision?
I want to use yolov5. According to https://pytorch.org/hub/ultralytics_yolov5/, you should have Python>=3.8 and PyTorch>=1.7 installed, as well as the YOLOv5 dependencies. Python and pytorch are up to date: pip show torch Version: 1.7.1 python --version Python 3.9.1 But when I try to install the yolov5 dependencies, I get an error message: pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt ERROR: Could not find a version that satisfies the requirement torchvision>=0.8.1 ERROR: No matching distribution found for torchvision>=0.8.1 An updated version of torchvision is needed (???). So I ran the update, but when I check the version it hasn't worked. pip show torchvision Version: 0.2.2.post3 python -m pip install --upgrade torchvision pip show torchvision Version: 0.2.2.post3 Is torchvision needed for installing the yolov5 dependencies? How do I move forward? I'm on Windows 10. Thanks!
I was getting a similar error. Installing fresh compatible libraries (torch==1.7.1 & torchvision==0.8.2) worked for me. virtualenv -p python3.8 torch17 source torch17/bin/activate pip install cython matplotlib tqdm scipy ipython ninja yacs opencv-python ffmpeg opencv-contrib-python Pillow scikit-image scikit-learn lmfit imutils pyyaml jupyterlab==3 pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html Be careful about your CUDA version and install accordingly. Mine is 10.1. Simply using pip install torch==1.7.1 will install the build for the latest CUDA version (11), which might not match your setup. The virtual environment activation command will be different on Windows; I am on Linux.
https://stackoverflow.com/questions/66107660/
how to convert HuggingFace's Seq2seq models to onnx format
I am trying to convert the Pegasus newsroom model from HuggingFace's transformers to the ONNX format. I followed this guide published by Huggingface. After installing the prereqs, I ran this code: !rm -rf onnx/ from pathlib import Path from transformers.convert_graph_to_onnx import convert convert(framework="pt", model="google/pegasus-newsroom", output=Path("onnx/google/pegasus-newsroom.onnx"), opset=11) and got these errors: ValueError Traceback (most recent call last) <ipython-input-9-3b37ed1ceda5> in <module>() 3 from transformers.convert_graph_to_onnx import convert 4 ----> 5 convert(framework="pt", model="google/pegasus-newsroom", output=Path("onnx/google/pegasus-newsroom.onnx"), opset=11) 6 7 6 frames /usr/local/lib/python3.6/dist-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, encoder_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 938 input_shape = inputs_embeds.size()[:-1] 939 else: --> 940 raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") 941 942 # past_key_values_length ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds I have never seen this error before. Any ideas?
Pegasus is a seq2seq model; you can't directly convert a seq2seq model (encoder-decoder model) using this method. The guide is for BERT, which is an encoder model. Any encoder-only or decoder-only transformer model can be converted using this method. To convert a seq2seq model (encoder-decoder) you have to split it and convert each part separately: the encoder to onnx and the decoder to onnx. You can follow this guide (it was done for T5, which is also a seq2seq model). Why are you getting this error? While converting PyTorch to onnx _ = torch.onnx._export( model, dummy_input, ... ) you need to provide a dummy input to the encoder and to the decoder separately. By default, when converting using this method, it provides the encoder with the dummy variable. Since this method of conversion doesn't accept the decoder of this seq2seq model, it won't give a dummy variable to the decoder, and you get the above error. ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
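To give a rough idea of the split (a sketch only - the attribute paths model.model.encoder / model.model.decoder and the decoder's positional arguments are assumptions based on the Pegasus source shown in the traceback above; adapt them to the model you actually export):

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-newsroom").eval()
    tok = AutoTokenizer.from_pretrained("google/pegasus-newsroom")
    enc_ids = tok("a dummy article", return_tensors="pt")["input_ids"]

    # 1) export the encoder on its own
    torch.onnx.export(model.model.encoder, (enc_ids,), "pegasus-encoder.onnx", opset_version=11)

    # 2) export the decoder with its own dummy inputs, including encoder hidden states
    hidden = model.model.encoder(enc_ids)[0]
    dec_ids = torch.tensor([[model.config.decoder_start_token_id]])
    torch.onnx.export(model.model.decoder, (dec_ids, None, hidden), "pegasus-decoder.onnx", opset_version=11)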
https://stackoverflow.com/questions/66109084/
How to tell PyTorch which CUDA version to take?
I have two versions of CUDA installed on my Ubuntu 16.04 machine: 9.0 and 10.1. They are located in /usr/local/cuda-9.0 and /usr/local/cuda-10.1 respectively. If I install PyTorch 1.6.0 (which needs CUDA 10.1) via pip (pip install torch==1.6.0), it uses version 9.0 and thus detects no GPUs. I already changed my LD_LIBRARY_PATH to "/usr/local/cuda-10.1/lib64:/usr/local/cuda-10.1/cuda/extras/CUPTI/lib64" but PyTorch is still using CUDA 9.0. How do I tell PyTorch to use CUDA 10.1?
Prebuilt wheels for torch built with different versions of CUDA are available at the torch stable releases page. For example you can install torch v1.9.0 built with CUDA v11.1 like this: pip install --upgrade torch==1.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html But not all the combinations are available.
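After installing, you can verify which CUDA toolkit your wheel was built against:

    import torch
    print(torch.version.cuda)          # e.g. '11.1' for the +cu111 wheel
    print(torch.cuda.is_available())   # True if the installed driver/GPU match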
https://stackoverflow.com/questions/66116155/
Is there a way to make a model that creates a mask to drop certain inputs before feeding the masked data to another network?
This might be a question that is kind of dumb, but I'm trying to construct a model that is able to filter out inputs before feeding the filtered output to another network. For example, I have an image that I would like to match against a database of about 100 pictures; I would apply the first network to do some operations that output the top 10 pictures that are most likely to correctly match. Afterwards, I would apply a second network to rematch those top 10 pictures. INPUT --> | NETWORK 1 | --> FILTERED OUTPUT --> | NETWORK 2 | --> FINAL OUTPUT Wondering if there is a way to accomplish this sort of filtering behaviour, where the filtered output is fed to the second model like that.
You could maybe take a look at Boolean index arrays with numpy >>> import numpy as np >>> x = np.array(range(20)) >>> x array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) >>> x[x > 10] array([11, 12, 13, 14, 15, 16, 17, 18, 19]) x > 10 returns an array with 20 booleans, so you can maybe try something like this: x = pic_arr[network1(pic_arr)] network2(x) Where pic_arr is an array with your pictures and network1 returns a list with booleans of which pictures to select.
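In PyTorch itself, the same two-stage idea can be sketched with torch.topk (here network1, network2 and database are placeholders for your own models and data):

    import torch

    scores = network1(database)               # one matching score per picture
    top10 = torch.topk(scores, k=10).indices  # indices of the 10 best candidates
    filtered = database[top10]                # integer indexing works on tensors
    final = network2(filtered)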
https://stackoverflow.com/questions/66116608/
while running netron on colab, getting this "OSError: [Errno 98] Address already in use" error
I'm using Netron for visualizing the model on Colab, as shown in this notebook (line 11). When I run the following script to view the model, import netron enable_netron = True if enable_netron: netron.start(optimized_model_path) I am getting this error: --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-13-86b0c5c2423c> in <module>() 5 if enable_netron: ----> 6 netron.start(optimized_model_path) 5 frames /usr/lib/python3.6/socketserver.py in server_bind(self) 468 if self.allow_reuse_address: 469 self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) --> 470 self.socket.bind(self.server_address) 471 self.server_address = self.socket.getsockname() 472 OSError: [Errno 98] Address already in use How can I fix this issue? I know I can use the desktop version of the app and give it the model, but how can I use Netron in Colab?
Should be able to use portpicker.pick_unused_port(). Here's a simple example: https://colab.research.google.com/gist/blois/227d21df87fe8a390c2a23a93b3726f0/netron.ipynb
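A rough sketch of that idea - note that the exact netron.start signature varies between versions, so the address argument here is an assumption; check your installed version:

    import portpicker
    import netron

    port = portpicker.pick_unused_port()
    netron.start(optimized_model_path, address=("localhost", port))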
https://stackoverflow.com/questions/66119656/
Partial Backwards in PyTorch Graph
I have a medium-sized tensor x. On this medium-sized tensor a computationally expensive function (both forward and backward) q is applied to obtain another medium-sized tensor y. Using y I evaluate many functions f, each producing a scalar; they are not particularly computationally expensive, however they use large internal state, causing torch to build a large computational graph for each of them. Now I want to compute the gradient on x in the following way y = q(x) for f in functions res += f(y) res.backward() The problem with this implementation is that the graphs of all the functions f are retained. That causes the total memory usage to explode. The other possibility would be to compute y = q(x) for f in functions partial = f(y) partial.backward(retain_graph = True) The advantage is that after every function evaluation f the result goes out of scope and the graph is freed, saving massive amounts of memory. However, in this situation the function q(x) is evaluated backwards multiple times, which is very expensive in time. In the ideal situation I would want to first compute the gradient for y using code similar to the second example and then evaluate q backwards only once to get the gradient for x. What is the proper way to do that with PyTorch?
I think this would be the way to achieve it: y = q(x) z = y.detach() z.requires_grad_(True) for f in functions: partial = f(z) # note: f is applied to z, not y partial.backward() y.backward(z.grad) You accumulate all the gradients in z, which mirrors y but lives in a separate computational graph; since z is a leaf of each small graph, the graph of every partial is freed right after its backward, and retain_graph is no longer needed. Then you propagate the accumulated gradients (z.grad) through the first graph - a single backward pass through q.
https://stackoverflow.com/questions/66119892/
How can I detect if a callback is triggered in PyTorch?
I am fine-tuning a BERT model. First, I want to freeze layers and train a bit. When a certain callback is triggered (let's say ReduceLROnPlateau) I want to unfreeze layers. How can I do it?
I'm afraid learning rate schedulers in PyTorch don't provide hooks. Looking at the implementation of ReduceLROnPlateau here, two properties are reset when the scheduler is triggered (i.e. when it identifies a plateau and reduces the learning rate): if self.num_bad_epochs > self.patience: self._reduce_lr(epoch) self.cooldown_counter = self.cooldown self.num_bad_epochs = 0 Based on that, you could wrap your scheduler step call and find out if _reduce_lr was triggered by checking that both scheduler.cooldown_counter == scheduler.cooldown and scheduler.num_bad_epochs == 0 are true.
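A rough sketch of that wrapper - hedged: cooldown_counter and num_bad_epochs are internal attributes rather than public API, model.bert is a hypothetical name for your frozen sub-module, and a nonzero cooldown is used so the check only fires right after an actual reduction:

    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3, cooldown=1)

    def step_and_detect(val_loss):
        scheduler.step(val_loss)
        # right after _reduce_lr fires, cooldown_counter is reset to `cooldown`
        # and num_bad_epochs is reset to 0
        return (scheduler.cooldown_counter == scheduler.cooldown
                and scheduler.num_bad_epochs == 0)

    for epoch in range(num_epochs):
        val_loss = validate(model)                 # hypothetical validation helper
        if step_and_detect(val_loss):
            for p in model.bert.parameters():      # unfreeze the frozen layers
                p.requires_grad = True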
https://stackoverflow.com/questions/66121082/
PyTorch nn.Conv2d output computation
I am using Python 3.8 and PyTorch 1.7.1. I saw code which defines a Conv2d layer as follows: Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) The input 'X' being passed to it is a 4D tensor- X.shape # torch.Size([4, 3, 6, 6]) The output volume for this conv layer is: c1(X).shape # torch.Size([4, 6, 3, 3]) I am trying to use the formula to compute output spatial dimensions for any conv layer: O = ((W - K + 2P)/S) + 1, where W = spatial dimension of image, K = filter/kernel size, P = zero padding & S = stride. For the 'c1' conv layer, we get W = 6, K = 3, S = 2 & P = 1. Using the formula, you get O = ((6 - 3 + (2 x 1)) / 2) + 1 = 5/2 + 1 = 3.5. The output volume: (4, 6, 3, 3) since the number of filters used = 6. How is the spatial output from 'c1' then (3, 3)? What am I not getting? Thanks!
How would you have half a pixel? You're missing the floor function: O = floor(((W - K + 2P)/S) + 1) So the shape of the outputted maps is (3, 3). Here's the complete formula (with dilation) for nn.Conv2d: H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1)
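As a small helper to sanity-check shapes (not part of the original answer):

    import math

    def conv2d_out(w, k, s, p, d=1):
        return math.floor((w + 2*p - d*(k - 1) - 1) / s + 1)

    print(conv2d_out(6, 3, 2, 1))  # 3, matching c1's spatial output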
https://stackoverflow.com/questions/66121934/
Exporting ONNX Model from Deep Learning Frameworks at Operator-level
Hi, I have some questions regarding exporting ONNX models. Let's say we have an LSTM cell from PyTorch. Using torch.onnx.export produces an ONNX model with an LSTM layer. However, I am interested in whether it can produce the ONNX model at the operator level, i.e., matmul, add. Is there a way to do so? If not, is there another way to make an operator-level ONNX model? Thanks, Jake
When you export a model from PyTorch to onnx using the torch.onnx.export() function, it records all the operations that the initial model has used, as mentioned here: we call the torch.onnx.export() function. This will execute the model, recording a trace of what operators are used to compute the outputs. So, yes, it does produce the onnx model at the operator level. You can even visualize the exported .onnx model graph using netron. If you still want to work with individual onnx operators, here are the ONNX Operator Schemas.
https://stackoverflow.com/questions/66130001/
What is the difference between 'torch.backends.cudnn.deterministic=True' and 'torch.set_deterministic(True)'?
My network includes 'torch.nn.MaxPool3d', which throws a RuntimeError when the deterministic flag is on according to the PyTorch docs (version 1.7 - https://pytorch.org/docs/stable/generated/torch.set_deterministic.html#torch.set_deterministic). However, when I inserted the code 'torch.backends.cudnn.deterministic=True' at the beginning of my code, there was no RuntimeError. Why doesn't that code throw a RuntimeError? I wonder whether that code guarantees the deterministic computation of my training process.
torch.backends.cudnn.deterministic=True only applies to CUDA convolution operations, and nothing else. Therefore, no, it will not guarantee that your training process is deterministic, since you're also using torch.nn.MaxPool3d, whose backward function is nondeterministic for CUDA. torch.set_deterministic(), on the other hand, affects all the normally-nondeterministic operations listed here (note that set_deterministic has been renamed to use_deterministic_algorithms in 1.8): https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms As the documentation states, some of the listed operations don't have a deterministic implementation. So if torch.use_deterministic_algorithms(True) is set, they will throw an error. If you need to use nondeterministic operations like torch.nn.MaxPool3d, then, at the moment, there is no way for your training process to be deterministic--unless you write a custom deterministic implementation yourself. Or you could open a GitHub issue requesting a deterministic implementation: https://github.com/pytorch/pytorch/issues In addition, you might want to check out this page: https://pytorch.org/docs/stable/notes/randomness.html
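A quick way to see the difference in practice (a sketch assuming a CUDA device is available; on 1.7 use torch.set_deterministic(True) instead of the renamed call):

    import torch

    torch.use_deterministic_algorithms(True)
    pool = torch.nn.MaxPool3d(2)
    x = torch.randn(1, 1, 8, 8, 8, device="cuda", requires_grad=True)
    y = pool(x).sum()
    y.backward()  # raises RuntimeError: the CUDA backward of MaxPool3d has
                  # no deterministic implementation; with only
                  # torch.backends.cudnn.deterministic=True it runs silently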
https://stackoverflow.com/questions/66130547/
Why do I get a ValueError when using CrossEntropyLoss
I am trying to train a pretty simple 2-layer neural network for a multi-class classification task. I am using CrossEntropyLoss and I get the following error: ValueError: Expected target size (128, 4), got torch.Size([128]) in my training loop at the point where I am trying to calculate the loss. My last layer is a softmax so it outputs the probabilities of each of the 4 classes. My target values are a vector of dimension 128 (just the class values). Am I initializing the CrossEntropyLoss object incorrectly? I looked up existing posts, this one seemed the most relevant: https://discuss.pytorch.org/t/valueerror-expected-target-size-128-10000-got-torch-size-128-1/29424 However, if I had to squeeze my target values, how would that work? Like right now they are just class values, e.g. [0 3 1 0]. Is that not how they are supposed to look? I would think that the loss function maps the highest probability from the last layer and associates that to the appropriate class index. Details: This is using PyTorch Python version is 3.7 NN architecture is: embedding -> pool -> h1 -> relu -> h2 -> softmax Model Def (EDITED): self.embedding_layer = create_embedding_layer(embeddings) self.pool = nn.MaxPool1d(1) self.h1 = nn.Linear(embedding_dim, embedding_dim) self.h2 = nn.Linear(embedding_dim, 4) self.s = nn.Softmax(dim=2) forward pass: x = self.embedding_layer(x) x = self.pool(x) x = self.h1(x) x = F.relu(x) x = self.h2(x) x = self.s(x) return x
The issue is that the output of your model is a tensor shaped as (batch, seq_length, n_classes). Each sequence element in each batch is a four-element tensor corresponding to the predicted probability associated with each class (0, 1, 2, and 3). Your target tensor is shaped (batch,) which is usually the correct shape (you didn't use one-hot-encodings). However, in this case, you need to provide a target for each one of the sequence elements. Assuming the target is the same for each element of your sequence (this might not be true though and is entirely up to you to decide), you may repeat the targets seq_length times. nn.CrossEntropyLoss allows you to provide additional axes, but you have to follow a specific shape layout: Input: (N, C) where C = number of classes, or (N, C, d_1, d_2, ..., d_K) with K≥1 in the case of K-dimensional loss. Target: (N) where each value is 0 ≤ targets[i] ≤ C−1 , or (N, d_1, d_2, ..., d_K) with K≥1 in the case of K-dimensional loss. In your case, C=4 and seq_length (what you referred to as D) would be d_1. >>> seq_length = 10 >>> out = torch.rand(128, seq_length, 4) # mocking model's output >>> y = torch.rand(128).long() # target tensor >>> criterion = nn.CrossEntropyLoss() >>> out_perm = out.permute(0, 2, 1) >>> out_perm.shape torch.Size([128, 4, 10]) # (N, C, d_1) >>> y_rep = y[:, None].repeat(1, seq_length) >>> y_rep.shape torch.Size([128, 10]) # (N, d_1) Then call your loss function with criterion(out_perm, y_rep).
https://stackoverflow.com/questions/66131350/
Pytorch GRU error RuntimeError : size mismatch, m1: [1600 x 3], m2: [50 x 20]
Currently, I'm trying to make the training model for an LSTM and GRU. The LSTM is working perfectly, but once I switched to GRU training, errors pop out, such as this size mismatch error. This is my code path = "new_z_axis" device = "cuda:0" in_size = 3 h_size = 50 n_layers = 3 fc = 20 out = 1 batch_size = 16 seq = 100 epoch = 100 learning_rate = 1e-3 ratio = 0.8 checkpoint = os.path.join("checkpoints","model_"+path+"_"+str(in_size)+".pth") class GRUNet(nn.Module): def __init__(self,in_size,h_size,n_layers,fc_out,out_size,dropout=0.5): super(GRUNet, self).__init__() self.gru = nn.GRU(input_size=in_size,hidden_size=h_size,num_layers=n_layers,dropout=dropout,bias=False) self.fc = nn.Linear(in_features=h_size,out_features=fc_out,bias=False) self.relu = nn.ReLU(inplace=True) self.out = nn.Linear(in_features=fc_out,out_features=out_size,bias=False) self.tanh = nn.Tanh() def forward(self, x, hidden): out, hidden = self.gru(x, hidden) x = self.fc(x) x = self.relu(x) x = self.out(x) x = self.tanh(x) return x, hidden class MyLstm(nn.Module): def __init__(self,in_size,h_size,n_layers,fc_out,out_size,dropout=0.5): super(MyLstm, self).__init__() self.lstm = nn.LSTM(input_size=in_size,hidden_size=h_size,num_layers=n_layers,dropout=dropout,bias=False) self.fc = nn.Linear(in_features=h_size,out_features=fc_out,bias=False) self.relu = nn.ReLU(inplace=True) self.out = nn.Linear(in_features=fc_out,out_features=out_size,bias=False) self.tanh = nn.Tanh() def forward(self,x,hidden): x, hidden = self.lstm(x,hidden) # x = x[-1:] x = self.fc(x) x = self.relu(x) x = self.out(x) x = self.tanh(x) return x, hidden def train(model,train_list,val_list,path,seq,epoch,batch_size,criterion,optimizer,model_type): for e in range(epoch): train_data = load_data(train_list,batch_size) a_loss = 0 a_size = 0 model.train() for x,y in train_data: x,y = x.to(device),y.to(device) bs = x.size()[1] # hidden = (hidden[0].detach(),hidden[1].detach()) # print(x.size(),hidden[0].size()) if model_type == "GRU": h1 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") hidden = h1 hidden = hidden.data else: h1 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") h2 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") hidden = (h1,h2) hidden = tuple([e.data for e in hidden]) model.zero_grad() print (len(hidden)) pred,hidden = model(x,hidden) loss = criterion(pred,y) loss.backward() nn.utils.clip_grad_norm_(model.parameters(),5) optimizer.step() a_loss += loss.detach() a_size += bs # print(e,a_loss/a_size*1e+6) model.eval() with torch.no_grad(): val_data = load_data(val_list,batch_size) b_loss = 0 b_size = 0 for x,y in val_data: x,y = x.to(device),y.to(device) bs = x.size()[1] if model_type == "GRU": h1 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") hidden = h1 hidden = hidden.data else: h1 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") h2 = torch.zeros((n_layers,bs,h_size)).to("cuda:0") hidden = (h1,h2) hidden = tuple([e.data for e in hidden]) pred,hidden = model(x,hidden) loss = criterion(pred,y) b_loss += loss.detach() b_size += bs print("epoch: {} - train_loss: {} - val_loss: {}".format(e+1,float(a_loss.item()/a_size*1e+6),b_loss.item()/b_size*1e+6)) train(modelGRU,train_list,val_list,path,seq,epoch,batch_size,criterionGRU,optimizerGRU,model_type="GRU")
This is the error I got: RuntimeError Traceback (most recent call last) <ipython-input> in <module> ----> 1 train(modelGRU, train_list, val_list, path, seq, epoch, batch_size, criterionGRU, optimizerGRU, model_type="GRU") <ipython-input> in train(model, train_list, val_list, path, seq, epoch, batch_size, criterion, optimizer, model_type) 61 model.zero_grad() 62 print(len(hidden)) ---> 63 pred, hidden = model(x, hidden) 64 loss = criterion(pred, y) 65 loss.backward() ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) <ipython-input> in forward(self, x, hidden) 11 def forward(self, x, hidden): 12 out, hidden = self.gru(x, hidden) ---> 13 x = self.fc(x) 14 x = self.relu(x) 15 x = self.out(x) ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~\Anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input) 85 86 def forward(self, input): ---> 87 return F.linear(input, self.weight, self.bias) 88 89 def extra_repr(self): ~\Anaconda3\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias) 1370 ret = torch.addmm(bias, input, weight.t()) 1371 else: -> 1372 output = input.matmul(weight.t()) 1373 if bias is not None: 1374 output += bias RuntimeError: size mismatch, m1: [1600 x 3], m2: [50 x 20] at C:/w/1/s/tmp_conda_3.7_104508/conda/conda-bld/pytorch_1572950778684/work/aten/src\THC/generic/THCTensorMathBlas.cu:290 Any advice? Thank you
This might have to do with the fact that you are not passing the output of your nn.GRU to the first linear layer in GRUNet's forward function: def forward(self, x, hidden): out, hidden = self.gru(x, hidden) x = self.fc(out)
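For clarity, the fully corrected forward would look like this (everything else in your module unchanged):

    def forward(self, x, hidden):
        out, hidden = self.gru(x, hidden)
        out = self.fc(out)
        out = self.relu(out)
        out = self.out(out)
        out = self.tanh(out)
        return out, hidden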
https://stackoverflow.com/questions/66131870/
Launching the right number of CUDA blocks for a custom PyTorch activation function
I'm currently writing a CUDA kernel for a custom operation (an activation) for PyTorch, but I'm quite unfamiliar with any form of GPU programming. For reference, I was following the Custom C++ & CUDA extension tutorial. A simplified example of the sort of operation I want to do: Let's say I have an input tensor, X_in, which can be of any shape and dims (e.g. something like (16, 3, 50, 100) ). Let's say I also have a 1D tensor, T (for example, T can be a tensor of shape (100,) ). For each value in X_in, I want to calculate an "index" value that should be < len(T). Then the output would be basically the value of that index in T, multiplied or added with some constant. This is something like a "lookup table" operation. An example kernel: __global__ void inplace_lookup_kernel( const scalar_t* __restrict__ T, scalar_t* __restrict__ X_in, const int N) { const int i = blockIdx.x * blockDim.x + threadIdx.x; const int idx = int(X_in[i]) % N; X_in[i] = 5 * T[idx] - 3; } I also wish to do the operation in-place, which is why the output is being computed into X_in. My question is, for an operation like this, which is to be applied pointwise on each value of X_in, how to determine the way to launch a good number of threads/blocks? In the Custom C++ & CUDA extension tutorial, they do so by: const int threads = 1024; const dim3 blocks((state_size + threads - 1) / threads, batch_size); For their use-case, the operation (an lstm variant) has a specific format of input, and thus a fixed number of dimensions, from which blocks can be calculated. But the operation I'm writing should accept inputs of any dimensions and shape. What is the right way to calculate the block number for this situation? I'm currently doing the following: const int threads = 1024; const int nelems = int(X_in.numel()); const dim3 blocks((nelems + threads - 1) / threads); However I'm doing this by intuition, and not with any certainty. Is there a better or correct way to do this? And is there any computational advantage if I define blocks in the format blocks(otherdim_size, batch_size) like in the tutorial?
I'm speculating here, but - since your operations seems to be completely elementwise (w.r.t. X_in); and you don't seem to using interesting SM-specific resources like shared memory, nor a lot of registers per thread, I don't think the grid partition matters all that much. Just treat X_in as a 1-D array according to its layout in memory, and use a 1D grid with a block size of, oh, 256, or 512 or 1024. Of course - always try out your choices to make sure you don't get bitten by unexpected behavior.
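Sketching that out in the extension code (and adding the usual bounds guard, since numel is rarely an exact multiple of the block size - without it, the trailing threads would read and write out of bounds):

    const int threads = 256;                  // 512 or 1024 work just as well here
    const int64_t nelems = X_in.numel();
    const dim3 blocks((nelems + threads - 1) / threads);

    // inside the kernel, guard against threads past the end:
    // const int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
    // if (i >= nelems) return;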
https://stackoverflow.com/questions/66132237/
PyTorch training with batches of different lengths?
Is it possible to train a model with batches that have unequal lengths during an epoch? I am new to PyTorch.
If you take a look at the dataloader documentation, you'll see a drop_last parameter, which explains that sometimes, when the dataset size is not divisible by the batch size, you get a last batch of a different size. So basically the answer is yes, it is possible, it happens often and it does not affect (too much) the training of a neural network. However you must be a bit careful: some pytorch layers deal poorly with very small batch sizes. For example if you happen to have Batchnorm layers, and if you get a batch of size 1, you'll get errors due to the fact that batchnorm at some point divides by len(batch)-1. More generally, training a network that has batchnorms generally requires batches of significant sizes, say at least 16 (literature generally aims for 32 or 64). So if you happen to have variable size batches, take the time to check whether your layers have requirements in terms of batch size for optimal training and convergence. But except in particular cases, your network will train anyway, no worries. As for how to make your batches with custom sizes, I suggest you look at and take inspiration from the pytorch implementation of dataloader and sampler. You may want to implement something similar to BatchSampler and use the batch_sampler argument of Dataloader; two concrete knobs are shown below.
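A sketch of both options (dataset stands for whatever torch Dataset you are using):

    from torch.utils.data import DataLoader

    # simply drop the smaller trailing batch:
    loader = DataLoader(dataset, batch_size=32, drop_last=True)

    # or hand the loader explicit, variable-sized batches of indices:
    batches = [[0, 1, 2], [3, 4, 5, 6, 7], [8, 9]]
    loader = DataLoader(dataset, batch_sampler=batches)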
https://stackoverflow.com/questions/66133492/
How to convert raw image bytes, read from stream, to a tensor of a shape valid to perform Conv2d?
Community, I have a file of bytes organized into chunks of 16384 bytes. Each of the chunks contains an uncompressed image of size 64x64 pixels. The format of the image pixel is 8-bit ABGR. Let's say I have successfully read the chunk into numpy.array: buf = numpy.fromfile( dataFile, dtype=np.uint8, count=16384, offset=offs) The question is how I could convert this array of bytes into a Pytorch tensor so as to be able to perform convolution (Conv2d)? If I understand it properly, the aforementioned convolution (Conv2d) expects the tensor to have separate channel planes instead of a single plane of multichannel pixels. And the extra question is how to get rid of the alpha channel along the way?
In the following code, I'm assuming that your image is stored with interleaved channels (so your bytes are abgrabgrabgr.... and not aaaa....bbbb....gggg....rrrr). In that case the buffer has to be reshaped channels-last first, and the channel axis moved afterwards: import numpy as np import torch # First I reshape the buf into a 64x64 image with 4 interleaved channels buf = buf.reshape((64, 64, 4)) # Then I remove the alpha channel by dropping the first byte of each ABGR pixel bgr_buf = buf[:, :, 1:] # Move channels first ((H, W, C) -> (C, H, W)) and make the array contiguous bgr_buf = np.ascontiguousarray(bgr_buf.transpose(2, 0, 1)) # Then I make it into a pytorch tensor (float, since Conv2d does not accept uint8) tensor = torch.from_numpy(bgr_buf).float() # And finally, pytorch convolutions expect batches of images. So let's make it a batch of 1 batch = tensor.view(1, 3, 64, 64)
https://stackoverflow.com/questions/66134941/
Pytorch Linear Regression with squared features
I am new to PyTorch and I would like to implement linear regression partly with PyTorch and partly on my own. I want to use squared features for my regression: import torch # init x = torch.tensor([1,2,3,4,5]) y = torch.tensor([[1],[4],[9],[16],[25]]) w = torch.tensor([[0.5], [0.5], [0.5]], requires_grad=True) iterations = 30 alpha = 0.01 def forward(X): # feature transformation [1, x, x^2] psi = torch.tensor([[1.0, x[0], x[0]**2]]) for i in range(1, len(X)): psi = torch.cat((psi, torch.tensor([[1.0, x[i], x[i]**2]])), 0) return torch.matmul(psi, w) def loss(y, y_hat): return ((y-y_hat)**2).mean() for i in range(iterations): y_hat = forward(x) l = loss(y, y_hat) l.backward() with torch.no_grad(): w -= alpha * w.grad w.grad.zero_() if i%10 == 0: print(f'Iteration {i}: The weight is:\n{w.detach().numpy()}\nThe loss is:{l}\n') When I execute my code, the regression doesn't learn the correct features and the loss increases permanently. The output is the following: Iteration 0: The weight is: [[0.57 ] [0.81 ] [1.898]] The loss is:25.450000762939453 Iteration 10: The weight is: [[ 5529.5835] [22452.398 ] [97326.12 ]] The loss is:210414632960.0 Iteration 20: The weight is: [[5.0884394e+08] [2.0662339e+09] [8.9567642e+09]] The loss is:1.7820802835250162e+21 Does somebody know, why my model is not learning? UPDATE Is there a reason why it performs so poorly? I thought it's because of the low number of training data. But also with 10 data points, it is not performing well :
You should normalize your data. Also, since you're trying to fit x -> ax² + bx + c, c is essentially the bias. It would be wiser to remove it from the training data (I'm referring to psi here) and use a separate parameter for the bias. What could be done: normalize your input data and targets with mean and standard deviation. separate the parameters into w (a two-component weight tensor) and b (the bias). you don't need to construct psi on every inference since x is identical. you can build psi with torch.stack([torch.ones_like(x), x, x**2], 1), but here we won't need the ones, as we've essentially detached the bias from the weight tensor. Here's what it would look like: x = torch.tensor([1,2,3,4,5]).float() psi = torch.stack([x, x**2], 1).float() psi = (psi - psi.mean(0)) / psi.std(0) y = torch.tensor([[1],[4],[9],[16],[25]]).float() y = (y - y.mean(0)) / y.std(0) w = torch.tensor([[0.5], [0.5]], requires_grad=True) b = torch.tensor([0.5], requires_grad=True) iterations = 30 alpha = 0.02 def loss(y, y_hat): return ((y-y_hat)**2).mean() for i in range(iterations): y_hat = torch.matmul(psi, w) + b l = loss(y, y_hat) l.backward() with torch.no_grad(): w -= alpha * w.grad b -= alpha * b.grad w.grad.zero_() b.grad.zero_() if i%10 == 0: print(f'Iteration {i}: The weight is:\n{w.detach().numpy()}\nThe loss is:{l}\n') And the results: Iteration 0: The weight is: [[0.49954653] [0.5004535 ]] The loss is:0.25755801796913147 Iteration 10: The weight is: [[0.49503425] [0.5049657 ]] The loss is:0.07994867861270905 Iteration 20: The weight is: [[0.49056274] [0.50943726]] The loss is:0.028329044580459595
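One caveat with this approach: since the targets were normalized, the model's predictions live in normalized space as well. A small sketch of mapping them back (this assumes you store the target mean and std right after creating y, before overwriting it with the normalized version):

y_mean, y_std = y.mean(0), y.std(0)   # compute and save these before normalizing y
# ... normalize, then train as above ...
y_hat = torch.matmul(psi, w) + b      # prediction in normalized space
y_hat_original = y_hat * y_std + y_mean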
https://stackoverflow.com/questions/66143829/
Pytorch model runtime error when testing U-Net
I've defined a U-Net model using Pytorch but it won't accept my input. I've checked the model layers and they seem to be applying the operations as I would expect them to but I still get an error. I've just switched to Pytorch after mostly using Keras so I'm not really sure how to debug this issue, the error I get is: RuntimeError: Given groups=1, weight of size [32, 64, 3, 3], expected input[1, 128, 65, 65] to have 64 channels, but got 128 channels instead Here's the code I'm using: class UNET(nn.Module): def __init__(self, in_channels=2, out_channels=2): super().__init__() self.conv1 = self.contract_block(in_channels, 32, 3, 1) self.conv2 = self.contract_block(32, 64, 3, 1) self.conv3 = self.contract_block(64, 128, 3, 1) self.upconv3 = self.expand_block(128, 64, 3, 1) self.upconv2 = self.expand_block(64, 32, 3, 1) self.upconv1 = self.expand_block(32, out_channels, 3, 1) def __call__(self, x): # downsampling part conv1 = self.conv1(x) conv2 = self.conv2(conv1) conv3 = self.conv3(conv2) upconv3 = self.upconv3(conv3) upconv2 = self.upconv2(torch.cat([upconv3, conv2], 1)) upconv1 = self.upconv1(torch.cat([upconv2, conv1], 1)) return upconv1 def contract_block(self, in_channels, out_channels, kernel_size, padding): contract = nn.Sequential( torch.nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.Conv2d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.MaxPool2d(kernel_size=2, stride=2, padding=1) ) return contract def expand_block(self, in_channels, out_channels, kernel_size, padding): expand = nn.Sequential( torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.Conv2d(out_channels, out_channels, kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.ConvTranspose2d(out_channels, out_channels, kernel_size=2, stride=2, padding=1, output_padding=1) ) return expand model = UNET() x = torch.randn(1, 2, 256, 256) print(model(x))
Your problem is in the model layer definition. You defined self.upconv2 = self.expand_block(64, 32, 3, 1) but what you do is concatenating 2 tensors each with 64 channels so in total you get 128. You should fix the channels of the up-sampling part of the U-Net to match the number of channels after the concatenation. Doing the mentioned fix will give you: class UNET(nn.Module): def __init__(self, in_channels=2, out_channels=2): super().__init__() self.conv1 = self.contract_block(in_channels, 32, 3, 1) self.conv2 = self.contract_block(32, 64, 3, 1) self.conv3 = self.contract_block(64, 128, 3, 1) self.upconv3 = self.expand_block(128, 64, 3, 1) self.upconv2 = self.expand_block(64 + 64, 32, 3, 1) self.upconv1 = self.expand_block(32 + 32, out_channels, 3, 1) def __call__(self, x): # downsampling part conv1 = self.conv1(x) conv2 = self.conv2(conv1) conv3 = self.conv3(conv2) upconv3 = self.upconv3(conv3) upconv2 = self.upconv2(torch.cat([upconv3, conv2], 1)) upconv1 = self.upconv1(torch.cat([upconv2, conv1], 1)) return upconv1 def contract_block(self, in_channels, out_channels, kernel_size, padding): contract = nn.Sequential( torch.nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.Conv2d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.MaxPool2d(kernel_size=2, stride=2, padding=1) ) return contract def expand_block(self, in_channels, out_channels, kernel_size, padding): expand = nn.Sequential( torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.Conv2d(out_channels, out_channels, kernel_size, stride=1, padding=padding), torch.nn.BatchNorm2d(out_channels), torch.nn.Tanh(), torch.nn.ConvTranspose2d(out_channels, out_channels, kernel_size=2, stride=2, padding=1, output_padding=1) ) return expand model = UNET() x = torch.randn(1, 2, 256, 256) print(model(x)) Based on the comment, if you have other spatial dimensions that might not fit the parameters of the convolutions operations you can do one of 2 options: Start play with the parameter based on the formula at the bottom of the Conv2d so that you will match the input dimension. You could force pad the target to desired dimension using the following 2 functions: import torch import torch.nn as nn import torch.nn.functional as F def pad_tensor(source, target): """ Pad source tensor to match target tensor size :param source: tensor that need to get padding :param target: tensor of the desired shape :return: source tensor with shape equal to target """ diff_y = target.size()[2] - source.size()[2] diff_x = target.size()[3] - source.size()[3] source = F.pad(source, [diff_x // 2, diff_x - diff_x // 2, diff_y // 2, diff_y - diff_y // 2]) return source def concatenate_tensors(x1, x2): """ Concatenate both tensors :param x1: first tensor to be concatenated :param x2: second tensor to be concatenated :return: concatenation of both tensors """ x1 = pad_tensor(x1, x2) return torch.cat([x1, x2], dim=1) Now call concatenate_tensors instead of torch.cat this will fix the dimensions to match the size you need.
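For example (the shapes here are made up just to show the mechanics), using the helpers defined above:

x1 = torch.randn(1, 64, 32, 32)    # upsampled decoder feature map, one pixel short
x2 = torch.randn(1, 64, 33, 33)    # skip connection from the encoder
out = concatenate_tensors(x1, x2)  # x1 is zero-padded to 33x33, then concatenated
print(out.shape)                   # torch.Size([1, 128, 33, 33])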
https://stackoverflow.com/questions/66144359/
How to assign a name for a pytorch layer?
Following a previous question, I want to plot weights, biases, activations and gradients to achieve a similar result to this. Using for name, param in model.named_parameters(): summary_writer.add_histogram(f'{name}.grad', param.grad, step_index) as was suggested in the previous question gives sub-optimal results, since layer names come out similar to '_decoder._decoder.4.weight', which is hard to follow, especially since the architecture is changing due to research. 4 in this run won't be the same in the next, and is really meaningless. Thus, I wanted to give my own string names to each layer. I found this Pytorch forum discussion, but no single best practice was agreed upon. What is the recommended way to assign names to Pytorch layers? Namely, layers defined in various ways: Sequential: self._seq = nn.Sequential(nn.Linear(1, 2), nn.Linear(3, 4),) Dynamic: self._dynamic = nn.ModuleList() for _ in range(self._n_features): self._dynamic.append(nn.Conv1d(in_channels=5, out_channels=6, kernel_size=3, stride=1, padding=1,),) Direct: self._direct = nn.Linear(7, 8) Other ways I didn't think about I would like to be able to give a string name to each layer, defined in each of the above ways.
Sequential Pass an instance of collections.OrderedDict. Code below gives conv1.weight, conv1.bias, conv2.weight, conv2.bias (notice lack of torch.nn.ReLU(), see end of this answer). import collections import torch model = torch.nn.Sequential( collections.OrderedDict( [ ("conv1", torch.nn.Conv2d(1, 20, 5)), ("relu1", torch.nn.ReLU()), ("conv2", torch.nn.Conv2d(20, 64, 5)), ("relu2", torch.nn.ReLU()), ] ) ) for name, param in model.named_parameters(): print(name) Dynamic Use ModuleDict instead of ModuleList: class MyModule(torch.nn.Module): def __init__(self): super().__init__() self.whatever = torch.nn.ModuleDict( {f"my_name{i}": torch.nn.Conv2d(10, 10, 3) for i in range(5)} ) Will give us whatever.my_name{i}.weight (or bias) for each created module dynamically. Direct Just name it however you want and that's how it will be named self.my_name_or_whatever = nn.Linear(7, 8) You didn't think about If you want to plot weights, biases and their gradients you can go along this route You can't plot activations this way (or output from activations). Use PyTorch hooks instead (if you want per-layer gradients as they pass through the network, use this also) For the last task you can use the third-party library torchfunc (disclaimer: I'm the author) or go directly and write your own hooks; a sketch follows below.
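For the activations route, a minimal sketch with forward hooks, keyed by the readable module names (summary_writer, step_index and the input x are assumed to exist, as in the question):

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):   # pick whichever layer types you care about
        module.register_forward_hook(save_activation(name))

out = model(x)                                # hooks fire during the forward pass
for name, act in activations.items():
    summary_writer.add_histogram(f'{name}.act', act, step_index)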
https://stackoverflow.com/questions/66152766/
PyTorch conv2d doesn't propagate torch.channels_last memory format
When I try to use the torch.nn.conv2d operator on tensors which have the channels_last memory format, then the output does not keep this memory format. I don't understand why, since conv2d is listed in the PyTorch wiki listing operators with channels last support. Am I missing something? Code to reproduce (tested with pytorch 1.6.0 and 1.7.0, on CPU, on Ubuntu 20.04): import torch import torch.nn.functional as F N, C, H, W = 10, 4, 64, 128 out_channels = 2 kernel_size = (3, 3) memory_format = torch.channels_last tsr = torch.randn(N, C, H, W).to(memory_format=memory_format) kernel = torch.randn(out_channels, C, *kernel_size).to(memory_format=memory_format) conv_out = F.conv2d(tsr, kernel) print(conv_out.is_contiguous(memory_format=memory_format)) # False
The conv2d operator is listed under the list of GPU operators supporting channels_last. This is not true for the CPU version of conv2d. If you switch to a cuda device, it will return True: tsr = torch.randn(N, C, H, W).to('cuda', memory_format=memory_format) kernel = torch.randn(out_channels, C, *kernel_size).to('cuda', memory_format=memory_format) conv_out = F.conv2d(tsr, kernel) >>> conv_out.is_contiguous(memory_format=memory_format) True
https://stackoverflow.com/questions/66160528/
Best way to cut a pytorch tensor into overlapping chunks?
If for instance I have: eg6 = torch.tensor([ [ 1., 7., 13., 19.], [ 2., 8., 14., 20.], [ 3., 9., 15., 21.], [ 4., 10., 16., 22.], [ 5., 11., 17., 23.], [ 6., 12., 18., 24.]]) batch1 = eg6 batch2 = -eg6 x = torch.cat((batch1,batch2)).view(2,6,4) And then I want to slice it up into overlapping chunks, like a sliding window function, and have the chunks be batch-processable. For example, and just looking at the first dimension, I want 1,2,3, 3,4,5, 5,6 (or 5,6,0). It seems unfold() kind of does what I want. It transposes the last two dimensions for some reason, but that is easy enough to repair. Changing it from shape [2,3,3,4] to [6,3,4] requires a memory copy, but I believe that is unavoidable? SZ=3 x2 = x.unfold(1,SZ,2).transpose(2,3).reshape(-1,SZ,4) This works perfectly when x is of shape [2,7,4]. But with only 6 rows, it throws away the final row. Is there a version of unfold() that can be told to use all data, ideally taking a pad character? Or do I need to pad x before calling unfold()? What is the best way to do that? I'm wondering if "pad" is the wrong word, as I'm only finding functions that want to put padding characters at both ends, with convolutions in mind. Aside: Looking at the source of unfold, it seems the strange transpose is there deliberately and explicitly?! For that reason, and the undesired chop behaviour, it made me think the correct answer to my question might be write a new low-level function. But that is too much effort for me, at least for today... (I think a second function for the backwards pass also needs to be written.)
The operation performed here is similar to how a 1D convolution with kernel_size=SZ and stride=2 behaves. As you noticed, if you don't provide sufficient padding (you're correct on the wording) the last elements won't be used. A general approach (for any SZ and any input length x.size(1), keeping the stride of 2 from your example) is to figure out if padding is necessary, and if so what amount is needed. The size of the output is given by out = floor((x.size(1) - SZ)/2 + 1). The number of unused trailing elements is x.size(1) - (out-1)*2 - SZ. If the number of unused elements is non-zero, you need to add a padding of out*2 + SZ - x.size(1) This example won't need padding: >>> x = torch.stack((torch.tensor([ [ 1., 7., 13., 19.], [ 2., 8., 14., 20.], [ 3., 9., 15., 21.], [ 4., 10., 16., 22.], [ 5., 11., 17., 23.]]),)*2) >>> x.shape torch.Size([2, 5, 4]) >>> out = floor((x.size(1) - SZ)/2 + 1) 2 >>> unused = x.size(1) - (out-1)*2 - SZ 0 While this one will: >>> x = torch.stack((torch.tensor([ [ 1., 7., 13., 19.], [ 2., 8., 14., 20.], [ 3., 9., 15., 21.], [ 4., 10., 16., 22.], [ 5., 11., 17., 23.], [ 6., 12., 18., 24.]]),)*2) >>> x.shape torch.Size([2, 6, 4]) >>> out = floor((x.size(1) - SZ)/2 + 1) 2 >>> unused = x.size(1) - (out-1)*2 - SZ 1 >>> p = out*2 + SZ - x.size(1) 1 Now, to actually add padding you could just use torch.cat, although I assume the built-in nn.functional.pad would work too: >>> torch.cat((x, torch.zeros(x.size(0), p, x.size(2))), dim=1) tensor([[[ 1., 7., 13., 19.], [ 2., 8., 14., 20.], [ 3., 9., 15., 21.], [ 4., 10., 16., 22.], [ 5., 11., 17., 23.], [ 6., 12., 18., 24.], [ 0., 0., 0., 0.]], [[ 1., 7., 13., 19.], [ 2., 8., 14., 20.], [ 3., 9., 15., 21.], [ 4., 10., 16., 22.], [ 5., 11., 17., 23.], [ 6., 12., 18., 24.], [ 0., 0., 0., 0.]]])
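Putting the pieces together, a sketch of a helper that zero-pads the tail only when needed and then unfolds (the function name is my own; the stride is kept at 2 as in the question):

import torch

def overlapping_chunks(x, sz, stride=2):
    # x: (B, L, F) -> (B * n_chunks, sz, F), zero-padding the tail if necessary
    out = (x.size(1) - sz) // stride + 1
    unused = x.size(1) - ((out - 1) * stride + sz)
    if unused > 0:
        p = out * stride + sz - x.size(1)
        pad = torch.zeros(x.size(0), p, x.size(2), dtype=x.dtype)
        x = torch.cat((x, pad), dim=1)
    return x.unfold(1, sz, stride).transpose(2, 3).reshape(-1, sz, x.size(2))

# with the x of shape [2, 6, 4] above: overlapping_chunks(x, 3) -> shape [6, 3, 4]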
https://stackoverflow.com/questions/66171288/
Number of learnable parameters of MultiheadAttention
While testing (using PyTorch's MultiheadAttention), I noticed that increasing or decreasing the number of heads of the multi-head attention does not change the total number of learnable parameters of my model. Is this behavior correct? And if so, why? Shouldn't the number of heads affect the number of parameters the model can learn?
The standard implementation of multi-headed attention divides the model's dimensionality by the number of attention heads. A model of dimensionality d with a single attention head would project embeddings to a single triplet of d-dimensional query, key and value tensors (each projection counting d² parameters, excluding biases, for a total of 3d²). A model of the same dimensionality with k attention heads would project embeddings to k triplets of d/k-dimensional query, key and value tensors (each projection counting d×(d/k) = d²/k parameters, excluding biases, for a total of 3·k·d²/k = 3d²). References: the derivation in the original paper ("Attention Is All You Need"), and the PyTorch implementation cited in the question.
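You can verify this directly - the parameter count of nn.MultiheadAttention stays constant as the head count changes (the embedding dimension of 512 below is arbitrary):

import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

for heads in (1, 2, 4, 8):
    mha = nn.MultiheadAttention(embed_dim=512, num_heads=heads)
    print(heads, count_params(mha))   # the total is the same for every head count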
https://stackoverflow.com/questions/66171956/
How to save a Pytorch Model directly in s3 Bucket?
The title says it all - I want to save a pytorch model in an s3 bucket. What I tried was the following: import boto3 s3 = boto3.client('s3') saved_model = model.to_json() output_model_file = output_folder + "pytorch_model.json" s3.put_object(Bucket="power-plant-embeddings", Key=output_model_file, Body=saved_model) Unfortunately this doesn't work, as .to_json() only works for tensorflow models. Does anyone know how to do it in pytorch?
Try serializing the model to a buffer and writing it to S3: import io buffer = io.BytesIO() torch.save(model, buffer) s3.put_object(Bucket="power-plant-embeddings", Key=output_model_file, Body=buffer.getvalue())
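The reverse direction - loading the model back from S3 into memory - would look roughly like this (reusing the bucket and key names from the question):

import io
import boto3
import torch

s3 = boto3.client('s3')
obj = s3.get_object(Bucket="power-plant-embeddings", Key=output_model_file)
buffer = io.BytesIO(obj['Body'].read())
model = torch.load(buffer)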
https://stackoverflow.com/questions/66175881/
Focal loss implementation
In the paper introducing focal loss, they state that the loss function is formulated as FL(pt) = -αt(1 - pt)^γ · log(pt), where pt = p if y = 1 and pt = 1 - p otherwise. I found an implementation of it on a Github page from another author who used it in their paper. I tried the function out on a segmentation problem dataset I have and it seems to work quite well. Below is the implementation: def binary_focal_loss(pred, truth, gamma=2., alpha=.25): eps = 1e-8 pred = nn.Softmax(1)(pred) truth = F.one_hot(truth, num_classes = pred.shape[1]).permute(0,3,1,2).contiguous() pt_1 = torch.where(truth == 1, pred, torch.ones_like(pred)) pt_0 = torch.where(truth == 0, pred, torch.zeros_like(pred)) pt_1 = torch.clamp(pt_1, eps, 1. - eps) pt_0 = torch.clamp(pt_0, eps, 1. - eps) out1 = -torch.mean(alpha * torch.pow(1. - pt_1, gamma) * torch.log(pt_1)) out0 = -torch.mean((1 - alpha) * torch.pow(pt_0, gamma) * torch.log(1. - pt_0)) return out1 + out0 The part that I don't understand is the calculation of pt_0 and pt_1. I created a small example for myself to try and figure it out but it still confuses me a bit. # one hot encoded prediction tensor pred = torch.tensor([ [ [.2, .7, .8], # probability [.3, .5, .7], # of [.2, .6, .5] # background class ], [ [.8, .3, .2], # probability [.7, .5, .3], # of [.8, .4, .5] # class 1 ] ]) # one-hot encoded ground truth labels truth = torch.tensor([ [1, 0, 0], [1, 1, 0], [1, 0, 0] ]) truth = F.one_hot(truth, num_classes = 2).permute(2,0,1).contiguous() print(truth) # gives me: # tensor([ # [ # [0, 1, 1], # [0, 0, 1], # [0, 1, 1] # ], # [ # [1, 0, 0], # [1, 1, 0], # [1, 0, 0] # ] # ]) pt_0 = torch.where(truth == 0, pred, torch.zeros_like(pred)) pt_1 = torch.where(truth == 1, pred, torch.ones_like(pred)) print(pt_0) # gives me: # tensor([[ # [0.2000, 0.0000, 0.0000], # [0.3000, 0.5000, 0.0000], # [0.2000, 0.0000, 0.0000] # ], # [ # [0.0000, 0.3000, 0.2000], # [0.0000, 0.0000, 0.3000], # [0.0000, 0.4000, 0.5000] # ] # ]) print(pt_1) # gives me: # tensor([[ # [1.0000, 0.7000, 0.8000], # [1.0000, 1.0000, 0.7000], # [1.0000, 0.6000, 0.5000] # ], # [ # [0.8000, 1.0000, 1.0000], # [0.7000, 0.5000, 1.0000], # [0.8000, 1.0000, 1.0000] # ] # ]) What I don't understand is why in pt_0 we are placing zeros where the torch.where statement is false, and in pt_1 we place ones. From how I understood the paper, I would have thought that instead of placing zeros or ones, you would place 1-p. Can anyone help explain this to me?
So the part you are trying to understand is a procedure people usually use when they want to zero out additional calculations that are not needed. Take another look at the formula for pt: pt = p if y = 1, and pt = 1 - p otherwise. The following code does exactly this by separating the two conditions: # if y=1 pt_1 = torch.where(truth == 1, pred, torch.ones_like(pred)) # otherwise pt_0 = torch.where(truth == 0, pred, torch.zeros_like(pred)) Setting the value to one in pt_1 and to zero in pt_0 makes the corresponding output terms zero, so they contribute nothing to the loss value, i.e: # Because pow(0., gamma) == 0. and log(1.) == 0. # out1 == 0. if pt_1 == 1. out1 = -torch.mean(alpha * torch.pow(1. - pt_1, gamma) * torch.log(pt_1)) # out0 == 0. if pt_0 == 0. out0 = -torch.mean((1 - alpha) * torch.pow(pt_0, gamma) * torch.log(1. - pt_0)) And the reason for pt_0 to use the value of p instead of 1-p is the same reason as in your last question, i.e: 1 - (1 - p) == 1 - 1 + p == p So it can later calculate FL(pt) by: # -a * pow(1 - (1 - p), gamma )* log(1 - p) == -a * pow(p, gamma )* log(1 - p) out0 = -torch.mean((1 - alpha) * torch.pow(pt_0, gamma) * torch.log(1. - pt_0))
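If it helps, the whole thing can also be written closer to the paper's single-formula form with an explicit pt (a sketch for the same binary, one-hot case, reusing the torch/nn/F imports from the question; note it averages over all terms at once, so the value differs slightly from summing the two separate means above):

def binary_focal_loss_compact(pred, truth, gamma=2., alpha=.25, eps=1e-8):
    pred = nn.Softmax(1)(pred)
    truth = F.one_hot(truth, num_classes=pred.shape[1]).permute(0, 3, 1, 2).float()
    pt = torch.where(truth == 1, pred, 1 - pred)   # pt = p if y == 1 else 1 - p
    alpha_t = torch.where(truth == 1, alpha * torch.ones_like(pred),
                          (1 - alpha) * torch.ones_like(pred))
    pt = torch.clamp(pt, eps, 1. - eps)
    return -torch.mean(alpha_t * torch.pow(1. - pt, gamma) * torch.log(pt))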
https://stackoverflow.com/questions/66178979/
PyTorch Geometric: What is the function utility to split train/validation/test for node classification
I am new to PyTorch Geometric. I know there is a function that gives you the train, test, and validation node mask of a custom ratio in the node classification task. Does anyone know it?
I found the function AddTrainValTestMask; a usage sketch follows below.
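A usage sketch (the split ratios here are placeholders; in newer PyTorch Geometric releases this transform was renamed RandomNodeSplit):

from torch_geometric.transforms import AddTrainValTestMask

transform = AddTrainValTestMask(split="train_rest", num_val=0.1, num_test=0.2)
data = transform(data)   # adds data.train_mask, data.val_mask, data.test_mask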
https://stackoverflow.com/questions/66183508/
model loss does not changing in pytorch
I'm dealing with Titanic data with pytorch. These are my model & training code: import torch.nn.functional as F class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.fc1_1=torch.nn.Linear(13, 512) self.fc1_2=torch.nn.Linear(512, 64) self.fc1_3=torch.nn.Linear(64, 10) self.fc2_1=torch.nn.Linear(13, 64) self.fc2_2=torch.nn.Linear(64, 512) self.fc2_3=torch.nn.Linear(512, 10) self.fc3_1=torch.nn.Linear(13, 128) self.fc3_2=torch.nn.Linear(128, 128) self.fc3_3=torch.nn.Linear(128, 10) self.fc_full_1=torch.nn.Linear(30, 64) self.fc_full_2=torch.nn.Linear(64, 128) self.fc_full_3=torch.nn.Linear(128, 2) def forward(self, x): x1=self.fc1_1(x) x1=F.relu(x1) x1=self.fc1_2(x1) x1=F.relu(x1) x1=self.fc1_3(x1) x1=F.relu(x1) x2=self.fc2_1(x) x2=F.relu(x2) x2=self.fc2_2(x2) x2=F.relu(x2) x2=self.fc2_3(x2) x2=F.relu(x2) x3=self.fc3_1(x) x3=F.relu(x3) x3=self.fc3_2(x3) x3=F.relu(x3) x3=self.fc3_3(x3) x3=F.relu(x3) x=torch.cat((x1, x2, x3), dim=1) x=self.fc_full_1(x) x=F.relu(x) x=self.fc_full_2(x) x=F.relu(x) x=self.fc_full_3(x) return x model=Net() As seen above, they are just fully connected layers. Model loss function and optimization: cross entropy loss and Adam. criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model1.parameters(), lr=0.05) This is the training code: for epoch in range(100): model.train() x_var = Variable(torch.FloatTensor(x_train)) y_var = Variable(torch.LongTensor(y_train)) optimizer.zero_grad() train_pred = model(x_var) loss =criterion(train_pred, y_var) loss.backward() optimizer.step() train_acc=calc_accuracy(train_pred, y_var) loss=loss.data.numpy() And finally, the accuracy and loss printed: Epoch 0 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 10 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 20 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 30 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 40 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 50 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 60 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 70 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 80 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 Epoch 90 0.6900209 0.531578947368421 valid: 0.692668 0.4621212121212121 As seen above, the model's training loss and validation loss do not change at all. What seems to be the problem?
Your optimizer does not use your model's parameters, but some other model1's. optimizer = torch.optim.Adam(model1.parameters(), lr=0.05) BTW, you do not have to use model.train() for each epoch.
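As for the first point, the fix is a one-liner - point the optimizer at the parameters of the model you actually train:

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)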
https://stackoverflow.com/questions/66186662/
Libtorch works with g++, but fails with Intel compiler
I want to use a neural network developed in Python (PyTorch) in a Fortran program. My OS is Ubuntu 18.04. What I am doing: save it as torchscript: TurbNN.pt call it from c++ program: call_ts.cpp, call_ts.h call c++ program from Fortran program (using bind©): main.f90 I successfully compiled the codes using CMake (3.19.4) and g++ (7.5.0). However, I cannot compile them using Intel compilers (HPCKit 2021.1.0.2684): # downloaded torchlib export Torch_DIR=/home/aiskhak/nn/fortran_calls_torchscript3/build/libtorch/share/cmake/Torch/ # set environment for Intel compilers . /opt/intel/oneapi/setvars.sh # cmake cmake \ -DCMAKE_CXX_COMPILER=icpc \ -DCMAKE_Fortran_COMPILER=ifort .. # make make After “cmake” everything looks fine (just like for g++): -- The CXX compiler identification is Intel 20.2.1.20201112 -- The Fortran compiler identification is Intel 20.2.1.20201112 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /opt/intel/oneapi/compiler/2021.1.1/linux/bin/intel64/icpc - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting Fortran compiler ABI info -- Detecting Fortran compiler ABI info - done -- Check for working Fortran compiler: /opt/intel/oneapi/compiler/2021.1.1/linux/bin/intel64/ifort - skipped -- Checking whether /opt/intel/oneapi/compiler/2021.1.1/linux/bin/intel64/ifort supports Fortran 90 -- Checking whether /opt/intel/oneapi/compiler/2021.1.1/linux/bin/intel64/ifort supports Fortran 90 - yes -- Looking for C++ include pthread.h -- Looking for C++ include pthread.h - found -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found Torch: /home/aiskhak/nn/fortran_calls_torchscript3/build/libtorch/lib/libtorch.so -- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY -- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY - Success -- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY - Success -- Performing Test COMPILER_HAS_DEPRECATED_ATTR -- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Success -- Configuring done -- Generating done -- Build files have been written to: /home/aiskhak/nn/fortran_calls_torchscript3/build However, after “make” I am getting: Scanning dependencies of target call_ts_cpp [ 25%] Building CXX object CMakeFiles/call_ts_cpp.dir/src/call_ts.cpp.o [ 50%] Linking CXX shared library lib/libcall_ts_cpp.so [ 50%] Built target call_ts_cpp Scanning dependencies of target fortran_calls_ts.x [ 75%] Building Fortran object CMakeFiles/fortran_calls_ts.x.dir/src/main.f90.o [100%] Linking Fortran executable bin/fortran_calls_ts.x lib/libcall_ts_cpp.so: undefined reference to `c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)' lib/libcall_ts_cpp.so: undefined reference to `torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, 
std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&)' lib/libcall_ts_cpp.so: undefined reference to `c10::DeviceTypeName[abi:cxx11](c10::DeviceType, bool)' lib/libcall_ts_cpp.so: undefined reference to `torch::jit::Object::find_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const' lib/libcall_ts_cpp.so: undefined reference to `torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&)' CMakeFiles/fortran_calls_ts.x.dir/build.make:106: recipe for target 'bin/fortran_calls_ts.x' failed make[2]: *** [bin/fortran_calls_ts.x] Error 1 CMakeFiles/Makefile2:123: recipe for target 'CMakeFiles/fortran_calls_ts.x.dir/all' failed make[1]: *** [CMakeFiles/fortran_calls_ts.x.dir/all] Error 2 Makefile:148: recipe for target 'all' failed make: *** [all] Error 2 What I am supposed to do with? I fixed similar problem when I had old g++, but my Intel compiler is new. Below are my codes: call_ts.cpp #include "call_ts.h" #include <torch/script.h> #include <iostream> #include <memory> // c++ function invariant_nn // // takes inputs, reads a neural network TurbNN.pt, do a forward pass // and returns outputs // // inputs: 5 tensor invariants I[0:4] (float) // outputs: 10 tensor basis coefficients G[0:9] (float) void invariant_nn(float I[], float G[]) { // deserialize scriptmodule from a .pt file torch::jit::script::Module module; const char *arg; arg = "../src/TurbNN.pt"; module = torch::jit::load(arg); // create inputs std::vector<torch::jit::IValue> inputs; float data[] = {I[0], I[1], I[2], I[3], I[4]}; inputs.push_back(torch::from_blob(data, {1, 5})); //std::cout << "inputs\n" << inputs; //std::cout << "\n"; // do forward pass and turn its output into a tensor at::Tensor outputs = module.forward(inputs).toTensor(); //std::cout << "outputs\n" << outputs; //std::cout << "\n"; // return values for (int k = 0; k < 10; k++) { G[k] = outputs[0][k].item().to<float>(); //std::cout << "G\n" << G[k]; } return; } call_ts.h #pragma once /* export macros for library generated by CMake */ #ifndef CALL_TS_API #include "call_ts_export.h" #define CALL_TS_API CALL_TS_EXPORT #endif #ifdef __cplusplus extern "C" { #endif CALL_TS_API void invariant_nn(float I[], float G[]); #ifdef __cplusplus } #endif main. f90 ! fortran program main ! ! calls c++ function invariant_nn, which calls a torchscript with ! a neural network ! program main ! define interface to interact with c++ use, intrinsic :: iso_c_binding, only: c_float implicit none interface invariant_nn subroutine invariant_nn(I, G) bind (c) import :: c_float real(c_float) :: I(5) real(c_float) :: G(10) end subroutine end interface ! fortran program real(4) I(5), G(10) ! invariants I(1) = 1.01 I(2) = 1.01 I(3) = 1.01 I(4) = 1.01 I(5) = 1.01 print *, "Tensor invariants ", I ! 
tensor basis coefficients call invariant_nn(I, G) print *, "Tensor basis coefficients ", G end program main CMakeLists.txt # stop configuration if cmake version is below 3.0 cmake_minimum_required(VERSION 3.0 FATAL_ERROR) # project name and enabled languages project(fortran_calls_ts CXX Fortran) # find libtorch find_package(Torch REQUIRED) # if CMAKE_BUILD_TYPE undefined, set it to Release if(NOT CMAKE_BUILD_TYPE) set(CMAKE_BUILD_TYPE "Release") endif() # compiler flags for release mode set(CMAKE_CXX_FLAGS_RELEASE "-O3") set(CMAKE_Fortran_FLAGS_RELEASE "-O3") # set default build paths set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/bin) set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib) # generated export header will be placed here include_directories(${PROJECT_BINARY_DIR}) # c library add_library(call_ts_cpp SHARED src/call_ts.cpp) # fortran executable add_executable(fortran_calls_ts.x src/main.f90) # linked against c library target_link_libraries(fortran_calls_ts.x call_ts_cpp) target_link_libraries(call_ts_cpp "${TORCH_LIBRARIES}") # we let cmake generate the export header include(GenerateExportHeader) generate_export_header(call_ts_cpp BASE_NAME call_ts) install(TARGETS call_ts_cpp LIBRARY DESTINATION lib ARCHIVE DESTINATION lib) install(FILES src/call_ts.h ${PROJECT_BINARY_DIR}/call_ts_export.h DESTINATION include) set_property(TARGET fortran_calls_ts.x PROPERTY CXX_STANDARD 14)
Do you see cxx11 in the linker errors? It looks like your libcall_ts_cpp is compiled in a way that expects the new C++11 ABI for std::string but perhaps the library where those functions are implemented was compiled with the old ABI. Here's a PyTorch forum post about the same problem: https://discuss.pytorch.org/t/issues-linking-with-libtorch-c-11-abi/29510/11 The solution is to download a new copy of the PyTorch libraries built with the new C++11 ABI.
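If you want to confirm which ABI your current PyTorch build uses before re-downloading, you can check it from Python (this internal flag is present in recent builds):

import torch
print(torch._C._GLIBCXX_USE_CXX11_ABI)   # True -> built with the new C++11 ABI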
https://stackoverflow.com/questions/66192285/
What should be the input shape for 3D CNN on a sequence of images?
https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#conv3d Describes that the input to do convolution on 3D CNN is (N,Cin,D,H,W). Imagine if I have a sequence of images which I want to pass to 3D CNN. Am I right that: N -> number of sequences (mini batch) Cin -> number of channels (3 for rgb) D -> Number of images in a sequence H -> Height of one image in the sequence W -> Width of one image in the sequence The reason why I am asking is that when I stack image tensors: a = torch.stack([img1, img2, img3, img4, img5]) I get shape of a torch.Size([5, 3, 396, 247]), so is it compulsory to reshape my tensor to torch.Size([3, 5, 396, 247]) so that number of channels would go first or it does not matter inside the Dataloader? Note that Dataloader would add one more dimension automatically which would correspond to N.
Yes it matters, you need to ensure that dimensions are ordered correctly (assuming you use DataLoader's default collate function). One way to do this is to invoke torch.stack using dim=1 instead of the default of dim=0. For example a = torch.stack([img1, img2, img3, img4, img5], dim=1) results in a being the desired shape of [3, 5, 396, 247].
https://stackoverflow.com/questions/66199022/
RuntimeError: Expected hidden[0] size (2, 1, 100), got (1, 1, 100)
I put together a LSTM model and it works. But only as long as I set num_layers = 1. If I set it for example to 2 it gives my a long error message that tells me: RuntimeError: Expected hidden[0] size (2, 1, 100), got (1, 1, 100) I am pretty new at Python and deep learning in general, so I could need some advice how to fix my problem. Code: import torch import torch.nn as nn import numpy as np import dataset import time amount_hidden_layers = 2 amount_neurons_hidden_layers = 100 input_layer_size = 12 output_layer_size = 48 small = True hours = 20 learning_rate = 1/1000 #0,001 early_stoping = False train_obs_dataset = dataset.return_observation_dataset(hours=hours, split="train", small=small) val_obs_dataset = dataset.return_observation_dataset(hours=hours, split="val", small=small) test_obs_dataset = dataset.return_observation_dataset(hours=hours, split="test", small=small) class LSTM(nn.Module): def __init__(self, input_layer_size, hidden_layer_size, output_layer_size, num_layers): super().__init__() self.hidden_layer_size = hidden_layer_size self.input_layer_size = input_layer_size self.num_layers = num_layers self.output_layer_size = output_layer_size # lstm_layers: self.lstm = nn.LSTM(input_size=self.input_layer_size, hidden_size=self.hidden_layer_size, num_layers=num_layers) self.hidden_cell = (torch.zeros(self.num_layers, 1, self.hidden_layer_size), torch.zeros(self.num_layers, 1, self.hidden_layer_size)) # output_layer: self.linear = nn.Linear(self.hidden_layer_size, self.output_layer_size) def forward(self, input_seq): lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell) predictions = self.linear(lstm_out.view(len(input_seq), -1)) return predictions[-1] model = LSTM(input_layer_size=input_layer_size, hidden_layer_size=amount_neurons_hidden_layers, output_layer_size=output_layer_size, num_layers=amount_hidden_layers) loss_function = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) def training(dataset=train_obs_dataset, epochs=5): start_time = time.time() val_test = float(100000000) # for early stoping, saves the last evaluation value val_worse_counter = 0 # for early stoping, couts how often tests with validation data get worse for i in range(epochs): # every epoch for datapoint in range(dataset[0].__len__()): # every datapoint optimizer.zero_grad() model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size), torch.zeros(1, 1, model.hidden_layer_size)) y_pred = model(torch.FloatTensor(dataset[0][datapoint])) # model needs a tensor of shape [sequence][features] # squeeze: does not change data, prevents error, single_loss = loss_function(y_pred, torch.FloatTensor(dataset[1][datapoint]).squeeze(-1)) single_loss.backward() optimizer.step() if datapoint%100 == 1: print(f' Time: {time.time() - start_time: 10.1f} sec, Epoch: {i:4} Datapoint: {datapoint:3} Loss: {single_loss.item():10.8f} ') #print(y_pred[0]) "early stoping: if tests with the validation data does not get better for 100 steps, the model stops" if (early_stoping == True): if datapoint%20 == 1: val = test(val_obs_dataset) if (val[0] <= val_test): val_test = val[0] val_worse_counter = 0 if(val[0] > val_test): val_worse_counter = val_worse_counter + 1 if(val[0] >= 100): break print("Val: ", val[0], val[1][0], "val_test: ", val_test, " val_worse_counter: ", val_worse_counter) """ returns a tupel with 1. the average loss and 2. 
an array with the average losses separated for all 48 hours """ def test(dataset=val_obs_dataset): with torch.no_grad(): losses = [] # separated per hour loss = 0 for i in range(48): losses.append(0) for datapoint in range(dataset.__len__()): val_pred = model(torch.FloatTensor(dataset[0][datapoint])) "separates the loss values per hour" for i in range(48): losses[i] = losses[i] + float(loss_function(val_pred[i], torch.FloatTensor(dataset[1][datapoint]).squeeze(-1)[i])) loss = loss + float(loss_function(val_pred, torch.FloatTensor(dataset[1][datapoint]).squeeze(-1))) for i in range(48): # calculates the average losses[i] = losses[i]/dataset.__len__() loss = loss/dataset.__len__() return (loss, losses) print(model) training(epochs=200) Thanks in advance for every helpful comment
That is because of this line in your training loop: model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size), torch.zeros(1, 1, model.hidden_layer_size)) Even though you correctly defined hidden_cell in your model, here you hard-coded num_layers to be 1 and replaced the one you did correctly. To fix it, you can change it to this: model.hidden_cell = (torch.zeros(model.num_layers, 1, model.hidden_layer_size), torch.zeros(model.num_layers, 1, model.hidden_layer_size)) or even remove it entirely, as you are basically repeating what you've already done, and hidden_layer_size won't change mid-training. (If you were batching your data with varying batch sizes, re-creating the hidden state here would make more sense, but it is clear that your batch_size = 1.)
https://stackoverflow.com/questions/66201362/
How can I convert the dimension in the model form 2D to 1D?
I am a beginner with pytorch. I would like to classify a 2d binary array (17 * 20) into 8 classes, and I am using cross entropy as the loss function. I have a batch size of 512. The input is 512 batches of size (17 * 20) and the final output is 512 batches of size 8. I applied the following model, and I would like the final output to be a list of length 8 per sample, i.e. shape [512,8], but I got shape [512,680,8] (I printed the dimensions I get from the model after the code). How can I get [512,8] from that network as the final output? def __init__(self, M=1): super(PPS, self).__init__() #input layer self.layer1 = nn.Sequential( nn.Conv2d(17, 680, kernel_size=1, stride=1, padding=0), nn.ReLU()) self.drop1 = nn.Sequential(nn.Dropout()) self.batch1 = nn.BatchNorm2d(680) self.lstm1=nn.Sequential(nn.LSTM( input_size=20, hidden_size=16, num_layers=1, bidirectional=True, batch_first= True)) self.gru = nn.Sequential(nn.GRU( input_size=16*2, hidden_size=16, num_layers=2, bidirectional=True, batch_first=True)) self.fc1 = nn.Linear(16*2,8) def forward(self, x): out = self.layer1(x) out = self.drop1(out) out = self.batch1(out) out = out.squeeze() out,_ = self.lstm1(out) out,_ = self.gru(out) out = self.fc1(out) return out cov2d torch.Size([512, 680, 20, 1]) drop torch.Size([512, 680, 20, 1]) batch torch.Size([512, 680, 20]) lstm1 torch.Size([512, 680, 32]) lstm2 torch.Size([512, 680, 32]) linear1 torch.Size([512, 680, 8])
If you want the output to be (512, 8) then you would have to change your last linear layer to something like this: def __init__(self, M=1): ... self.gru = nn.Sequential(nn.GRU( input_size=16*2, hidden_size=16, num_layers=2, bidirectional=True, batch_first=True)) self.fc1 = nn.Linear(680 * 16*2, 8) def forward (self, x): ... out, _ = self.gru(out) out = self.fc1(out.reshape(-1, 680 * 16*2)) return out The goal is to reduce the number of features from 680 * 16 * 2 to 8. You can (and probably should) add more linear layers at the end that will do this reduction gradually for you.
https://stackoverflow.com/questions/66201785/
Pytorch model running out of memory on both CPU and GPU, can’t figure out what I’m doing wrong
Trying to implement a simple multi-label image classifier using Pytorch Lightning. Here's the model definition: import torch from torch import nn # creates network class class Net(pl.LightningModule): def __init__(self): super().__init__() # defines conv layers self.conv_layer_b1 = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), nn.Flatten(), ) # passes dummy x matrix to find the input size of the fc layer x = torch.randn(1, 3, 800, 600) self._to_linear = None self.forward(x) # defines fc layer self.fc_layer = nn.Sequential( nn.Linear(in_features=self._to_linear, out_features=256), nn.ReLU(), nn.Linear(256, 5), ) # defines accuracy metric self.accuracy = pl.metrics.Accuracy() self.confusion_matrix = pl.metrics.ConfusionMatrix(num_classes=5) def forward(self, x): x = self.conv_layer_b1(x) if self._to_linear is None: # does not run fc layer if input size is not determined yet self._to_linear = x.shape[1] else: x = self.fc_layer(x) return x def cross_entropy_loss(self, logits, y): criterion = nn.CrossEntropyLoss() return criterion(logits, y) def training_step(self, train_batch, batch_idx): x, y = train_batch logits = self.forward(x) train_loss = self.cross_entropy_loss(logits, y) train_acc = self.accuracy(logits, y) train_cm = self.confusion_matrix(logits, y) self.log('train_loss', train_loss) self.log('train_acc', train_acc) self.log('train_cm', train_cm) return train_loss def validation_step(self, val_batch, batch_idx): x, y = val_batch logits = self.forward(x) val_loss = self.cross_entropy_loss(logits, y) val_acc = self.accuracy(logits, y) return {'val_loss': val_loss, 'val_acc': val_acc} def validation_epoch_end(self, outputs): avg_val_loss = torch.stack([x['val_loss'] for x in outputs]).mean() avg_val_acc = torch.stack([x['val_acc'] for x in outputs]).mean() self.log("val_loss", avg_val_loss) self.log("val_acc", avg_val_acc) def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=0.0008) return optimizer The issue is probably not the machine since I'm using a cloud instance with 60 GBs of RAM and 12 GBs of VRAM. Whenever I run this model even for a single epoch, I get an out of memory error. On the CPU it looks like this: RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 1966080000 bytes. Error code 12 (Cannot allocate memory) and on the GPU it looks like this: RuntimeError: CUDA out of memory. Tried to allocate 7.32 GiB (GPU 0; 11.17 GiB total capacity; 4.00 KiB already allocated; 2.56 GiB free; 2.00 MiB reserved in total by PyTorch) Clearing the cache and reducing the batch size did not work. I'm a novice so clearly something here is exploding but I can't tell what. Any help would be appreciated. Thank you!
Indeed, it's not a machine issue; the model itself is simply unreasonably big. Typically, if you take a look at common CNN models, the fc layers occur near the end, after the inputs already pass through quite a few convolutional blocks (and have their spatial resolutions reduced). Assuming inputs are of shape (batch, 3, 800, 600), while passing the conv_layer_b1 layer, the feature map shape would be (batch, 32, 400, 300) after the MaxPool operation. After flattening, the inputs become (batch, 32 * 400 * 300), ie, (batch, 3840000). The immediately following fc_layer thus contains nn.Linear(3840000, 256), which is simply absurd. This single linear layer contains ~983 million trainable parameters! For reference, popular image classification CNNs roughly have 3 to 30 million parameters on average, with larger variants reaching 60 to 80 million. Few ever really cross the 100 million mark. You can count your model params with this: def count_params(model): return sum(map(lambda p: p.data.numel(), model.parameters())) My advice: 800 x 600 is really a massive input size. Reduce it to something like 400 x 300, if possible. Furthermore, add several convolutional blocks similar to conv_layer_b1, before the FC layer. For example: def get_conv_block(C_in, C_out): return nn.Sequential( nn.Conv2d(in_channels=C_in, out_channels=C_out, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) class Net(pl.LightningModule): def __init__(self): super().__init__() # defines conv layers self.conv_layer_b1 = get_conv_block(3, 16) self.conv_layer_b2 = get_conv_block(16, 32) self.conv_layer_b3 = get_conv_block(32, 64) self.conv_layer_b4 = get_conv_block(64, 128) self.conv_layer_b5 = get_conv_block(128, 256) # passes dummy x matrix to find the input size of the fc layer x = torch.randn(1, 3, 800, 600) self._to_linear = None self.forward(x) # defines fc layer self.fc_layer = nn.Sequential( nn.Flatten(), nn.Linear(in_features=self._to_linear, out_features=256), nn.ReLU(), nn.Linear(256, 5) ) # defines accuracy metric self.accuracy = pl.metrics.Accuracy() self.confusion_matrix = pl.metrics.ConfusionMatrix(num_classes=5) def forward(self, x): x = self.conv_layer_b1(x) x = self.conv_layer_b2(x) x = self.conv_layer_b3(x) x = self.conv_layer_b4(x) x = self.conv_layer_b5(x) if self._to_linear is None: # does not run fc layer if input size is not determined yet self._to_linear = nn.Flatten()(x).shape[1] else: x = self.fc_layer(x) return x Here, because more conv-relu-pool layers are applied, the input is reduced to a feature map of a much smaller shape, (batch, 256, 25, 18), and the overall number of trainable parameters would be reduced to about ~30 million parameters.
https://stackoverflow.com/questions/66203862/
Image size in DefaultPredictor of Detectron2
For object detection, I'm using detectron2. I want to fix the input image size, so I made my customized dataloader: def build_train_loader(cls, cfg): dataloader = build_detection_train_loader(cfg, mapper=DatasetMapper(cfg, is_train=True, augmentations=[ T.Resize((1200, 1200)) ])) What I wonder is: for the prediction, can I use the DefaultPredictor of detectron2 and resize my images to (1200, 1200) as preprocessing before sending them to the predictor? Or does the DefaultPredictor resize the image before the prediction, so that I have to override a function to resize to (1200, 1200)?
You have to preprocess the images yourself or write your own predictor that will apply the resize before calling the model. The DefaultPredictor applies a ResizeShortestEdge transform (that can be configured in the config file), but this is not exactly what you want.
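For the custom-predictor option, a minimal sketch: DefaultPredictor keeps its test-time transform in the self.aug attribute (true for current detectron2, but worth double-checking against your version), so a subclass can simply swap it for a fixed-size Resize:

from detectron2.data import transforms as T
from detectron2.engine import DefaultPredictor

class FixedSizePredictor(DefaultPredictor):
    def __init__(self, cfg):
        super().__init__(cfg)
        self.aug = T.Resize((1200, 1200))   # replaces the ResizeShortestEdge default

predictor = FixedSizePredictor(cfg)   # then use it exactly like DefaultPredictor
outputs = predictor(original_image)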
https://stackoverflow.com/questions/66211135/
How to assign NaN to tensor element?
I want to assign NaN to a tensor element. import torch x = torch.tensor([1, 2, 3]) x[x == 2] = None I have the error TypeError: can't assign a NoneType to a torch.LongTensor. I need it to make sure that some later sophisticated calculations are not made for certain values of x.
The following code will set the desired value to nan: import torch x = torch.tensor([1, 2, 3]).float() x[x == 2] = float('nan')
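For the later calculations, keep in mind that NaN compares unequal to everything, including itself, so detect these entries with torch.isnan rather than ==:

mask = torch.isnan(x)   # tensor([False, True, False])
safe = x[~mask]         # keeps only the non-NaN values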
https://stackoverflow.com/questions/66217552/
Cannot setup package in conda environment with Pytorch installed
All, after setting up PyTorch 1.7.1 with CUDA 11.2 in a conda virtual environment, when I run python setup.py install it always returns the following error message. Traceback (most recent call last): File "setup.py", line 2, in <module> from torch.utils.cpp_extension import BuildExtension, CUDAExtension File "anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 189, in <module> _load_global_deps() File "anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 142, in _load_global_deps ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL) File "anaconda3/envs/pytorch/lib/python3.7/ctypes/__init__.py", line 364, in __init__ self._handle = _dlopen(self._name, mode) OSError: anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/../../../../libcublas.so.11: symbol free_gemm_select version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference Could anyone help with this? Thanks!
Finally, I found the solution by just using the pip command from the PyTorch official website. pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
https://stackoverflow.com/questions/66219625/
ResNet family classification layer activation function
I am using the ResNet18 pre-trained model which will be used for a simple binary image classification task. However, all the tutorials including PyTorch itself use nn.Linear(num_of_features, classes) for the final fully connected layer. What I fail to understand is where is the activation function for that module? Also what if I want to use sigmoid/softmax how do I go about that? Thanks for your help in advance, I am kinda new to Pytorch
No, you do not use an activation in the last layer if your loss function is CrossEntropyLoss, because pytorch's CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss() in one single class. Why do they do that? The loss actually needs the raw logits (the pre-softmax outputs of the final linear layer) for its calculation, so it is a correct design to not have the activation as part of the forward pass. Moreover, for predictions you don't need the softmax because argmax(linear(x)) == argmax(softmax(linear(x))), i.e. softmax does not change the ordering but only the magnitudes (it is a squashing function which converts arbitrary values into the [0,1] range, but preserves the partial ordering). If you want to use activation functions to add some sort of non-linearity, you normally do that by using a multi-layer NN and having the activation functions in all but the last layer. Finally, if you are using another loss function like NLLLoss, PoissonNLLLoss or BCELoss, then you have to apply the activation (log-softmax or sigmoid, respectively) yourself. Again on the same note, if you are using BCEWithLogitsLoss you don't need to calculate the sigmoid again because this loss combines a Sigmoid layer and the BCELoss in one single class. Check the pytorch docs to see how to use the losses.
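A short sketch of how this plays out in practice (model, x and target here are placeholders):

criterion = nn.CrossEntropyLoss()

logits = model(x)                      # raw scores; no softmax inside the model
loss = criterion(logits, target)       # LogSoftmax + NLLLoss happen inside the loss
preds = logits.argmax(dim=1)           # softmax would not change the argmax
probs = torch.softmax(logits, dim=1)   # only if you actually need probabilities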
https://stackoverflow.com/questions/66222699/
How to import deep learning models from MATLAB to PyTorch?
I’m trying to import a DNN trained model from MATLAB to PyTorch. I’ve found solutions for the opposite case (from PyTorch to MATLAB), but no proposed solutions on how to import a trained model from MATLAB to PyTorch. Any ideas, please?
You can first export your model to ONNX format (on the MATLAB side, exportONNXNetwork from the Deep Learning Toolbox writes the .onnx file), and then load it using ONNX; prerequisites are: pip install onnx onnxruntime Then, model = onnx.load('model.onnx') # Check that the IR is well formed onnx.checker.check_model(model) Until this point, you still don't have a PyTorch model. This can be done in various ways since it's not natively supported. A workaround (loading only the model parameters into an existing PyTorch model of the same architecture) import onnx import torch from onnx import numpy_helper onnx_model = onnx.load('model.onnx') graph = onnx_model.graph initializers = dict() for init in graph.initializer: initializers[init.name] = numpy_helper.to_array(init) for name, p in model.named_parameters(): p.data = (torch.from_numpy(initializers[name])).data Using onnx2pytorch import onnx from onnx2pytorch import ConvertModel onnx_model = onnx.load('model.onnx') pytorch_model = ConvertModel(onnx_model) Note: Time Consuming Using onnx2keras, then MMdnn to convert from Keras to PyTorch (Examples)
https://stackoverflow.com/questions/66223768/