instruction | input | output | source |
---|---|---|---|
Align feature map with ego motion (problem of zooming ratio) | I want to align the feature map using ego motion, as mentioned in the paper An LSTM Approach to Temporal 3D Object Detection in LiDAR Point Clouds.
I use VoxelNet as the backbone, which shrinks the image by a factor of 8. The size of my voxels is 0.1 m x 0.1 m x 0.2 m (height).
So given an input bird's-eye-view image of size 1408 x 1024, the extracted feature map is 176 x 128, shrunk by a factor of 8.
The ego translation of the car between the "images" (actually point clouds) is 1 meter in both the x and y directions. Am I right to shift the feature map by 1.25 pixels?
1m / 0.1m = 10   # meters to input BEV pixels (voxel size 0.1 m)
10 / 8 = 1.25    # divide by the network's shrink factor of 8
However, through experiments, I found the feature maps align better if I shift the feature map by only 1/32 of a pixel for the 1 meter translation in the real world.
Ps. I am using the function torch.nn.functional.affine_grid to perform the translation, which takes a 2x3 affine matrix as input.
| It's caused by the function torch.nn.functional.affine_grid I used.
I didn't fully understand this function before I used it.
These vivid images would be very helpful in showing what this function actually does (with a comparison to the affine transformations in NumPy).
| https://stackoverflow.com/questions/66983586/ |
CUDA Illegal Memory Access error when using torch.cat | I was playing around with pytorch concatenate and wanted to see if I could use an output tensor that had a different device to the input tensors, here is the code:
import torch
a = torch.ones(4)
b = torch.ones(4)
c = torch.zeros(8).cuda()
print(c)
ab = torch.cat([a,b], out=c)
print(c)
I am running this inside a jupyter notebook. pytorch version: 1.7.1
I get the following error:
...
\Anaconda3\envs\...\lib\site-packages\torch\_tensor_str.py in __init__(self, tensor)
87
88 else:
---> 89 nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
90
91 if nonzero_finite_vals.numel() == 0:
RuntimeError: CUDA error: an illegal memory access was encountered
It happens if you try to access the tensor c (in this case with a print).
I couldn't find anything in the documentation that said I couldn't do this, other than perhaps this line:
" ... any python sequence of tensors of the same type ... "
The error is kind of curious though... any ideas?
| It appears that the behavior changes according to the version of PyTorch. With version 1.3.0 I get the error expected object of backend CUDA but got CPU, but with version 1.5.0 I do indeed get the same error as you do. This would probably be worth mentioning on their GitHub, because I believe the former error is more useful than the latter.
Anyway, both errors come from the fact that you concatenate CPU tensors into a GPU one. You can solve it very easily:
# Move the tensors to the GPU prior to concatenating
ab = torch.cat([a.cuda(),b.cuda()], out=c)
or
# Move the tensor after concatenating
c.copy_(torch.cat([a,b]).cuda())
I don't have a notebook, but I believe you will have to restart your kernel; the error you get seems to break it quite badly. My Python shell cannot compute anything anymore after getting the illegal memory access.
| https://stackoverflow.com/questions/66985008/ |
image translation in Pytorch, using affine_grid & grid_sample functions | I want to move the image by 1 or 2 pixels, since I specified small numbers (1.25, 1.9) in the affine matrix.
BUT the image is moved far, far away, like hundreds of pixels:
(my input image is fully filled with yellow pineapples)
Below is a working example.
import torch
import numpy as np
import matplotlib.pyplot as plt
from torchvision import datasets, transforms
import torch.nn.functional as F
rotation_simple = np.array([[1,0, 1.25],
[ 0,1, 1.9]])
#load image
transform = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor()])
dataloader = torch.utils.data.DataLoader(datasets.ImageFolder('/home/Pictures',transform=transform,), shuffle=True)
dtype = torch.FloatTensor
i = 0
while i<3:
img, labels = next(iter(dataloader))
img = img  # .double()  # sometimes you need to convert to double, sometimes you don't
rotation_simple = torch.as_tensor(rotation_simple)[None]
grid = F.affine_grid(rotation_simple, img.size()).type(dtype)
x = F.grid_sample(img, grid)
plt.imshow(x[0].permute(1, 2, 0))
plt.show()
i+=1
I wonder why the function moves the image so far away instead of moving it by just 1 pixel in the x and y directions.
Ps. Setting "align_corners=True" didn't help for this case.
Pps. My pytorch version is 1.4.0+cu100
| The "unit of measures" for the grid and the affine transformation are not pixels, but rather normalized coordinates:
grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1]. For example, values x = -1, y = -1 is the left-top pixel of input, and values x = 1, y = 1 is the right-bottom pixel of input.
Therefore, translating by [1.25, 1.9] is actually translating by almost the entire image size. To get a pixel-wise translation you need to rescale the values: one pixel corresponds to 2/size in normalized coordinates, so multiply the desired pixel shift by 2 and divide by the image width/height.
See the doc for grid_sample for more information.
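As a rough illustration of that conversion (a minimal sketch, assuming the "one pixel = 2/size" reading of the documentation quoted above; the sign convention, i.e. which direction the content moves, is worth verifying on your own data):
import torch
import torch.nn.functional as F

img = torch.rand(1, 1, 128, 176)        # (N, C, H, W)
shift_px = (1.25, 1.25)                 # desired shift in pixels (x, y)

# normalized coordinates span [-1, 1] over the image, so 1 pixel = 2 / size
tx = 2 * shift_px[0] / img.shape[-1]    # x direction uses the width
ty = 2 * shift_px[1] / img.shape[-2]    # y direction uses the height

theta = torch.tensor([[[1., 0., tx],
                       [0., 1., ty]]])  # (1, 2, 3) affine matrix
grid = F.affine_grid(theta, img.size(), align_corners=False)
shifted = F.grid_sample(img, grid, align_corners=False)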
| https://stackoverflow.com/questions/66987451/ |
How does one use Pytorch (+ cuda) with an A100 GPU? | I was trying to use my current code with an A100 gpu but I get this error:
---> backend='nccl'
/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the A100-SXM4-40GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
which is rather confusing because it points to the usual PyTorch installation page but doesn't tell me which combination of PyTorch version + CUDA version to use for my specific hardware (A100). What is the right way to install PyTorch for an A100?
These are some versions I've tried:
# conda install -y pytorch==1.8.0 torchvision cudatoolkit=10.2 -c pytorch
# conda install -y pytorch torchvision cudatoolkit=10.2 -c pytorch
#conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=10.2 -c pytorch -c conda-forge
# conda install -y pytorch==1.6.0 torchvision cudatoolkit=10.2 -c pytorch
#conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
# conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
# conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
# conda install -y pytorch torchvision cudatoolkit=9.2 -c pytorch # For Nano, CC
# conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
note that this can be subtle because I've had this error with this machine + pytorch version in the past:
How to solve the famous `unhandled cuda error, NCCL version 2.7.8` error?
Bonus 1:
I still have errors:
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Traceback (most recent call last):
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1423, in <module>
main()
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1365, in main
train(args=args)
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1385, in train
args.opt = move_opt_to_cherry_opt_and_sync_params(args) if is_running_parallel(args.rank) else args.opt
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/distributed.py", line 456, in move_opt_to_cherry_opt_and_sync_params
args.opt = cherry.optim.Distributed(args.model.parameters(), opt=args.opt, sync=syn)
File "/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/cherry/optim.py", line 62, in __init__
self.sync_parameters()
File "/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/cherry/optim.py", line 78, in sync_parameters
dist.broadcast(p.data, src=root)
File "/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1090, in broadcast
work = default_pg.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
one of the answers suggested that nvcc & torch.version.cuda should match, but they do not:
(meta_learning_a100) [miranda9@hal-dgx ~]$ python -c "import torch;print(torch.version.cuda)"
11.1
(meta_learning_a100) [miranda9@hal-dgx ~]$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
How do I match them? Is this the error? Can someone share their pip, conda and nvcc versions to show what setup works?
More error messages:
hal-dgx:21797:21797 [0] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83<0> [1]virbr0:192.168.122.1<0>
hal-dgx:21797:21797 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
hal-dgx:21797:21797 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83<0>
hal-dgx:21797:21797 [0] NCCL INFO Using network IB
NCCL version 2.7.8+cuda11.1
hal-dgx:21805:21805 [2] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83<0> [1]virbr0:192.168.122.1<0>
hal-dgx:21799:21799 [1] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83<0> [1]virbr0:192.168.122.1<0>
hal-dgx:21805:21805 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
hal-dgx:21799:21799 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
hal-dgx:21811:21811 [3] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83<0> [1]virbr0:192.168.122.1<0>
hal-dgx:21811:21811 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
hal-dgx:21811:21811 [3] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83<0>
hal-dgx:21811:21811 [3] NCCL INFO Using network IB
hal-dgx:21799:21799 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83<0>
hal-dgx:21805:21805 [2] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83<0>
hal-dgx:21799:21799 [1] NCCL INFO Using network IB
hal-dgx:21805:21805 [2] NCCL INFO Using network IB
hal-dgx:21797:27906 [0] misc/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed
hal-dgx:21797:27906 [0] NCCL INFO transport/net_ib.cc:360 -> 2
hal-dgx:21797:27906 [0] NCCL INFO transport/net_ib.cc:437 -> 2
hal-dgx:21797:27906 [0] NCCL INFO include/net.h:21 -> 2
hal-dgx:21797:27906 [0] NCCL INFO include/net.h:51 -> 2
hal-dgx:21797:27906 [0] NCCL INFO init.cc:300 -> 2
hal-dgx:21797:27906 [0] NCCL INFO init.cc:566 -> 2
hal-dgx:21797:27906 [0] NCCL INFO init.cc:840 -> 2
hal-dgx:21797:27906 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
hal-dgx:21811:27929 [3] misc/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed
hal-dgx:21811:27929 [3] NCCL INFO transport/net_ib.cc:360 -> 2
hal-dgx:21811:27929 [3] NCCL INFO transport/net_ib.cc:437 -> 2
hal-dgx:21811:27929 [3] NCCL INFO include/net.h:21 -> 2
hal-dgx:21811:27929 [3] NCCL INFO include/net.h:51 -> 2
hal-dgx:21811:27929 [3] NCCL INFO init.cc:300 -> 2
hal-dgx:21811:27929 [3] NCCL INFO init.cc:566 -> 2
hal-dgx:21811:27929 [3] NCCL INFO init.cc:840 -> 2
hal-dgx:21811:27929 [3] NCCL INFO group.cc:73 -> 2 [Async thread]
after setting
import os
os.environ["NCCL_DEBUG"] = "INFO"
| Following the PyTorch site linked from @SimonB's answer, I did:
pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
This solved the problem for me.
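To sanity-check that the resulting install actually supports the A100 (compute capability sm_80), a quick check like the following can help (a sketch; the exact version strings will differ on your machine):
import torch

print(torch.__version__, torch.version.cuda)   # e.g. 1.9.0+cu111 and 11.1
print(torch.cuda.is_available())               # should be True
print(torch.cuda.get_device_name(0))           # e.g. A100-SXM4-40GB
print(torch.cuda.get_device_capability(0))     # (8, 0) corresponds to sm_80

x = torch.randn(1000, 1000, device="cuda")
print((x @ x).sum())   # should run without the "not compatible" warning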
| https://stackoverflow.com/questions/66992585/ |
Concatenation on a 3 dimensional tensor (Tensor Re-Shaping) | I have 2 tensors,
each currently of shape [13, 2]. I am trying to combine both into a 3 dimensional tensor with dimensions [2, 13, 2] such that they stack on top of each other but are separated as batches.
here is an example of one of the tensors in format [13, 2]:
tensor([[-1.8588, 0.3776],
[ 0.1683, 0.2457],
[-1.2740, 0.5683],
[-1.7262, 0.4350],
[-1.7262, 0.4350],
[ 0.1683, 0.2457],
[-1.0160, 0.5940],
[-1.3354, 0.5565],
[-0.7497, 0.5792],
[-0.2024, 0.4251],
[ 1.0791, -0.2770],
[ 0.3032, 0.1706],
[ 0.8681, -0.1607]])
I would like to maintain the shape, but have them in 2 groups in the same tensor. below is an example of the format I am after:
tensor([[[-1.8588, 0.3776],
[ 0.1683, 0.2457],
[-1.2740, 0.5683],
[-1.7262, 0.4350],
[-1.7262, 0.4350],
[ 0.1683, 0.2457],
[-1.0160, 0.5940],
[-1.3354, 0.5565],
[-0.7497, 0.5792],
[-0.2024, 0.4251],
[ 1.0791, -0.2770],
[ 0.3032, 0.1706],
[ 0.8681, -0.1607]],
[[-1.8588, 0.3776],
[ 0.1683, 0.2457],
[-1.2740, 0.5683],
[-1.7262, 0.4350],
[-1.7262, 0.4350],
[ 0.1683, 0.2457],
[-1.0160, 0.5940],
[-1.3354, 0.5565],
[-0.7497, 0.5792],
[-0.2024, 0.4251],
[ 1.0791, -0.2770],
[ 0.3032, 0.1706],
[ 0.8681, -0.1607]]])
Does anyone have any ideas on how to do this using concatenation? I have tried using .unsqueeze with torch.cat((a, b.unsqueeze(0)), dim=-1), however it changed the shape to [13, 4, 1], which is not the format I am after.
The solution below works; however, my idea was to keep stacking onto y via a loop without being restricted by the shape. Sorry for not expressing my idea clearly enough.
They will all be of size [13, 2], so it will grow in the form [1,13,2], [2,13,2], [3,13,2], [4,13,2], etc.
| In this case, you need torch.stack rather than torch.cat; at least it's more convenient:
x1 = torch.randn(13,2)
x2 = torch.randn(13,2)
y = torch.stack([x1,x2], 0) # creates a new dimension 0
print(y.shape)
>>> (2, 13, 2)
You can indeed use unsqueeze and cat, but you need to unsqueeze both input tensors:
x1 = torch.randn(13,2).unsqueeze(0) # shape: (1,13,2)
x2 = torch.randn(13,2).unsqueeze(0) # same
y = torch.cat([x1, x2], 0)
print(y.shape)
>>> (2,13,2)
Here is a useful thread to understand the difference : difference between cat and stack
If you need to stack more tensors, it's not really much harder, stack works on an arbitrary number of tensors:
# This list of tensors is what you will build in your loop
tensors = [torch.randn(13, 2) for i in range(10)]
# Then at the end of the loop, you stack them all together
y = torch.stack(tensors, 0)
print(y.shape)
>>> (10, 13, 2)
Or, if you don't want to use the list:
# first, build the y tensor to which the other ones will be appended
y = torch.empty(0, 13, 2)
# Then the loop, and don't forget to unsqueeze
for i in range(10):
x = torch.randn(13, 2).unsqueeze(0)
y = torch.cat([y, x], 0)
print(y.shape)
>>> (10, 13, 2)
| https://stackoverflow.com/questions/66995252/ |
Why can't I use my graphics card with PyTorch? | I am trying to use PyTorch with a GPU on Ubuntu 18.04. The GPU is a GeForce GTX 1070.
nvidia-smi:
| NVIDIA-SMI 460.67 Driver Version: 460.67 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:0B:00.0 Off | N/A |
| 21% 49C P2 60W / 180W | 4598MiB / 8119MiB | 17% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1070 Off | 00000000:42:00.0 Off | N/A |
| 0% 48C P8 7W / 180W | 20MiB / 8117MiB | 0% Default |
| | | N/A |
nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
.bashrc file:
export PATH="/usr/local/cuda-10.2/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH"
I installed pytorch using the command below (from here):
pip install torch torchvision torchaudio
Torch version:
PyTorch Version: 1.8.0
Python version:
Python 3.8.7
gcc/g++ versions:
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
If I try to get the GPU, I get the following:
>>> import torch
>>> print(torch.cuda.is_available())
False
Can anyone advise me, please?
| If you look at the PyTorch page, they advise using a special command to install torch with CUDA, so you probably want to use this one:
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
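Before reinstalling, it can also help to check which CUDA version (if any) the currently installed wheel was built against; a quick diagnostic (a sketch, not part of the original answer):
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build
print(torch.version.cuda)         # None for a CPU-only build, else e.g. "10.2" or "11.1"
print(torch.cuda.device_count())  # 0 if PyTorch cannot see the GPUs
print(torch.cuda.is_available())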
| https://stackoverflow.com/questions/66995297/ |
Large datasets and Cuda memory Issue | I was processing a large dataset and ran into this error: "RuntimeError: CUDA out of memory. Tried to allocate 1.35 GiB (GPU 0; 8.00 GiB total capacity; 3.45 GiB already allocated; 1.20 GiB free; 4.79 GiB reserved in total by PyTorch)."
Any thoughts on how to solve this?
| I met the same problem before. It's not a bug, you just ran out of memory on your GPU.
One way to solve it is to reduce the batch size until your code runs without this error.
If that does not work, take a closer look at your model. A single 8 GiB GPU may not handle a large and deep model. You could consider switching to a GPU with more memory, or find a lab to help you (Google Colab can help).
If you are just doing evaluation, forcing the tensors to run on the CPU would be fine.
Try a model compression algorithm.
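The first two points above in code form (a minimal sketch with dummy data; substitute your own model and dataset):
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(100, 2).cuda()
dataset = TensorDataset(torch.randn(1024, 100), torch.randint(0, 2, (1024,)))

# 1) smaller batches: halve batch_size until a step fits in GPU memory
loader = DataLoader(dataset, batch_size=16)   # instead of e.g. 64

# 2) evaluation without autograd buffers saves a lot of memory
model.eval()
with torch.no_grad():
    for x, _ in loader:
        out = model(x.cuda())

# optionally release cached blocks between experiments
torch.cuda.empty_cache()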
| https://stackoverflow.com/questions/66997068/ |
How to use pytorch lightning Accuracy with ignore class? | I have some training pipeline which uses CrossEntropyLoss with an ignore class.
The model outputs log_probs of shape (150, 3) - meaning 3 possible classes in batches of 150.
The label_batch is of shape 150, and torch.max(label_batch) == tensor(3, device='cuda:0'), meaning there is an extra class labeled 3, which is the ignore class.
The loss handles it fine:
self._criterion = nn.CrossEntropyLoss(
reduction='mean',
ignore_index=3
)
But the accuracy metric thinks class 3 is valid and gives very wrong results:
self.train_acc = pl.metrics.Accuracy()
This gives a wrong result with self.train_acc.update(log_probs, label_batch), because the label 3 should be ignored.
How to correctly use pl.metrics.Accuracy() with an ignore class?
| Copying response from the discussion thread in the github forum https://github.com/PyTorchLightning/pytorch-lightning/discussions/6890
It is currently not supported in the accuracy metric, but we have an open PR for implementing that exact feature PyTorchLightning/metrics#155
Currently what you can do instead is calculate the confusion matrix and then ignore some classes based on that (remember that the true positives/correctly classified samples are found on the diagonal of the confusion matrix):
ignore_index = 3
# 4 classes in total: the 3 real ones plus the ignore label 3
metric = ConfusionMatrix(num_classes=4)
confmat = metric(preds, target)
confmat = confmat[:3, :3]  # remove last column and row corresponding to class 3
acc = confmat.trace() / confmat.sum()
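Until that feature lands, another workaround (a sketch of my own, reusing the variable names from the question) is to mask out the ignored samples before updating the metric:
ignore_index = 3
mask = label_batch != ignore_index              # keep only the real labels
if mask.any():
    self.train_acc.update(log_probs[mask], label_batch[mask])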
| https://stackoverflow.com/questions/67002099/ |
Give correct datatype to pytorch, but it does not accept | class Model(nn.Module):
def __init__(self,
input_size=12175,
hidden_size=6,
num_layers=1,
batch_size=1,
sequence_length=1,
num_classes=6):
"""RNN and FC. hidden_size and num_classes MUST equal."""
super().__init__()
self.rnn = nn.RNN(input_size=input_size,
hidden_size=hidden_size,
batch_first=True)
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.batch_size = batch_size
self.sequence_length = sequence_length
self.num_classes = num_classes
# Fully-Connected layer
self.fc = nn.Linear(num_classes, num_classes)
def forward(self, x, hidden):
import ipdb; ipdb.set_trace()
# Reshape input in (batch_size, sequence_length, input_size)
x = x.view(self.batch_size, self.sequence_length, self.input_size)
x = x.double()
hidden = hidden.double()
out, hidden = self.rnn(x, hidden)
out = self.fc(out) # Add here
return hidden, out
def init_hidden(self):
return torch.zeros(self.num_layers, self.batch_size, self.hidden_size)
And training
import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
CLASS_LENGTH = 6
def train(model, device, train_loader, optimizer, epoch, criterion):
"""
This function has one line different from the ordinary `train()` function
It has `make_variables()` to convert tuple of names to be a tensor
"""
model.train()
hidden = model.init_hidden()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
# hidden = hidden.view(model.batch_size, model.sequence_length, CLASS_LENGTH)
output, hidden = model(data, hidden)
tmp = output.view(model.batch_size, CLASS_LENGTH)
loss = criterion(tmp, target)
loss.backward(retain_graph=True)
optimizer.step()
if batch_idx % 1000 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(model, device, test_loader, criterion):
model.eval()
test_loss = 0
correct = 0
y_test = []
y_pred = []
with torch.no_grad():
for data, target in tqdm(test_loader):
data, target = data.to(device), target.to(device)
output, hidden = model(data, hidden)
tmp = output.view(-1, COUNTRY_LENGTH)
test_loss += criterion(tmp, target).item() # sum up batch loss
pred = tmp.max(1, keepdim=True)[1] # get the index of the max log-probability
pred_tmp = pred.view(-1)
pred_list = pred_tmp.tolist()
target_list = target.tolist()
y_test += target_list
y_pred += pred_list
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# Confusion matrix
confusion_mtx = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(confusion_mtx, classes=test_loader.dataset.countries, normalize=True,
title='Confusion matrix')
Problem:
I do not understand why it raises an error, since I am sure that I put the correct datatype into it.
ipdb> n
> /tmp/ipykernel_16/2018411812.py(30)forward()
29 hidden = hidden.double()
---> 30 out, hidden = self.rnn(x, hidden)
31 out = self.fc(out) # Add here
ipdb> n
RuntimeError: expected scalar type Double but found Float
> /tmp/ipykernel_16/2018411812.py(30)forward()
29 hidden = hidden.double()
---> 30 out, hidden = self.rnn(x, hidden)
31 out = self.fc(out) # Add here
ipdb> x
tensor([[[-5.6964e-01, -5.1070e-01, -5.9109e-01, ..., 1.5597e-15,
1.5597e-15, 1.5597e-15]]], dtype=torch.float64)
ipdb> hidden
tensor([[[0., 0., 0., 0., 0., 0.]]], dtype=torch.float64)
pytorch version: 1.8.1+cu102
OSX 10.15.7
i7
Question:
How to solve this problem?
| You ensure the type for the RNN's input and hidden state, but the RNN also contains some tensor parameters: weights, biases, etc. To make them double, try torch.set_default_tensor_type(t) before constructing the RNN object, but I personally would use float.
...
super().__init__()
torch.set_default_tensor_type(torch.DoubleTensor)
self.rnn = nn.RNN(input_size=input_size,
hidden_size=hidden_size,
batch_first=True)
# may be turn defaults back to floats:
# torch.set_default_tensor_type(torch.FloatTensor)
| https://stackoverflow.com/questions/67003517/ |
Multi-output regression using skorch & sklearn pipeline gives runtime error due to dtype | I want to use skorch to do multi-output regression. I've created a small toy example as can be seen below. In the example, the NN should predict 5 outputs. I also want to use a preprocessing step that is incorporated using sklearn pipelines (in this example PCA is used, but it could be any other preprocessor). When executing this example I get the following error in the Variable._execution_engine.run_backward step of torch:
RuntimeError: Found dtype Double but expected Float
Am I forgetting something? I suspect something has to be cast somewhere, but as skorch handles a lot of the PyTorch stuff, I don't see what or where.
Example:
import torch
import skorch
from sklearn.datasets import make_classification, make_regression
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.decomposition import PCA
X, y = make_regression(n_samples=1000, n_features=40, n_targets=5)
X = X.astype('float32')
class RegressionModule(torch.nn.Module):
def __init__(self, input_dim=80):
super().__init__()
self.l0 = torch.nn.Linear(input_dim, 10)
self.l1 = torch.nn.Linear(10, 5)
def forward(self, X):
y = self.l0(X)
y = self.l1(y)
return y
class InputShapeSetter(skorch.callbacks.Callback):
def on_train_begin(self, net, X, y):
net.set_params(module__input_dim=X.shape[-1])
net = skorch.NeuralNetRegressor(
RegressionModule,
callbacks=[InputShapeSetter()],
)
pipe = make_pipeline(PCA(n_components=10), net)
pipe.fit(X, y)
print(pipe.predict(X))
Edit 1:
Casting X to float32 at the start won't work for every preprocessor as can be seen from this example:
import torch
import skorch
from sklearn.datasets import make_classification, make_regression
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from category_encoders import OneHotEncoder
X, y = make_regression(n_samples=1000, n_features=40, n_targets=5)
X = pd.DataFrame(X,columns=[f'feature_{i}' for i in range(X.shape[1])])
X['feature_1'] = pd.qcut(X['feature_1'], 3, labels=["good", "medium", "bad"])
y = y.astype('float32')
class RegressionModule(torch.nn.Module):
def __init__(self, input_dim=80):
super().__init__()
self.l0 = torch.nn.Linear(input_dim, 10)
self.l1 = torch.nn.Linear(10, 5)
def forward(self, X):
y = self.l0(X)
y = self.l1(y)
return y
class InputShapeSetter(skorch.callbacks.Callback):
def on_train_begin(self, net, X, y):
net.set_params(module__input_dim=X.shape[-1])
net = skorch.NeuralNetRegressor(
RegressionModule,
callbacks=[InputShapeSetter()],
)
pipe = make_pipeline(OneHotEncoder(cols=['feature_1'], return_df=False), net)
pipe.fit(X, y)
print(pipe.predict(X))
| By default OneHotEncoder returns a NumPy array of dtype float64. So one can simply cast the input data X when it is fed into forward() of the model:
class RegressionModule(torch.nn.Module):
def __init__(self, input_dim=80):
super().__init__()
self.l0 = torch.nn.Linear(input_dim, 10)
self.l1 = torch.nn.Linear(10, 5)
def forward(self, X):
X = X.to(torch.float32)
y = self.l0(X)
y = self.l1(y)
return y
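If you would rather keep the cast out of the model, a pipeline-level alternative (a sketch of my own, not from the original answer) is to insert a small casting step between the encoder and the net:
from sklearn.preprocessing import FunctionTransformer

to_float32 = FunctionTransformer(lambda x: x.astype('float32'))

pipe = make_pipeline(
    OneHotEncoder(cols=['feature_1'], return_df=False),
    to_float32,   # casts the encoder's float64 output before it reaches skorch
    net,
)
pipe.fit(X, y)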
| https://stackoverflow.com/questions/67004312/ |
Pytorch "Unfold" equivalent in Tensorflow | Say I have grayscale images of size (50*50), with a batch size of 2 in this case, and I use the PyTorch Unfold function as follows:
import numpy as np
from torch import nn
from torch import tensor
image1 = np.random.rand(1,50,50)
image2 = np.random.rand(1,50,50)
image = np.stack((image1,image2))
image = tensor(image)
ds = nn.Unfold(kernel_size=(2,2),stride=2)
x = ds(image).numpy()
x.shape
## OUTPUT: (2, 4, 625)
What would be the equivalent tensorflow implementation so that the output of the tensorflow implementation would exactly match 'x'? I've tried using the tf.image.extract_patches function, but it doesn't seem to give me quite what I want.
The question is then: What is the tensorflow implementation of Unfold?
| tf.image.extract_patches() is analogous to torch.nn.Unfold, but you need to rejig the parameters slightly:
tf.image.extract_patches(image, sizes=[1,2,2,1], strides=[1,2,2,1], rates=[1,1,1,1], padding='SAME')
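To actually reproduce the (2, 4, 625) layout of the PyTorch snippet above, the patches also need to be reshaped and transposed. A rough sketch for the single-channel case (I would double-check the element ordering against the PyTorch output on your own data):
import numpy as np
import tensorflow as tf

image_tf = np.random.rand(2, 50, 50, 1)               # NHWC, unlike PyTorch's NCHW
patches = tf.image.extract_patches(image_tf,
                                   sizes=[1, 2, 2, 1],
                                   strides=[1, 2, 2, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='SAME')     # (2, 25, 25, 4)
x_tf = tf.transpose(tf.reshape(patches, (2, 625, 4)), (0, 2, 1))   # (2, 4, 625)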
| https://stackoverflow.com/questions/67005504/ |
Working of nn.Linear with multiple dimensions | PyTorch's nn.Linear(in_features, out_features) accepts a tensor of size (N_batch, N_1, N_2, ..., N_end), where N_end = in_features. The output is a tensor of size (N_batch, N_1, N_2, ..., out_features).
It isn't very clear to me how it behaves in the following situations:
If v is a row, the output will be A^Tv+b
If M is a matrix, it is treated as a batch of rows, and for every row v, A^Tv+b is performed, and then everything is put back into matrix form
What if the input tensor is of a higher rank? Say the input tensor has dimensions (N_batch, 4, 5, 6, 7). Is it true that the layer will output a batch of size N_batch of (1, 1, 1, N_out)-shaped vectors, everything shaped into a (N_batch, 4, 5, 6, N_out) tensor?
| For 1 dimension, the input is a vector with dim in_features and the output is a vector with dim out_features, calculated as you said.
For 2 dimensions, the input is N_batch vectors with dim in_features, and the output is N_batch vectors with dim out_features, again calculated as you said.
For 3 dimensions, the input is (N_batch, C, in_features), i.e. N_batch matrices, each with C rows of vectors with dim in_features; the output is N_batch matrices, each with C rows of vectors with dim out_features.
If you find the 3-dimensional case hard to think about, one simple way is to flatten the shape to (N_batch * C, in_features), so the input becomes N_batch * C rows of vectors with dim in_features, which is the same as the two-dimensional case. This flattening involves no computation; it just rearranges the input.
So in your case, the output is a (N_batch, 4, 5, 6, N_out) tensor, or after rearranging, N_batch * 4 * 5 * 6 vectors with dim N_out. Your shape with all the 1 dims is not correct, since that would only contain N_batch * N_out elements in total.
If you dig into the internal C implementations of PyTorch, you will find that the matmul implementation actually flattens the dimensions as I have described (native matmul), which is the exact function used by nn.Linear.
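A quick shape check confirms this behavior:
import torch
from torch import nn

layer = nn.Linear(7, 3)             # in_features=7, out_features=3
x = torch.randn(10, 4, 5, 6, 7)     # (N_batch, 4, 5, 6, in_features)
print(layer(x).shape)               # torch.Size([10, 4, 5, 6, 3])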
| https://stackoverflow.com/questions/67006014/ |
How to call method without method name in python | class LinearRegression(nn.Module):
def __init__(self,input_size,output_size):
# super function. It inherits from nn.Module and we can access everything in nn.Module
super(LinearRegression,self).__init__()
# Linear function.
self.linear = nn.Linear(input_dim,output_dim)
def forward(self,x):
return self.linear(x)
input_dim = 1
output_dim = 1
model = LinearRegression(input_dim,output_dim)
In this code block, when I want to call the forward method on the model object, I can call it in 2 different ways.
The first way
results = model.forward(car_price_tensor)
Second way
results = model(car_price_tensor)
And when I try the second way on a different class and object it doesn't work. How is that possible?
| This behavior that lets you call a Python object like a function is enabled through a special method __call__, some explanation here.
A torch.nn.Module is a class that implements this behavior. Your class LinearRegression is a subclass of it, so it inherits this behavior. By default the __call__ method is derived from your forward implementation, but it is not the same.
The difference in behavior is explained here.
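A minimal illustration of __call__ outside of torch (a toy example of my own):
class Greeter:
    def __call__(self, name):
        return f"hello {name}"

g = Greeter()
print(g("world"))   # calling the instance invokes __call__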
| https://stackoverflow.com/questions/67008756/ |
How to achieve removing/pruning the near-zero parameters in neural network? | I need to remove the near-zero weights of the Neural network so that the distribution of parameters is far away from the zero point.
The distribution of weights after removing nearzero weights and weight-scaling
I met the problem from this paper: https://ieeexplore.ieee.org/document/7544366
I wonder how I can achieve this in my PyTorch/TensorFlow program, for example with a customized activation layer, or by defining a loss function that penalizes near-zero weights?
Thank you if you can provide any help.
| You're looking for L1 regularization, read the docs.
import tensorflow as tf
tf.keras.layers.Dense(units=128,
kernel_regularizer=tf.keras.regularizers.L1(.1))
Smaller coefficients will be turned to zero.
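Since the question mentions PyTorch as well, the equivalent idea there (a sketch of my own; the coefficient 0.1 is arbitrary) is to add an L1 penalty on the parameters to the loss yourself:
import torch
from torch import nn

model = nn.Linear(128, 10)
criterion = nn.CrossEntropyLoss()
l1_lambda = 0.1

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = criterion(model(x), y) + l1_lambda * l1_penalty
loss.backward()
PyTorch also ships torch.nn.utils.prune (e.g. prune.l1_unstructured) if you want to zero out the smallest weights after training.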
| https://stackoverflow.com/questions/67009335/ |
Python TypeError: 'Tensor' object is not callable when sorting dictionary | Here is my code. The packages imported are not shown. I am trying to feed the CIFAR-10 test data into alexnet. The dictionary at the end needs to be sorted so I can find the most common classification. Please help, I have tried everything!
alexnet = models.alexnet(pretrained=True)
transform = transforms.Compose([ #[1]
transforms.Resize(256), #[2]
transforms.CenterCrop(224), #[3]
transforms.ToTensor(), #[4]
transforms.Normalize( #[5]
mean=[0.485, 0.456, 0.406], #[6]
std=[0.229, 0.224, 0.225] #[7]
)])
# Getting the CIFAR-10 dataset
dataset = CIFAR10(root='data/', download=True, transform=transform)
test_dataset = CIFAR10(root='data/', train=False, transform=transform)
classes = dataset.classes
#print(classes)
torch.manual_seed(43)
val_size = 10000
train_size = len(dataset) - val_size
train_ds, val_ds = random_split(dataset, [train_size, val_size])
#print(len(train_ds), len(val_ds))
batch_size=100
train_loader = DataLoader(train_ds, batch_size, shuffle=True, num_workers=8, pin_memory=True)
val_loader = DataLoader(val_ds, batch_size, num_workers=8, pin_memory=True)
test_loader = DataLoader(test_dataset, batch_size, num_workers=8, pin_memory=True)
with open("/home/shaan/Computer Science/CS4442/Ass4/imagenet_classes.txt") as f:
classes = eval(f.read())
holder = []
dic = {}
current = ''
#data_iter = iter(test_loader)
#images,labels = data_iter.next()
#alexnet.eval()
with torch.no_grad():
for data in test_loader:
images, labels = data
out = alexnet(images)
#print(out.shape)
for j in range(0,batch_size):
sorted, indices = torch.sort(out,descending=True)
percentage = F.softmax(out,dim=1)[j]*100
results = [(classes[i.item()],percentage[i].item()) for i in indices[j][:5]]
holder.append(results[0][0])
holder.sort()
for z in holder:
if current != z:
count = 1
dic[z] = count
current = z
else:
count = count + 1
dic[z] = count
current = z
This is where I'm getting the error:
for w in sorted(dic, key=dic.get, reverse=True):
print(w, dic[w])
| This line is the problem
sorted, indices = torch.sort(out,descending=True)
You created a variable named sorted, which has exactly the same name as the built-in sorted function you call later, so the call fails.
Just change this to something else like
sorted_out, indices = torch.sort(out,descending=True)
| https://stackoverflow.com/questions/67011955/ |
Load a saved NN model in different Python file | I am trying to implement the code from a Pytorch beginner's tutorial. But I have written the code for loading the saved model in another Python file.
The FashionClassify file contains the code exactly as it is in the tutorial.
Below is the code:
from FashionClassify import NeuralNetwork
from FashionClassify import test_data
import torch
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
classes = [
"T-shirt/top", "Trouser","Pullover","Dress","Coat","Sandal","Shirt","Sneaker","Bag","Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
pred = model(x)
predicted, actual = classes[pred[0].argmax(0)],classes[y]
print(f'Predicted: "{predicted}", Actual: "{actual}"')
However, when I run this, the entire training process starts again. Why is that so?
Or is it expected behavior?
(I have gone through a couple of web pages and StackOverflow answers but couldn't find my problem.)
FashionClassify file code:
import torch
from torch import nn
from torch.utils.data import DataLoader # wraps an iterable around dataset
from torchvision import datasets # stores samples and their label
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib as plt
training_data = datasets.FashionMNIST(root='data', train=True, download=True, transform=ToTensor(), )
test_data = datasets.FashionMNIST(root='data', train=False, download=True, transform=ToTensor(), )
batch_size = 64
train_dataLoader = DataLoader(training_data, batch_size=batch_size)
test_dataLoader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataLoader:
print('Shape of X [N,C,H,W]:', X.size())
print('Shape of y:', y.shape, y.dtype)
break
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(device))
# to define a NN, we inherit a class from nn.Module
class NeuralNetwork(nn.Module):
def __init__(self):
# will specify how data will proceed in the forward pass
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28 * 28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
print(model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X,y) in enumerate(dataloader):
X,y = X.to(device), y.to(device)
#compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
#backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch%100 ==0:
loss,current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, model):
size = len(dataloader.dataset)
model.eval()
test_loss, correct = 0,0
with torch.no_grad():
for X, y in dataloader:
X,y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= size
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataLoader, model, loss_fn, optimizer)
test(test_dataLoader, model)
print("Done!")
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
| That's what happens when you import another file. All the code gets rerun.
Instead, in your training file:
class FancyNetwork(nn.Module):
[...]
def train():
[train code]
if __name__ == "__main__":
train()
Now when you run this file train() will get called, but when you import this file in another one, train won't get called automatically.
| https://stackoverflow.com/questions/67012541/ |
Apply vectorised function to Cartesian product of two ranges in PyTorch | I am kind of new to pytorch, and have a very simple question. Let's say we have a scalar function f():
def f(x,y):
return np.cos(x)+y
What I want to do is use the GPU to generate all pairs of data points from two ranges x and y. For a simple case, take x = y = [0,1,2].
Can I do that without changing the function? If not, how would you change the function?
| You can take the Cartesian product of the values before applying your function to their first and second elements:
x = y = torch.tensor([0,1,2])
pairs = torch.cartesian_prod(x,y)
# tensor([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]])
x_, y_ = pairs[:,0], pairs[:,1]
f(x_,y_)
| https://stackoverflow.com/questions/67016205/ |
How to split duplicate samples to train test with no overlapping? | I have an NLP dataset (about 300K samples) that contains duplicate data. I want to split it into train and test sets (70%-30%), and they should have no overlapping samples.
For instance:
|dataset: | train | test |
| a | a | c |
| a | a | c |
| b | b | c |
| b | b | |
| b | b | |
| c | d | |
| c | d | |
| c | | |
| d | | |
| d | | |
I have tried exhaustive random sampling, but it is too time consuming.
| It is doable, but requires a few steps to be accomplished.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# original dataset with duplicates
dataset = pd.DataFrame(["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"])
# get unique values, remove duplicates, but keep original counts
data_no_dup, counts = np.unique(dataset, return_counts=True)
# split using the standard Scikit-Learn way
train_no_dup, test_no_dup = train_test_split(data_no_dup, test_size=0.2, random_state=0)
# retrieve original counts
train, test = [], []
for sample in train_no_dup:
train.extend([sample] * counts[list(data_no_dup).index(sample)])
for sample in test_no_dup:
test.extend([sample] * counts[list(data_no_dup).index(sample)])
print("Train: {}".format(train))
print("Test: {}".format(test))
Output
Train: ['d', 'd', 'b', 'b', 'b', 'a', 'a']
Test: ['c', 'c', 'c']
| https://stackoverflow.com/questions/67016628/ |
Numpy: get only elements on odd or even diagonal offsets in a matrix, change the rest to zeros | I have a 2D (square) matrix, for example it can be like this:
1 2 3
4 5 6
7 8 9
I want to get only the elements on the odd or even diagonal offsets of it, and let the rest be zeros. For example, with even diagonal offsets (%2 = 0), the resulting matrix is:
1 0 3
0 5 0
7 0 9
Explanation: the main diagonal has offset 0, which is 1 5 9. The neighboring diagonals (odd offsets) contain 2, 6 and 4, 8, so they are changed to zeros. Repeat the process until we reach the last diagonal.
And with odd diagonal offsets, the resulting matrix is:
0 2 0
4 0 6
0 8 0
I looked at np.diag(np.diag(x)), but it only returns the main diagonal and sets the rest to zeros. How can I extend this to odd/even offsets?
I can also use PyTorch.
| I would do it the following way using NumPy:
import numpy as np
arr = np.array([[1,2,3],[4,5,6],[7,8,9]])
masktile = np.array([[True,False],[False,True]])
mask = np.tile(masktile, (2,2)) # this must be at least as big as arr
arr0 = np.where(mask[:arr.shape[0],:arr.shape[1]], arr, 0)
arr1 = np.where(mask[:arr.shape[0],:arr.shape[1]], 0, arr)
print(arr0)
print(arr1)
output:
[[1 0 3]
[0 5 0]
[7 0 9]]
[[0 2 0]
[4 0 6]
[0 8 0]]
Explanation: I create a mask, an array of Trues and Falses, used to decide whether a given element remains or is replaced by 0. I create a single tile which I then feed into np.tile to get a "chessboard" of sufficient size, then I use an appropriately sized part of it together with np.where to replace the selected elements with 0.
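An equivalent way to build the mask, which avoids tiling it to the right size (a small variation of my own), uses the parity of the index sum: elements on even diagonal offsets are exactly those where row + column is even.
import numpy as np

arr = np.array([[1,2,3],[4,5,6],[7,8,9]])
i, j = np.indices(arr.shape)
even_mask = (i + j) % 2 == 0            # True on even diagonal offsets

even_part = np.where(even_mask, arr, 0)
odd_part  = np.where(even_mask, 0, arr)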
| https://stackoverflow.com/questions/67017881/ |
PyTorch Lightning: include some Tensor objects in the checkpoint file | As PyTorch Lightning provides automatic saving of model checkpoints, I use it to save the top-k best models. Specifically, in the Trainer setting,
checkpoint_callback = ModelCheckpoint(
monitor='val_acc',
dirpath='checkpoints/',
filename='{epoch:02d}-{val_acc:.2f}',
save_top_k=5,
mode='max',
)
This is working well, but it does not save some attributes of the model object. My model stores a tensor at the end of every training epoch, like this:
class SampleNet(pl.LightningModule):
def __init__(self):
super().__init__()
self.save_hyperparameters()
self.layer = torch.nn.Linear(100, 1)
self.loss = torch.nn.CrossEntropy()
self.some_data = None # Initialize as None
def training_step(self, batch):
x, t = batch
out = self.layer(x)
loss = self.loss(out, t)
results = {'loss': loss}
return results
def training_epoch_end(self, outputs):
self.some_data = some_tensor_object
This is a simplified example, but I want the checkpoint file made by the above checkpoint_callback to remember the attribute self.some_data; when I load the model from a checkpoint, it is always reset to None. I confirmed that it is successfully updated during training.
I tried not initializing it as None in __init__, but then the attribute disappears when loading the model.
Saving the attribute as a distinct .pt file is something I want to avoid, as it is associated with the model configuration, so I would manually need to match the file with the corresponding checkpoint file later.
Would it be possible to include such a tensor attribute in the checkpoint file?
| Simply use the model class hooks on_save_checkpoint() and on_load_checkpoint() for all sorts of objects that you want to save alongside the default attributes.
def on_save_checkpoint(self, checkpoint) -> None:
"Objects to include in checkpoint file"
checkpoint["some_data"] = self.some_data
def on_load_checkpoint(self, checkpoint) -> None:
"Objects to retrieve from checkpoint file"
self.some_data= checkpoint["some_data"]
See module docs
| https://stackoverflow.com/questions/67019757/ |
how to exclude particular dependency package version from setup file in python | I've a small package that does not work on torch v1.8.0 but works fine on the new v1.8.1 version and on older versions such as v1.7.1, so I want to exclude the v1.8.0 version.
I could just set
install_requires=[
"torch>=1.8.1",
...
but the torch package is huge, and I also want mypackage to work on older versions of torch.
I've tried
install_requires=[
"torch>=1.8.1,!=1.8.0,<=1.7.1",
...
but when installing the package with pip install mypackage I get the following error:
ERROR: Could not find a version that satisfies the requirement torch!=1.8.0,<=1.7.1,>=1.8.1 (from mypackage) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1)
ERROR: No matching distribution found for torch!=1.8.0,<=1.7.1,>=1.8.1 (from mypackage)
How can I exclude the v1.8.0 version? Thank you.
| One can specify a minimum version and also exclude certain versions. Below, the minimum version is 1.0.0. This should be set to a reasonable value depending on the project.
torch>=1.0.0,!=1.8.0
The issue with torch>=1.8.1,!=1.8.0,<=1.7.1 is that it requests torch greater than or equal to 1.8.1 and less than or equal to 1.7.1. That is not possible, so pip cannot fulfill the request.
PEP 508 and PEP 440 are relevant here.
| https://stackoverflow.com/questions/67022465/ |
I don't find a way to use my wav file as dataset in PyTorch | Hello, I am new to PyTorch and I want to build a simple speech recognition model, but I don't want to use the built-in PyTorch datasets. I have some voice recordings as a dataset, but I can't find anything that helps me.
I want to use .wav files. I saw a tutorial, but it used a built-in PyTorch dataset.
import torch
from torch import nn, optim
import torch.nn.functional as F
import torchaudio
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from torchaudio.datasets import SPEECHCOMMANDS
import os
class SpeechSubset(SPEECHCOMMANDS):
def __init__(self, subset, str=None):
super().__init__("./", download=True)
def load_list(filename):
filepath = os.path.join(self._path, file.name)
with open(filepath) as fileob:
return [os.path.join(self._path, line.strip())]
if subset == "validation":
self._walker = load_list("validation_list.txt")
elif subset == "testing":
self._walker = load_list("testing_list.txt")
elif subset == "training":
excludes = load_list("validation_list.txt") + load_list("testing_list.txt")
excludes = set(excludes)
self._walker = [w for w in self._walker if w not in excludes]
train_set = SpeechSubset("training")
test_set = SpeechSubset("testing")
waveform, sample_rate, label, speaker_id, utterance_number = train_set[0]
Sorry, my English isn't too good.
EDIT
I'm using the SPEECHCOMMANDS dataset, but I want to use my own.
Thank you for reading.
| Since you are talking about speech recognition and PyTorch, I would recommend using a well-developed set of tools instead of doing speech-related training tasks from scratch.
A good repo on GitHub is ESPnet. It contains some quite recent work on text-to-speech and speech-to-text models as well as ready-to-use scripts to train on popular open-source datasets in different languages. It also includes trained models for you to use directly.
Back to your question: if you want to use PyTorch to train your own speech recognition model on your own dataset, I would recommend the ESPnet LibriSpeech ASR recipe. Although it uses .flac files, a few small modifications to the data preparation script and some parameter changes in the main entry script asr.sh may fit your needs.
Note that, in addition to knowledge of Python and torch, ESPnet requires you to be familiar with shell scripts as well. Their asr.sh script is quite long. This may not be an easy task for people who are more comfortable with minimal PyTorch code for one specific model. ESPnet is designed to accommodate many models and many datasets. It contains many preprocessing stages, e.g. speech feature extraction, length filtering, token preparation, language model training and so on, which are necessary for good speech recognition models.
If you insist on the repo that you found, you need to write custom Dataset and DataLoader classes. You can refer to the PyTorch data loading tutorial, but that link uses images as an example; if you want an audio example, look at some GitHub repos such as the deepspeech.pytorch data loader.
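If you do go the minimal-PyTorch route, a bare-bones Dataset for a folder of .wav files might look like this (a sketch; the folder layout with one subfolder per label, and any resampling or padding, are assumptions you will need to adapt):
import os
import torchaudio
from torch.utils.data import Dataset, DataLoader

class WavFolderDataset(Dataset):
    def __init__(self, root):
        # assumes a layout of root/<label_name>/<file>.wav
        self.items = []
        self.labels = sorted(os.listdir(root))
        for idx, label in enumerate(self.labels):
            folder = os.path.join(root, label)
            for fname in os.listdir(folder):
                if fname.endswith(".wav"):
                    self.items.append((os.path.join(folder, fname), idx))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        path, label = self.items[i]
        waveform, sample_rate = torchaudio.load(path)   # (channels, time)
        return waveform, sample_rate, label

# train_set = WavFolderDataset("./my_wavs")
# loader = DataLoader(train_set, batch_size=1, shuffle=True)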
| https://stackoverflow.com/questions/67022524/ |
Tensorboard - ValueError: too many values to unpack (expected 2) | I have tried to use tensorboard to visualize a model. I was following the pytorch.org tutorial. Here is the code for dataloader.
writer_train = SummaryWriter('runs/training')
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=config.train_batch_size, shuffle=True,
num_workers=config.num_workers, pin_memory=True)
images, labels = next(iter(train_loader))
writer_train.graph_model(light_net, images)
and I got this error in the iter line.
images, labels = next(iter(train_loader))
ValueError: too many values to unpack (expected 2)
I have debugged the code and found this.
| The error is likely caused by using the built-in next()/iter() functions instead of the .next() method of the DataLoader iterator.
next() and iter() are builtin methods in Python. See from the docs iter and next.
In the tutorial is shows the following
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
Here it uses the iterator's .next() method to unpack values into the 2 variables. This is not the same as your usage of next(iter(train_loader)). Do it the way shown and it should solve your problem.
| https://stackoverflow.com/questions/67024159/ |
How does a Pytorch neural network load dataset into GPU | When loading a dataset into the GPU for training, would a Pytorch NN load the entire dataset or just the batch?
I have a 33 GB dataset that fits comfortably in my normal RAM (64 GB), but I only have 16 GB of GPU RAM (T4). As long as PyTorch only loads one batch at a time into the GPU, that should work fine without any memory problems?
| You can load one batch of data at a time onto the GPU. You should use a DataLoader to fetch batches of data and also initialize a torch device instance to use the GPU.
You can check the following tutorial. It uses a DataLoader to get the data in batches and loads them onto the GPU using a torch device.
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
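In short, the usual pattern keeps the full dataset in CPU memory and only moves each batch to the GPU inside the loop, e.g. (a sketch with dummy data):
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dataset = TensorDataset(torch.randn(10000, 100), torch.randint(0, 2, (10000,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True, pin_memory=True)

for x, y in loader:
    x, y = x.to(device), y.to(device)   # only this batch occupies GPU memory
    # forward / backward pass here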
| https://stackoverflow.com/questions/67024926/ |
Combining multiple models trained in different parts of the dataset in PyTorch | In PyTorch, is it theoretically possible to 'merge' multiple models together into one model - effectively combining all the data learnt so far? The models are exactly identical; however, they are trained with different parts of the training data.
If so, would it be possible to split a dataset into equal parts and distribute training between many computers in a similar manner to folding@home? Would the new model be as good as if it was not distributed?
| I believe what you are asking for is a distributed implementation of a tensorflow/pytorch model. Similar to distributed databases, chunks of data can be used on separate clusters to train a single model in parallel on each cluster. The resultant model will be trained on all the separate chunks of data on different clusters, as though it has been trained on the complete data together.
While there are multiple tools that do the same, the one I can recommend is Analytics Zoo.
A unified Data Analytics and AI platform for distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray. You can easily apply AI models (e.g., TensorFlow, Keras, PyTorch, BigDL, OpenVINO, etc.) to distributed big data.
More details here.
| https://stackoverflow.com/questions/67030330/ |
Difference between torch.Size([64]) and (64,)? | I created a Pytorch dataset class to store 64 lines of text. The file only has text, no label so I artificially generated an index list y (just to follow along with a tutorial https://medium.com/swlh/how-to-use-pytorch-dataloaders-to-work-with-enormously-large-text-files-bbd672e955a0#4fe0). After I created the dataset object and wrapped it around a dataloader, y.shape returned torch.Size([64]) while the tutorial said it would return (64,). (torch version is 1.8.1. torchvision version is 0.9.1. Python is 3.7.10.)
Is there a difference between torch.Size([64]) and (64,)? Thank you.
##### IMPORT PACKAGES #####
import nltk
import string
from nltk import word_tokenize
from torch.utils.data import IterableDataset, DataLoader, Dataset
##### DEFINE CLASS #####
class CustomDataset(Dataset):
# A Pytorch Dataset class to store text
def __init__(self, filename):
'''
Input: filename (Each line is a string.)
Output: member variable X (list of unprocessed strings)
member variable y (index list of X)
'''
# Open file and store contents in list
with open(filename) as f:
lines = f.read().split('\n')
X, y = [], []
i = 0
for line in lines:
X.append(line)
y.append(i)
i +=1
# Store in member variables
self.X = X
self.y = y
def preprocess(self, text):
'''
Input: a string from X
Output: a preprocessed string
'''
text_pp = text.lower() # lower case
return text_pp
def __len__(self):
return len(self.y)
def __getitem__(self, index):
'''
Input: a number (within range of X's indices)
Output: string at specified index
'''
return self.preprocess(self.X[index]), self.y[index]
##### CREATE OBJECT #####
dataset = CustomDataset('micro.txt')
dataloader = DataLoader(dataset, batch_size = 64, num_workers = 2)
for X, y in dataloader:
print(y.shape) # torch.Size([64]) [Is it same as (64,)?])
| In a way they are the same thing. You are printing the shape of a one-dimensional tensor. The shape written in the tutorial is the shape format of a one-dimensional NumPy array.
If the shape of y is printed after converting it into a NumPy array, the mentioned format will appear. You can see both formats with the following code:
print((torch.rand(64)).shape) # torch.Size([64])
print((torch.rand(64)).numpy().shape) # (64,)
| https://stackoverflow.com/questions/67038376/ |
Dropout Layer with zero dropping rate | I'm having trouble understanding a certain aspect of dropout layers in PyTorch.
As stated in the Pytorch Documentation the method's signature is torch.nn.Dropout(p=0.5, inplace=False) where p is the dropout rate.
What does this layer do when choosing p=0? Does it change its input in any way?
| Dropout with p=0 is equivalent to the identity operation.
In fact, this is the exact behaviour of Dropout modules when set in eval mode:
During evaluation the module simply computes an identity function.
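A quick check confirms it (the comparison holds exactly, since with p=0 nothing is zeroed and the 1/(1-p) scaling factor is 1):
import torch
from torch import nn

x = torch.randn(4, 3)
drop = nn.Dropout(p=0.0)
print(torch.equal(drop(x), x))   # True: the input passes through unchanged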
| https://stackoverflow.com/questions/67039272/ |
TypeError: forward() takes 2 positional arguments but 3 were given in pytorch | I have the following error in my training loop and I don't really understand what the issue is. I am currently in the process of writing this code so stuff isn't final but I cannot figure out what this problem is.
I have tried googling the error and read some of the answers but still couldn't seem to understand the crux of the issue.
Dataset and Dataloader
(X and Y are already given to me, they are both [2000, 40, 1] tensors)
class TrainingDataset(data.Dataset):
def __init__(self, X, y):
self.X = X
self.y = y
def __len__(self):
return Nf
# returns corresponding input/output pairs
def __getitem__(self, t):
X = self.X[t]
y = self.y[t]
#print(X.shape, y.shape)
return X, y
# prints torch.Size([2000, 40, 1]) torch.Size([2000, 40, 1])
print(x.size(), y.size())
dataset = TrainingDataset(x,y)
batchSize = 20
dataIter = data.DataLoader(dataset, batchSize)
Model:
class Encoder(nn.Module):
def __init__(self, num_inputs = 40, num_outputs = 40):
super(Encoder, self).__init__()
self.num_inputs = num_inputs
self.num_hidden = num_hidden
self.num_outputs = num_outputs
self.layers = nn.Sequential(
nn.Linear(num_inputs, num_outputs),
nn.ReLU(),
nn.Linear(num_outputs, num_outputs),
nn.ReLU(),
nn.Linear(num_outputs, num_outputs)
)
def forward(self, x_c, y_c):
return self.layers(x_c, y_c)
Training Loop:
for epoch in range(epochs):
for batch in dataIter:
optimiser.zero_grad()
l = loss(encoder(x_c=batch[0], y_c=batch[1]), batch[1])
l.backward()
optimiser.step()
Error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-aa1c60616d82> in <module>()
6 for batch in dataIter:
7 optimiser.zero_grad()
----> 8 l = loss(encoder(x_c=batch[0], y_c=batch[1]), batch[1])
9 l.backward()
10 optimiser.step()
2 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
TypeError: forward() takes 2 positional arguments but 3 were given
Can anyone point me in the right direction? I have just started to learn and do pytorch so I am not good at any of this yet.
| def forward(self, x_c, y_c):
return self.layers(x_c, y_c)
Your error lies here: self.layers is an nn.Sequential, and its forward only takes one input apart from self, so you cannot pass both x_c and y_c to it.
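For illustration only (not necessarily what the original author intended), two ways to make the call valid — either pass a single tensor through the Sequential, or combine the two tensors first:
# Option 1: only the input tensor goes through the network
def forward(self, x_c, y_c):
    return self.layers(x_c)
# Option 2: concatenate both tensors into one before the Sequential
# (the first Linear's in_features must then match the combined size)
def forward(self, x_c, y_c):
    combined = torch.cat([x_c, y_c], dim=-1)
    return self.layers(combined)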
| https://stackoverflow.com/questions/67039926/ |
How to speed up the process of loading data while training my neural network? | I am now training my LSTM neural network with my GPU.
The problem is:
I have 23,000 .csv files in my training set, each with a shape of (40, 76). Each time I load a batch (64) of data, it takes about 1 s to load the data (reading 64 .csv files), but only about 0.08 s to compute the loss and update the parameters. When I checked the power and utilization of my GPU, I found it was running at low efficiency. Therefore, how can I improve the organization of my training data?
Here is my own dataset class (posted as an image in the original question).
| Combine the CSV files into a single file. Or, load the data from the CSVs and save it in some other form. This way, you only have to read from one file instead of 23,000. Reading files is relatively slow because it requires a system call (your program has to ask the operating system to read the file).
The easiest thing to do is to combine the CSVs and save them as one new CSV, then just use that CSV to load the data. I would bet most of the run time of your code comes from opening/closing the files.
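A minimal sketch of the idea, assuming all CSVs share the same columns and live in one directory (paths are placeholders):
import glob
import pandas as pd
# Read the 23,000 small CSVs once, concatenate them, and save a single file
files = glob.glob('training_data/*.csv')
combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
combined.to_csv('training_data_combined.csv', index=False)
# From then on the Dataset can load just this one file (or a binary format
# such as .npy/.parquet, which is even faster to read back)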
| https://stackoverflow.com/questions/67041659/ |
Dataset size is smaller than memory, What's wrong with my code? | The following is part of the code, epoch=300, each npz file is 2.73M, but the batch size of my dataloader gives 64, a total of 8 gpuss, so a mini batch should be 64×8×2.73M≈1.1G, my actual memory is 128G. Even if it becomes larger after decompression, it will not reach the size of 128G. The following figure link shows that all 128G of memory is occupied. How should I change my code?
class VimeoDataset(Dataset):
def __init__(self, dataset_name, batch_size=64):
self.batch_size = batch_size
self.path = '/data/train_sample/dataset/'
self.dataset_name = dataset_name
#self.load_data()
self.h = 256
self.w = 448
xx = np.arange(0, self.w).reshape(1,-1).repeat(self.h,0) #xx shape is(256,448)
yy = np.arange(0, self.h).reshape(-1,1).repeat(self.w,1) #yy shape is(448,256)
self.grid = np.stack((xx,yy),2).copy()
def __len__(self):
return len(self.meta_data)
def getimg(self, index):
f = np.load('/data/train_sample/dataset/'+ str(index) + '.npz')
if index < 8000:
train_data = f['i0i1gt']
flow_data = f['ft0ft1']
elif 8000 <= index < 10000:
val_data = f['i0i1gt']
else:
pass
if self.dataset_name == 'train':
meta_data = train_data
else:
meta_data = val_data
data = meta_data
img0 = data[0:3].transpose(1, 2, 0)
img1 = data[3:6].transpose(1, 2, 0)
gt = data[6:9].transpose(1, 2, 0)
flow_gt = flow_data.transpose(1, 2, 0)
return img0, gt, img1, flow_gt
dataset = VimeoDataset('train')
def __getitem__(self, index):
img0, gt, img1, flow_gt = self.getimg(index)
...
sampler = DistributedSampler(dataset)
train_data = DataLoader(dataset, batch_size=args.batch_size, num_workers=8, pin_memory=True,
drop_last=True, sampler=sampler)
system usage figure
| I have had a go at fixing your dataset class based on our comments above. Essentially you need to pass more variables into your class so that it can easily differentiate between your train and validation data. This is without loading all of your data into memory, although sometimes that is necessary (sequentially, not all at once) to calculate some data statistics and such.
Disclaimer: I took a guess at using glob to find your npz files and that you use flow_data in your validation set (missing in your code for validation data).
from glob import glob
class VimeoDataset(Dataset):
def __init__(self, npzs, batch_size=64,train_set=False):
self.batch_size = batch_size
self.train_set = train_set
self.h = 256
self.w = 448
xx = np.arange(0, self.w).reshape(1,-1).repeat(self.h,0) #xx shape is(256,448)
yy = np.arange(0, self.h).reshape(-1,1).repeat(self.w,1) #yy shape is(448,256)
self.grid = np.stack((xx,yy),2).copy()
self.npzs = npzs
def __len__(self):
return len(self.npzs)
def getimg(self, index):
f = np.load(self.npzs[index])
data = f['i0i1gt']
if self.train_set:
flow_data = f['ft0ft1']
else:
flow_data = np.zeros([self.h,self.w,4])
img0 = data[0:3].transpose(1, 2, 0)
img1 = data[3:6].transpose(1, 2, 0)
gt = data[6:9].transpose(1, 2, 0)
flow_gt = flow_data.transpose(1, 2, 0)
return img0, gt, img1, flow_gt
def __getitem__(self, index):
img0, gt, img1, flow_gt = self.getimg(index)
npzs = glob('/data/train_sample/dataset/*.npz')
train_val_split = 8000
train_dataset = VimeoDataset(npzs[:train_val_split],train_set = True)
val_dataset = VimeoDataset(npzs[train_val_split:])
| https://stackoverflow.com/questions/67051492/ |
Python ValueError: Inserting a list of values using df.loc | I have a dataframe df with paths of images. I am performing object detection and want to save the names of the detected objects in a new column (named objects) of the dataframe.
However the code works fine when only 1 object is detected, when there is a list of multiple objects the code gives an error.
ValueError: cannot set using a multi-index selection indexer with a different length than the value
I think there is some problem with my way of inserting values in the dataframe. I have actually created an empty column first and then using loop inserting vaues in the objects column.
Code
train_image_paths = 'data/train_images/' + df['image'] # train_images is the folder containing images and
# df['image'] contains all image paths
df['objects'] = '' # Creating an empty column for storing detected objects
for idx, img in enumerate(train_image_paths):
..
#object detection code # Performs object detection and
# stores the detected objects in a list named detected_objects
..
df.loc[idx, 'objects'] = detected_objects # Adding the detected objects to the dataframe
return df
Output of df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 34250 entries, 0 to 34249
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 image 34250 non-null object
dtypes: object(1)
memory usage: 267.7+ KB
| I think you should create a new list or Pandas Series and then assign it to the dataframe as a column.
train_image_paths = 'data/train_images/' + df['image'] # train_images is the folder containing images and
# df['image'] contains all image paths
objects = [] # Creating an empty column for storing detected objects
for idx, img in enumerate(train_image_paths):
..
#object detection code # Performs object detection and
# stores the detected objects in a list named detected_objects
..
objects.append(detected_objects) # Collect the detected objects for this image; assigned to the dataframe below
df['objects'] = objects
return df
Here is the references: https://www.geeksforgeeks.org/adding-new-column-to-existing-dataframe-in-pandas/
| https://stackoverflow.com/questions/67053463/ |
Computing gradients only for the front-end network in Pytorch | I have a very simple question.
Let's say that I have two networks to train (i.e., net1, net2).
The output of net1 will be fed into net2 while training.
In my case, I would like to only update net1:
optimizer=Optimizer(net1.parameters(), **kwargs)
loss=net2(net1(x))
loss.backward()
optimizer.step()
Although this will achieve what I'm aiming for, it takes up too much redundant memory since this will compute the gradients for net2 (causes OOM error).
Therefore I have tried out several attempts to solve this issue:
torch.no_grad:
z=net1(x)
with torch.no_grad():
loss=net2(z)
Didn't raise OOM but removed all the gradients including the ones from net1.
requires_grad=False:
net2.requires_grad=False
loss=net2(net1(x))
Raised OOM.
detach():
z=net1(x)
loss=net2(z).detach()
Didn't raise OOM but removed all the gradients including the ones from net1.
eval():
net2.eval()
loss=net2(net1(x))
Raised OOM.
Is there any way to compute the gradients only for the front-end network (net1) for memory efficiency?
Any suggestions would be appreciated.
| First let's try to understand why your methods don't work.
torch.no_grad: this context manager disables all gradient computation.
requires_grad=False: since net1 requires a gradient, the subsequent requires_grad=False is ignored.
detach(): if you detach at that point, the gradient computation already stops right there.
eval(): this just sets net2 to eval mode; it doesn't influence the gradient computation at all.
Depending on your architecture, the OOM error could already come from saving all the intermediate values in your computation graph (frequently an issue in CNNs), or it could come from having to store gradients (more common in fully connected networks).
What you are probably looking for is called "checkpointing", which you don't even have to implement on your own; you can use the checkpointing API of PyTorch (torch.utils.checkpoint), check out the documentation.
This basically lets you compute and process the gradient for net1 and net2 separately. Note that you do need all the gradient information to go through net2, otherwise you cannot compute the gradients w.r.t. net1!
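A rough sketch of how the checkpoint API could be applied here (an illustration only; whether it saves enough memory depends on the architecture):
from torch.utils.checkpoint import checkpoint
z = net1(x)
# net2's intermediate activations are not kept during the forward pass;
# they are recomputed during backward, trading compute for memory
loss = checkpoint(net2, z)
loss.backward()
optimizer.step()  # the optimizer only holds net1.parameters(), so only net1 is updated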
| https://stackoverflow.com/questions/67056438/ |
CUDA out of memory error when calling PyTorch train script in a loop from another python script | I'm trying to run the same pytorch training script with different arguments (argparse) from another python script. I'm using os.system() for the same.
Here's what I'm trying to do -
train.py = > the script which contains the train-loop.
runner.py => the file which runs the train script in a loop.
# runner.py
for hp in hyperparams:
os.system(f"CUDA_VISIBLE_DEVICES=1 python train.py --arg1 hp")
A few models get trained but I eventually end up getting a CUDA out of memory error. For instance, if there were 10 models, it will successfully train 8 and then give a CUDA error for 9 and 10.
My guess is that the GPU memory is not being cleared after every loop. What can I do to mitigate this?
| torch.cuda.empty_cache() along with deleting the models helped.
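A rough sketch of that cleanup at the end of each training run (build_model and train are placeholders for your own code, not real APIs):
import gc
import torch
def main(hp):
    model = build_model(hp)  # placeholder: construct the model for this hyperparameter setting
    train(model)             # placeholder: your training loop
    # release GPU memory before the process/loop moves on
    del model
    gc.collect()
    torch.cuda.empty_cache()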
| https://stackoverflow.com/questions/67056876/ |
mat1 dim 1 must match mat2 dim 0 -Pytorch | I have attempted to solve this error but it has been to no avail. My CNN model is below:
The shape of X_train and X_test are:
X_train shape: torch.Size([12271, 3, 100, 100]) | X_test shape: torch.Size([3068, 3, 100, 100])
class Net(nn.Module):
def __init__(self, num_classes=7):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size= 5,stride=1,padding=2)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5,stride=1,padding=2)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5,stride=1,padding=2)
self.pool = nn.MaxPool2d(2, 2)
self.drop = nn.Dropout2d(p=0.2)
self.fc1 = nn.Linear(9216, 1000)
self.fc2 = nn.Linear(1000, 500)
self.fc3 = nn.Linear(500, out_features=num_classes)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
x = x.reshape(x.size(0), -1)
x = F.dropout(self.drop(x), training=self.training)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
print(net)
The code for training the model on the data is shown below:
The required traceback is below:
count = 0
total_step = len(trainloader)
loss_list = []
acc_list = []
iteration_list = []
for epoch in range(20):
for i, data in enumerate(trainloader,0):
images, labels = data[0].to(device), data[1].to(device)
# Run the forward pass
outputs = net(images.float())
loss = criterion(outputs, labels)
loss_list.append(loss.item())
iteration_list.append(i)
# Backprop and perform Adam optimisation
optimizer.zero_grad()
loss.backward()
optimizer.step()
#count += 1
#if count % 100 == 0:
# Track the accuracy
total = labels.size(0)
_, predicted = torch.max(outputs.data, 1)
correct = (predicted == labels).sum().item()
acc_list.append(correct / total)
if (i + 1) % 1000 == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}, Accuracy: {:.2f}%'
.format(epoch + 1, num_epochs, i + 1, total_step, loss.item(),
(correct / total) * 100))
| Your final convolution's (conv3) output dimensions don't match the input dimension of first Linear layer. self.conv3's output shape will be BatchSize x 128 x 12 x 12 when resized:
x = x.reshape(x.size(0), -1)
print(x.shape) # torch.Size([BS, 18432])
So 18432 doesn't match 9216. You can do one of these changes and it will match the dimension and work:
# Reduce output channels here
self.conv3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=5,stride=1,padding=2)
# OR change input dimension of Linear layer
self.fc1 = nn.Linear(18432, 1000)
Just a side note, you don't need to use a functional dropout in forward pass, you have already created a callable dropout in init. You can just call that:
x = self.drop(x) # Instead of x = F.dropout(self.drop(x), training=self.training)
Training/testing mode for the dropout is handled internally when you switch between modes net.eval() or net.train(). So you don't need to worry about that.
| https://stackoverflow.com/questions/67065236/ |
Inference Discrepancy between PyTorch and DJL implementation via Kotlin | I have a PyTorch model trained on the 17flowers dataset, and converted via PyTorch's tracing to a JIT model. I have tested the inference output for the PyTorch model and the JIT converted model, and the results are equivalent there. This leads me to believe there is an issue with my implementation of the DJL framework.
There is an issue when I attempt to use DJL for inference with the converted JIT model (the conversion is necessary for DJL): I am not getting a 100% match with the PyTorch results, which I expected.
The Kotlin implementation for djl.ai is straightforward and essentially follows the instructions here.
I have a sanitized version of the Kotlin code below:
@Throws(IOException::class, ModelException::class, TranslateException::class)
internal fun main(args: Array<String>) {
val artifactId = "ai.djl.localmodelzoo:torchscript_17flowers"
val pipeline = Pipeline()
pipeline.add(CenterCrop(224, 224))
.add(Resize(224, 224))
.add(ToTensor())
.add(Normalize(floatArrayOf(0.485f, 0.456f, 0.406f), floatArrayOf(0.229f, 0.224f, 0.225f)))
val translator = ImageClassificationTranslator.builder()
.setPipeline(pipeline)
.optSynsetArtifactName("synset.txt")
.optApplySoftmax(true)
.build();
System.setProperty("ai.djl.repository.zoo.location","build/pytorch_models/torchscript_17flowers")
val criteria = Criteria.builder()
.setTypes(Image::class.java, Classifications::class.java) // defines input and output data type
.optTranslator(translator)
.optArtifactId(artifactId) // defines which model to load
.optProgress(ProgressBar())
.build()
val model = ModelZoo.loadModel(criteria)
// single image test
var img = ImageFactory.getInstance().fromUrl("https://image.jpg");
img.getWrappedImage()
val predictor: Predictor<Image, Classifications> = model.newPredictor()
val classifications: Classifications = predictor.predict(img)
val best = classifications.best<Classifications.Classification>()
}
My issue isn't getting things to run so much as it is getting the inference results to match. It is my understanding that they should match, and that Kotlin should work fine as DJL is meant to work for Java. I'm curious if there are any thoughts surrounding this encountered issue.
| The discrepancy most likely comes from image pre-processing:
pipeline.add(CenterCrop(224, 224))
.add(Resize(224, 224))
.add(ToTensor())
.add(Normalize(floatArrayOf(0.485f, 0.456f, 0.406f), floatArrayOf(0.229f, 0.224f, 0.225f)))
Many PyTorch CV models don't do a center crop.
In order to get the same result as Python, you must make sure you preprocess the images the same way the Python code does.
DJL image operations align with OpenCV operators; if you use Pillow in Python, you might see some difference in the result.
| https://stackoverflow.com/questions/67066218/ |
Is this a right way to train and test the model using Pytorch? | I am trying to implement a binary classification with Neural Networks with Pytorch to train a model as shown below:
model = nn.Sequential(
bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=196, out_features=300),
nn.ReLU(),
bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=300, out_features=196),
)
cross_entropy_loss = nn.CrossEntropyLoss()
klloss = bnn.BKLLoss(reduction='mean', last_layer_only=False)
klweight = 0.01
optimizer = optim.Adam(model.parameters(), lr=0.01)
Traing part:
#training
for step in range(200):
models = model(data_tensor)
cross_entropy = cross_entropy_loss(models, target_tensor)
#cross_entropy = 0
kl = klloss(model)
total_cost = cross_entropy + klweight*kl
optimizer.zero_grad()
total_cost.backward()
optimizer.step()
_, predicted = torch.max(models.data, 1)
final = target_tensor.size(0)
correct = (predicted == target_tensor).sum()
print('- Accuracy: %f %%' % (100 * float(correct) / final))
print('- CE : %2.2f, KL : %2.2f' % (cross_entropy.item(), kl.item()))
Question 1: Is it a right way to train a model? In many articles I found that there is a section to iterate over the DataLoader for training data, such as :
for i, data in enumerate(trainloader, 0):
# Get inputs
inputs, targets = data
Question 2: What is the use of this loop if I can directly give the data features(inputs) as data_tensor and data labels(target) as target_tensor? Because when iterating through data loader, it takes more time.
I am not sure how to test my model. I did as given below:
correct = 0
total = 0
with torch.no_grad():
for step in range(data_tensor_test.size(0)):
models = model(data_tensor_test)
_, predicted = torch.max(models.data, 1)
total += target_tensor_test.size(0)
correct += (predicted == target_tensor_test).sum().item()
print('Accuracy of the test data: %d %%' % (
100 * correct / total))
Question 3: Is it right way to test the model that I trained above?
| I'll attempt to answer your questions:
Question 1 & 2:
Is it a right way to train a model? In many articles I found that there is a section to iterate over the DataLoader for training data.
You should use a dataset with a dataloader when training in pytorch for several reasons:
It allows us to sample our data randomly
It doesn't preload data into memory, which is particularly useful for huge datasets
It operates in the background of code so fetches data parallel to training, thus saving time
It's very efficient at batching your data
What you're doing here is seemingly running your model on every element in your data at once. If you only have 32 points in your data this might be fine (although really not optimal because you have such limited data) but there is a balance to strike between running your optimizer and exposing your model to learning opportunities.
I would guess this takes longer because your model is very small and it probably takes longer to run the data fetching than it does when it's already preloaded in memory. It is hard to answer this without knowing the size of your dataset and the batch size of your processing.
Question 3: Is it right way to test the model that I trained above?
You need to set your model to its evaluation stage using model.eval() before you run any inference code. I also don't understand the point of your for loop since you just pass through the same data every time. I would generally run something like this:
correct = 0
total = 0
with torch.no_grad():
model.eval()
for step,(dat,lab) in enumerate(dataloader_test):
models = model(dat)
_, predicted = torch.max(models.data, 1)
total += dat.size(0)
correct += (predicted == lab).sum().item()
print('Accuracy of the test data: %d %%' % (
100 * correct / total))
| https://stackoverflow.com/questions/67066452/ |
PyTorch - How to use Avg 2d Pooling as a dataset transform? | In Pytorch, I have a dataset of 2D images (or alternatively, 1 channel images) and I'd like to apply average 2D pooling as a transform. How do I do this? The following does not work:
omniglot_dataset = torchvision.datasets.Omniglot(
root=data_dir,
download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.CenterCrop((80, 80)),
# torchvision.transforms.Resize((10, 10))
torch.nn.functional.avg_pool2d(kernel_size=3, strides=1),
])
)
| Transforms have to be callable objects. But torch.nn.functional.avg_pool2d doesn't return a callable object; it is just a function you call to process an input, which is why it is packaged under torch.nn.functional, where all functionals receive the input and parameters directly. You need to use the other version:
torch.nn.AvgPool2d(kernel_size=3, stride=1)
Which returns a callable object, that can be called to process a given input, for example:
pooler = torch.nn.AvgPool2d(kernel_size=3, stride=1)
output = pooler(input)
With this change here you can see different versions how you can use callable version:
import torchvision
import torch
import matplotlib.pyplot as plt
omniglotv1 = torchvision.datasets.Omniglot(
root='./dataset/',
download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.CenterCrop((80, 80))
])
)
x1, y = omniglotv1[0]
print(x1.size()) # torch.Size([1, 80, 80])
omniglotv2 = torchvision.datasets.Omniglot(
root='./dataset/',
download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.CenterCrop((80, 80)),
torch.nn.AvgPool2d(kernel_size=3, stride=1)
])
)
x2, y = omniglotv2[0]
print(x2.size()) # torch.Size([1, 78, 78])
pooler = torch.nn.AvgPool2d(kernel_size=3, stride=1)
omniglotv3 = torchvision.datasets.Omniglot(
root='./dataset/',
download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.CenterCrop((80, 80)),
pooler
])
)
x3, y = omniglotv3[0]
print(x3.size()) # torch.Size([1, 78, 78])
Here, I just added a short code for image printing to see how the transform looks:
x_img = x1.squeeze().cpu().numpy()
ave_img = x2.squeeze().cpu().numpy()
combined = np.zeros((158,80))
combined[0:80,0:80] = x_img
combined[80:,0:78] = ave_img
plt.imshow(combined)
plt.show()
| https://stackoverflow.com/questions/67068424/ |
In pytorch, how to train a model with two or more outputs? | output_1, output_2 = model(x)
loss = cross_entropy_loss(output_1, target_1)
loss.backward()
optimizer.step()
loss = cross_entropy_loss(output_2, target_2)
loss.backward()
optimizer.step()
However, when I run this piece of code, I got this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 4]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Then, I really wanna know what I am supposed to do to train a model with 2 or more outputs
| The entire premise on which PyTorch (and other DL frameworks) is founded is the backpropagation of the gradients of a scalar loss function.
In your case, you have a vector (of dim=2) loss function:
[cross_entropy_loss(output_1, target_1), cross_entropy_loss(output_2, target_2)]
You need to decide how to combine these two losses into a single scalar loss.
For instance:
weight = 0.5 # relative weight
loss = weight * cross_entropy_loss(output_1, target_1) + (1. - weight) * cross_entropy_loss(output_2, target_2)
# now loss is a scalar
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/67071168/ |
Where can I deploy an ML algorithm without a timeout? | I used Heroku to deploy an ML algorithm with PyTorch, but when I sent a request to the app to train the model for 100 epochs I got this
error
at=error code=H12 desc="Request Timeout" method=POST path="/train"
I searched about the request timeout and found that Heroku's request time limit is 30 seconds.
What's the solution to the timeout problem, and is there a platform for training models without a timeout?
| Model training can be an expensive and lengthy operation that can take from minutes to hours (depending on context and amount of data), so it is likely that you will have a request timeout on other providers too.
The solution is to make the model train a background task:
create a worker Dyno which schedules the model training (e.g. every day at 11:00)
OR
keep the web Dyno but make sure the /train request spawns a background thread (so the web request completes quickly and the training happens in the background)
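A minimal sketch of the second option, assuming a Flask app serves /train (the endpoint name and train_model function are placeholders; a proper job queue such as Celery or RQ would be more robust):
import threading
from flask import Flask
app = Flask(__name__)
def train_model(epochs):
    # long-running PyTorch training loop goes here (placeholder)
    ...
@app.route('/train', methods=['POST'])
def train_endpoint():
    # start training in the background and return immediately,
    # so the HTTP request finishes well within the 30 s limit
    threading.Thread(target=train_model, kwargs={'epochs': 100}, daemon=True).start()
    return {'status': 'training started'}, 202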
| https://stackoverflow.com/questions/67073615/ |
Torch: how to concatenate tensors of different sizes? | I have two tensors:
rc of size: torch.Size([128, 16, 1])
xt of size: torch.Size([128, 40, 1])
I would like to concatenate xt to rc along dimension 2 so that the final size of rc_xt is:
rc_xt = torch.Size([128, 40, 2])
In short, I want to 'increase' the size of rc's dimension 1 (16) to 40 -- through any means, even just repeating elements -- and then concatenate them both along dimension 2.
I have tried to google how to do this but I cannot get it working no matter what I do, I'm a little confused about how to go about doing this.
Thank you in advance.
| "Increasing" the size of rc can be done simply by padding.
For instance, you can pad it with zeros (nnf here stands for torch.nn.functional):
import torch.nn.functional as nnf
p_rc = nnf.pad(rc, (0, 0, 0, xt.shape[1]-rc.shape[1], 0, 0), 'constant', 0)
Once you have a padded version of rc you can concat:
rc_xt = torch.cat((p_rc, xt), dim=-1)
| https://stackoverflow.com/questions/67074115/ |
Variational Autoencoder (VAE) returns consistent output | I'm working on signal compression and reconstruction with a VAE. I've trained on 1600 fragments, but the values of the 1600 reconstructed signals are very similar. Moreover, results from the same batch are almost identical. Since I am using a VAE, the loss function of the model contains binary cross entropy (BCE) and the output of the trained model should lie between 0 and 1 (the input data is also normalized to 0~1).
VAE model(LSTM) :
class LSTM_VAE(nn.Module):
def __init__(self,
input_size=3000,
hidden=[1024, 512, 256, 128, 64],
latent_size=64,
num_layers=8,
bidirectional=True):
super().__init__()
self.input_size = input_size
self.hidden = hidden
self.latent_size = latent_size
self.num_layers = num_layers
self.bidirectional = bidirectional
self.actv = nn.LeakyReLU()
self.encode = nn.LSTM(input_size=self.input_size,
hidden_size=self.hidden[0],
num_layers=self.num_layers,
batch_first=True,
bidirectional=True)
self.bn_encode = nn.BatchNorm1d(1)
self.decode = nn.LSTM(input_size=self.latent_size,
hidden_size=self.hidden[2],
num_layers=self.num_layers,
batch_first=True,
bidirectional=True)
self.bn_decode = nn.BatchNorm1d(1)
self.fc1 = nn.Linear(self.hidden[0]*2, self.hidden[1])
self.fc2 = nn.Linear(self.hidden[1], self.hidden[2])
self.fc31 = nn.Linear(self.hidden[2], self.latent_size)
self.fc32 = nn.Linear(self.hidden[2], self.latent_size)
self.bn1 = nn.BatchNorm1d(1)
self.bn2 = nn.BatchNorm1d(1)
self.bn3 = nn.BatchNorm1d(1)
self.fc4 = nn.Linear(self.hidden[2]*2, self.hidden[1])
self.fc5 = nn.Linear(self.hidden[1], self.hidden[0])
self.fc6 = nn.Linear(self.hidden[0], self.input_size)
self.bn4 = nn.BatchNorm1d(1)
self.bn5 = nn.BatchNorm1d(1)
self.bn6 = nn.BatchNorm1d(1)
def encoder(self, x):
x = torch.unsqueeze(x, 1)
x, _ = self.encode(x)
x = self.actv(x)
x = self.fc1(x)
x = self.actv(x)
x = self.fc2(x)
x = self.actv(x)
mu = self.fc31(x)
log_var = self.fc32(x)
return mu, log_var
def decoder(self, z):
z, _ = self.decode(z)
z = self.bn_decode(z)
z = self.actv(z)
z = self.fc4(z)
z = self.bn4(z)
z = self.fc5(z)
z = self.bn5(z)
z = self.fc6(z)
z = self.bn6(z)
z = torch.sigmoid(z)
return torch.squeeze(z)
def sampling(self, mu, log_var):
std = torch.exp(0.5 * log_var)
eps = torch.randn_like(std)
return mu + eps * std
def forward(self, x):
mu, log_var = self.encoder(x.view(-1, self.input_size))
z = self.sampling(mu, log_var)
z = self.decoder(z)
return z, mu, log_var
Loss function and Train code :
def lossF(recon_x, x, mu, logvar, input_size):
BCE = F.binary_cross_entropy(recon_x, x.view(-1, input_size), reduction='sum')
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return BCE + KLD
optim = torch.optim.Adam(model.parameters(), lr=opt.lr)
for epoch in range(opt.epoch):
for batch_idx, data in enumerate(train_set):
data = data.to(device)
optim.zero_grad()
recon_x, mu, logvar = model(data)
loss = lossF(recon_x, data, mu, logvar, opt.input_size)
loss.backward()
train_loss += loss.item()
optim.step()
I built the code by refer the example codes of others and only changed very few parameters. I rebuilt the code, change the dataset, update parameters but nothing worked. If you have any suggestion to solve this problem, PLEASE let me know.
| I've found out the reason for the issue. It turns out that the decoder model produces output values in the range of 0.4 to 0.6 to stabilize the BCE loss. BCE loss can't be 0 even if the prediction matches the target exactly. Also, the loss value is non-linear with respect to the range of the output. The easiest way for the network to lower the loss is to output 0.5 everywhere, and that is what my model did.
To avoid this behaviour, I standardized my data and added some outlier data to mitigate the BCE issue. VAE is such a complicated network, for sure.
| https://stackoverflow.com/questions/67075117/ |
Having problems with linear pytorch model initialization | I am trying to figure out what is wrong with my initialization of the neural network model. I have already set a pdb trace to see that the defining neural network part is the source of error. Also, I get yellow marks on the defining neural network code because the module is expected to be returned but if I return the module, it causes a recursion error. It is a linear model that has to have an input dimension of the batch size * 81 and an output dimension of the batch size * 1. I am relatively new at pytorch and defining deep neural networks so this may not be a good question. My syntax may also be very bad. Any help is appreciated. The code below is the defining of the neural network and training of the pytorch model.
def get_nnet_model(module_list=nn.ModuleList(), input_dim: int = 8100, layer_dim: int = 100) -> nn.Module:
""" Get the neural network model
@return: neural network model
"""
device = torch.device('cpu')
module_list.append(nn.Linear(input_dim, layer_dim))
module_list[-1].weight.data.normal_(0, 0.1)
module_list[-1].bias.data.zero_()
def train_nnet(nnet: nn.Module, states_nnet: np.ndarray, outputs: np.ndarray, batch_size: int = 100, num_itrs: int = 10000, train_itr: int = 10000, device: torch.device, lr=0.01, lr_d=1):
nnet.train()
criterion = nn.MSELoss()
optimizer = optim.Adam(nnet.parameters(), lr=lr)
while train_itr < num_itrs:
optimizer.zero_grad()
lr_itr = lr + (lr_d ** train_itr)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_itr
data = pickle.load(open("data/data.pkl", "rb"))
nnet_inputs_np, nnet_targets_np = data
nnet_inputs_np = nnet_inputs_np.astype(np.float32)
nnet_inputs = torch.tensor(nnet_inputs_np, device=device)
nnet_targets = torch.tensor(nnet_targets_np, device=device)
nnet_inputs = nnet_inputs.float()
nnet_outputs = nnet(nnet_inputs)
loss = criterion(nnet_outputs, nnet_targets)
loss.backward()
optimizer.step()
| Based on your comment, somewhere else in your code you have something like:
nnet = get_nnet_model(...)
However, get_nnet_model(...) isn't returning anything. Change the def get_nnet_model to:
def get_nnet_model(module_list=nn.ModuleList(), input_dim: int = 8100, layer_dim: int = 100) -> nn.Module:
""" Get the neural network model
@return: neural network model
"""
device = torch.device('cpu')
module_list.append(nn.Linear(input_dim, layer_dim))
module_list[-1].weight.data.normal_(0, 0.1)
module_list[-1].bias.data.zero_()
return module_list # add this one
| https://stackoverflow.com/questions/67079513/ |
what is wrong with the following implementation of Conv1d? | I am trying to implement a Conv1d layer with Batch Normalization but I keep getting the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-32-ef6e122ea50c> in <module>()
----> 1 test()
2 for epoch in range(1, n_epochs + 1):
3 train(epoch)
4 test()
7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
258 _single(0), self.dilation, self.groups)
259 return F.conv1d(input, weight, bias, self.stride,
--> 260 self.padding, self.dilation, self.groups)
261
262 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [25, 40, 5], but got 2-dimensional input of size [32, 40] instead
The data is passed on in batches of 32 using DataLoader class and it has 40 features and 10 labels. Here is my model:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
#self.flatten=nn.Flatten()
self.net_stack=nn.Sequential(
nn.Conv1d(in_channels=40, out_channels=25, kernel_size=5, stride=2), #applying batch norm
nn.ReLU(),
nn.BatchNorm1d(25, affine=True),
nn.Conv1d(in_channels=25, out_channels=20, kernel_size=5, stride=2), #applying batch norm
nn.ReLU(),
nn.BatchNorm1d(20, affine=True),
nn.Linear(20, 10),
nn.Softmax(dim=1))
def forward(self,x):
#x=torch.reshape(x, (1,-1))
result=self.net_stack(x)
return result
I have tried suggestions given in other answers, like unsqueezing the input tensor, but none of the models in those questions use Conv1d with BatchNorm1d, so I am not able to narrow down which layer is causing the error. I have just started using PyTorch and was able to implement a simple linear NN model, but I am facing this error while using a convolutional NN on the same data.
| You need to add a batch dimension to your input (and also change the number of input channels).
A conv1d layer accepts inputs of shape [B, C, L], where B is the batch size, C is the number of channels and L is the width/length of your input. Also, your conv1d layer expects 40 input channels:
nn.Conv1d(in_channels=40, out_channels=25, kernel_size=5, stride=2)
hence, your input tensor x must have shape [B, 40, L] while now it has shape [32, 40].
Try:
def forward(self,x):
result=self.net_stack(x[None])
return result
you will get another error complaining about dimensions mismatch, suggesting you need to change the number of input channels to 40.
| https://stackoverflow.com/questions/67089289/ |
How to resolve the error "mat1 and mat2 shapes cannot be multiplied" for the following CNN architecture? | I am trying to implement a Conv1d model with Batch Normalization but I am getting the error :
RuntimeError Traceback (most recent call last)
<ipython-input-117-ef6e122ea50c> in <module>()
----> 1 test()
2 for epoch in range(1, n_epochs + 1):
3 train(epoch)
4 test()
7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x140 and 100x10)
I use a batch size of 32, with the number of features of the data being 40. I've been trying to calculate where the 32 x 140 is coming from but I wasn't able to do that. Here is the architecture for the CNN I am trying to use:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
#self.flatten=nn.Flatten()
self.net_stack=nn.Sequential(
nn.Conv1d(in_channels=1, out_channels=25, kernel_size=5, stride=2), #applying batch norm
nn.ReLU(),
nn.BatchNorm1d(25, affine=True),
nn.Conv1d(in_channels=25, out_channels=20, kernel_size=5, stride=2), #applying batch norm
nn.ReLU(),
nn.BatchNorm1d(20, affine=True),
nn.Flatten(),
nn.Linear(20*5, 10),
nn.Softmax(dim=1))
def forward(self,x):
# result=self.net_stack(x[None])
result=self.net_stack(x[:, None, :])
return result
| This fully-connnected should change from:
nn.Linear(20*5, 10)
to:
nn.Linear(20*7, 10)
Why?
If your input data length is 40, then (B is the batch size):
Output after first conv (K=25): B x 25 x 18
Output after second conv (K=20): B x 20 x 7
Output after nn.Flatten(): B x 140, i.e., if B=32, then 32x140
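A quick way to double-check those numbers is to feed a dummy batch through the convolutional part and print the shape after flattening (a small sketch, assuming input length 40):
import torch
import torch.nn as nn
conv_part = nn.Sequential(
    nn.Conv1d(1, 25, kernel_size=5, stride=2), nn.ReLU(), nn.BatchNorm1d(25),
    nn.Conv1d(25, 20, kernel_size=5, stride=2), nn.ReLU(), nn.BatchNorm1d(20),
    nn.Flatten())
dummy = torch.randn(32, 1, 40)   # B x C x L
print(conv_part(dummy).shape)    # torch.Size([32, 140]) -> hence nn.Linear(140, 10)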
| https://stackoverflow.com/questions/67094261/ |
[python]Colab crashes for unknown reason while training gan(pytorch) | I tried to train a GAN on some monkey pics, but it crashes Colab for an unknown reason when I try to train it.
I am using 1370 128*128 monkey images.
I have no idea where the issue might be, please respond
btw the runtime is GPU, so the problem isn't linked to that
from torch import optim
import torchvision
from torchvision import transforms
import torch, torch.nn as nn
batch_size = 4
generic_transform = transforms.Compose([
transforms.ToTensor(),
transforms.ToPILImage(),
transforms.Resize((128,128)),
transforms.ToTensor(),
transforms.Normalize((0., 0., 0.), (6, 6, 6)),
transforms.Grayscale(),
])
trainset=torchvision.datasets.ImageFolder(root='drive/My Drive/monkeys', transform=generic_transform)
trainloader = torch.utils.data.DataLoader(trainset,batch_size=batch_size, shuffle=True)
def _init_weights(m):
if isinstance(m, nn.Conv2d):
nn.init.normal_(m.weight, 0.0, 0.02)
def gen_noise(noise_shape, n_samples, device='cuda:0'):
return torch.randn(noise_shape, n_samples).to(device)
class Discriminator(nn.Module):
#convolutional discriminator
def __init__(self) -> None:
super(Discriminator, self).__init__()
self.hidden_dim = 64
self.relu = nn.ReLU(inplace=False)
self.sigmoid = nn.Sigmoid()
self.conv_1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=32, stride = 2)
self.maxPooling_1 = nn.MaxPool2d(kernel_size=3)
self.conv_2 = nn.Conv2d(in_channels=32, out_channels=16, kernel_size=8, stride = 2)
self.maxPooling_2 = nn.MaxPool2d(kernel_size=2)
self.linear_layer = nn.Linear(in_features=self.hidden_dim, out_features=1)
def forward(self, x) -> float:
self.x = x
self.x = self.relu(self.conv_1(self.x))
self.x = self.maxPooling_1(self.x)
self.x = self.relu(self.conv_2(self.x))
self.x = self.maxPooling_2(self.x)
print(self.x.shape)
self.x = self.x.view(self.x.shape[0],
self.x.shape[1]*self.x.shape[2]*self.x.shape[3])
self.x = self.sigmoid(self.linear_layer(self.x))
return self.x
class Generator(nn.Module):
#fully connected generator
def __init__(self, hidden_dim, output_dim, z_dim) -> None:
super(Generator, self).__init__()
self.relu = nn.ReLU(inplace=False)
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.z_dim = z_dim
self.linear_layer_1 = nn.Linear(in_features=self.z_dim, out_features=self.hidden_dim)
self.linear_layer_2 = nn.Linear(in_features=self.hidden_dim, out_features=self.hidden_dim*2)
self.linear_layer_3 = nn.Linear(in_features=self.hidden_dim*2, out_features=self.output_dim)
def forward(self, x) -> torch.tensor:
self.x = x
self.x = self.relu(self.linear_layer_1(self.x))
self.x = self.relu(self.linear_layer_2(self.x))
self.x = self.relu(self.linear_layer_3(self.x))
return self.x
class GAN():
def __init__(self, hidden_dim, output_dim, z_dim, criterion, device="cuda:0") -> None:
if device == "cuda:0":
assert torch.cuda.is_available(), "apply gpu"
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.device = device
self.criterion = criterion
self.z_dim = torch.tensor(z_dim).long()
self.discriminator = Discriminator().to(self.device)
self.d_opt = optim.Adam(self.discriminator.parameters(), lr=0.0001)
self.generator = Generator(hidden_dim=self.hidden_dim, output_dim=self.output_dim, z_dim=self.z_dim).to(self.device)
self.g_opt = optim.Adam(self.generator.parameters(), lr=0.0001)
self.generator = self.generator.apply(_init_weights)
self.discriminator = self.discriminator.apply(_init_weights)
class GAN_Trainer():
def __init__(self, z_dim, model, device="cuda:0") -> None:
self.device = device
self.gan = model
self.z_dim = z_dim
self._d_mean_train_loss = None
self._g_mean_train_loss = None
def train(self, batch) -> None:
print(1)
self.batch = batch.to(self.device)
self.noise = gen_noise(self.batch.shape[0],self.z_dim).to(self.device)
self.gan.g_opt.zero_grad()
self._g_output = self.gan.generator.forward(self.noise.to(self.device))
self._g_output = self._g_output.view(self.batch.shape[0],
1,
torch.sqrt(torch.tensor(self.gan.output_dim)).int(),
torch.sqrt(torch.tensor(self.gan.output_dim)).int())
print(self._g_output.shape)
self._d_for_g_pred = self.gan.discriminator.forward(self._g_output)
self._g_loss = self.gan.criterion(self._d_for_g_pred, torch.zeros_like(self._d_for_g_pred))
self._g_loss.backward()
self.gan.g_opt.step()
self.gan.d_opt.zero_grad()
self._d_fake_pred = self.gan.discriminator.forward(self._g_output)
self._d_fake_loss = self.gan.criterion(self._g_output, torch.zeros_like(self._g_output))
self._d_real_pred = self.gan.discriminator.forward(self.batch)
self._d_real_loss = self.gan.criterion(self.batch, torch.ones_like(self.batch))
self._d_mean_loss = torch.mean(torch.cat((self._d_fake_loss, self._d_real_loss),0))
self._d_mean_loss.backward(retain_graph=True)
self.gan.d_opt.step()
self._d_mean_train_loss = self._d_mean_train_loss + self._d_mean_loss.detach()
self._g_mean_train_loss = self._g_mean_train_loss + self._g_loss.detach()
torch.cuda.empty_cache()
gan = GAN(hidden_dim=1200,
output_dim=16384,
z_dim = 1000,
criterion=nn.BCEWithLogitsLoss())
trainer = GAN_Trainer(model=gan, z_dim=1000)
#here is where it crashes
from tqdm import trange
torch.cuda.empty_cache()
image = trainset[0][0].to("cuda:0").view(1,
trainset[0][0].shape[0],
trainset[0][0].shape[1],
trainset[0][0].shape[2])
trainer.train(batch=image)
please help! I am starting to lose my sanity already, thanks! <3
| I've debugged your code a bit, and found that the crash is happening at line:
self._d_mean_loss = torch.mean(torch.cat((self._d_fake_loss, self._d_real_loss),0))
I have tried to find out the reason it is crashing; it looks like your operations are not done properly and some in-place operations are changing the graph and causing PyTorch to malfunction.
There are some major issues in the code that you need to change in terms of GAN logic. Here:
self._d_real_loss = self.gan.criterion(self.batch, torch.ones_like(self.batch))
Real loss should be based on the output of discriminator self._d_real_pred. In general you should have three backward calls:
You feed the discriminator with real batch and expect it to output 'real' class.
Generate fake images with generator. Feed it to discriminator and expect it to output 'fake' class.
Lastly for generator, you feed the discriminator fake images again and expect it to fail, optimize for 'fake' class so that the generator can learn outputting better looking fakes.
I would highly recommend this tutorial: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html . It is a great tutorial for understanding how GANs are trained.
Lastly, I would remove all the self.... attribute storing in the forward and train functions. Storing intermediate tensors as members and modifying them can change the graph, cause problems with in-place operations, and wipe out gradients.
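As a rough sketch of that ordering (names simplified from the classes in the question; reshaping the generator output to B x 1 x H x W before the discriminator, as in the question, is omitted):
real = batch.to(device)
noise = gen_noise(real.shape[0], z_dim)
# 1) discriminator on real images -> target 'real' (ones)
# 2) discriminator on detached fakes -> target 'fake' (zeros)
d_opt.zero_grad()
pred_real = discriminator(real)
loss_real = criterion(pred_real, torch.ones_like(pred_real))
fake = generator(noise)
pred_fake = discriminator(fake.detach())
loss_fake = criterion(pred_fake, torch.zeros_like(pred_fake))
(loss_real + loss_fake).backward()
d_opt.step()
# 3) generator tries to fool the discriminator -> target 'real' (ones)
g_opt.zero_grad()
pred_fake = discriminator(fake)
g_loss = criterion(pred_fake, torch.ones_like(pred_fake))
g_loss.backward()
g_opt.step()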
| https://stackoverflow.com/questions/67096033/ |
What output_padding does in nn.ConvTranspose2d? | How does output_padding work in nn.ConvTranspose2d? Please help me to understand this.
nn.ConvTranspose2d(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1)
| According to documentation here: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html when applying Conv2D operation with Stride > 1 you can get same output dimensions with different inputs. For example, 7x7 and 8x8 inputs would both return 3x3 output with Stride=2:
import torch
conv_inp1 = torch.rand(1,1,7,7)
conv_inp2 = torch.rand(1,1,8,8)
conv1 = torch.nn.Conv2d(1, 1, kernel_size = 3, stride = 2)
out1 = conv1(conv_inp1)
out2 = conv1(conv_inp2)
print(out1.shape) # torch.Size([1, 1, 3, 3])
print(out2.shape) # torch.Size([1, 1, 3, 3])
And when applying the transpose convolution, it is ambiguous that which output shape to return, 7x7 or 8x8 for stride=2 transpose convolution. Output padding helps pytorch to determine 7x7 or 8x8 output with output_padding parameter. Note that, it doesn't pad zeros or anything to output, it is just a way to determine the output shape and apply transpose convolution accordingly.
conv_t1 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2)
conv_t2 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, output_padding=1)
transposed1 = conv_t1(out1)
transposed2 = conv_t2(out2)
print(transposed1.shape) # torch.Size([1, 1, 7, 7])
print(transposed2.shape) # torch.Size([1, 1, 8, 8])
| https://stackoverflow.com/questions/67096544/ |
Updating packages in conda | I have a problem with updating packages in conda. The list of my installed packages is:
#
# Name Version Build Channel
_anaconda_depends 2020.07 py38_0
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
alabaster 0.7.12 pyhd3eb1b0_0
anaconda custom py38_1
anaconda-client 1.7.2 py38_0
anaconda-navigator 1.10.0 py38_0
anaconda-project 0.9.1 pyhd3eb1b0_1
argh 0.26.2 py38_0
argon2-cffi 20.1.0 py38h2bbff1b_1
ase 3.21.0 pypi_0 pypi
asn1crypto 1.4.0 py_0
astroid 2.5 py38haa95532_1
astropy 4.2.1 py38h2bbff1b_1
async_generator 1.10 pyhd3eb1b0_0
atomicwrites 1.4.0 py_0
attrs 20.3.0 pyhd3eb1b0_0
autopep8 1.5.6 pyhd3eb1b0_0
babel 2.9.0 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
backports 1.0 pyhd3eb1b0_2
backports.functools_lru_cache 1.6.3 pyhd3eb1b0_0
backports.shutil_get_terminal_size 1.0.0 pyhd3eb1b0_3
backports.tempfile 1.0 pyhd3eb1b0_1
backports.weakref 1.0.post1 py_1
bcrypt 3.2.0 py38he774522_0
beautifulsoup4 4.9.3 pyha847dfd_0
bitarray 1.9.1 py38h2bbff1b_1
bkcharts 0.2 py38_0
blas 1.0 mkl
bleach 3.3.0 pyhd3eb1b0_0
blosc 1.21.0 h19a0ad4_0
bokeh 2.3.1 py38haa95532_0
boto 2.49.0 py38_0
bottleneck 1.3.2 py38h2a96729_1
brotli 1.0.9 ha925a31_2
brotlipy 0.7.0 py38h2bbff1b_1003
bzip2 1.0.8 he774522_0
ca-certificates 2021.1.19 haa95532_1
certifi 2020.12.5 py38haa95532_0
cffi 1.14.5 py38hcd4344a_0
chardet 4.0.0 py38haa95532_1003
charls 2.2.0 h6c2663c_0
click 7.1.2 pyhd3eb1b0_0
cloudpickle 1.6.0 py_0
clyent 1.2.2 py38_1
colorama 0.4.4 pyhd3eb1b0_0
comtypes 1.1.9 py38haa95532_1002
conda 4.10.0 py38haa95532_0
conda-build 3.20.5 py38_1
conda-env 2.6.0 1
conda-package-handling 1.7.3 py38h8cc25b3_1
conda-verify 3.4.2 py_1
console_shortcut 0.1.1 4
contextlib2 0.6.0.post1 py_0
cpuonly 1.0 0 pytorch
cryptography 3.4.7 py38h71e12ea_0
curl 7.71.1 h2a8f88b_1
cycler 0.10.0 py38_0
cython 0.29.22 py38hd77b12b_0
cytoolz 0.11.0 py38he774522_0
dask 2021.4.0 pyhd3eb1b0_0
dask-core 2021.4.0 pyhd3eb1b0_0
decorator 5.0.6 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
diff-match-patch 20200713 py_0
distributed 2021.4.0 py38haa95532_0
docutils 0.17 py38haa95532_1
entrypoints 0.3 py38_0
et_xmlfile 1.0.1 py_1001
fastcache 1.1.0 py38he774522_0
filelock 3.0.12 pyhd3eb1b0_1
flake8 3.9.0 pyhd3eb1b0_0
flask 1.1.2 pyhd3eb1b0_0
freetype 2.10.4 hd328e21_0
fsspec 0.9.0 pyhd3eb1b0_0
future 0.18.2 py38_1
get_terminal_size 1.0.0 h38e98db_0
gevent 21.1.2 py38h2bbff1b_1
giflib 5.2.1 h62dcd97_0
glob2 0.7 pyhd3eb1b0_0
gmpy2 2.0.8 py38h7edee0f_3
googledrivedownloader 0.4 pypi_0 pypi
greenlet 1.0.0 py38hd77b12b_2
h5py 2.10.0 py38h5e291fa_0
hdf5 1.10.4 h7ebc959_0
heapdict 1.0.1 py_0
html5lib 1.1 py_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pyhd3eb1b0_0
imagecodecs 2021.3.31 py38h5da4933_0
imageio 2.9.0 pyhd3eb1b0_0
imagesize 1.2.0 pyhd3eb1b0_0
importlib-metadata 3.10.0 py38haa95532_0
importlib_metadata 3.10.0 hd3eb1b0_0
iniconfig 1.1.1 pyhd3eb1b0_0
intel-openmp 2020.2 254
intervaltree 3.1.0 py_0
ipykernel 5.3.4 py38h5ca1d4c_0
ipynb 0.5.1 pypi_0 pypi
ipython 7.22.0 py38hd4e2768_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
ipywidgets 7.6.3 pyhd3eb1b0_1
isodate 0.6.0 pypi_0 pypi
isort 5.8.0 pyhd3eb1b0_0
itsdangerous 1.1.0 pyhd3eb1b0_0
jdcal 1.4.1 py_0
jedi 0.17.1 py38_0
jinja2 2.11.3 pyhd3eb1b0_0
joblib 1.0.1 pyhd3eb1b0_0
jovian 0.2.28 pypi_0 pypi
jpeg 9b hb83a4c4_2
json5 0.9.5 py_0
jsonschema 3.2.0 py_2
jupyter 1.0.0 py38_7
jupyter_client 6.1.12 pyhd3eb1b0_0
jupyter_console 6.4.0 pyhd3eb1b0_0
jupyter_core 4.7.1 py38haa95532_0
jupyterlab 2.2.6 py_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 1.2.0 py_0
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1
keyring 22.3.0 py38haa95532_0
kiwisolver 1.3.1 py38hd77b12b_0
krb5 1.18.2 hc04afaa_0
lazy-object-proxy 1.6.0 py38h2bbff1b_0
lcms2 2.12 h83e58a3_0
lerc 2.2.1 hd77b12b_0
libaec 1.0.4 h33f27b4_1
libarchive 3.4.2 h5e25573_0
libcurl 7.71.1 h2a8f88b_1
libdeflate 1.7 h2bbff1b_5
libiconv 1.15 h1df5818_7
liblief 0.10.1 ha925a31_0
libllvm9 9.0.1 h21ff451_0
libpng 1.6.37 h2a8f88b_0
libsodium 1.0.18 h62dcd97_0
libspatialindex 1.9.3 h6c2663c_0
libssh2 1.9.0 h7a1dbc1_1
libtiff 4.2.0 hd0e1b90_0
libuv 1.40.0 he774522_0
libxml2 2.9.10 hb89e7f3_3
libxslt 1.1.34 he774522_0
libzopfli 1.0.3 ha925a31_0
llvmlite 0.36.0 py38h34b8924_4
locket 0.2.1 py38haa95532_1
lxml 4.6.3 py38h9b66d53_0
lz4-c 1.9.3 h2bbff1b_0
lzo 2.10 he774522_2
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
markupsafe 1.1.1 py38he774522_0
matplotlib 3.3.4 py38haa95532_0
matplotlib-base 3.3.4 py38h49ac443_0
mccabe 0.6.1 py38_1
menuinst 1.4.16 py38he774522_1
mistune 0.8.4 py38he774522_1000
mkl 2020.2 256
mkl-service 2.3.0 py38h196d8e1_0
mkl_fft 1.3.0 py38h46781fe_0
mkl_random 1.1.1 py38h47e9c7a_0
mock 4.0.3 pyhd3eb1b0_0
more-itertools 8.7.0 pyhd3eb1b0_0
mpc 1.1.0 h7edee0f_1
mpfr 4.0.2 h62dcd97_1
mpir 3.0.0 hec2e145_1
mpmath 1.2.1 py38haa95532_0
msgpack-python 1.0.2 py38h59b6b97_1
msys2-conda-epoch 20160418 1
multipledispatch 0.6.0 py38_0
navigator-updater 0.2.1 py38_0
nbclient 0.5.3 pyhd3eb1b0_0
nbconvert 6.0.7 py38_0
nbformat 5.1.3 pyhd3eb1b0_0
nest-asyncio 1.5.1 pyhd3eb1b0_0
networkx 2.5 py_0
ninja 1.10.2 h6d14046_1
nltk 3.6.1 pyhd3eb1b0_0
nose 1.3.7 pyhd3eb1b0_1006
notebook 6.3.0 py38haa95532_0
numba 0.53.1 py38hf11a4ad_0
numexpr 2.7.3 py38hcbcaa1e_0
numpy 1.19.2 py38hadc3359_0
numpy-base 1.19.2 py38ha3acd2a_0
numpydoc 1.1.0 pyhd3eb1b0_1
olefile 0.46 py_0
opencv-python 4.5.1.48 pypi_0 pypi
openjpeg 2.3.0 h5ec785f_1
openpyxl 3.0.7 pyhd3eb1b0_0
openssl 1.1.1k h2bbff1b_0
packaging 20.9 pyhd3eb1b0_0
pandas 1.2.3 py38hf11a4ad_0
pandoc 2.12 haa95532_0
pandocfilters 1.4.3 py38haa95532_1
paramiko 2.7.2 py_0
parso 0.7.0 py_0
partd 1.2.0 pyhd3eb1b0_0
path 15.1.2 py38haa95532_0
path.py 12.5.0 0
pathlib2 2.3.5 py38haa95532_2
pathtools 0.1.2 py_1
patsy 0.5.1 py38_0
pep8 1.7.1 py38_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 8.2.0 py38h4fa10fc_0
pip 20.2.4 py38haa95532_0
pkginfo 1.7.0 py38haa95532_0
pluggy 0.13.1 py38haa95532_0
ply 3.11 py38_0
powershell_shortcut 0.0.1 3
prometheus_client 0.10.1 pyhd3eb1b0_0
prompt-toolkit 3.0.17 pyh06a4308_0
prompt_toolkit 3.0.17 hd3eb1b0_0
protobuf 3.14.0 pypi_0 pypi
psutil 5.8.0 py38h2bbff1b_1
ptyprocess 0.7.0 pyhd3eb1b0_2
py 1.10.0 pyhd3eb1b0_0
py-lief 0.10.1 py38ha925a31_0
pycodestyle 2.6.0 pyhd3eb1b0_0
pycosat 0.6.3 py38h2bbff1b_0
pycparser 2.20 py_2
pycurl 7.43.0.6 py38h7a1dbc1_0
pydocstyle 6.0.0 pyhd3eb1b0_0
pyerfa 1.7.2 py38h2bbff1b_0
pyflakes 2.2.0 pyhd3eb1b0_0
pygments 2.8.1 pyhd3eb1b0_0
pylint 2.7.4 py38haa95532_1
pynacl 1.4.0 py38h62dcd97_1
pyodbc 4.0.30 py38ha925a31_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pyparsing 2.4.7 pyhd3eb1b0_0
pyqt 5.9.2 py38ha925a31_4
pyreadline 2.1 py38_1
pyrsistent 0.17.3 py38he774522_0
pysocks 1.7.1 py38haa95532_0
pytables 3.6.1 py38ha5be198_0
pytest 6.2.3 py38haa95532_2
python 3.8.5 h5fd99cc_1
python-dateutil 2.8.1 pyhd3eb1b0_0
python-jsonrpc-server 0.4.0 py_0
python-language-server 0.35.1 py_0
python-libarchive-c 2.9 pyhd3eb1b0_1
python-louvain 0.15 pypi_0 pypi
pytorch 1.7.1 py3.8_cpu_0 [cpuonly] pytorch
pytz 2021.1 pyhd3eb1b0_0
pywavelets 1.1.1 py38he774522_2
pywin32 227 py38he774522_1
pywin32-ctypes 0.2.0 py38_1000
pywinpty 0.5.7 py38_0
pyyaml 5.4.1 py38h2bbff1b_1
pyzmq 20.0.0 py38hd77b12b_1
qdarkstyle 3.0.2 pyhd3eb1b0_0
qt 5.9.7 vc14h73c81de_0
qtawesome 1.0.2 pyhd3eb1b0_0
qtconsole 5.0.3 pyhd3eb1b0_0
qtpy 1.9.0 py_0
rdflib 5.0.0 pypi_0 pypi
regex 2021.4.4 py38h2bbff1b_0
requests 2.25.1 pyhd3eb1b0_0
rope 0.18.0 py_0
rtree 0.9.4 py38h21ff451_1
ruamel_yaml 0.15.100 py38h2bbff1b_0
scikit-image 0.18.1 py38hf11a4ad_0
scikit-learn 0.24.1 py38hf11a4ad_0
scipy 1.6.2 py38h14eb087_0
seaborn 0.11.1 pyhd3eb1b0_0
send2trash 1.5.0 pyhd3eb1b0_1
setuptools 52.0.0 py38haa95532_0
simplegeneric 0.8.1 py38_2
singledispatch 3.6.1 pyhd3eb1b0_1001
sip 4.19.13 py38ha925a31_0
six 1.15.0 py38haa95532_0
snappy 1.1.8 h33f27b4_0
snowballstemmer 2.1.0 pyhd3eb1b0_0
sortedcollections 2.1.0 pyhd3eb1b0_0
sortedcontainers 2.3.0 pyhd3eb1b0_0
soupsieve 2.2.1 pyhd3eb1b0_0
sphinx 3.2.1 py_0
sphinxcontrib 1.0 py38_1
sphinxcontrib-applehelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-devhelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-htmlhelp 1.0.3 pyhd3eb1b0_0
sphinxcontrib-jsmath 1.0.1 pyhd3eb1b0_0
sphinxcontrib-qthelp 1.0.3 pyhd3eb1b0_0
sphinxcontrib-serializinghtml 1.1.4 pyhd3eb1b0_0
sphinxcontrib-websupport 1.2.4 py_0
spyder 4.1.5 py38_0
spyder-kernels 1.9.4 py38_0
sqlalchemy 1.4.7 py38h2bbff1b_0
sqlite 3.35.4 h2bbff1b_0
statsmodels 0.12.2 py38h2bbff1b_0
sympy 1.7.1 py38haa95532_0
tbb 2020.3 h74a9793_0
tblib 1.7.0 py_0
tensorboardx 2.1 pypi_0 pypi
terminado 0.9.4 py38haa95532_0
testpath 0.4.4 pyhd3eb1b0_0
threadpoolctl 2.1.0 pyh5ca1d4c_0
tifffile 2021.3.31 pyhd3eb1b0_1
tk 8.6.10 he774522_0
toml 0.10.2 pyhd3eb1b0_0
toolz 0.11.1 pyhd3eb1b0_0
torch-cluster 1.5.8 pypi_0 pypi
torch-geometric 1.6.3 pypi_0 pypi
torch-scatter 2.0.5 pypi_0 pypi
torch-sparse 0.6.8 pypi_0 pypi
torch-spline-conv 1.2.0 pypi_0 pypi
torchaudio 0.7.2 py38 pytorch
torchvision 0.8.2 py38_cpu [cpuonly] pytorch
tornado 6.1 py38h2bbff1b_0
tqdm 4.59.0 pyhd3eb1b0_1
traitlets 5.0.5 pyhd3eb1b0_0
typing_extensions 3.7.4.3 pyha847dfd_0
ujson 4.0.2 py38hd77b12b_0
unicodecsv 0.14.1 py38_0
urllib3 1.26.4 pyhd3eb1b0_0
uuid 1.30 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
watchdog 1.0.2 py38haa95532_1
wcwidth 0.2.5 py_0
webencodings 0.5.1 py38_1
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.36.2 pyhd3eb1b0_0
widgetsnbextension 3.5.1 py38_0
win_inet_pton 1.1.0 py38haa95532_0
win_unicode_console 0.5 py38_0
wincertstore 0.2 py38_0
winpty 0.4.3 4
wrapt 1.12.1 py38he774522_1
xlrd 2.0.1 pyhd3eb1b0_0
xlsxwriter 1.3.8 pyhd3eb1b0_0
xlwings 0.23.0 py38haa95532_0
xlwt 1.3.0 py38_0
xmltodict 0.12.0 py_0
xz 5.2.5 h62dcd97_0
yaml 0.2.5 he774522_0
yapf 0.31.0 pyhd3eb1b0_0
zeromq 4.3.3 ha925a31_3
zfp 0.5.5 hd77b12b_6
zict 2.0.0 pyhd3eb1b0_0
zipp 3.4.1 pyhd3eb1b0_0
zlib 1.2.11 h62dcd97_4
zope 1.0 py38_1
zope.event 4.5.0 py38_0
zope.interface 5.3.0 py38h2bbff1b_0
zstd 1.4.5 h04227a9_0
and I want to update
torch-cluster 1.5.8 pypi_0 pypi
torch-geometric 1.6.3 pypi_0 pypi
torch-scatter 2.0.5 pypi_0 pypi
torch-sparse 0.6.8 pypi_0 pypi
torch-spline-conv 1.2.0 pypi_0 pypi
However, when I enter the command
conda update torch-cluster
I get
PackageNotInstalledError: Package is not installed in prefix.
prefix: C:\Users\Rostyslav\anaconda3
package name: torch-cluster
I may be wrong about it, but I think the reason for the error is that the packages I want to update are located in a different channel, namely pypi. But even if that is true, I do not understand how to reach this channel.
Can anybody explain?
Thanks a lot!
| Channel pypi means that the package was installed with pip.
You may need to upgrade it with pip as well
pip install torch-cluster --upgrade
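To cover all five pypi-channel packages from your list at once, something like:
pip install --upgrade torch-cluster torch-scatter torch-sparse torch-spline-conv torch-geometric
Note that torch-scatter and torch-sparse ship wheels built against specific PyTorch versions, so you may need to follow the PyTorch Geometric installation instructions (with their extra wheel index URL) instead of upgrading straight from PyPI.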
| https://stackoverflow.com/questions/67097308/ |
sampler argument in DataLoader of Pytorch | While using PyTorch's DataLoader utility, what is the purpose of RandomIdentitySampler passed as the sampler? And RandomIdentitySampler has an argument instances. Does instances depend upon the number of workers? If there are 4 workers, should there be 4 instances as well?
Following is the chunk of code:
c_dataloaders = DataLoader(Preprocessor(cluster_dataset.train_set,
root=cluster_dataset.images_dir,
transform=train_transformer),
batch_size=args.batch_size_stage2,
num_workers=args.workers,
sampler=RandomIdentitySampler(cluster_dataset.train_set,
args.batch_size_stage2,
args.instances)
| This sampler is not part of the PyTorch or any other official lib (torchvision, torchtext, etc.). Anyway, there is a RandomIdentitySampler in the torchreid from KaiyangZhou. Assuming this is the case:
While using Pytorch's DataLoader utility, in sampler what is the purpose of RandomIdentitySampler?
As you can see in the DataLoader documentation: the sampler "defines the strategy to draw samples from the dataset". More specifically, based on RandomIdentitySampler documentation, it "randomly samples N identities each with K instances".
And in RandomIdentitySampler there is an argument instances. Does instances depends upon number of workers?
Based on the previous answer, you can note that instances does not depend on the number of workers. It simply sets the number of instances of each identity that will be drawn from the dataset for each batch.
If there are 4 workers then should there be 4 instances as well?
Not necessarily. The two values are unrelated; the only constraint is that the number of instances should not be larger than the batch size (the sampler draws batch_size // instances identities per batch).
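For example (a sketch with illustrative numbers, not taken from the original code): with a batch size of 32 and instances = 4, each batch contains 8 distinct identities with 4 images each, no matter how many workers the DataLoader uses:
batch_size = 32      # args.batch_size_stage2
instances = 4        # args.instances
identities_per_batch = batch_size // instances   # 8 identities, 4 images each, per batch
num_workers = 4      # only controls loading parallelism; unrelated to instances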
| https://stackoverflow.com/questions/67098245/ |
Difference between two implementations of Seq layers - pytorch | I have implemented a model in the traditional way, and the code looks like this:
def __init__(self):
super(enhance_net_nopool, self).__init__()
number_f = 32
self.relu = nn.ReLU(inplace=True)
self.e_conv1 = nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True)
self.e_conv2 = nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True)
self.e_conv3 = nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True)
def forward(self, x):
x1 = self.relu(self.e_conv1(x))
x2 = self.relu(self.e_conv2(x1))
x3 = self.relu(self.e_conv3(x2))
I need to know whether it is possible to rewrite that code using
seq_layers = nn.Sequential(*layers)
i.e. using this *layers unpacking.
If it's implemented like this:
self.conv_block = [
relu = nn.ReLU(inplace=True)
conv = nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True)
]
are they the same?
| Your code can be rewritten as:
layers = [nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True), nn.ReLU(inplace=True),
nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True), nn.ReLU(inplace=True),
nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True), nn.ReLU(inplace=True)
]
seq_layers = nn.Sequential(*layers)
or
def get_conv(number_f):
return nn.Sequential(
nn.Conv2d(number_f, number_f, 3, 1, 1, bias=True),
nn.ReLU(inplace=True)
)
layers = [get_conv(number_f) for _ in range(3)]
seq_layers = nn.Sequential(*layers)
print(seq_layers)
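As a quick sanity check that either Sequential version behaves like the original forward (a sketch; it assumes number_f = 32 as in the question, and the input size is an arbitrary choice):
x = torch.rand(1, number_f, 64, 64)   # N x C x H x W
out = seq_layers(x)
print(out.shape)                      # torch.Size([1, 32, 64, 64]); kernel 3, stride 1, padding 1 keeps H and W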
| https://stackoverflow.com/questions/67103482/ |
RuntimeError: expected scalar type Double but found Float | I'm a newbie in PyTorch and I got the following error from my cnn layer: "RuntimeError: expected scalar type Double but found Float". I converted each element into .astype(np.double) but the error message remains. Then after converting Tensor tried to use .double() and again the error message remains.
Here is my code for a better understanding:
import torch.nn as nn
class CNN(nn.Module):
# Contructor
def __init__(self, shape):
super(CNN, self).__init__()
self.cnn1 = nn.Conv1d(in_channels=shape, out_channels=32, kernel_size=3)
self.act1 = torch.nn.ReLU()
# Prediction
def forward(self, x):
x = self.cnn1(x)
x = self.act1(x)
return x
X_train_reshaped = np.zeros([X_train.shape[0],int(X_train.shape[1]/depth),depth])
for i in range(X_train.shape[0]):
for j in range(X_train.shape[1]):
X_train_reshaped[i][int(j/3)][j%3] = X_train[i][j].astype(np.double)
X_train = torch.tensor(X_train_reshaped)
y_train = torch.tensor(y_train)
# Dataset w/o any tranformations
train_dataset_normal = CustomTensorDataset(tensors=(X_train, y_train), transform=None)
train_loader = torch.utils.data.DataLoader(train_dataset_normal, shuffle=True, batch_size=16)
model = CNN(X_train.shape[1]).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
# Train the model
#how to implement batch_size??
for epoch in range(epochno):
#for i, (dataX, labels) in enumerate(X_train_reshaped,y_train):
for i, (dataX, labels) in enumerate(train_loader):
dataX = dataX.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(dataX)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
And following is the error I received:
RuntimeError Traceback (most recent call last)
<ipython-input-39-d99b62b3a231> in <module>
14
15 # Forward pass
---> 16 outputs = model(dataX.double())
17 loss = criterion(outputs, labels)
18
~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
<ipython-input-27-7510ac2f1f42> in forward(self, x)
22 # Prediction
23 def forward(self, x):
---> 24 x = self.cnn1(x)
25 x = self.act1(x)
~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\torch\nn\modules\conv.py in forward(self, input)
261
262 def forward(self, input: Tensor) -> Tensor:
--> 263 return self._conv_forward(input, self.weight, self.bias)
264
265
~\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
257 weight, bias, self.stride,
258 _single(0), self.dilation, self.groups)
--> 259 return F.conv1d(input, weight, bias, self.stride,
260 self.padding, self.dilation, self.groups)
261
RuntimeError: expected scalar type Double but found Float
| I don't know if it's me or PyTorch, but the error message is essentially asking to convert the input into float: the numpy array defaults to float64, so the tensor becomes a DoubleTensor, while the model's weights are float32. Therefore, in the forward pass I resolved the problem by converting dataX to float as follows: outputs = model(dataX.float())
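A few equivalent ways to fix the dtype mismatch, sketched from the code in the question (pick one):
# Option 1: build the numpy array as float32 in the first place
X_train_reshaped = np.zeros([X_train.shape[0], int(X_train.shape[1]/depth), depth], dtype=np.float32)
# Option 2: cast each batch inside the training loop
outputs = model(dataX.float())
# Option 3: keep float64 data and convert the model instead (usually slower)
model = CNN(X_train.shape[1]).double().to(device)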
| https://stackoverflow.com/questions/67104073/ |
Where I can find an intuitive explanation of PyTorch's Tensor.unfold() being used to get image patches? | Recently I came across some code that extracted (sliding-window style) a number of square patches from an RGB image (or set of them) of shape N x B x H x W. They did this as follows:
patch_width = 3
patches = image.permute(0,2,3,1).unfold(dim = 1, size = patch_width, stride = patch_width) \
.unfold(dim = 2, size = patch_width, stride = patch_width)
I understand that the unfold() method "returns all all slices of size size from self tensor in the dimension dim," from reading the documentation, but try as I might, I just can't get a good intuition for why stacking two .unfold() calls produces square patches. I get what happens when you use unfold() once on a tensor. I don't get what happens when you call it twice successively along two different dimensions.
I've seen this approach used multiple times, always without a good explanation as to why it works (1, 2), and it's driving me bonkers. Why are the spatial dimensions H and W permuted to be dims 1 and 2, while the channel dim is set to 3? Why does unfolding the same way on dim 1, then on dim 2 result in square patch_width by patch_width patches?
Any insight would be hugely appreciated, even if it's just a link to an article I missed. I've been Googling for well over an hour now and have met very little success. Thank you!
[1]PyTorch forum post
[2]Another forum post doing the same thing
| I suppose there are two distinct parts to your question: the first is why you need to permute, and the second is how two unfolds combined produce square image slices.
The first point is rather technical: unfold places the produced slices in a new dimension of the tensor, 'inserted at the end of the shape'. permute here is needed to place it near the channel (depth) dimension, so they can be merged in a natural way using view later.
Now the second part. Consider a deck of imaginary cards, where each card is a picture channel. Take a card and cut it into vertical slices, then stack the slices on top of each other. Take a second card and do the same, placing the result on top of the first one, and do this with all cards. Now repeat the procedure, this time cutting slices horizontally. In the end you have a much thinner but taller deck, where the former cards have become sub-decks of patches.
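A small runnable example makes this concrete (the numbers are illustrative; patch_width = 2 on a 4x4 image):
import torch

x = torch.arange(2 * 3 * 4 * 4).reshape(2, 3, 4, 4)         # N x C x H x W
p = 2
patches = x.permute(0, 2, 3, 1).unfold(1, p, p).unfold(2, p, p)
print(patches.shape)     # torch.Size([2, 2, 2, 3, 2, 2]) = N x H/p x W/p x C x p x p
print(patches[0, 0, 0])  # the top-left p x p patch of the first image, one block per channel
Each unfold call slices along one spatial dimension and appends the slice length as a trailing dimension, which is why the two trailing dimensions end up forming p x p square patches.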
| https://stackoverflow.com/questions/67104280/ |
Still error trying to run libtorch helloworld pod install on apple silicon m1 | When trying to create the cocoapod for ios pytorch helloworld on my mac mini m1 (big sur 11.2.1)
with installed miniconda3, homebrew 3.1.1, ruby 3.0.1.p64, gem 3.2.15, pod 1.10.1
and PATH set to
/opt/homebrew/lib/ruby/gems/3.0.0/bin:/opt/homebrew/opt/ruby/bin:$PATH
according to pod install giving error related to ruby gems and libffi
running pod install dies with a segmentation fault, see the following ruby crash file
So maybe someone can give me an idea how to get ios 14.4 and m1 big sur 11 working together?
Process: ruby [80736]
Path: /opt/homebrew/*/ruby
Identifier: ruby
Version: 0
Code Type: ARM-64 (Native)
Parent Process: bash [70856]
Responsible: Terminal [70854]
User ID: 474179448
Date/Time: 2021-04-15 16:58:57.213 +0200
OS Version: macOS 11.2.1 (20D74)
Report Version: 12
Anonymous UUID: 5893C804-2FF8-FBB1-5889-A883C6E0135A
Sleep/Wake UUID: 0044F30E-9B3E-4B4D-B6D2-B508B271D6B1
Time Awake Since Boot: 85000 seconds
Time Since Wake: 2300 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGABRT)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000001
Exception Note: EXC_CORPSE_NOTIFY
VM Regions Near 0x1:
-->
__TEXT 10234c000-102350000 [ 16K] r-x/r-x SM=COW /opt/homebrew/*
Application Specific Information:
abort() called
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x0000000185430cec __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000185461c24 pthread_kill + 292
2 libsystem_c.dylib 0x00000001853a9864 abort + 104
3 libruby.3.0.dylib 0x00000001026923a0 die + 12
4 libruby.3.0.dylib 0x00000001026923f8 rb_bug_for_fatal_signal + 88
5 libruby.3.0.dylib 0x0000000102774490 sigsegv + 96
6 libsystem_platform.dylib 0x00000001854a9c44 _sigtramp + 56
7 ??? 0xffff80019719d8cc 0 + 18446603343051217100
8 libcurl.4.dylib 0x000000019719d8cc curl_easy_getinfo + 40
9 libffi.dylib 0x0000000191ef4050 ffi_call_SYSV + 80
10 libffi.dylib 0x0000000191efc9d8 ffi_call_int + 944
11 ffi_c.bundle 0x00000001042394cc rbffi_CallFunction + 260 (Call.c:400)
12 ffi_c.bundle 0x000000010423d250 attached_method_invoke + 44 (MethodHandle.c:174)
13 libffi.dylib 0x0000000191efce10 ffi_closure_SYSV_inner + 800
14 libffi.dylib 0x0000000191ef41e8 ffi_closure_SYSV + 56
15 libruby.3.0.dylib 0x00000001027e0b84 vm_call_cfunc_with_frame + 228
16 libruby.3.0.dylib 0x00000001027db078 vm_sendish + 1116
17 libruby.3.0.dylib 0x00000001027c7f70 vm_exec_core + 6948
18 libruby.3.0.dylib 0x00000001027d72c8 rb_vm_exec + 1652
19 libruby.3.0.dylib 0x00000001027e3df8 invoke_block_from_c_bh + 616
20 libruby.3.0.dylib 0x00000001027d1bf4 rb_yield + 180
21 libruby.3.0.dylib 0x00000001026359a0 rb_ary_each + 84
22 libruby.3.0.dylib 0x00000001027e0b84 vm_call_cfunc_with_frame + 228
23 libruby.3.0.dylib 0x00000001027db078 vm_sendish + 1116
24 libruby.3.0.dylib 0x00000001027c7f10 vm_exec_core + 6852
25 libruby.3.0.dylib 0x00000001027d72c8 rb_vm_exec + 1652
26 libruby.3.0.dylib 0x00000001027e3df8 invoke_block_from_c_bh + 616
27 libruby.3.0.dylib 0x00000001027d1bf4 rb_yield + 180
28 libruby.3.0.dylib 0x00000001026359a0 rb_ary_each + 84
29 libruby.3.0.dylib 0x00000001027e0b84 vm_call_cfunc_with_frame + 228
30 libruby.3.0.dylib 0x00000001027db078 vm_sendish + 1116
31 libruby.3.0.dylib 0x00000001027c7f10 vm_exec_core + 6852
32 libruby.3.0.dylib 0x00000001027d72c8 rb_vm_exec + 1652
33 libruby.3.0.dylib 0x00000001027e3df8 invoke_block_from_c_bh + 616
34 libruby.3.0.dylib 0x00000001027d1bf4 rb_yield + 180
35 libruby.3.0.dylib 0x0000000102639ca8 rb_ary_collect + 168
36 libruby.3.0.dylib 0x00000001027e0b84 vm_call_cfunc_with_frame + 228
37 libruby.3.0.dylib 0x00000001027db078 vm_sendish + 1116
38 libruby.3.0.dylib 0x00000001027c7f10 vm_exec_core + 6852
39 libruby.3.0.dylib 0x00000001027d72c8 rb_vm_exec + 1652
40 libruby.3.0.dylib 0x00000001026dd2cc load_iseq_eval + 200
41 libruby.3.0.dylib 0x00000001026db528 rb_load_internal + 56
42 libruby.3.0.dylib 0x00000001026dc850 rb_f_load + 180
43 libruby.3.0.dylib 0x00000001027e0b84 vm_call_cfunc_with_frame + 228
44 libruby.3.0.dylib 0x00000001027db078 vm_sendish + 1116
45 libruby.3.0.dylib 0x00000001027c7f70 vm_exec_core + 6948
46 libruby.3.0.dylib 0x00000001027d72c8 rb_vm_exec + 1652
47 libruby.3.0.dylib 0x0000000102698f8c rb_ec_exec_node + 268
48 libruby.3.0.dylib 0x0000000102698e20 ruby_run_node + 96
49 ruby 0x000000010234fec0 main + 92
50 libdyld.dylib 0x000000018547df34 start + 4
Thread 1:
0 libsystem_kernel.dylib 0x0000000185430e04 poll + 8
1 libruby.3.0.dylib 0x00000001027a9530 timer_pthread_fn + 124
2 libsystem_pthread.dylib 0x000000018546206c _pthread_start + 320
3 libsystem_pthread.dylib 0x000000018545cda0 thread_start + 8
Thread 2:: FFI Callback Dispatcher
0 libsystem_kernel.dylib 0x000000018542c488 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x0000000185462568 _pthread_cond_wait + 1192
2 ffi_c.bundle 0x000000010423b80c async_cb_wait + 88 (Function.c:605)
3 libruby.3.0.dylib 0x00000001027a1c48 rb_nogvl + 280
4 ffi_c.bundle 0x000000010423afa0 async_cb_event + 140 (Function.c:545)
5 libruby.3.0.dylib 0x00000001027a86b8 thread_start_func_2 + 1104
6 libruby.3.0.dylib 0x00000001027a8174 thread_start_func_1 + 152
7 libsystem_pthread.dylib 0x000000018546206c _pthread_start + 320
8 libsystem_pthread.dylib 0x000000018545cda0 thread_start + 8
Thread 3:: open3.rb:403
0 libsystem_kernel.dylib 0x000000018542c488 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x0000000185462568 _pthread_cond_wait + 1192
2 libruby.3.0.dylib 0x000000010279f7b0 native_cond_timedwait + 120
3 libruby.3.0.dylib 0x00000001027a81e0 thread_start_func_1 + 260
4 libsystem_pthread.dylib 0x000000018546206c _pthread_start + 320
5 libsystem_pthread.dylib 0x000000018545cda0 thread_start + 8
Thread 0 crashed with ARM Thread State (64-bit):
...
Binary Images:
0x10234c000 - 0x10234ffff +ruby (0) <F0E1ACB6-DADC-3709-A7C9-B20CA91730D9> /opt/homebrew/*/ruby
0x1023ec000 - 0x1023effff +encdb.bundle (0) <0512F22A-C22F-3133-B7DF-C0630FBB7291> /opt/homebrew/*/encdb.bundle
0x102400000 - 0x102403fff +transdb.bundle (0) <0749C9D7-559E-3089-8034-3F04F71589F5> /opt/homebrew/*/transdb.bundle
0x102414000 - 0x102417fff +monitor.bundle (0) <46C2FFE4-EE5B-3C46-98D9-E6FE36564805> /opt/homebrew/*/monitor.bundle
0x102428000 - 0x10242ffff +pathname.bundle (0) <52304C1D-6B1E-3CF4-927D-65D050FB8E7F> /opt/homebrew/*/pathname.bundle
0x102440000 - 0x102443fff +escape.bundle (0) <80C6BFF1-ECCE-3004-96A4-3BC8A836F380> /opt/homebrew/*/escape.bundle
0x102454000 - 0x102483fff +date_core.bundle (0) <5BEB2E2A-A969-3022-91F0-91F2A901D94B> /opt/homebrew/*/date_core.bundle
0x1024a0000 - 0x1024affff +bigdecimal.bundle (0) <1292C18E-90FF-3113-A794-70FC5E7A61E6> /opt/homebrew/*/bigdecimal.bundle
0x1024c0000 - 0x1024c7fff +stringio.bundle (0) <1A3F1181-6690-3E8C-BD5F-27F8A4A2F38D> /opt/homebrew/*/stringio.bundle
0x1024d8000 - 0x1024dbfff +etc.bundle (0) <1E536D8B-A4A9-30F3-84E5-ADF950666B25> /opt/homebrew/*/etc.bundle
0x1024ec000 - 0x1024effff +digest.bundle (0) <F596BA9F-A8BF-3D95-B5EB-6C617E4F0927> /opt/homebrew/*/digest.bundle
0x102500000 - 0x102503fff +strscan.bundle (0) <9548F035-0164-3C07-A923-BE0DC07C4229> /opt/homebrew/*/strscan.bundle
0x102514000 - 0x102517fff +wait.bundle (0) <AFC11C5F-E127-30EF-86C1-9DAD1B8027DA> /opt/homebrew/*/wait.bundle
0x10252c000 - 0x1025a7fff dyld (832.7.3) <4AB185B3-DC20-3C03-A193-67C0E6C589D7> /usr/lib/dyld
0x102630000 - 0x10289bfff +libruby.3.0.dylib (0) <F87CC4ED-8475-3AEE-AD7B-2AF63B4EA539> /opt/homebrew/*/libruby.3.0.dylib
0x103e1c000 - 0x103e3bfff +socket.bundle (0) <278655A9-7512-3AFC-AA68-6A824BA8C2FA> /opt/homebrew/*/socket.bundle
0x103e54000 - 0x103e5ffff +zlib.bundle (0) <34A11F55-C4B0-3781-84CD-7847CC56B29E> /opt/homebrew/*/zlib.bundle
0x103e74000 - 0x103e77fff +windows_31j.bundle (0) <610116E4-A042-3F9E-8CEF-0A0B4A608E4F> /opt/homebrew/*/windows_31j.bundle
0x103e88000 - 0x103e8ffff +parser.bundle (0) <E08CB57E-FA0E-368E-8C2A-988494BD82A6> /opt/homebrew/*/parser.bundle
0x103ea0000 - 0x103ea7fff +generator.bundle (0) <EA3FFFF6-797E-3172-9750-EFA602F9DA15> /opt/homebrew/*/generator.bundle
0x103eb8000 - 0x103ef7fff +openssl.bundle (0) <D2ACCAC6-35B7-365D-A590-DC0A5683A383> /opt/homebrew/*/openssl.bundle
0x103f24000 - 0x103f6ffff +libssl.1.1.dylib (0) <387D1B0F-EE36-3D80-96B0-21BCDA8B4E62> /opt/homebrew/*/libssl.1.1.dylib
0x103fa4000 - 0x104123fff +libcrypto.1.1.dylib (0) <6703626D-9366-31E0-8697-C74FFA806779> /opt/homebrew/*/libcrypto.1.1.dylib
0x1041c4000 - 0x1041c7fff +nonblock.bundle (0) <4285445A-338C-3B5F-9501-8132C66F1367> /opt/homebrew/*/nonblock.bundle
0x1041d8000 - 0x1041dffff +psych.bundle (0) <6E68A386-AEB1-38A8-9AA3-4CB61FFA0439> /opt/homebrew/*/psych.bundle
0x1041f0000 - 0x104207fff +libyaml-0.2.dylib (0) <86720355-4A4B-32E0-8BD0-18AB73A96463> /opt/homebrew/*/libyaml-0.2.dylib
0x104218000 - 0x10421bfff +sha2.bundle (0) <F641AD10-D5E2-3D7D-BBFA-56F8294B442B> /opt/homebrew/*/sha2.bundle
0x10422c000 - 0x104247fff +ffi_c.bundle (0) <6ADCB3BE-9056-33F0-AD30-10DAB38241E2> /opt/homebrew/*/ffi_c.bundle
0x104270000 - 0x104277fff libffi-trampolines.dylib (27) <ADFD2779-8444-3C1E-8AF1-F5BDCFDDA05B> /usr/lib/libffi-trampolines.dylib
0x104280000 - 0x104283fff +sha1.bundle (0) <B63454E5-2319-352D-864B-8CEFF2C94880> /opt/homebrew/*/sha1.bundle
0x1851be000 - 0x1851bffff libsystem_blocks.dylib (78) <9B6D4883-03E9-3785-851E-EA79FA64ADC1> /usr/lib/system/libsystem_blocks.dylib
0x1851c0000 - 0x1851f7fff libxpc.dylib (2038.80.3) <BD0DFD42-0DC3-3F3D-9C04-5A2B3D93794D> /usr/lib/system/libxpc.dylib
0x1851f8000 - 0x18520ffff libsystem_trace.dylib (1277.80.2) <4A466196-D2DD-367B-80AB-988F281EC3B8> /usr/lib/system/libsystem_trace.dylib
0x185210000 - 0x185284fff libcorecrypto.dylib (1000.80.5) <9BD8FED7-2A36-3602-A5A7-0CA87C03FB84> /usr/lib/system/libcorecrypto.dylib
0x185285000 - 0x1852b0fff libsystem_malloc.dylib (317.40.8) <21120432-52C1-34E4-BF01-623722FA3A41> /usr/lib/system/libsystem_malloc.dylib
0x1852b1000 - 0x1852f4fff libdispatch.dylib (1271.40.12) <F5BFBD55-EF70-3659-854D-9061325EB26D> /usr/lib/system/libdispatch.dylib
0x1852f5000 - 0x18532dfff libobjc.A.dylib (818.2) <B03625B0-501E-3AC1-8E16-08B621120EAD> /usr/lib/libobjc.A.dylib
0x18532e000 - 0x185330fff libsystem_featureflags.dylib (28.60.1) <297CC4DD-AFA4-3BA3-B4E1-0DF47E49C21E> /usr/lib/system/libsystem_featureflags.dylib
0x185331000 - 0x1853b1fff libsystem_c.dylib (1439.40.11) <A7147E08-E7C0-3842-916E-F2270A689F47> /usr/lib/system/libsystem_c.dylib
0x1853b2000 - 0x18540dfff libc++.1.dylib (904.4) <B139607F-1E80-3A8E-870D-0AC022069EA1> /usr/lib/libc++.1.dylib
0x18540e000 - 0x185427fff libc++abi.dylib (904.4) <1DD3A1C9-D765-34FB-B8C1-87BF52CE49C0> /usr/lib/libc++abi.dylib
0x185428000 - 0x18545afff libsystem_kernel.dylib (7195.81.3) <55FCA547-4877-3075-8A08-FE1620BFC682> /usr/lib/system/libsystem_kernel.dylib
0x18545b000 - 0x185466fff libsystem_pthread.dylib (454.80.2) <8E907E6C-C227-312E-944C-767093692AFF> /usr/lib/system/libsystem_pthread.dylib
0x185467000 - 0x1854a5fff libdyld.dylib (832.7.3) <EF759BF3-97FA-30EA-A1CA-EDECFEA726FE> /usr/lib/system/libdyld.dylib
0x1854a6000 - 0x1854acfff libsystem_platform.dylib (254.80.2) <8633A39C-10A2-3B44-93F7-617AB09FF640> /usr/lib/system/libsystem_platform.dylib
0x1854ad000 - 0x1854d8fff libsystem_info.dylib (542.40.3) <4CC96CFC-7198-3F26-8C8C-20FB010CDF98> /usr/lib/system/libsystem_info.dylib
0x1854d9000 - 0x185982fff com.apple.CoreFoundation (6.9 - 1774.101) <EA76C90A-23ED-3791-8FBC-8292916F0B16> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x185983000 - 0x185bb5fff com.apple.LaunchServices (1122.11 - 1122.11) <B79A592B-8036-3E24-AD9D-3FB4E7BE2983> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices
0x185bb6000 - 0x185c8efff com.apple.gpusw.MetalTools (1.0 - 1) <ED9E3F77-4900-3B5B-978A-70AA6762DFBA> /System/Library/PrivateFrameworks/MetalTools.framework/Versions/A/MetalTools
0x185c8f000 - 0x185ee9fff libBLAS.dylib (1336.40.1) <96EAD889-D898-3884-A36C-F433DC2C64DD> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
0x185eea000 - 0x185f34fff com.apple.Lexicon-framework (1.0 - 86.1) <81EA9F2C-6059-322A-B336-56CD7F3AB6C2> /System/Library/PrivateFrameworks/Lexicon.framework/Versions/A/Lexicon
0x185f35000 - 0x185f97fff libSparse.dylib (106) <1A70E696-43E3-3D8B-A3E1-ADB624729BF4> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparse.dylib
0x185f98000 - 0x18601ffff com.apple.SystemConfiguration (1.20 - 1.20) <DD6AB615-BB7B-3634-9D33-3923E5038BAA> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration
0x186020000 - 0x186053fff libCRFSuite.dylib (50) <79C1501B-B0F6-341A-96CC-F4FE066E3D59> /usr/lib/libCRFSuite.dylib
0x186054000 - 0x18627efff libmecabra.dylib (929.1.1) <29B77781-FA91-3180-AFE8-608A355AE97E> /usr/lib/libmecabra.dylib
0x18627f000 - 0x186610fff com.apple.Foundation (6.9 - 1774.101) <8F7A0D5B-0E89-36F6-AC84-D3BEC2C44792> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x186611000 - 0x186702fff com.apple.LanguageModeling (1.0 - 247.1) <66B05273-1979-3DB2-9F68-C0E3A6CD88B1> /System/Library/PrivateFrameworks/LanguageModeling.framework/Versions/A/LanguageModeling
0x18728c000 - 0x1875eafff com.apple.security (7.0 - 59754.80.3) <C76855AD-6EE4-3413-9E6E-CC450BDB20E2> /System/Library/Frameworks/Security.framework/Versions/A/Security
0x1875eb000 - 0x18785cfff libicucore.A.dylib (66109) <3CE58F97-7FC3-37D8-BB99-D6BECFC86DD2> /usr/lib/libicucore.A.dylib
0x18785d000 - 0x187867fff libsystem_darwin.dylib (1439.40.11) <B790A863-2D74-300E-9698-A25B5602B32F> /usr/lib/system/libsystem_darwin.dylib
0x187868000 - 0x187b57fff com.apple.CoreServices.CarbonCore (1307 - 1307) <3EC22291-65E5-3EB6-9498-9A1244C90147> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore
0x187b93000 - 0x187bcefff com.apple.CSStore (1122.11 - 1122.11) <52D8D7A1-4879-3488-9D39-5F2C7696EFA8> /System/Library/PrivateFrameworks/CoreServicesStore.framework/Versions/A/CoreServicesStore
0x187bcf000 - 0x187c99fff com.apple.framework.IOKit (2.0.2 - 1845.81.1) <516911DA-18D7-3D17-8646-BBF7C75CD070> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x187c9a000 - 0x187ca5fff libsystem_notify.dylib (279.40.4) <A7B6BDA8-5371-352E-8A36-95D46C4B07F1> /usr/lib/system/libsystem_notify.dylib
0x1890bc000 - 0x1897c6fff libnetwork.dylib (2288.80.2) <07EC53A0-293C-3403-8394-755AE0BDDFA4> /usr/lib/libnetwork.dylib
0x1897c7000 - 0x189c54fff com.apple.CFNetwork (1220.1 - 1220.1) <3C5F5D1E-DB7C-3027-BBB0-91E6DEA3E264> /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork
0x189c55000 - 0x189c64fff libsystem_networkextension.dylib (1295.80.3) <B6BD1267-BE59-3E42-B2B5-2BF13F17D02D> /usr/lib/system/libsystem_networkextension.dylib
0x189c65000 - 0x189c66fff libenergytrace.dylib (22) <C5CFEF87-BB69-3351-A0C8-9B601383A45C> /usr/lib/libenergytrace.dylib
0x189c67000 - 0x189cbbfff libMobileGestalt.dylib (978.80.1) <93C6E288-C098-357F-B8A5-3E133DF39ECE> /usr/lib/libMobileGestalt.dylib
0x189cbc000 - 0x189cd3fff libsystem_asl.dylib (385) <31E28E59-1CDD-3B83-8BF0-56C675227FA2> /usr/lib/system/libsystem_asl.dylib
0x189cd4000 - 0x189cedfff com.apple.TCC (1.0 - 1) <C55FE947-0C86-3AAC-9306-9EFA7C033D07> /System/Library/PrivateFrameworks/TCC.framework/Versions/A/TCC
0x18aea2000 - 0x18aeb9fff com.apple.ProtocolBuffer (1 - 285.23.11.29.1) <38163CA8-14FF-34A9-8AE4-D7D69B8C8854> /System/Library/PrivateFrameworks/ProtocolBuffer.framework/Versions/A/ProtocolBuffer
0x18aeba000 - 0x18b061fff libsqlite3.dylib (321.1) <8592B35B-9EA3-3C84-8453-9C86FB5C039C> /usr/lib/libsqlite3.dylib
0x18b1c8000 - 0x18b23bfff com.apple.AE (918.0.1 - 918.0.1) <7D13C9B5-D195-3E9E-B6C7-254F95A925C6> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE
0x18b23c000 - 0x18b243fff libdns_services.dylib (1310.80.1) <64D4BA25-C388-3AB8-BDA8-2E81459DA46A> /usr/lib/libdns_services.dylib
0x18b244000 - 0x18b24bfff libsystem_symptoms.dylib (1431.40.36) <0657E539-C0CE-30F8-B630-FBAE36109542> /usr/lib/system/libsystem_symptoms.dylib
0x18b24c000 - 0x18b3a8fff com.apple.Network (1.0 - 1) <486C55B3-900C-3D09-AB0D-F99A152CFB84> /System/Library/Frameworks/Network.framework/Versions/A/Network
0x18b3a9000 - 0x18b3cefff com.apple.analyticsd (1.0 - 1) <E47FE17B-2ED2-3BE2-A5AB-046DB3C02EA0> /System/Library/PrivateFrameworks/CoreAnalytics.framework/Versions/A/CoreAnalytics
0x18b3cf000 - 0x18b3d1fff libDiagnosticMessagesClient.dylib (112) <20AD555E-DF00-3C91-A95B-AB2AD23780AA> /usr/lib/libDiagnosticMessagesClient.dylib
0x18b3d2000 - 0x18b41dfff com.apple.spotlight.metadata.utilities (1.0 - 2150.7.5) <5400DF7A-9249-30E9-B692-AC431C7F74D9> /System/Library/PrivateFrameworks/MetadataUtilities.framework/Versions/A/MetadataUtilities
0x18b41e000 - 0x18b4b7fff com.apple.Metadata (10.7.0 - 2150.7.5) <4B03E6F8-1568-338B-AA75-480F9D824516> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata
0x18b4b8000 - 0x18b4befff com.apple.DiskArbitration (2.7 - 2.7) <7ED2211D-BA3C-37EC-BBA4-4320FBBC8A6A> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration
0x18b4bf000 - 0x18b7ddfff com.apple.vImage (8.1 - 544.2) <B1B84588-8B57-3F98-9D50-AAC142DFF36E> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
0x18bcda000 - 0x18bce7fff com.apple.OpenDirectory (11.2 - 230.40.1) <C509DC91-F994-34B7-A5C5-A108E7DA5E4E> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory
0x18bce8000 - 0x18bd07fff com.apple.CFOpenDirectory (11.2 - 230.40.1) <A5449895-6129-3BDF-864B-49ACA82E3052> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory
0x18bd08000 - 0x18bd10fff com.apple.CoreServices.FSEvents (1290.40.2 - 1290.40.2) <72CF142E-3792-318C-B2D6-B60C5E219312> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/FSEvents.framework/Versions/A/FSEvents
0x18bd11000 - 0x18bd35fff com.apple.coreservices.SharedFileList (144 - 144) <E4152CCC-4A03-3959-B788-D7DD1ADFF8A6> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SharedFileList.framework/Versions/A/SharedFileList
0x18bd36000 - 0x18bd38fff libapp_launch_measurement.dylib (14.1) <A5637442-ADCB-30F0-AAB4-72FA1C5E3811> /usr/lib/libapp_launch_measurement.dylib
0x18bd39000 - 0x18bd7cfff com.apple.CoreAutoLayout (1.0 - 21.10.1) <6026D662-F75F-3C82-8C80-F6CEBF6369BF> /System/Library/PrivateFrameworks/CoreAutoLayout.framework/Versions/A/CoreAutoLayout
0x18bd7d000 - 0x18be68fff libxml2.2.dylib (34.9) <E170FFEE-EB9F-3252-9B16-4A47FBAC425A> /usr/lib/libxml2.2.dylib
0x18ccb9000 - 0x18cccafff libsystem_containermanager.dylib (318.80.2) <0B742EA4-AFA2-36B6-AB4B-2F8ACA7211AA> /usr/lib/system/libsystem_containermanager.dylib
0x18cccb000 - 0x18ccdcfff com.apple.IOSurface (289.3 - 289.3) <64E3394D-C908-378C-B5CE-B89C6BE61E9F> /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface
0x18ccdd000 - 0x18cce6fff com.apple.IOAccelerator (439.52 - 439.52) <2995471C-4A7A-342A-B702-496519D138E2> /System/Library/PrivateFrameworks/IOAccelerator.framework/Versions/A/IOAccelerator
0x18cce7000 - 0x18cdddfff com.apple.Metal (244.32.7 - 244.32.7) <E8BC8E6D-4359-3A04-87CB-6F0D4F8F5C4B> /System/Library/Frameworks/Metal.framework/Versions/A/Metal
0x18d8e4000 - 0x18d93bfff com.apple.MetalPerformanceShaders.MPSCore (1.0 - 1) <F07355C4-C893-3534-B74E-DD5FBCBEC76C> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSCore.framework/Versions/A/MPSCore
0x18d93c000 - 0x18d940fff libsystem_configuration.dylib (1109.60.2) <AEC5E654-A5B8-343E-80B7-27D5D0D856D9> /usr/lib/system/libsystem_configuration.dylib
0x18d941000 - 0x18d945fff libsystem_sandbox.dylib (1441.60.4) <D7CDDE27-978E-3511-AE6F-296D901290B2> /usr/lib/system/libsystem_sandbox.dylib
0x18d946000 - 0x18d947fff com.apple.AggregateDictionary (1.0 - 1) <516D38F6-E0E1-36B4-AC96-E5079ECC6ED4> /System/Library/PrivateFrameworks/AggregateDictionary.framework/Versions/A/AggregateDictionary
0x18d948000 - 0x18d94bfff com.apple.AppleSystemInfo (3.1.5 - 3.1.5) <E6509790-A434-3A6A-AF9E-EA1FDBF15F6A> /System/Library/PrivateFrameworks/AppleSystemInfo.framework/Versions/A/AppleSystemInfo
0x18d94c000 - 0x18d94dfff liblangid.dylib (136) <12979BA7-28E3-3E74-AC24-65166A921235> /usr/lib/liblangid.dylib
0x18d94e000 - 0x18d9dcfff com.apple.CoreNLP (1.0 - 245.1) <92E28F08-9AB8-3B02-A889-677A716E393C> /System/Library/PrivateFrameworks/CoreNLP.framework/Versions/A/CoreNLP
0x18d9dd000 - 0x18d9e4fff com.apple.LinguisticData (1.0 - 399) <2B3E7B26-D669-38C0-9B7F-FFB2E94BC23B> /System/Library/PrivateFrameworks/LinguisticData.framework/Versions/A/LinguisticData
0x18d9e5000 - 0x18de9afff libBNNS.dylib (288.80.1) <695BEB14-BA36-3386-8C72-A219A73C9601> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib
0x18de9b000 - 0x18df4dfff libvDSP.dylib (760.40.6) <9FAC0A5B-0CD5-3999-815A-3C663EC71F65> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib
0x18df4e000 - 0x18df60fff com.apple.CoreEmoji (1.0 - 128) <0FCD33E9-8DC4-3FB1-86DC-8ECA2FA10E37> /System/Library/PrivateFrameworks/CoreEmoji.framework/Versions/A/CoreEmoji
0x18df61000 - 0x18df6bfff com.apple.IOMobileFramebuffer (343.0.0 - 343.0.0) <57F4592F-D6FA-3ED9-ACEF-B1D7A7359DC1> /System/Library/PrivateFrameworks/IOMobileFramebuffer.framework/Versions/A/IOMobileFramebuffer
0x18e26b000 - 0x18e2f0fff com.apple.securityfoundation (6.0 - 55240.40.4) <6D4E864F-4AAC-31AF-85AF-C308528C0F0B> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation
0x18e2f1000 - 0x18e2fafff com.apple.coreservices.BackgroundTaskManagement (1.0 - 104) <8981AD5E-DC12-3B54-BAC0-E9E113C4B1A0> /System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement
0x18e2fb000 - 0x18e300fff com.apple.xpc.ServiceManagement (1.0 - 1) <7B4325ED-9CF7-3D9E-A39C-F0AE7819F0C0> /System/Library/Frameworks/ServiceManagement.framework/Versions/A/ServiceManagement
0x18e301000 - 0x18e303fff libquarantine.dylib (119.40.2) <66942A5C-57B3-3524-BA49-0F2DA4A584D1> /usr/lib/system/libquarantine.dylib
0x18e304000 - 0x18e313fff libCheckFix.dylib (31) <05E93C9D-45F9-3758-95FD-481E5EA2D5EF> /usr/lib/libCheckFix.dylib
0x1913f9000 - 0x19140cfff libsasl2.2.dylib (214) <D2C32F92-0728-3C67-B774-5CFC95F83497> /usr/lib/libsasl2.2.dylib
0x191cb4000 - 0x191cc4fff com.apple.Kerberos (3.0 - 1) <862CCDF8-B5F2-3EDE-B728-B62CE0F158F7> /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos
0x191cc5000 - 0x191d0dfff com.apple.GSS (4.0 - 2.0) <F4CC9D06-2046-3621-B654-96E7575115ED> /System/Library/Frameworks/GSS.framework/Versions/A/GSS
0x191d0e000 - 0x191d1dfff com.apple.CommonAuth (4.0 - 2.0) <DEE913BB-9741-3BB8-933A-64623CBCCC70> /System/Library/PrivateFrameworks/CommonAuth.framework/Versions/A/CommonAuth
0x191eea000 - 0x191eeafff liblaunch.dylib (2038.80.3) <327A64BF-8C2C-303D-91E4-70A168C5D87F> /usr/lib/system/liblaunch.dylib
0x191eec000 - 0x191efdfff libffi.dylib (27) <92403FAB-6424-3AA3-ADB4-98FCD46D892A> /usr/lib/libffi.dylib
| After ffi was fixed, I was able to successfully run pod install
without Rosetta
by running sudo gem install ethon with my Homebrew gem (which installed ethon-0.13.0)
and sudo gem install ffi (which installed ffi-1.15.0).
Thanks for trying to help
| https://stackoverflow.com/questions/67111397/ |
How to calculate mean on second axis until row-specific column in Pytorch? | I am looking for a fast way to calculate the mean for each row of a 2d matrix but only until a specific column. The remaining values of each row can be ignored. The column is different for each row.
In Numpy, it could be coded like this. However, I am hoping to find a solution without a for loop which also does not break the gradients.
import numpy as np
arr = np.linspace(0, 10, 15).reshape(3,5)
cols = [2,0,4]
for row, col in enumerate(cols):
arr[row, col+1:] = np.nan
result = np.nanmean(arr, axis=1)
Any suggestions?
Edit: Best solution I have found so far:
result = torch.stack([arr[i, 0:cols[i]+1].mean() for i in range(len(arr))])
But I would still like to avoid the for loop.
| Try creating a mask:
t = torch.tensor(arr)  # use the original array, i.e. before any NaNs are written into it
mask = torch.arange(t.shape[1]) <= torch.tensor(cols).unsqueeze(-1)  # True up to and including cols[row], per row
result = (t*mask).sum(1)/mask.sum(1)  # masked mean per row; pure tensor ops, so gradients are preserved
Output:
tensor([0.7143, 3.5714, 8.5714], dtype=torch.float64)
| https://stackoverflow.com/questions/67114882/ |
How to manually obtain the minus log-likelihood in Pytorch? | I'm implementing a VAE and I want to obtain the negative log-likelihood manually (not using an existing function). The given equation is equation1, and I have also found that it can be expressed as equation2. I have been stuck on this for a couple of days now, and don't know where my code is wrong.
def loss_loglik(y_mean, y_logvar, x):
out_1 = (x.size()[2]*x.size()[3] / 2) * np.log(2 * np.pi)
out_2 = (x.size()[2]*x.size()[3] / 2) * torch.log(y_logvar.exp())
x_diff = x - y_mean
out_3 = torch.sum(x_diff.pow(2)) / (2 * y_logvar.exp())
loss = out_1 + out_2 + out_3
The shape of the three arguments is (batch_size, 1, 28, 28).
| It seems like eq. 2 is wrong.
It should have been something like the version below. I did not derive it, I just tried to match it with the input... so, please verify.
I modified your function below.
def loss_loglik(y_mean, y_logvar, x):
    m, n = x.size()[2], x.size()[3]
    b = x.size()[0]
    # flatten to (batch, pixels) so that the sums below are taken per sample
    y_mean = y_mean.reshape(b, -1)
    y_logvar = y_logvar.reshape(b, -1)
    x = x.reshape(b, -1)
    out_1 = (m*n / 2) * np.log(2 * np.pi)
    out_2 = (1 / 2) * y_logvar.sum(dim=1)
    x_diff = x - y_mean
    # note sigma is inside the sum
    out_3 = torch.sum(x_diff.pow(2) / (2 * y_logvar.exp()), dim=1)
    loss = out_1 + out_2 + out_3
    return -loss
# shape of the returned value will be (batch_size,)
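A quick shape check of the modified function (a sketch with random MNIST-sized inputs, just to confirm it returns one value per batch element):
x = torch.rand(2, 1, 28, 28)
y_mean = torch.rand(2, 1, 28, 28)
y_logvar = torch.zeros(2, 1, 28, 28)
print(loss_loglik(y_mean, y_logvar, x).shape)   # torch.Size([2])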
| https://stackoverflow.com/questions/67115005/ |
reshape tensor in a determined way | I have some tensor x3. I got it in the following way:
x = torch.tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
x2 = torch.stack(5 * [x], 0)
x2 = x2.reshape(-1)
x3 = torch.stack(4 * [x2], 0)
x3 = torch.stack(6 * [x3], -1)
x3 = torch.stack(7 * [x3], -1)
In short it means that
x[0, :9, 0, 0] = [0, 0, 0, 0, 1, 0, 0, 0, 0]
x[0, 9:18, 0, 0] = [0, 0, 0, 0, 1, 0, 0, 0, 0]
and so on.
Then I want to reshape it so that every nine values of the 1st dimension go to a new dimension. In other words, I want x3[0, 0, 0, 0, :] to give me tensor([0, 0, 0, 0, 1, 0, 0, 0, 0]).
I tried to do:
x3.reshape(4, 5, 6, 7, 9)[0, 0, 0, 0, :]
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0])
x3.reshape(4, 9, 6, 7, 5).transpose(1, -1)[0, 0, 0, 0, :]
tensor([0, 0, 0, 0, 0, 0, 0, 0, 1])
As you see, none of it gives me the right answer
UPD: added x3 = torch.stack(7 * [x3], -1)
| If you want to modify the 1st dimension and create a new dimension at the end, you need to first move that dimension to last by using permute. Something like this should do the trick:
xpermuted = x3.permute(0, 2, 3, 1)
xreshaped = xpermuted.reshape(xpermuted.shape[0], xpermuted.shape[1], xpermuted.shape[2], int(xpermuted.shape[3] / 9), 9)
print(xreshaped[0, 0, 0, 0, :]) # tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
print(xreshaped[0, 0, 0, 1, :]) # tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
print(xreshaped[0, 0, 0, 2, :]) # tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
After that, you can restore the initial dimension order by using permute again if you need the original order of dimensions:
xrestored = xreshaped.permute(0, 3, 1, 2, 4)
print(xrestored.shape) # torch.Size([4, 5, 6, 7, 9])
Technically you don't have to move the first dimension to last initially; you can also do the reverse: first reshape, then permute. Actually, now that I think about it, this is better since it needs one less permute:
xreshaped = x3.reshape(x3.shape[0], int(x3.shape[1]/9), 9, x3.shape[2], x3.shape[3])
xrestored = xreshaped.permute(0, 1, 3, 4, 2)
print(xrestored.shape) # torch.Size([4, 5, 6, 7, 9])
print(xrestored[0, 0, 0, 0, :]) # tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
print(xrestored[0, 1, 0, 0, :]) # tensor([0, 0, 0, 0, 1, 0, 0, 0, 0])
| https://stackoverflow.com/questions/67116896/ |
Flutter- Connecting to Python/Pytorch Backend | I'm building a mobile app using Flutter that translates American Sign Language (ASL) to written English. As a prototype, I've limited my scope to translating photos of single alphanumeric characters. I have two parts to the project right now:
A python script that takes the path to an image, processes it with Torch/Torchvision and a custom model, and returns the alphanumeric character that is most likely to have been in the image. This part works great.
A Flutter front end that connects to the user's camera and saves the path to a taken image.
Q: What is the best way to connect these two parts of my project?
So far, I've tried two methods:
Using Starflut, which allows me to call Python code from Flutter and receive return values. I've tested it with some basic Python code (no dependencies) and it works great. However, for my more complex use case, where my Python code imports "torch", the code crashes.
Error Message:
:/data/data/com.example.asl/files/flutter_assets/starfiles/image_prediction.py, run failed
ModuleNotFoundError: No module named 'torch'
This happens when calling the following line of code:
var result = await widget.srvGroup.loadRawModule("python", "",
resPath + "/flutter_assets/starfiles/" + "image_prediction.py", false);
I've also tried the Pytorch_mobile Flutter package, but beyond not working exactly the way I want (it cannot use my own Python script), it also crashes.
Error Message:
java.lang.NullPointerException: Attempt to invoke virtual method 'org.pytorch.IValue org.pytorch.Module.forward(org.pytorch.IValue[])' on a null object reference
This happens when calling the following line of code:
String prediction =
await TorchMobile.getPrediction(_image, maxWidth: 256, maxHeight: 256);
Any suggestions as to how to get Starflut functioning would be ideal, as I'd like to use my Python interface, but if that's not possible, I'd love suggestions as to how to approach this problem.
| The starflut package is used to run Python programs, or programs in other scripting languages, from Flutter. This package will not help you integrate your backend model with the application.
To connect the backend with your mobile application you'll have to write your own API, or you can use the Pytorch_mobile Flutter package.
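If you go the API route, a minimal sketch of such a backend could look like the following (this assumes Flask and a TorchScript export of your model saved as asl_model.pt; the endpoint name, model path and preprocessing are placeholders, not part of the original project):
import io

import torch
from flask import Flask, request, jsonify
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
model = torch.jit.load("asl_model.pt")   # hypothetical TorchScript export of the classifier
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

@app.route("/predict", methods=["POST"])
def predict():
    # The Flutter app POSTs the captured image as multipart form data.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    return jsonify({"prediction": pred})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
The Flutter side would then send the photo to this endpoint with an HTTP multipart POST (for example via the http package) instead of calling Python in-process.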
Your app is crashing at -
String prediction =
await TorchMobile.getPrediction(_image, maxWidth: 256, maxHeight: 256);
because it seems you forgot to initialize _image. Unfortunately, NULL is being stored in _image. So recheck your code where you've initialized _image.
To get a clear understanding you can definitely check out the following videos
Building An App From Scratch: Connecting Python Backend with Flutter Frontend | #5
How To Use Pytorch Mobile on Flutter
| https://stackoverflow.com/questions/67117258/ |
How to load a learning rate scheduler state dict? | I have a model and a learning rate scheduler. I'm saving the model and optimizer using the state dict method that is shown here.
import torch
import torch.nn as nn
import torch.optim as optim
class net_x(nn.Module):
def __init__(self):
super(net_x, self).__init__()
self.fc1=nn.Linear(2, 20)
self.fc2=nn.Linear(20, 20)
self.out=nn.Linear(20, 4)
def forward(self, x):
x=self.fc1(x)
x=self.fc2(x)
x=self.out(x)
return x
nx = net_x()
r = torch.tensor([1.0,2.0])
optimizer = optim.Adam(nx.parameters(), lr = 0.1)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-2, max_lr=0.1, step_size_up=1, mode="triangular2", cycle_momentum=False)
path = 'opt.pt'
for epoch in range(10):
optimizer.zero_grad()
net_predictions = nx(r)
loss = torch.sum(torch.randint(0,10,(4,)) - net_predictions)
loss.backward()
optimizer.step()
scheduler.step()
print('loss:' , loss)
torch.save({ 'epoch': epoch,
'net_x_state_dict': nx.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'scheduler': scheduler,
}, path)
PATH = control_path
checkpoint = torch.load(path)
nx.load_state_dict(checkpoint['net_x_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler'])
The code runs just fine without the part that I'm loading the scheduler state dict, so I'm not sure what I'm doing wrong. I'm trying to load the state dict as mentioned here, but I'm getting this error:
TypeError Traceback (most recent call last)
<ipython-input-7-e3217d6dd870> in <module>
42 nx.load_state_dict(checkpoint['net_x_state_dict'])
43 optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
---> 44 scheduler.load_state_dict(checkpoint['scheduler'])
~/anaconda3/lib/python3.7/site-packages/torch/optim/lr_scheduler.py in load_state_dict(self, state_dict)
92 from a call to :meth:`state_dict`.
93 """
---> 94 self.__dict__.update(state_dict)
95
96 def get_last_lr(self):
TypeError: 'CyclicLR' object is not iterable
| We have to extract the state_dict() values from the scheduler before saving, i.e. inside the torch.save() call.
The code below will work:
import torch
import torch.nn as nn
import torch.optim as optim
class net_x(nn.Module):
def __init__(self):
super(net_x, self).__init__()
self.fc1=nn.Linear(2, 20)
self.fc2=nn.Linear(20, 20)
self.out=nn.Linear(20, 4)
def forward(self, x):
x=self.fc1(x)
x=self.fc2(x)
x=self.out(x)
return x
nx = net_x()
r = torch.tensor([1.0,2.0])
optimizer = optim.Adam(nx.parameters(), lr = 0.1)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-2, max_lr=0.1, step_size_up=1, mode="triangular2", cycle_momentum=False)
path = 'opt.pt'
for epoch in range(10):
optimizer.zero_grad()
net_predictions = nx(r)
loss = torch.sum(torch.randint(0,10,(4,)) - net_predictions)
loss.backward()
optimizer.step()
scheduler.step()
print('loss:' , loss)
torch.save({ 'epoch': epoch,
'net_x_state_dict': nx.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'scheduler': scheduler.state_dict(), # HERE IS THE CHANGE
}, path)
checkpoint = torch.load(path)
nx.load_state_dict(checkpoint['net_x_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler'])
| https://stackoverflow.com/questions/67119827/ |
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not str | When I try to train the model I get the following error:
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not
str
The code I'm using is:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
self.dropout = nn.Dropout2d()
self.fc1 = nn.Linear(256, 64)
self.fc2 = nn.Linear(64, 1)
self.hybrid = Hybrid(qiskit.Aer.get_backend('qasm_simulator'), 100, np.pi / 2)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2)
x = self.dropout(x)
x = x.view(1, -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = self.hybrid(x)
return torch.cat((x, 1 - x), -1)
model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()
epochs = 20
loss_list = []
model.train()
for epoch in range(epochs):
total_loss = []
for i, data in enumerate(train_ldr, 0):
# get the inputs; data is a list of [inputs, labels]
X_train, Y_train = data
print(data)
optimizer.zero_grad()
# Forward pass
output = model(X_train)
# Calculating loss
loss = loss_func(output, Y_train)
# Backward pass
loss.backward()
# Optimize the weights
optimizer.step()
total_loss.append(loss.item())
loss_list.append(sum(total_loss)/len(total_loss))
print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
100. * (epoch + 1) / epochs, loss_list[-1]))
The full traceback is:
{'data': tensor([[715.9147, 679.4994, 131.4772, 9.4777, 9.4777, 13.8722, 85.8577,
2.5333]]), 'Target': tensor([0])}
TypeError Traceback (most recent call last)
<ipython-input-52-7c8c9f3a38b7> in <module>
20
21 # Forward pass
---> 22 output = model(X_train)
23 # Calculating loss
24 loss = loss_func(output, Y_train)
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
<ipython-input-39-6b9a402c220d> in forward(self, x)
10
11 def forward(self, x):
---> 12 x = F.relu(self.conv1(x))
13 x = F.max_pool2d(x, 2)
14 x = F.relu(self.conv2(x))
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
397
398 def forward(self, input: Tensor) -> Tensor:
--> 399 return self._conv_forward(input, self.weight, self.bias)
400
401 class Conv3d(_ConvNd):
~\anaconda3\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
393 weight, bias, self.stride,
394 _pair(0), self.dilation, self.groups)
--> 395 return F.conv2d(input, weight, bias, self.stride,
396 self.padding, self.dilation, self.groups)
397
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not str
When I try to debug the code, X_train and Y_train hold 'Data' and 'Target'.
I don't understand why the tensor values are not being taken from the dataloader by enumerate in the for loop,
even though the values are clearly present in the dataloader. If each row in the dataset has 'Data' and 'Target' prefixed before it, how do I remove that? Please suggest a solution.
| The problem is that data is a dictionary and when you unpack it the way you did (X_train, Y_train = data) you unpack the keys while you are interested in the values.
refer to this simple example:
d = {'a': [1,2], 'b': [3,4]}
x, y = d
print(x,y) # a b
So you should change this:
X_train, Y_train = data
into this:
X_train, Y_train = data.values()
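If you prefer not to rely on the dictionary's key order, you can also index it explicitly with the keys shown in your printout:
X_train, Y_train = data['data'], data['Target']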
| https://stackoverflow.com/questions/67120788/ |
Streamlit : TypeError: a bytes-like object is required, not 'Tensor' | I am working on a style transfer task, my model returns a tensor. recently I was saving that image using torchvision.utils
torchvision.utils.save_image(genarated_image, result_path)
now I have passed the same image to streamlit.
def image_input():
content_file = st.sidebar.file_uploader("Choose a Content Image", type=["png", "jpg", "jpeg"])
if content_file is not None:
content = Image.open(content_file)
content = np.array(content) # pil to cv
content = cv2.cvtColor(content, cv2.COLOR_RGB2BGR)
else:
st.warning("Upload an Image OR Untick the Upload Button)")
st.stop()
WIDTH = st.sidebar.select_slider('QUALITY (May reduce the speed)', list(range(150, 501, 50)), value=200)
content = imutils.resize(content, width=WIDTH)
generated = genarate_image(content)
st.sidebar.image(content, width=300, channels='BGR')
st.image(generated, channels='BGR', clamp=True)
But now streamlit giving me this error.
TypeError: a bytes-like object is required, not 'Tensor'
is there a way to convert tensor into a "bytes-like object" ?
| It can be solved by transforming the tensor to a PIL Image.
from torchvision import transforms
def trans_tensor_to_pil(tensor_img):
pil_image = transforms.ToPILImage()(tensor_img.squeeze_(0))
return pil_image
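Then pass the converted image to st.image, which accepts PIL images directly (a sketch based on the code in the question; if the tensor is in BGR order from the OpenCV pipeline, you may still need to swap the channels to RGB before displaying):
generated_pil = trans_tensor_to_pil(generated)
st.image(generated_pil)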
| https://stackoverflow.com/questions/67122588/ |
PyTorch - AssertionError: Size mismatch between tensors | I am trying to adapt a Pytorch script that was created for linear regression. It was originally written to take in a set of random values(created with np.random) as features and targets.
I have now created a dataframe of actual data for analysis:
df = pd.read_csv('file_name.csv')
The df looks like this:
X1 X2 X3 X4 X5 X6 X7 X8 Y1 Y2
0 0.98 514.5 294.0 110.25 7.0 2 0.0 0 15.55 21.33
1 0.98 514.5 294.0 110.25 7.0 3 0.0 0 15.55 21.33
2 0.98 514.5 294.0 110.25 7.0 4 0.0 0 15.55 21.33
3 0.98 514.5 294.0 110.25 7.0 5 0.0 0 15.55 21.33
4 0.90 563.5 318.5 122.50 7.0 2 0.0 0 20.84 28.28
...and I am currently extracting just two columns(X1 and X2) as my features, and one column(Y1) as my targets, like this:
x = df[['X1', 'X2']]
y = df['Y1']
So features look like this:
X1 X2
0 0.98 514.5
1 0.98 514.5
2 0.98 514.5
3 0.98 514.5
4 0.90 563.5
and targets look like this:
Y1
0 15.55
1 15.55
2 15.55
3 15.55
4 20.84
However, when I attempt to convert the features (X1 and X1) and targets(Y1) to tensors, in order to feed them to the NN, the code fails at the line:
dataset = TensorDataset(x_tensor_flat, y_tensor_flat)
I get the error:
line 45, in <module> dataset = TensorDataset(x_tensor, y_tensor)
AssertionError: Size mismatch between tensors
There's clearly some shaping issue at play, but I can't work out what.
I have tried to flatten as well as transposing the tensors, but I get the same error.
Any help would be hugely appreciated.
Here's the full section of code that is causing the issue:
import pandas as pd
import torch
import torch.optim as optim
import torch.nn as nn
from torch.utils.data import Dataset, TensorDataset, DataLoader
from torch.utils.data.dataset import random_split
device = 'cuda' if torch.cuda.is_available() else 'cpu'
df = pd.read_csv('file_name.csv')
x = df[['X1', 'X2']]
y = df['Y1']
x_tensor = torch.from_numpy(np.array(x)).float()
y_tensor = torch.from_numpy(np.array(y)).float()
train_loader = DataLoader(dataset=train_dataset, batch_size=10)
val_loader = DataLoader(dataset=val_dataset, batch_size=10)
class ManualLinearRegression(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(2, 1)
def forward(self, x):
return self.linear(x)
def make_train_step(model, loss_fn, optimizer):
def train_step(x, y):
model.train()
yhat = model(x)
loss = loss_fn(y, yhat)
loss.backward()
optimizer.step()
optimizer.zero_grad()
return loss.item()
return train_step
torch.manual_seed(42)
model = ManualLinearRegression().to(device)
loss_fn = nn.MSELoss(reduction='mean')
optimizer = optim.SGD(model.parameters(), lr=1e-1)
train_step = make_train_step(model, loss_fn, optimizer)
n_epochs = 50
training_losses = []
validation_losses = []
print(model.state_dict())
for epoch in range(n_epochs):
batch_losses = []
for x_batch, y_batch in train_loader:
x_batch = x_batch.to(device)
y_batch = y_batch.to(device)
loss = train_step(x_batch, y_batch)
batch_losses.append(loss)
training_loss = np.mean(batch_losses)
training_losses.append(training_loss)
with torch.no_grad():
val_losses = []
for x_val, y_val in val_loader:
x_val = x_val.to(device)
y_val = y_val.to(device)
model.eval()
yhat = model(x_val)
val_loss = loss_fn(y_val, yhat).item()
val_losses.append(val_loss)
validation_loss = np.mean(val_losses)
validation_losses.append(validation_loss)
print(f"[{epoch+1}] Training loss: {training_loss:.3f}\t Validation loss: {validation_loss:.3f}")
print(model.state_dict())
| The problem is with how you have called the random_split function. Note that it takes lengths as input, not the percentage or ratio of the split. The error is about the same, i.e., the sum of lengths (80+20) that you have specified is not the same as the length of data (5).
The below code snippet should fix your problem. Also, you do not need to flatten tensors... I think.
dataset = TensorDataset(x_tensor, y_tensor)
val_size = int(len(dataset)*0.2)
train_size = len(dataset)- int(len(dataset)*0.2)
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
| https://stackoverflow.com/questions/67124787/ |
Load the trained model save in pytorch for chatbots | I ran the code of this tutorial (link), and after a while training the model was completed and I chatted with the trained model. After exiting the program, 8 files with a .tar extension were created in the save directory next to the program file.
I guess they are saved trained-model files; what should I do to load those files and reuse them?
| In "Run the Model", they describe what you should do. More specifically, there is this code in their snippet:
# Set checkpoint to load from; set to None if starting from scratch
loadFilename = None
checkpoint_iter = 4000
#loadFilename = os.path.join(save_dir, model_name, corpus_name,
# '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
# '{}_checkpoint.tar'.format(checkpoint_iter))
And you basically need to uncomment the #loadFilename and set the checkpoint_iter to the iteration you want. After that, you can skip the training part, because you already ran it, and run the evaluation code again.
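Concretely, that means pointing loadFilename at one of the saved .tar files, for example (this is the commented-out path from the snippet above, uncommented; adjust checkpoint_iter to the checkpoint you want to load):
checkpoint_iter = 4000
loadFilename = os.path.join(save_dir, model_name, corpus_name,
                            '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
                            '{}_checkpoint.tar'.format(checkpoint_iter))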
| https://stackoverflow.com/questions/67127172/ |
Best way to debug or step over a sequential pytorch model | I used to write the PyTorch model with nn.Module which included __init__ and forward so that I can step over my model to check how the variable dimension changes along the network.
However I have since realized that you can also do it with nn.Sequential which only requires an __init__, you don't need to write a forward function as below:
However, the problem is when I try to step over this network, it is not easy to check the variable any more. It just jumps to another place and back.
Does anyone know how to do step over in this situation?
P.S: I am using PyCharm.
| You can iterate over the children of model like below and print sizes for debugging. This is similar to writing forward but you write a separate function instead of creating an nn.Module class.
import torch
from torch import nn
model = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Conv2d(20,64,5),
nn.ReLU()
)
def print_sizes(model, input_tensor):
output = input_tensor
for m in model.children():
output = m(output)
print(m, output.shape)
return output
input_tensor = torch.rand(100, 1, 28, 28)
print_sizes(model, input_tensor)
# output:
# Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 20, 24, 24])
# ReLU() torch.Size([100, 20, 24, 24])
# Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 64, 20, 20])
# ReLU() torch.Size([100, 64, 20, 20])
# you can also nest the Sequential models like this. In this case inner Sequential will be considered as module itself.
model1 = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Sequential(
nn.Conv2d(20,64,5),
nn.ReLU()
)
)
print_sizes(model1, input_tensor)
# output:
# Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 20, 24, 24])
# ReLU() torch.Size([100, 20, 24, 24])
# Sequential(
# (0): Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1))
# (1): ReLU()
# ) torch.Size([100, 64, 20, 20])
| https://stackoverflow.com/questions/67132348/ |
loss defined on repeated evaluation of the same model | I have a model denoted by f(),
Suppose the target is t, f(x1) = y1 and f(x2) = y2 and my loss is defined as
loss = mse(y1,y2) + mse(y2,t)
Since both y1 and y2 reguires grad, I have received error such as
one of the variables needed for gradient computation has been modified by an inplace operation
My understanding is that, suppose I evaluate y1 first, the graph has been changed upon my evaluation of y2. Should I fix some tensor such as ,e.g., y1_no_grad = y1.detach().numpy() and then use
loss = mse(y1_no_grad,y2) + mse(y2,t)?
However, I still receive error Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy(), which I am not sure if it is because y1_no_grad is a numpy array while y2 is a tensor.
Update:
I realized my problem afterwards. It was due to that I created multiple loss tensors and that I backwarded one loss tensor first which changed the parameters in-place. This caused error when I wanted to backward another loss tensor.
E.g.
f(x1) = y1
f(x2) = y2
f(x3) = y3
...
f(xn) = yn
f(x) = y
for i in range(n):
optimizer.zero_grad()
loss = mse(y,yi) + mse(yi,t)
loss.backward()
optimizer.step()
To me the solutions are either:
1. Accumulate the loss tensors before doing backward, i.e.:
for i in range(n):
loss = mse(y,yi) + mse(yi,t)
loss.backward()
optimizer.step()
2. Evaluate again before each backward, i.e.:
for i in range(n):
optimizer.zero_grad()
y = f(x)
yi = f(xi)
loss = mse(y,yi) + mse(yi,t)
loss.backward()
optimizer.step()
|
Suppose the target is t, f(x1) = y1 and f(x2) = y2 and my loss is defined as
loss = mse(y1,y2) + mse(y2,t)
Since both y1 and y2 require grad, I have received errors such as
This statement is incorrect. What you've described does not necessitate in-place assignment errors. For example
import torch
from torch.nn.functional import mse_loss
def f(x):
return x**2
t = torch.ones(1)
x1 = torch.randn(1, requires_grad=True)
x2 = torch.randn(1, requires_grad=True)
y1 = f(x1)
y2 = f(x2)
loss = mse_loss(y1, y2) + mse_loss(y2, t)
loss.backward()
does not produce any errors. Likely your issue is somewhere else.
For the general case you described you should get a computation graph in which x1 and x2 each pass through f, and the two MSE terms are summed into a single loss.
The only issue here could be that your function f is not differentiable or is somehow invalid (perhaps an in-place assignment is taking place in f).
| https://stackoverflow.com/questions/67134902/ |
Creating a tensor where each entry is a function of its index | I want to create a matrix D that is defined by D[i,j]=d(i-j) where d is some arbitrary function that I can choose.
It can easily be done with loops, but it is very slow. Is there an efficient way of creating this matrix with torch or numpy?
| You could apply the function (if vectorised) to numpy.indices:
import numpy as np
i, j = np.indices((n, m))
D = d(i - j)
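For completeness, since the question also mentions torch: a PyTorch sketch of the same broadcasting idea (it assumes d works element-wise on tensors) could be:
import torch

i = torch.arange(n).reshape(-1, 1)   # column of row indices
j = torch.arange(m).reshape(1, -1)   # row of column indices
D = d(i - j)                         # broadcasting yields an n x m matrix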
| https://stackoverflow.com/questions/67141615/ |
RuntimeError: mat1 dim 1 must match mat2 dim 0 | I am still grappling with PyTorch, having played with Keras for a while (which feels a lot more intuitive).
Anyway - I have the nn.linear model code below, which works fine for just one input feature, where:
inputDim = 1
I am now trying to expand the same code to include 2 features, and so I have included another column in my feature dataframe and also set:
inputDim = 2
However, when I run the code, I get the dreaded error:
RuntimeError: mat1 dim 1 must match mat2 dim 0
This error references line 63, which is:
outputs = model(inputs)
I have gone through several other posts here relating to this dimensionality error, but I still can't see what is wrong with my code. Any help would be appreciated.
The full code looks like this:
import numpy as np
import pandas as pd
import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt
device = 'cuda' if torch.cuda.is_available() else 'cpu'
df = pd.read_csv('Adjusted Close - BAC-UBS-WFC.csv')
x = df[['BAC', 'UBS']]
y = df['WFC']
# number_of_features = x.shape[1]
# print(number_of_features)
x_train = np.array(x, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
y_train = np.array(y, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
class linearRegression(torch.nn.Module):
def __init__(self, inputSize, outputSize):
super(linearRegression, self).__init__()
self.linear = torch.nn.Linear(inputSize, outputSize)
def forward(self, x):
out = self.linear(x)
return out
inputDim = 2
outputDim = 1
learningRate = 0.01
epochs = 500
# Model instantiation
torch.manual_seed(42)
model = linearRegression(inputDim, outputDim)
if torch.cuda.is_available(): model.cuda()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
# Model training
loss_series = []
for epoch in range(epochs):
# Converting inputs and labels to Variable
inputs = Variable(torch.from_numpy(x_train).cuda())
labels = Variable(torch.from_numpy(y_train).cuda())
    # Clear gradient buffers because we don't want any gradient from the previous epoch to carry forward; we don't want to accumulate gradients
optimizer.zero_grad()
# get output from the model, given the inputs
outputs = model(inputs)
# get loss for the predicted output
loss = criterion(outputs, labels)
loss_series.append(loss.item())
print(loss)
# get gradients w.r.t to parameters
loss.backward()
# update parameters
optimizer.step()
print('epoch {}, loss {}'.format(epoch, loss.item()))
# Calculate predictions on training data
with torch.no_grad(): # we don't need gradients in the testing phase
predicted = model(Variable(torch.from_numpy(x_train).cuda())).cpu().data.numpy()
| General advice: For errors with dimension, it usually helps to print out dimensions at each step of the computation.
Most likely, in this specific case, you have made a mistake in reshaping the input with x_train = x_train.reshape(-1, 1)
Your input is (N,1) but NN expects (N,2).
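A sketch of the fix, based on the code in the question: keep the two feature columns together instead of flattening them into a single column.
x_train = np.array(x, dtype=np.float32)   # already (N, 2) for the two features
# x_train = x_train.reshape(-1, 1)        # remove this line (or use .reshape(-1, 2))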
| https://stackoverflow.com/questions/67142365/ |
pytorch initialize two sub-modules with same weights? | I was wondering if there's a typical way to initialize two parts of one network to the same initial values? Say I have two separate auto-encoders, one for the query and one for the document, and I would like to initialize the weights of these two auto-encoders to the same weights (not sharing the weights).
Thanks!
| I think the easiest way would be to initialize one of the sub-modules at random, save its state_dict, and then load that state_dict into the other module with load_state_dict.
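A minimal sketch of that idea; the encoder architecture and names below are placeholders, since no code was posted in the question:
import copy
import torch.nn as nn

def make_encoder():                       # placeholder architecture
    return nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 128))

query_encoder = make_encoder()
doc_encoder = make_encoder()              # same architecture, different random init
doc_encoder.load_state_dict(copy.deepcopy(query_encoder.state_dict()))
# both modules now start from identical weights but remain independent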
| https://stackoverflow.com/questions/67146226/ |
PyTorch: RuntimeError: The size of tensor a (224) must match the size of tensor b (244) at non-singleton dimension 3 | I want to create and train AutoEncoder to extract features and use that features for the clustering algorithms. Right now I am getting errors while calculating the loss.
RuntimeError: The size of tensor a (224) must match the size of tensor b (244) at non-singleton dimension 3
and a warning
UserWarning: Using a target size (torch.Size([1, 3, 224, 244])) that is different to the input size (torch.Size([1, 3, 224, 224])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
I am using Pytorch.
Can anyone tell me what is wrong with this? In the warning and the error the input and output sizes look the same to me, but it says they are different.
The model summary and the input/output image sizes are as follows
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 16, 112, 112] 448
ReLU-2 [-1, 16, 112, 112] 0
Conv2d-3 [-1, 32, 56, 56] 4,640
ReLU-4 [-1, 32, 56, 56] 0
Conv2d-5 [-1, 64, 18, 18] 100,416
ReLU-6 [-1, 64, 18, 18] 0
Conv2d-7 [-1, 128, 3, 3] 401,536
ReLU-8 [-1, 128, 3, 3] 0
Conv2d-9 [-1, 256, 1, 1] 295,168
ConvTranspose2d-10 [-1, 128, 3, 3] 295,040
ReLU-11 [-1, 128, 3, 3] 0
ConvTranspose2d-12 [-1, 64, 12, 12] 401,472
ReLU-13 [-1, 64, 12, 12] 0
ConvTranspose2d-14 [-1, 24, 28, 28] 75,288
ReLU-15 [-1, 24, 28, 28] 0
ConvTranspose2d-16 [-1, 16, 56, 56] 3,472
ReLU-17 [-1, 16, 56, 56] 0
ConvTranspose2d-18 [-1, 8, 111, 111] 1,160
ReLU-19 [-1, 8, 111, 111] 0
ConvTranspose2d-20 [-1, 3, 224, 224] 603
Sigmoid-21 [-1, 3, 224, 224] 0
================================================================
Total params: 1,579,243
Trainable params: 1,579,243
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 9.94
Params size (MB): 6.02
Estimated Total Size (MB): 16.54
----------------------------------------------------------------
Min Value of input Image = tensor(0.0627)
Max Value of input Image = tensor(0.5098)
Input Image shape = torch.Size([1, 3, 224, 244])
Output Image shape = torch.Size([1, 3, 224, 224])
My Autoencoder class is
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Conv2d(3, 16, 3, stride=2, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.Conv2d(16, 32, 3, stride=2, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.Conv2d(32, 64, 7, stride=3, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.Conv2d(64, 128, 7, stride=5, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.Conv2d(128, 256, 3, stride=5, padding=1) # b, 16, 10, 10
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(256, 128, 3), # b, 16, 5, 5
nn.ReLU(True),
nn.ConvTranspose2d(128, 64, 7,stride=3, padding=1,output_padding=1), # b, 16, 5, 5
nn.ReLU(True),
nn.ConvTranspose2d(64, 24, 7,stride=2, padding=1,output_padding=1), # b, 16, 5, 5
nn.ReLU(True),
nn.ConvTranspose2d(24, 16, 3, stride=2, padding=1,output_padding=1), # b, 8, 15, 15
nn.ReLU(True),
nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1), # b, 1, 28, 28
nn.ReLU(True),
nn.ConvTranspose2d(8,3, 5, stride=2, padding=1,output_padding=1), # b, 1, 28, 28
nn.Sigmoid()
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
and training function is as follow
dataset = DatasetLoader('E:/DAL/Dataset/Images', get_transform(train=True))
torch.manual_seed(1)
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset, indices[-50:])
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=True, num_workers=0)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, shuffle=False, num_workers=0)
model = autoencoder().cuda()
summary(model, (3, 224, 224))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,weight_decay=1e-5)
total_loss = 0
for epoch in range(num_epochs):
for data in data_loader:
# print(data)
img = data
print("Min Value of input Image = ",torch.min(img))
print("Max Value of input Image = ",torch.max(img))
img = Variable(img).cuda()
# ===================forward=====================
output = model(img)
print("Input Image shape = ",img.shape)
print("Output Image shape = ",output.shape)
loss = criterion(output, img)
# ===================backward====================
optimizer.zero_grad()
loss.backward()
optimizer.step()
# ===================log========================
total_loss += loss.data
print('epoch [{}/{}], loss:{:.4f}'
.format(epoch+1, num_epochs, total_loss))
if epoch % 10 == 0:
pic = to_img(output.cpu().data)
save_image(pic, './dc_img/image_{}.png'.format(epoch))
torch.save(model.state_dict(), './conv_autoencoder.pth')
Dataloader Class and transform function is as follow
def get_transform(train):
transforms = []
transforms.append(T.Resize((224,244)))
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
transforms.append(T.RandomVerticalFlip(0.5))
transforms.append(T.ToTensor())
return T.Compose(transforms)
class DatasetLoader(torch.utils.data.Dataset):
def __init__(self, root, transforms=None):
self.root = root
self.transforms = transforms
self.imgs = list(sorted(os.listdir(root)))
def __getitem__(self, idx):
img_path = os.path.join(self.root, self.imgs[idx])
img = Image.open(img_path).convert("RGB")
if self.transforms is not None:
img = self.transforms(img)
return img
def __len__(self):
return len(self.imgs)
| I'm pretty sure you've got a typo in your get_transform function:
transforms.append(T.Resize((224,244)))
You probably wanted to resize it to (224, 224) instead of (224, 244).
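i.e. the corrected line would be:
transforms.append(T.Resize((224, 224)))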
| https://stackoverflow.com/questions/67146363/ |
Pytorch GAN model doesn't train: matrix multiplication error | I'm trying to build a basic GAN to familiarise myself with Pytorch. I have some (limited) experience with Keras, but since I'm bound to do a larger project in Pytorch, I wanted to explore first using 'basic' networks.
I'm using Pytorch Lightning. I think I've added all necessary components. I tried passing some noise through the generator and the discriminator separately, and I think the output has the expected shape. Nonetheless, I get a runtime error when I try to train the GAN (full traceback below):
RuntimeError: mat1 and mat2 shapes cannot be multiplied (7x9 and 25x1)
I noticed that 7 is the size of the batch (by printing out the batch dimensions), even though I specified batch_size to be 64. Other than that, quite honestly, I don't know where to begin: the error traceback doesn't help me.
Chances are, I made multiple mistakes. However, I'm hoping some of you will be able to spot the current error from the code, since the multiplication error seems to point towards a dimensionality problem somewhere. Here's the code.
import os
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
from skimage import io
from torch.utils.data import Dataset, DataLoader, random_split
from torchvision.utils import make_grid
from torchvision.transforms import Resize, ToTensor, ToPILImage, Normalize
class DoppelDataset(Dataset):
"""
Dataset class for face data
"""
def __init__(self, face_dir: str, transform=None):
self.face_dir = face_dir
self.face_paths = os.listdir(face_dir)
self.transform = transform
def __len__(self):
return len(self.face_paths)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
face_path = os.path.join(self.face_dir, self.face_paths[idx])
face = io.imread(face_path)
sample = {'image': face}
if self.transform:
sample = self.transform(sample['image'])
return sample
class DoppelDataModule(pl.LightningDataModule):
def __init__(self, data_dir='../data/faces', batch_size: int = 64, num_workers: int = 0):
super().__init__()
self.data_dir = data_dir
self.batch_size = batch_size
self.num_workers = num_workers
self.transforms = transforms.Compose([
ToTensor(),
Resize(100),
Normalize(mean=(123.26290927634774, 95.90498110733365, 86.03763122875182),
std=(63.20679012922922, 54.86211954409834, 52.31266645797249))
])
def setup(self, stage=None):
# Initialize dataset
doppel_data = DoppelDataset(face_dir=self.data_dir, transform=self.transforms)
# Train/val/test split
n = len(doppel_data)
train_size = int(.8 * n)
val_size = int(.1 * n)
test_size = n - (train_size + val_size)
self.train_data, self.val_data, self.test_data = random_split(dataset=doppel_data,
lengths=[train_size, val_size, test_size])
def train_dataloader(self) -> DataLoader:
return DataLoader(dataset=self.test_data, batch_size=self.batch_size, num_workers=self.num_workers)
def val_dataloader(self) -> DataLoader:
return DataLoader(dataset=self.val_data, batch_size=self.batch_size, num_workers=self.num_workers)
def test_dataloader(self) -> DataLoader:
return DataLoader(dataset=self.test_data, batch_size=self.batch_size, num_workers=self.num_workers)
class DoppelGenerator(nn.Sequential):
"""
Generator network that produces images based on latent vector
"""
def __init__(self, latent_dim: int):
super().__init__()
def block(in_channels: int, out_channels: int, padding: int = 1, stride: int = 2, bias=False):
return nn.Sequential(
nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=stride,
padding=padding, bias=bias),
nn.BatchNorm2d(num_features=out_channels),
nn.ReLU(True)
)
self.model = nn.Sequential(
block(latent_dim, 512, padding=0, stride=1),
block(512, 256),
block(256, 128),
block(128, 64),
block(64, 32),
nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1, bias=False),
nn.Tanh()
)
def forward(self, input):
return self.model(input)
class DoppelDiscriminator(nn.Sequential):
"""
Discriminator network that classifies images in two categories
"""
def __init__(self):
super().__init__()
def block(in_channels: int, out_channels: int):
return nn.Sequential(
nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=2, padding=1,
bias=False),
nn.BatchNorm2d(num_features=out_channels),
nn.LeakyReLU(0.2, inplace=True),
)
self.model = nn.Sequential(
block(3, 64),
block(64, 128),
block(128, 256),
block(256, 512),
nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0, bias=False),
nn.Flatten(),
nn.Linear(25, 1),
nn.Sigmoid()
)
def forward(self, input):
return self.model(input)
class DoppelGAN(pl.LightningModule):
def __init__(self,
channels: int,
width: int,
height: int,
lr: float = 0.0002,
b1: float = 0.5,
b2: float = 0.999,
batch_size: int = 64,
**kwargs):
super().__init__()
# Save all keyword arguments as hyperparameters, accessible through self.hparams.X)
self.save_hyperparameters()
# Initialize networks
# data_shape = (channels, width, height)
self.generator = DoppelGenerator(latent_dim=self.hparams.latent_dim, )
self.discriminator = DoppelDiscriminator()
self.validation_z = torch.randn(8, self.hparams.latent_dim,1,1)
def forward(self, input):
return self.generator(input)
def adversarial_loss(self, y_hat, y):
return F.binary_cross_entropy(y_hat, y)
def training_step(self, batch, batch_idx, optimizer_idx):
images = batch
# Sample noise (batch_size, latent_dim,1,1)
z = torch.randn(images.size(0), self.hparams.latent_dim,1,1)
# Train generator
if optimizer_idx == 0:
# Generate images (call generator -- see forward -- on latent vector)
self.generated_images = self(z)
# Log sampled images (visualize what the generator comes up with)
sample_images = self.generated_images[:6]
grid = make_grid(sample_images)
self.logger.experiment.add_image('generated_images', grid, 0)
# Ground truth result (ie: all fake)
valid = torch.ones(images.size(0), 1)
# Adversarial loss is binary cross-entropy
generator_loss = self.adversarial_loss(self.discriminator(self(z)), valid)
tqdm_dict = {'gen_loss': generator_loss}
output = {
'loss': generator_loss,
'progress_bar': tqdm_dict,
'log': tqdm_dict
}
return output
# Train discriminator: classify real from generated samples
if optimizer_idx == 1:
# How well can it label as real?
valid = torch.ones(images.size(0), 1)
real_loss = self.adversarial_loss(self.discriminator(images), valid)
# How well can it label as fake?
fake = torch.zeros(images.size(0), 1)
fake_loss = self.adversarial_loss(
self.discriminator(self(z).detach()), fake)
# Discriminator loss is the average of these
discriminator_loss = (real_loss + fake_loss) / 2
tqdm_dict = {'d_loss': discriminator_loss}
output = {
'loss': discriminator_loss,
'progress_bar': tqdm_dict,
'log': tqdm_dict
}
return output
def configure_optimizers(self):
lr = self.hparams.lr
b1 = self.hparams.b1
b2 = self.hparams.b2
# Optimizers
opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr, betas=(b1, b2))
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr, betas=(b1, b2))
# Return optimizers/schedulers (currently no scheduler)
return [opt_g, opt_d], []
def on_epoch_end(self):
# Log sampled images
sample_images = self(self.validation_z)
grid = make_grid(sample_images)
self.logger.experiment.add_image('generated_images', grid, self.current_epoch)
if __name__ == '__main__':
# Global parameter
image_dim = 128
latent_dim = 100
batch_size = 64
# Initialize dataset
tfs = transforms.Compose([
ToPILImage(),
Resize(image_dim),
ToTensor()
])
doppel_dataset = DoppelDataset(face_dir='../data/faces', transform=tfs)
# Initialize data module
doppel_data_module = DoppelDataModule(batch_size=batch_size)
# Build models
generator = DoppelGenerator(latent_dim=latent_dim)
discriminator = DoppelDiscriminator()
# Test generator
x = torch.rand(batch_size, latent_dim, 1, 1)
y = generator(x)
print(f'Generator: x {x.size()} --> y {y.size()}')
# Test discriminator
x = torch.rand(batch_size, 3, 128, 128)
y = discriminator(x)
print(f'Discriminator: x {x.size()} --> y {y.size()}')
# Build GAN
doppelgan = DoppelGAN(batch_size=batch_size, channels=3, width=image_dim, height=image_dim, latent_dim=latent_dim)
# Fit GAN
trainer = pl.Trainer(gpus=0, max_epochs=5, progress_bar_refresh_rate=1)
trainer.fit(model=doppelgan, datamodule=doppel_data_module)
Full traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-28805d67d74b>", line 1, in <module>
runfile('/Users/wouter/Documents/OneDrive/Hardnose/Projects/Coding/0002_DoppelGANger/doppelganger/gan.py', wdir='/Users/wouter/Documents/OneDrive/Hardnose/Projects/Coding/0002_DoppelGANger/doppelganger')
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/wouter/Documents/OneDrive/Hardnose/Projects/Coding/0002_DoppelGANger/doppelganger/gan.py", line 298, in <module>
trainer.fit(model=doppelgan, datamodule=doppel_data_module)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 510, in fit
results = self.accelerator_backend.train()
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 57, in train
return self.train_or_test()
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 561, in train
self.train_loop.run_training_epoch()
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 550, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 718, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 485, in optimizer_step
model_ref.optimizer_step(
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1298, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 286, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 144, in __optimizer_step
optimizer.step(closure=closure, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/optim/adam.py", line 66, in step
loss = closure()
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 708, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 806, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 319, in training_step
training_step_output = self.trainer.accelerator_backend.training_step(args)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 62, in training_step
return self._step(self.trainer.model.training_step, args)
File "/usr/local/lib/python3.9/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 58, in _step
output = model_step(*args)
File "/Users/wouter/Documents/OneDrive/Hardnose/Projects/Coding/0002_DoppelGANger/doppelganger/gan.py", line 223, in training_step
real_loss = self.adversarial_loss(self.discriminator(images), valid)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/wouter/Documents/OneDrive/Hardnose/Projects/Coding/0002_DoppelGANger/doppelganger/gan.py", line 154, in forward
return self.model(input)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/usr/local/lib/python3.9/site-packages/torch/nn/functional.py", line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (7x9 and 25x1)
| This multiplication problem comes from the DoppelDiscriminator. There is a linear layer
nn.Linear(25, 1),
that should be
nn.Linear(9, 1),
based on the error message.
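If you would rather not hard-code the flattened size (it depends on the input resolution: the data module resizes to 100, which is what leaves a 3x3 = 9-element feature map here), one alternative sketch is to pool the last feature map down to 1x1 before flattening, reusing the block helper from the question:
self.model = nn.Sequential(
    block(3, 64),
    block(64, 128),
    block(128, 256),
    block(256, 512),
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0, bias=False),
    nn.AdaptiveAvgPool2d(1),   # collapses any remaining spatial size to 1x1
    nn.Flatten(),              # -> (batch, 1), so no Linear layer is needed
    nn.Sigmoid()
)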
| https://stackoverflow.com/questions/67146595/ |
Cannot fix ImportError: cannot import name 'ToTensorV2' from 'albumentations.pytorch' on Colab | I tried all three solutions from this question to solve the problem, but still receive an error:
ImportError: cannot import name 'ToTensorV2' from 'albumentations.pytorch' (/usr/local/lib/python3.7/dist-packages/albumentations/pytorch/__init__.py)
My current code:
!pip install -U albumentations
import albumentations
from albumentations.pytorch import ToTensorV2
| Have you tried factory resetting the runtime? If you haven't, maybe python was still loading the earlier version. I have just tested the accepted solution from the linked question on a fresh colab instance and it worked fine:
!pip install albumentations==0.4.6
import albumentations
from albumentations.pytorch import ToTensorV2
| https://stackoverflow.com/questions/67152132/ |
Why use Variable() in inference? | I am learning PyTorch for an image classification task, and I ran into code where someone used a PyTorch Variable() in their function for prediction:
def predict_image(image):
image_tensor = test_transforms(image).float()
image_tensor = image_tensor.unsqueeze_(0)
input = Variable(image_tensor)
input = input.to(device)
output = model(input)
index = output.data.cpu().numpy().argmax()
return index
Why do they use Variable() here? (even though it works fine without it.)
| You can safely omit it. Variables are a legacy component of PyTorch, now deprecated, that used to be required for autograd:
Variable (deprecated)
WARNING
The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Below please find a quick guide on what has changed:
Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables.
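With that in mind, the prediction function from the question can be simplified to something like this (a sketch that just drops the wrapper):
def predict_image(image):
    image_tensor = test_transforms(image).float().unsqueeze_(0)
    input = image_tensor.to(device)            # no Variable wrapper needed
    output = model(input)
    return output.data.cpu().numpy().argmax()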
| https://stackoverflow.com/questions/67152554/ |
the Training is stopped for my PyTorch model | I am creating a model to identify names of items with an RNN (LSTMs).
I get the data, transform it, create batches, create the model, and create the train function correctly, but the training stops here (it is not working):
This is my code:
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
for x, y in get_batches(data, batch_size, seq_length):
print ("the login the loob get_batches is succressfuly")
counter += 1
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
inputs, targets = torch.from_numpy(x), torch.from_numpy(y)
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, targets.view(batch_size*seq_length))
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
opt.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for x, y in get_batches(val_data, batch_size, seq_length):
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
x, y = torch.from_numpy(x), torch.from_numpy(y)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
inputs, targets = x, y
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output, targets.view(batch_size*seq_length))
val_losses.append(val_loss.item())
net.train() # reset to train mode after iterationg through validation data
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.4f}...".format(loss.item()),
"Val Loss: {:.4f}".format(np.mean(val_losses)))
I don't know why.
Sometimes I got this error after many trials.
Please help me if you can.
You can find the notebook file here.
| You duplicated this step;
remove the duplicate and then try again.
| https://stackoverflow.com/questions/67157753/ |
Fine-tuning model's classifier layer with new label | I would like to fine-tune an already fine-tuned BertForSequenceClassification model with a new dataset containing just 1 additional label which hasn't been seen by the model before.
By that, I would like to add 1 new label to the set of labels that the model is currently able to classify properly.
Moreover, I don't want the classifier weights to be randomly initialized; I'd like to keep them intact and just update them according to the dataset examples while increasing the size of the classifier layer by 1.
The dataset used for further fine-tuning could look like this:
sentece,label
intent example 1,new_label
intent example 2,new_label
...
intent example 10,new_label
My model's current classifier layer looks like this:
Linear(in_features=768, out_features=135, bias=True)
How could I achieve it?
Is it even a good approach?
| You can just extend the weights and bias of your model with new values. Please have a look at the commented example below:
#This is the section that loads your model
#I will just use an pretrained model for this example
import torch
from torch import nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jpcorb20/toxic-detector-distilroberta")
model = AutoModelForSequenceClassification.from_pretrained("jpcorb20/toxic-detector-distilroberta")
#we check the output of one sample to compare it later with the extended layer
#to verify that we kept the previous learnt "knowledge"
f = tokenizer.encode_plus("This is an example", return_tensors='pt')
print(model(**f).logits)
#Now we need to find out the name of the linear layer you want to extend
#The layers on top of distilroberta are wrapped inside a classifier section
#This name can differ for you because it can be chosen randomly
#use model.parameters instead find the classification layer
print(model.classifier)
#The output shows us that the classification layer is called `out_proj`
#We can now extend the weights by creating a new tensor that consists of the
#old weights and a randomly initialized tensor for the new label
model.classifier.out_proj.weight = nn.Parameter(torch.cat((model.classifier.out_proj.weight, torch.randn(1,768)),0))
#We do the same for the bias:
model.classifier.out_proj.bias = nn.Parameter(torch.cat((model.classifier.out_proj.bias, torch.randn(1)),0))
#and be happy when we compare the output with our expectation
print(model(**f).logits)
Output:
tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895]],
grad_fn=<AddmmBackward>)
RobertaClassificationHead(
(dense): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(out_proj): Linear(in_features=768, out_features=6, bias=True)
)
tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895, 2.2124]],
grad_fn=<AddmmBackward>)
| https://stackoverflow.com/questions/67158554/ |
RuntimeError: tensors must be 2-D | I was running a simple MLP network with customized learning algorithms. It worked fine on the training set, but I got this error when I entered additional code to check the test accuracy. How can I fix it?
Test Accuracy code
epochs = 1
for epcoh in range(epochs):
model_bp.eval()
model_fa.eval()
test_loss_bp = 0
correct_bp = 0
test_loss_fa = 0
correct_fa = 0
with torch.no_grad():
for idx_batch, (inputs, targets) in enumerate(test_loader):
output_bp = model_bp(inputs)
output_fa = model_fa(inputs)
# sum up batch loss
test_loss_bp += loss_crossentropy(output_bp, targets).item()
            test_loss_fa += loss_crossentropy(output_fa, targets).item()
# get the index of the max log-probability
## predict_bp = outputs_bp.argmax(dim=1, keepdim=True)
predict_bp = torch.max(output_bp.data,1)[1]
correct_bp += predict_bp.eq(targets.view_as(predict_bp)).sum().item()
predict_fa = torch.max(output_fa.data,1)[1]
correct_fa += predict_fa.eq(targets.view_as(predict_fa)).sum().item()
print('Test set: BP Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\n'.format(test_loss_bp, correct_bp, len(test_loader.dataset),
100. * correct_bp / len(test_loader.dataset)))
print('Test set: FA Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\n'.format(test_loss_fa, correct_fa, len(test_loader.dataset),
100. * correct_fa / len(test_loader.dataset)))
Error
I'm curious about the meaning of 'RuntimeError: tensors must be 2-D'. I would appreciate it if you could tell me why it happened and where the mistake is.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-9b8b6f683e59> in <module>
16 #targets = targets.to(device)
17
---> 18 output_bp = model_bp(inputs)
19 output_fa = model_fa(inputs)
20 # sum up batch loss
~\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
c:\Users\bclab\Desktop\feedback-alignment-pytorch-master\lib\linear.py in forward(self, inputs)
102 """
103 # first layer
--> 104 linear1 = F.relu(self.linear[0](inputs))
105
106 linear2 = self.linear[1](linear1)
~\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
c:\Users\bclab\Desktop\feedback-alignment-pytorch-master\lib\linear.py in forward(self, input)
69 def forward(self, input):
70 # See the autograd section for explanation of what happens here.
---> 71 return LinearFunction.apply(input, self.weight, self.bias)
72
73
c:\Users\bclab\Desktop\feedback-alignment-pytorch-master\lib\linear.py in forward(ctx, input, weight, bias)
11 def forward(ctx, input, weight, bias=None):
12 ctx.save_for_backward(input, weight, bias)
---> 13 output = input.mm(weight.t())
14 if bias is not None:
15 output += bias.unsqueeze(0).expand_as(output)
RuntimeError: tensors must be 2-D
This is my model. fa_linear and linear are customized networks:
# load feedforward dfa model
model_fa = fa_linear.LinearFANetwork(in_features=784, num_layers=2, num_hidden_list=[1000, 10]).to(device)
# load reference linear model
model_bp = linear.LinearNetwork(in_features=784, num_layers=2, num_hidden_list=[1000, 10]).to(device)
# optimizers
optimizer_fa = torch.optim.SGD(model_fa.parameters(),
lr=1e-4, momentum=0.9, weight_decay=0.001, nesterov=True)
optimizer_bp = torch.optim.SGD(model_bp.parameters(),
lr=1e-4, momentum=0.9, weight_decay=0.001, nesterov=True)
loss_crossentropy = torch.nn.CrossEntropyLoss()
# make log file
results_path = 'bp_vs_fa_'
logger_train = open(results_path + 'train_log2.txt', 'w')
linear
from torch.autograd import Function
from torch import nn
import torch
import torch.nn.functional as F
# Inherit from Function
class LinearFunction(Function):
# Note that both forward and backward are @staticmethods
@staticmethod
# bias is an optional argument
def forward(ctx, input, weight, bias=None):
ctx.save_for_backward(input, weight, bias)
output = input.mm(weight.t())
if bias is not None:
output += bias.unsqueeze(0).expand_as(output)
return output
# This function has only a single output, so it gets only one gradient
@staticmethod
def backward(ctx, grad_output):
# This is a pattern that is very convenient - at the top of backward
# unpack saved_tensors and initialize all gradients w.r.t. inputs to
# None. Thanks to the fact that additional trailing Nones are
# ignored, the return statement is simple even when the function has
# optional inputs.
input, weight, bias = ctx.saved_variables
grad_input = grad_weight = grad_bias = None
# These needs_input_grad checks are optional and there only to
# improve efficiency. If you want to make your code simpler, you can
# skip them. Returning gradients for inputs that don't require it is
# not an error.
if ctx.needs_input_grad[0]:
grad_input = grad_output.mm(weight)
if ctx.needs_input_grad[1]:
grad_weight = grad_output.t().mm(input)
if bias is not None and ctx.needs_input_grad[2]:
grad_bias = grad_output.sum(0).squeeze(0)
return grad_input, grad_weight, grad_bias
class Linear(nn.Module):
def __init__(self, input_features, output_features, bias=True):
super(Linear, self).__init__()
self.input_features = input_features
self.output_features = output_features
# nn.Parameter is a special kind of Variable, that will get
# automatically registered as Module's parameter once it's assigned
# as an attribute. Parameters and buffers need to be registered, or
# they won't appear in .parameters() (doesn't apply to buffers), and
# won't be converted when e.g. .cuda() is called. You can use
# .register_buffer() to register buffers.
# nn.Parameters can never be volatile and, different than Variables,
# they require gradients by default.
self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
if bias:
self.bias = nn.Parameter(torch.Tensor(output_features))
else:
# You should always register all possible parameters, but the
# optional ones can be None if you want.
self.register_parameter('bias', None)
# weight initialization
torch.nn.init.kaiming_uniform(self.weight)
torch.nn.init.constant(self.bias, 1)
def forward(self, input):
# See the autograd section for explanation of what happens here.
return LinearFunction.apply(input, self.weight, self.bias)
class LinearNetwork(nn.Module):
def __init__(self, in_features, num_layers, num_hidden_list):
"""
:param in_features: dimension of input features (784 for MNIST)
:param num_layers: number of layers for feed-forward net
:param num_hidden_list: list of integers indicating hidden nodes of each layer
"""
super(LinearNetwork, self).__init__()
self.in_features = in_features
self.num_layers = num_layers
self.num_hidden_list = num_hidden_list
# create list of linear layers
# first hidden layer
self.linear = [Linear(self.in_features, self.num_hidden_list[0])]
# append additional hidden layers to list
for idx in range(self.num_layers - 1):
self.linear.append(Linear(self.num_hidden_list[idx], self.num_hidden_list[idx+1]))
# create ModuleList to make list of layers work
self.linear = nn.ModuleList(self.linear)
def forward(self, inputs):
"""
forward pass, which is same for conventional feed-forward net
:param inputs: inputs with shape [batch_size, in_features]
:return: logit outputs from the network
"""
# first layer
linear1 = F.relu(self.linear[0](inputs))
linear2 = self.linear[1](linear1)
return linear2
fa_linear
import torch
import torch.nn.functional as F
import torch.nn as nn
from torch import autograd
from torch.autograd import Variable
class LinearFANetwork(nn.Module):
"""
Linear feed-forward networks with feedback alignment learning
Does NOT perform non-linear activation after each layer
"""
def __init__(self, in_features, num_layers, num_hidden_list):
"""
:param in_features: dimension of input features (784 for MNIST)
:param num_layers: number of layers for feed-forward net
:param num_hidden_list: list of integers indicating hidden nodes of each layer
"""
super(LinearFANetwork, self).__init__()
self.in_features = in_features
self.num_layers = num_layers
self.num_hidden_list = num_hidden_list
# create list of linear layers
# first hidden layer
self.linear = [LinearFAModule(self.in_features, self.num_hidden_list[0])]
# append additional hidden layers to list
for idx in range(self.num_layers - 1):
self.linear.append(LinearFAModule(self.num_hidden_list[idx], self.num_hidden_list[idx+1]))
# create ModuleList to make list of layers work
self.linear = nn.ModuleList(self.linear)
def forward(self, inputs):
"""
forward pass, which is same for conventional feed-forward net
:param inputs: inputs with shape [batch_size, in_features]
:return: logit outputs from the network
"""
# first layer
linear1 = self.linear[0](inputs)
# second layer
linear2 = self.linear[1](linear1)
return linear2
class LinearFAFunction(autograd.Function):
@staticmethod
# same as reference linear function, but with additional fa tensor for backward
def forward(context, input, weight, weight_fa, bias=None):
context.save_for_backward(input, weight, weight_fa, bias)
output = input.mm(weight.t())
if bias is not None:
output += bias.unsqueeze(0).expand_as(output)
return output
@staticmethod
def backward(context, grad_output):
input, weight, weight_fa, bias = context.saved_variables
grad_input = grad_weight = grad_weight_fa = grad_bias = None
if context.needs_input_grad[0]:
# all of the logic of FA resides in this one line
# calculate the gradient of input with fixed fa tensor, rather than the "correct" model weight
grad_input = grad_output.mm(weight_fa)
if context.needs_input_grad[1]:
# grad for weight with FA'ed grad_output from downstream layer
# it is same with original linear function
grad_weight = grad_output.t().mm(input)
if bias is not None and context.needs_input_grad[3]:
grad_bias = grad_output.sum(0).squeeze(0)
return grad_input, grad_weight, grad_weight_fa, grad_bias
class LinearFAModule(nn.Module):
def __init__(self, input_features, output_features, bias=True):
super(LinearFAModule, self).__init__()
self.input_features = input_features
self.output_features = output_features
# weight and bias for forward pass
# weight has transposed form; more efficient (so i heard) (transposed at forward pass)
self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
if bias:
self.bias = nn.Parameter(torch.Tensor(output_features))
else:
self.register_parameter('bias', None)
# fixed random weight and bias for FA backward pass
# does not need gradient
self.weight_fa = nn.Parameter(Variable(torch.FloatTensor(output_features, input_features), requires_grad=False))
# weight initialization
torch.nn.init.kaiming_uniform(self.weight)
torch.nn.init.kaiming_uniform(self.weight_fa)
torch.nn.init.constant(self.bias, 1)
def forward(self, input):
return LinearFAFunction.apply(input, self.weight, self.weight_fa, self.bias)
| You just need to flatten your input before passing it to your model. Something like this:
# ...
# from [batch_size, 1, 28, 28] <- 4-D
# to [batch_size, 1x28x28] <- 2-D, as expected
flat_inputs = torch.flatten(inputs, start_dim=1)
output_bp = model_bp(flat_inputs)
output_fa = model_fa(flat_inputs)
# ...
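An equivalent alternative, if you prefer reshaping explicitly:
flat_inputs = inputs.reshape(inputs.shape[0], -1)   # also gives a 2-D [batch_size, 784] tensor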
| https://stackoverflow.com/questions/67159259/ |
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [256, 1, 3, 3], but got 3-dimensional input of size [64, 1, 786] instead | I'm trying to combine the CausalConv1d with Conv2d as the encoder of my VAE. But I got this error which is produced on Encoder part. The CausalConv1d is implemented by a nn.Conv1d network, So it should only have 3-dimensional weight, but why the error says expected 4-dimensional? And I have another question, why I can't use a single int but only tuple in Pycharm when I set the "kernel_size", "stride" etc. parameters in a Convs layer? Although the official document said both int and tuple are valid. Here is the traceback:
Traceback (most recent call last):
File "training.py", line 94, in <module>
training(args)
File "training.py", line 21, in training
summary(model, torch.zeros(64, 1, 784), show_input=True, show_hierarchical=False)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\pytorch_model_summary\model_summary.py", line 118, in summary
model(*inputs)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "F:\VAE_reservoir_computing\VAE.py", line 34, in forward
mu, log_var = self.encoder.forward(image)
File "F:\VAE_reservoir_computing\CausalCnn_Reservoir.py", line 30, in forward
x = self.conv1d(x.view(x.shape[0], 1, 784))
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
input = module(input)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "F:\VAE_reservoir_computing\CausalConv1d.py", line 21, in forward
conv1d_out = self.conv1d(x)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\conv.py", line 263, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Anaconda\envs\vae_reservior_computing\lib\site-packages\torch\nn\modules\conv.py", line 260, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [256, 1, 3, 3], but got 3-dimensional input of size [64, 1, 786] instead
Here is my code of the Encoder-Decoder part:
import torch
import torch.nn as nn
from CausalConv1d import CausalConv1d
class CausalReservoirEncoder(nn.Module):
def __init__(self, in_channels, out_channels, num_filters, z_dim, *args):
super(CausalReservoirEncoder, self).__init__()
self.num_filters = num_filters
self.z_dim = z_dim
hidden_filters = num_filters
self.Conv1d = nn.Sequential(
CausalConv1d(in_channels,out_channels,kernel_size=(3,3),dilation=1,A=False),
nn.LeakyReLU()
)
for p in self.parameters():
p.requires_grad = False
self.encoder = nn.Sequential(
nn.Conv2d(out_channels, self.num_filters, kernel_size=(4, 4), padding=(1, 1), stride=(2, 2)), # 28x28 -> 14x14
nn.LeakyReLU(),
nn.Conv2d(hidden_filters, 2 * hidden_filters, kernel_size=(4, 4), padding=(1, 1), stride=(2, 2)), # 14x14 -> 7x7
nn.LeakyReLU(),
nn.Flatten(),
nn.Linear(2*hidden_filters*7*7, self.z_dim)
)
def forward(self, x):
x = self.Conv1d(x.view(x.shape[0], 1, 784))
h_e = self.encoder(x.view(x.shape[0], -1, 28, 28))
mu, log_var = torch.chunk(h_e, 2, dim=1)
return mu, log_var
class CausalReservoirDecoder(nn.Module):
def __init__(self, z_dim, out_channels, num_filters, **kwargs):
super(CausalReservoirDecoder, self).__init__()
self.z_dim = z_dim
self.num_filters = num_filters
hidden_filters = num_filters
self.linear = nn.Sequential(
nn.Linear(self.z_dim, 2*hidden_filters*7*7),
nn.LeakyReLU()
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(2*hidden_filters, hidden_filters, kernel_size=(4,4), padding=(1,1), stride=(2,2)), #7x7 -> 14x14
nn.LeakyReLU(),
nn.ConvTranspose2d(hidden_filters, out_channels, kernel_size=(4,4), padding=(1,1), stride=(2,2)), # 14x14 -> 28x28
nn.Sigmoid()
)
def forward(self, z):
x = self.linear(z)
x = x.reshape(x.shape[0], -1, 7, 7)
x_recon = self.decoder(x)
return x_recon
Here is the implementation of the CausalConv1d:
import torch.nn as nn
import torch.nn.functional as F
class CausalConv1d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation, A=False, *args, **kwargs):
super(CausalConv1d, self).__init__()
self.kernel_size = kernel_size
self.dilation = dilation
self.A = A
self.padding = (kernel_size[0] - 1) * dilation + A * 1
self.conv1d = nn.Conv1d(in_channels,out_channels,self.kernel_size,stride=(1,1),padding=(0,0),dilation=dilation,**kwargs)
def forward(self, x):
x = F.pad(x, (self.padding, 0))
conv1d_out = self.conv1d(x)
if self.A:
return conv1d_out[:,:,: -1]
else:
return conv1d_out
So anyone can give me some suggestions?
| I know this may not be intuitive, but when you use a kernel_size with 2-dim (e.g., (3,3)), then your Conv1d has 4-dim weights. Therefore, to solve your issue, you must change from:
CausalConv1d(in_channels,out_channels,kernel_size=(3,3),dilation=1,A=False),
to:
CausalConv1d(in_channels, out_channels, kernel_size=3, dilation=1, A=False),
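Note that the CausalConv1d class in the question also assumes a tuple: it computes self.padding from kernel_size[0] and passes 2-tuples for stride and padding to nn.Conv1d, so those lines likely need a matching change as well, e.g. (a sketch, assuming kernel_size is now a plain int):
self.padding = (kernel_size - 1) * dilation + A * 1
self.conv1d = nn.Conv1d(in_channels, out_channels, kernel_size,
                        stride=1, padding=0, dilation=dilation, **kwargs)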
| https://stackoverflow.com/questions/67160592/ |
How to reshape a tensor where one dimension is reduced to 1? | I'm trying to reshape a tensor using addition and squeeze/normalization from [30,50,32,64] to [30,50,32,1]. And when I use Tensorflow reshape, I'm getting the following error.
o = tf.reshape(o, shape=[30, 50, 32, 1])
ValueError: Cannot reshape a tensor with 3072000 elements to shape [30,50,32,1] (48000 elements) for 'discriminator/embedding/Reshape_4' (op: 'Reshape') with input shapes: [30,50,32,64], [4] and with input tensors computed as partial shapes: input[1] = [30,50,32,1].
Can someone suggest a way around it?
| I think you are confusing reshape operation with reduction. First of all, reshape has nothing to do with normalisation. By definition, any reshape operation should preserve number and value of tensor's elements.
i.e. it's impossible to reshape 30x50x32x64 (3072000 elements) tensor to 30x50x32x1 tensor (48000 elements).
Check out the docs: https://www.tensorflow.org/api_docs/python/tf/reshape
So if you want to apply reduction operation on last tensor's dimension you can do it like this:
In [1]: import tensorflow as tf
In [2]: data = tf.random.normal(shape=[30, 50, 32, 64])
In [3]: tf.math.reduce_mean(data, axis=-1, keepdims=True).shape
Out[3]: TensorShape([30, 50, 32, 1])
Squeezing/unsqueezing can be viewed as a special case of reshape operation. Where you are removing or adding single tone dimensions.
# squeeze example
In [4]: tf.squeeze(tf.math.reduce_mean(data, axis=-1, keepdims=True)).shape
Out[4]: TensorShape([30, 50, 32])
# unsqueeze example
In [5]: tf.expand_dims(tf.math.reduce_mean(data, axis=-1, keepdims=True), axis=0).shape
Out[5]: TensorShape([1, 30, 50, 32, 1])
| https://stackoverflow.com/questions/67160604/ |
how to store model.state_dict() in a temp var for later use? | I tried to store the state dict of my model in a variable temporarily and wanted to restore it to my model later, but the content of this variable changed automatically as the model updated.
There is a minimal example:
import torch as t
import torch.nn as nn
from torch.optim import Adam
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc = nn.Linear(3, 2)
def forward(self, x):
return self.fc(x)
net = Net()
loss_fc = nn.MSELoss()
optimizer = Adam(net.parameters())
weights = net.state_dict()
print(weights)
x = t.rand((5, 3))
y = t.rand((5, 2))
loss = loss_fc(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(weights)
I thought the two outputs would be the same, but I got (outputs may change due to random initialization)
OrderedDict([('fc.weight', tensor([[-0.5557, 0.0544, -0.2277],
[-0.0793, 0.4334, -0.1548]])), ('fc.bias', tensor([-0.2204, 0.2846]))])
OrderedDict([('fc.weight', tensor([[-0.5547, 0.0554, -0.2267],
[-0.0783, 0.4344, -0.1538]])), ('fc.bias', tensor([-0.2194, 0.2856]))])
The content of weights changed, which is so weird.
I also tried .copy() and t.no_grad() as following, but they did not help.
with t.no_grad():
weights = net.state_dict().copy()
Yes, I know that I can save state dict using t.save(), but I just want to figure out what happened in the previous example.
I'm using Python 3.8.5 and Pytorch 1.8.1
Thanks for any help.
| That's how OrderedDict works. Here's a simpler example:
from collections import OrderedDict
# a mutable variable
l = [1,2,3]
# an OrderedDict with an entry pointing to that mutable variable
x = OrderedDict([("a", l)])
# if you change the list
l[1] = 20
# the change is reflected in the OrderedDict
print(x)
# >> OrderedDict([('a', [1, 20, 3])])
If you want to avoid that, you'll have to do a deepcopy rather than a shallow copy:
from copy import deepcopy
x2 = deepcopy(x)
print(x2)
# >> OrderedDict([('a', [1, 20, 3])])
# now, if you change the list
l[2] = 30
# you do not change your copy
print(x2)
# >> OrderedDict([('a', [1, 20, 3])])
# but you keep changing the original dict
print(x)
# >> OrderedDict([('a', [1, 20, 30])])
As Tensor is also mutable, the same behaviour is expected in your case. Therefore, you can use:
from copy import deepcopy
weights = deepcopy(net.state_dict())
| https://stackoverflow.com/questions/67161171/ |
How to use PyTorch to softmax only the upper triangular elements of a matrix? | Given input like:
tensor([[[1.9392, -1.9266, 0.9664],
[0.0000, -1.9266, 0.9664],
[0.0000, -0.0000, 0.9664]]])
My desired output is:
tensor([[[0.4596, 0.0096, 0.1737],
[0.0000, 0.0096, 0.1737],
[0.0000, -0.0000, 0.1737]]])
I.e. just calculating the function over the upper triangular elements.
| You can access the upper triangular elements with torch.triu_indices:
import torch
import torch.nn.functional as F
from torch import tensor

t = tensor([[1.9392, -1.9266, 0.9664],
            [0.0000, -1.9266, 0.9664],
            [0.0000, -0.0000, 0.9664]])
idx = torch.triu_indices(*t.shape)
soft = F.softmax(t[idx[0], idx[1]], dim=0)
If you want to reassign the values as in your desired output:
>>> t[idx[0], idx[1]] = soft
>>> t
tensor([[0.4596, 0.0096, 0.1737],
[0.0000, 0.0096, 0.1737],
[0.0000, -0.0000, 0.1737]])
| https://stackoverflow.com/questions/67164667/ |
The .data attribute for a dataset in torchvision.datasets doesn't work for ImageFolder? | I can access the training data set of an MNIST object like so:
from torchvision.datasets import MNIST
trainset = MNIST(root='.', train=True, download=True)
print(trainset.data)
However, I'm not able to do the same using ImageFolder. Anybody know the equivalent for ImageFolder?
AttributeError: 'ImageFolder' object has no attribute 'data'
| There isn't any.
Nowadays, everyone assumes that MNIST fits in memory, so it is preloaded into the data attribute. However, this is usually not possible for image datasets. Therefore, the images are loaded on-the-fly, which means there is no data attribute for them. You can access the image paths and labels using self.imgs or self.samples.
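For example (a sketch; the root path is a placeholder):
from torchvision.datasets import ImageFolder

trainset = ImageFolder(root='path/to/images')
print(trainset.samples[:5])   # list of (image_path, class_index) tuples
print(trainset.imgs[:5])      # alias of .samples
print(trainset.targets[:5])   # just the class indices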
| https://stackoverflow.com/questions/67166330/ |
UnboundLocalError: local variable 'labels' referenced before assignment in PyTorch while doing X-Ray classification | I am trying to replicate this Kaggle notebook, using my data. I am not posting the whole code here, because it is huge.
Received an error, which cannot fix myself
----> 2 model, history = run_fold(model, criterion, optimizer, scheduler, device=CFG.device, fold=0, num_epochs=CFG.num_epochs)
1 frames
<ipython-input-63-1ac8b53f265b> in train_model(model, criterion, optimizer, scheduler, num_epochs,
dataloaders, dataset_sizes, device, fold)
23 for inputs, classes in dataloaders[phase]:
24 inputs = inputs.to(CFG.device)
---> 25 labels = labels.to(CFG.device)
26
27 # forward
UnboundLocalError: local variable 'labels' referenced before assignment
My data structure looks the same as the notebook
I have checked similar questions, there are several of them, but this is more tech question how to fix this thing. Maybe something wrong with index? Appreciate any tips.
| There is a typo in the for loop. It should be labels instead of classes:
# ...
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(CFG.device)
labels = labels.to(CFG.device)
# ...
| https://stackoverflow.com/questions/67170615/ |
model.train() and model.eval() causing nan values | Hey, so I am trying my hand at image classification/transfer learning using the monkey species dataset and resnet50 with a modified final fc layer to predict just the 10 classes. Everything is working until I use model.train() and model.eval(); then after the first epoch it starts to return nans and the accuracy drops off, as you'll see below. I'm curious why this only happens when switching to train/eval...?
First I import the model and attach the classifier and freeze the parameters
%%capture
resnet = models.resnet50(pretrained=True)
for param in resnet.parameters():
param.required_grad = False
in_features = resnet.fc.in_features
# Build custom classifier
classifier = nn.Sequential(OrderedDict([('fc1', nn.Linear(in_features, 512)),
('relu', nn.ReLU()),
('drop', nn.Dropout(0.05)),
('fc2', nn.Linear(512, 10)),
]))
# ('output', nn.LogSoftmax(dim=1))
resnet.classifier = classifier
resnet.to(device)
Then setting my loss function, optimizer, and scheduler
# Step : Define criterion and optimizer
criterion = nn.CrossEntropyLoss()
# pass the optimizer to the appended classifier layer
optimizer = torch.optim.SGD(resnet.parameters(), lr=0.01)
# Scheduler
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10], gamma=0.05)
Then setting the training and validation loops
epochs = 20
tr_losses = []
avg_epoch_tr_loss = []
tr_accuracy = []
val_losses = []
avg_epoch_val_loss = []
val_accuracy = []
val_loss_min = np.Inf
resnet.train()
for epoch in range(epochs):
for i, batch in enumerate(train_loader):
# Pull the data and labels from the batch
data, label = batch
# If available push data and label to GPU
if train_on_gpu:
data, label = data.to(device), label.to(device)
# Compute the logit
logit = resnet(data)
# Compte loss
loss = criterion(logit, label)
# Clearing the gradient
resnet.zero_grad()
# Backpropagate the gradients (accumulte the partial derivatives of loss)
loss.backward()
# Apply the updates to the optimizer step in the opposite direction to the gradient
optimizer.step()
# Store the losses of each batch
# loss.item() seperates the loss from comp graph
tr_losses.append(loss.item())
# Detach and store the average accuracy of each batch
tr_accuracy.append(label.eq(logit.argmax(dim=1)).float().mean())
# Print the rolling batch training loss every 20 batches
if i % 40 == 0 and not i == 1:
print(f'Batch No: {i} \tAverage Training Batch Loss: {torch.tensor(tr_losses).mean():.2f}')
# Print the average loss for each epoch
print(f'\nEpoch No: {epoch + 1},Training Loss: {torch.tensor(tr_losses).mean():.2f}')
# Print the average accuracy for each epoch
print(f'Epoch No: {epoch + 1}, Training Accuracy: {torch.tensor(tr_accuracy).mean():.2f}\n')
# Store the avg epoch loss for plotting
avg_epoch_tr_loss.append(torch.tensor(tr_losses).mean())
resnet.eval()
for i, batch in enumerate(val_loader):
# Pull the data and labels from the batch
data, label = batch
# If available push data and label to GPU
if train_on_gpu:
data, label = data.to(device), label.to(device)
# Compute the logits without computing the gradients
with torch.no_grad():
logit = resnet(data)
# Compte loss
loss = criterion(logit, label)
# Store test loss
val_losses.append(loss.item())
# Store the accuracy for each batch
val_accuracy.append(label.eq(logit.argmax(dim=1)).float().mean())
if i % 20 == 0 and not i == 1:
print(f'Batch No: {i+1} \tAverage Val Batch Loss: {torch.tensor(val_losses).mean():.2f}')
# Print the average loss for each epoch
print(f'\nEpoch No: {epoch + 1}, Epoch Val Loss: {torch.tensor(val_losses).mean():.2f}')
# Print the average accuracy for each epoch
print(f'Epoch No: {epoch + 1}, Epoch Val Accuracy: {torch.tensor(val_accuracy).mean():.2f}\n')
# Store the avg epoch loss for plotting
avg_epoch_val_loss.append(torch.tensor(val_losses).mean())
# Checpoininting the model using val loss threshold
if torch.tensor(val_losses).float().mean() <= val_loss_min:
print("Epoch Val Loss Decreased... Saving model")
# save current model
torch.save(resnet.state_dict(), '/content/drive/MyDrive/1. Full Projects/Intel Image Classification/model_state.pt')
val_loss_min = torch.tensor(val_losses).mean()
# Step the scheduler for the next epoch
scheduler.step()
# Print the updated learning rate
print('Learning Rate Set To: {:.5f}'.format(optimizer.state_dict()['param_groups'][0]['lr']),'\n')
The model starts to train but then slowly becomes nan values
Batch No: 0 Average Training Batch Loss: 9.51
Batch No: 40 Average Training Batch Loss: 1.71
Batch No: 80 Average Training Batch Loss: 1.15
Batch No: 120 Average Training Batch Loss: 0.94
Epoch No: 1,Training Loss: 0.83
Epoch No: 1, Training Accuracy: 0.78
Batch No: 1 Average Val Batch Loss: 0.39
Batch No: 21 Average Val Batch Loss: 0.56
Batch No: 41 Average Val Batch Loss: 0.54
Batch No: 61 Average Val Batch Loss: 0.54
Epoch No: 1, Epoch Val Loss: 0.55
Epoch No: 1, Epoch Val Accuracy: 0.81
Epoch Val Loss Decreased... Saving model
Learning Rate Set To: 0.01000
Batch No: 0 Average Training Batch Loss: 0.83
Batch No: 40 Average Training Batch Loss: nan
Batch No: 80 Average Training Batch Loss: nan
| I see that resnet.zero_grad() is after logit = resnet(data), which causes the gradient to explode in your case.
Please do it as below:
# Clearing the gradient
optimizer.zero_grad()
logit = resnet(data)
# Compute loss
loss = criterion(logit, label)
| https://stackoverflow.com/questions/67173932/ |
What is the default settings (e.g. hyperparameters) for MatLab's feedforwardnet? | My code is very simple for one layer of 20 neurons:
FFNN = feedforwardnet(20);
FFNN_trained = train(FFNN,x,y);
This can result in very good performance in a few hundred epochs. I want to reproduce it in PyTorch, so I need to know the details, e.g. learning rate, activation function, optimizer, batch size, when to stop, etc. Also, the data splitting for training/validation/testing seems to be random in feedforwardnet.
Where can I find these details for feedforwardnet? How to specify the training/validation/testing in feedforwardnet?
Thank you for the help. I realise that the levenberg-marquardt method is not available in Pytorch.
| According to the documentation for feedforwardnet, the default setting for this function is to train with Levenberg-Marquardt backpropagation, a.k.a. damped least-squares -- the feedforwardnet(20, 'trainlm') option.
As for the data split, the default seems to be a random 0.7-0.15-0.15 train-validation-test split, using the dividerand function.
From the trainlm page:
trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization.
trainlm is often the fastest backpropagation algorithm in the toolbox, and is highly recommended as a first-choice supervised algorithm, although it does require more memory than other algorithms.
Training occurs according to trainlm training parameters, shown here with their default values:
net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.
net.trainParam.goal — Performance goal. The default value is 0.
net.trainParam.max_fail — Maximum validation failures. The default value is 6.
net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-7.
net.trainParam.mu — Initial mu. The default value is 0.001.
net.trainParam.mu_dec — Decrease factor for mu. The default value is 0.1.
net.trainParam.mu_inc — Increase factor for mu. The default value is 10.
net.trainParam.mu_max — Maximum value for mu. The default value is 1e10.
net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.
net.trainParam.showCommandLine — Generate command-line output. The default value is false.
net.trainParam.showWindow — Show training GUI. The default value is true.
net.trainParam.time — Maximum time to train in seconds. The default value is inf.
Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row. Test vectors are used as a further check that the network is generalizing well, but do not have any effect on training.
From Divide Data for Optimal Neural Network Training:
MATLAB provides 4 built-in functions for splitting data:
Divide the data randomly (default) - dividerand
Divide the data into contiguous blocks - divideblock
Divide the data using an interleaved selection - divideint
Divide the data by index - divideind
You can access or change the division function for your network with this property:
net.divideFcn
Each of the division functions takes parameters that customize its behavior. These values are stored and can be changed with the following network property:
net.divideParam
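If the goal is to mirror the default random 0.7-0.15-0.15 split on the PyTorch side, a minimal sketch (with placeholder data) could be:
import torch
from torch.utils.data import TensorDataset, random_split
dataset = TensorDataset(torch.randn(100, 10), torch.randn(100, 1))  # placeholder data
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.15 * n)
n_test = n - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])
print(len(train_set), len(val_set), len(test_set))  # 70 15 15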
| https://stackoverflow.com/questions/67176333/ |
Is batch normalisation a nonlinear operation? | Can batch normalisation be considered a nonlinear operation like relu or sigmoid?
I have come across a resnet block like:
: IBasicBlock(
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=512)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
so I wonder whether batch normalisation can act as a non-linear operation
| Batch Normalization is non-linear in the data points x_i during training, because each sample is normalized with statistics computed over its mini-batch. During inference the running statistics and scaling parameters are fixed, so it reduces to a per-channel affine (i.e. linear) transformation.
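A quick way to see the inference-time linearity (a minimal check, assuming a randomly initialized layer with default running statistics):
import torch
import torch.nn as nn
bn = nn.BatchNorm2d(4)
bn.eval()  # use the fixed running statistics
x = torch.randn(2, 4, 8, 8)
y = torch.randn(2, 4, 8, 8)
# In eval mode bn(t) = (t - mean) / std * gamma + beta, a fixed affine map,
# so bn(x) - bn(y) equals bn(x - y) - bn(0)
print(torch.allclose(bn(x) - bn(y), bn(x - y) - bn(torch.zeros_like(x)), atol=1e-5))  # True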
| https://stackoverflow.com/questions/67177858/ |
PyTorch DataLoader uses same random seed for batches run in parallel | There is a bug in PyTorch/Numpy where when loading batches in parallel with a DataLoader (i.e. setting num_workers > 1), the same NumPy random seed is used for each worker, resulting in any random functions applied being identical across parallelized batches.
Minimal example:
import numpy as np
from torch.utils.data import Dataset, DataLoader
class RandomDataset(Dataset):
def __getitem__(self, index):
return np.random.randint(0, 1000, 2)
def __len__(self):
return 9
dataset = RandomDataset()
dataloader = DataLoader(dataset, batch_size=1, num_workers=3)
for batch in dataloader:
print(batch)
As you can see, for each parallelized set of batches (3), the results are the same:
# First 3 batches
tensor([[891, 674]])
tensor([[891, 674]])
tensor([[891, 674]])
# Second 3 batches
tensor([[545, 977]])
tensor([[545, 977]])
tensor([[545, 977]])
# Third 3 batches
tensor([[880, 688]])
tensor([[880, 688]])
tensor([[880, 688]])
What is the recommended/most elegant way to fix this? i.e. have each batch produce a different randomization, irrespective of the number of workers.
| It seems this works, at least in Colab:
dataloader = DataLoader(dataset, batch_size=1, num_workers=3,
worker_init_fn = lambda id: np.random.seed(id) )
EDIT:
it produces identical output (i.e. the same problem) when iterated over epochs. – iacob
Best fix I have found so far:
...
dataloader = DataLoader(ds, num_workers= num_w,
worker_init_fn = lambda id: np.random.seed(id + epoch * num_w ))
for epoch in range ( 2 ):
for batch in dataloader:
print(batch)
print()
I still can't suggest a closed form, since the seed depends on a variable (epoch) available only at call time. Ideally it would be something like worker_init_fn = lambda id: np.random.seed(id + EAGER_EVAL(np.random.randint(10000))), where EAGER_EVAL evaluates the seed at loader construction, before the lambda is passed as a parameter. I wonder whether that is possible in Python.
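Another workaround I have seen (hedged, since the exact behaviour may depend on the PyTorch version): seed NumPy from torch.initial_seed() inside each worker. PyTorch draws a fresh base seed for its workers every epoch, so the NumPy seed then changes across both workers and epochs without passing the epoch in manually:
import numpy as np
import torch
from torch.utils.data import DataLoader
def seed_worker(worker_id):
    # torch.initial_seed() already differs per worker and per epoch
    np.random.seed(torch.initial_seed() % 2**32)
dataloader = DataLoader(dataset, batch_size=1, num_workers=3, worker_init_fn=seed_worker)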
| https://stackoverflow.com/questions/67180955/ |
PyTorch gradient doesn't flow through a clone of a tensor | I'm trying to have my model learn a certain function. I have parameters self.a, self.b, self.c that are trainable. I'm trying to force self.b to be in a certain range by using tanh. However, when I run the code it appears as the gradient is flowing through the original parameter (self.b), but not through the clone (self.b_opt)
import torch
import torch.nn as nn
import torch.optim as optim
class model(nn.Module):
def __init__(self):
super(model, self).__init__()
self.a = torch.nn.Parameter(torch.rand(1, requires_grad=True))
self.b = torch.nn.Parameter(torch.rand(1, requires_grad=True))
self.c = torch.nn.Parameter(torch.rand(1, requires_grad=True))
self.b_opt = torch.tanh(self.b.clone())
model_net = model()
#function to learn = 5 * (r > 2) * (3)
optimizer = optim.Adam(model_net.parameters(), lr = 0.1)
for epoch in range(10):
for r in range(5):
optimizer.zero_grad()
loss = 5 * (r > 2) * (3) - model_net.a * torch.sigmoid((r - model_net.b_opt)) * (model_net.c)
loss.backward(retain_graph=True)
optimizer.step()
#print(model_net.a)
print(model_net.b)
print(model_net.b_opt)
#print(model_net.c)
print()
>>> Parameter containing:
tensor([0.4298], requires_grad=True)
tensor([0.7229], grad_fn=<TanhBackward>)
Parameter containing:
tensor([-0.0277], requires_grad=True)
tensor([0.7229], grad_fn=<TanhBackward>)
Parameter containing:
tensor([-0.5007], requires_grad=True)
tensor([0.7229], grad_fn=<TanhBackward>)
| The gradient does actually flow through b_opt since it's the tensor that is involved in your loss function. However, it is not a leaf tensor (it is the result of operations on tensors, specifically a clone and a tanh; you can check with model_net.b_opt.is_leaf), which means it allows gradients to be propagated but does not accumulate them (b_opt.grad does not exist). You cannot update the value of b_opt through backprop.
However the fact that the value of b is updated proves that the gradient does flow ! What you actually need is to recompute b_opt each time b is updated, because here your b_opt is computed once and for all in the init function. That would be something like
...
loss.backward(retain_graph=True)
optimizer.step()
self.b_opt = torch.tanh(self.b.clone())
...
A much simpler way to do all this though would be to get rid of b_opt altogether, and replace your loss with
loss = 5 * (r > 2) * (3) - model_net.a * torch.sigmoid((r - tanh(model_net.b))) * (model_net.c)
| https://stackoverflow.com/questions/67181834/ |
What is the difference between `on_validation_epoch_end` and `validation_epoch_end` in Pytorch-lightning? | Inside a LightningModule, PyCharm allows 2 auto-complete methods:
class MyModel(LightningModule):
def on_validation_epoch_end(self):
def validation_epoch_end(self, outs):
with on_validation_epoch_end referenced in hooks.py
def on_validation_epoch_end(self) -> None:
"""
Called in the validation loop at the very end of the epoch.
"""
# do something when the epoch ends
and
validation_epoch_end called in evaluation_loop.py as eval_results = model.validation_epoch_end(eval_results) leading to __run_eval_epoch_end.
What is the purpose of each of those?
I can only assume one is deprecated. Could not find any relevant docs.
| Here is a pseudocode sketch (reconstructed below) that shows when the hooks are called, and I think it makes it quite explicit that you are right: these two functions are redundant (literally called at the same place with the same arguments), and I would say that validation_epoch_end is the one to be considered deprecated here, since it's not mentioned in the doc, whereas the hooks (of the form on_event_start/end) are extensively explained.
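A rough, hedged reconstruction of that pseudocode (simplified, not the actual Lightning source):
# Simplified sketch of the validation loop (assuming PL ~1.x behaviour)
outputs = []
for batch_idx, batch in enumerate(val_dataloader):
    outputs.append(model.validation_step(batch, batch_idx))
# both of these run at the very end of the validation epoch
model.validation_epoch_end(outputs)   # receives the collected step outputs
model.on_validation_epoch_end()       # hook variant defined in hooks.py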
| https://stackoverflow.com/questions/67182475/ |
importing likelihoods fails | I am trying to get in touch with Gaussian Process Classification and try to reproduce the example from https://docs.gpytorch.ai/en/stable/examples/01_Exact_GPs/GP_Regression_on_Classification_Labels.html
Following their code, I want to import the Dirichlet Classification Likelihood by
from gpytorch.likelihoods import DirichletClassificationLikelihood
However, this seems not to work, and I get the Error:
ImportError: cannot import name 'DirichletClassificationLikelihood'
from 'gpytorch.likelihoods' (/Library/Frameworks/Python.framework/Versions/3.8/
lib/python3.8/site-packages/gpytorch/likelihoods/__init__.py)
also other likelihoods as FixedNoiseGaussianLikelihood does not work.
I tried to reinstall GPyTorch in Conda, which did not help.
However, importing e.g. GaussianLikelihood works without problems. Did somebody have similar problems, or know how to solve this error?
EDIT: It only does not work for Jupyter Notebook.
Best
| Solved it.
Jupyter Notebook's paths did not include the newly updated library. After appending the desired folder to the path with
import sys
sys.path.append("....")
it worked fine.
Best
| https://stackoverflow.com/questions/67182490/ |
How to create a PyTorch mutable tensor? | I'm trying to create a copy of a tensor that will change if the original changes.
r = torch.tensor(1.0, requires_grad=True)
p = r.clone()
print('before')
print(r)
print(p)
r = r*5
print('after')
print(r)
print(p)
>>>
before
tensor(1., requires_grad=True)
tensor(1.)
after
tensor(5., grad_fn=<MulBackward0>)
tensor(1.)
I tried with clone(), detach(), and even simply p=r, nothing worked.
Update.
Tried view:
r = torch.tensor(1.0, requires_grad=True)
p = r.view(1)
print('before')
print(r)
print(p)
r = r*5
print('after')
print(r)
print(p)
>>>before
tensor(1., requires_grad=True)
tensor([1.], grad_fn=<ViewBackward>)
after
tensor(5., grad_fn=<MulBackward0>)
tensor([1.], grad_fn=<ViewBackward>)
| What you're looking for is a view (which is a shallow copy of the tensor); NumPy follows the same convention. The code below contains what you want:
test = torch.tensor([100])
test_copy = test.view(1)
print(test, test_copy) # tensor([100]) tensor([100])
test[0] = 200
test, test_copy # (tensor([200]), tensor([200]))
Edit: Made some changes and found out the problem
r = torch.tensor(1.0, requires_grad=True) # Remove requires_grad if you really don't need it
p = r.view(1)
print('before')
print(r)
print(p)
with torch.no_grad(): # Needed, otherwise the in-place math op fails because gradients are being tracked
# Problem here was that you redeclared a new variable of r so you wiped out the previous reference to the tensor
r.mul_(5) # Simple calculations
print('after')
print(r)
print(p)
| https://stackoverflow.com/questions/67183122/ |
Differentiable convolution between two 1d signals in pytorch | I need to implement a convolution between a signal and a window in PyTorch and I want it to be differentiable. Since I couldn't find an already existing function for tensors (I could only find the ones with learnable parameters), I wrote one myself, but I'm unable to make it work without breaking the computation graph. How could I do it? The function I made is:
def Convolve(a, b):
conv=torch.zeros(a.shape[0], a.shape[1], requires_grad=True).clone()
l=b.shape[0]
r=int((l-1)/2)
l2=a.shape[1]
for x in range(a.shape[0]):#for evry signal
for x2 in range(a.shape[1]):#for every time instant
for x4 in range(l):#compute the convolution (for every window value)
if (x2-r+x4<0 or x2-r+x4>=l2):#if the index is out of bonds the result is 0 (to avoid zero padding)
conv[x][x2]+=0
else:
conv[x][x2-r+x4]+=a[x][x2-r+x4]*b[x4]#otherwise is window*signal
return conv
Where 'a' is a two dimensional tensor (signal index, time) and 'b' is an Hann window.
The length of the window is odd.
| it is (fortunately!) possible to achieve this with pytorch primitives. You are probably looking for functional conv1d. Below is how it works. I was not sure whether you wanted the derivative with respect to the input or the weights, so you have both, just keep the requires_grad that fits you needs :
import torch.nn.functional as F
# batch of 3 signals, each of length 11 with 1 channel
signal = torch.randn(3, 1, 11, requires_grad=True)
# convolution kernel of size 3, expecting 1 input channel and 1 output channel
kernel = torch.randn(1, 1, 3, requires_grad=True)
# convoluting signal with kernel and applying padding
output = F.conv1d(signal, kernel, stride=1, padding=1, bias=None)
# output.shape is (3, 1, 11)
# backpropagating some kind of loss through the computational graph
output.sum().backward()
print(kernel.grad)
>>> torch.tensor([[[...]]])
| https://stackoverflow.com/questions/67185616/ |
Image Net Preprocessing using torch transforms | I am trying to recreate the data preprocessing on the ImageNet data set done in the original publication "Deep Residual Learning for Image Recognition". As said in their paper in section 3.4:
"Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in[256,480]for scale augmentation [41].A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used."
I have figured out the parts of randomly cropping the original image or the horizontal flip with a crop size of 224x224. The other two parts I have not. The other two parts are The image is resized with its shorter side randomly sampled in[256,480] for scale augmentation and the The standard color augmentation in [21] is used.
For the first one, I can't find a "random resize" function in torch transforms. The second, where it's referencing [21], is (according to [21]) to "perform PCA on the set of RGB pixel values throughout the ImageNet training set". Please refer to ImageNet Classification with Deep Convolutional Neural Networks in section "Data Augmentation" for the full explanation.
How would I recreate this type of preprocessing?
| The first one needs 3 combined transforms, RandomChoice, Resize and RandomCrop.
transforms.Compose([transforms.RandomChoice([transforms.Resize(256),
transforms.Resize(480)]),
transforms.RandomCrop(224)
])
For the second one this is what you're looking for but officially Pytorch (and literally everybody else) simply uses this.
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
If you think that's too simple, the standard Tensorflow pre-processing is just
x /= 127.5
x -= 1.
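If you do want to reproduce the AlexNet-style PCA ("fancy") colour augmentation rather than plain normalization, a minimal sketch could look like this (assuming eigvals and eigvecs, the eigenvalues and eigenvectors of the RGB covariance matrix, were precomputed over the training set; those names are placeholders):
import torch
def fancy_pca(img, eigvals, eigvecs, alpha_std=0.1):
    # img: float tensor of shape (3, H, W); eigvals: (3,); eigvecs: (3, 3) with eigenvectors as columns
    alpha = torch.randn(3) * alpha_std       # random per-eigenvector scaling
    rgb_shift = eigvecs @ (alpha * eigvals)  # shape (3,)
    return img + rgb_shift.view(3, 1, 1)     # add the same shift to every pixel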
| https://stackoverflow.com/questions/67185623/ |
index 1 is out of bounds for dimension 0 with size 1 | I am starting to learn about DQN, and I am trying to solve the FrozenLake-v0 problem from scratch by myself using PyTorch, so I will post the whole code since it's all connected.
class LinearDeepQNetwork(nn.Module):
def __init__(self,lr,n_action,input_dim):
super(LinearDeepQNetwork,self).__init__()
self.f1=nn.Linear(input_dim,128)
self.f2=nn.Linear(128,n_action)
self.optimizer=optim.Adam(self.parameters(),lr=lr)
self.loss=nn.MSELoss()
self.device=T.device('cuda' if T.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self,state):
layer1=F.relu(self.f1(state))
actions=self.f2(layer1)
return actions
the second class is the agent, and the problem is in the learning function
class Agent():
def __init__(self,input_dim,n_action,lr,gamma=0.99,
epslion=1.0,eps_dec=1e-5,eps_min=0.01):
self.input_dim=input_dim
self.n_action=n_action
self.lr=lr
self.gamma=gamma
self.epslion=epslion
self.eps_dec=eps_dec
self.eps_min=eps_min
self.action_space=[i for i in range(self.n_action)]
self.Q=LinearDeepQNetwork(self.lr,self.n_action,self.input_dim)
def choose_action(self,observation):
if np.random.random()>self.epslion:
#conveate the state into tensor
state=T.tensor(observation).to(self.Q.device)
actions=self.Q.forward(state)
action=T.argmax(actions).item()
else:
action=np.random.choice(self.action_space)
return action
def decrement_epsilon(self):
self.epslion=self.epslion-self.eps_dec \
if self.epslion > self.eps_min else self.eps_min
def OH(self,x,l):
x = T.LongTensor([[x]])
one_hot = T.FloatTensor(1,l)
return one_hot.zero_().scatter_(1,x,1)
def learn(self,state,action,reward,state_):
self.Q.optimizer.zero_grad()
states=Variable(self.OH(state,16)).to(self.Q.device)
actions=T.tensor(action).to(self.Q.device)
rewards=T.tensor(reward).to(self.Q.device)
state_s=Variable(self.OH(state_,16)).to(self.Q.device)
q_pred=self.Q.forward(states)[actions]
q_next=self.Q.forward(state_s).max()
q_target=reward+self.gamma*q_next
loss=self.Q.loss(q_target,q_pred).to(self.Q.device)
loss.backward()
self.Q.optimizer.step()
self.decrement_epsilon()
Now the problem: when I run the following code it gives me an error in the learning phase: index 1 is out of bounds for dimension 0 with size 1.
env=gym.make('FrozenLake-v0')
n_games=5000
scores=[]
eps_history=[]
agent=Agent(env.observation_space.n,env.action_space.n,0.0001)
for i in tqdm(range(n_games)):
score=0
done=False
obs=env.reset()
while not done:
action=agent.choose_action(obs)
obs_,reward,done,_=env.step(action)
score+=reward
agent.learn(obs,action,reward,obs_)
obs=obs_
scores.append(score)
eps_history.append(agent.epslion)
if i % 100 ==0:
avg_score=np.mean(scores[-100:])
print(f'score={score} avg_score={avg_score} epsilon={agent.epslion} i={i}')
I think the problem is in the shape of the values between the NN and the agent class, but I can't figure out the problem.
Error traceback:
IndexError Traceback (most recent call last)
<ipython-input-10-2e279f658721> in <module>()
17 score+=reward
18
---> 19 agent.learn(obs,action,reward,obs_)
20 obs=obs_
21 scores.append(score)
<ipython-input-8-5359b19ec4fa> in learn(self, state, action, reward, state_)
39 state_s=Variable(self.OH(state_,16)).to(self.Q.device)
40
---> 41 q_pred=self.Q.forward(states)[actions]
42
43 q_next=self.Q.forward(state_s).max()
IndexError: index 1 is out of bounds for dimension 0 with size 1
| Since you are indexing a tensor that contains a matrix (there is a batch dimension of size 1), you need to specify which row you are indexing. In your case, adding [0] to the forward call solves the problem, and in addition [actions] should be replaced with [actions.item()]:
self.Q.forward(states)[0][actions.item()]
| https://stackoverflow.com/questions/67185851/ |
How to efficiently repeat tensor element variable of time in pytorch? | For example, if I have a tensor A = [[1,1,1], [2,2,2], [3,3,3]], and B = [1,2,3]. How do I get C = [[1,1,1], [2,2,2], [2,2,2], [3,3,3], [3,3,3], [3,3,3]], and doing this batch-wise?
My current element-wise solution btw (takes forever...):
def get_char_context(valid_embeds, words_lens):
chars_contexts = []
for ve, wl in zip(valid_embeds, words_lens):
for idx, (e, l) in enumerate(zip(ve, wl)):
if idx ==0:
chars_context = e.view(1,-1).repeat(l, 1)
else:
chars_context = torch.cat((chars_context, e.view(1,-1).repeat(l, 1)),0)
chars_contexts.append(chars_context)
return chars_contexts
I'm doing this to add bert word embedding to a char level seq2seq task...
| Use this:
import torch
# A is your tensor
B = torch.tensor([1, 2, 3])
C = A.repeat_interleave(B, dim = 0)
EDIT:
The above works fine if A is a single 2D tensor. To repeat all (2D) tensors in a batch in the same manner, this is a simple workaround:
A = torch.tensor([[[1, 1, 1], [2, 2, 2], [3, 3, 3]],
[[1, 2, 3], [4, 5, 6], [2,2,2]]]) # A has 2 tensors each of shape (3, 3)
B = torch.tensor([1, 2, 3]) # Rep. of each row of every tensor in the batch
A1 = A.reshape(1, -1, A.shape[2]).squeeze()
B1 = B.repeat(A.shape[0])
C = A1.repeat_interleave(B1, dim = 0).reshape(A.shape[0], -1, A.shape[2])
C is:
tensor([[[1, 1, 1],
[2, 2, 2],
[2, 2, 2],
[3, 3, 3],
[3, 3, 3],
[3, 3, 3]],
[[1, 2, 3],
[4, 5, 6],
[4, 5, 6],
[2, 2, 2],
[2, 2, 2],
[2, 2, 2]]])
As you can see each inside tensor in the batch is repeated in the same manner.
| https://stackoverflow.com/questions/67186064/ |
How does the BERT model select the label ordering? | I'm training BertForSequenceClassification for a classification task. My dataset consists of 'contains adverse effect' (1) and 'does not contain adverse effect' (0). The dataset contains all of the 1s and then the 0s after (the data isn't shuffled). For training I've shuffled my data and get the logits. From what I've understood, the logits are the probability distributions before softmax. An example logit is [-4.673831, 4.7095485]. Does the first value correspond to the label 1 (contains AE) because it appears first in the dataset, or label 0. Any help would be appreciated thanks.
| The first value corresponds to label 0 and the second value corresponds to label 1. What BertForSequenceClassification does is feeding the output of the pooler to a linear layer (after a dropout which I will ignore in this answer). Let's look at the following example:
from torch import nn
from transformers import BertModel, BertTokenizer
t = BertTokenizer.from_pretrained('bert-base-uncased')
m = BertModel.from_pretrained('bert-base-uncased')
i = t.encode_plus('This is an example.', return_tensors='pt')
o = m(**i)
print(o.pooler_output.shape)
Output:
torch.Size([1, 768])
The pooled_output is a tensor of shape [batch_size,hidden_size] and represents the contextualized (i.e. attention was applied) [CLS] token of your input sequences. This tensor is feed to a linear layer to calculate the logits of your sequence:
classificationLayer = nn.Linear(768,2)
logits = classificationLayer(o.pooler_output)
When we normalize these logits we can see that the linear layer predicts that our input should belong to label 1:
print(nn.functional.softmax(logits,dim=-1))
Output (will differ since the linear layer is initialed randomly):
tensor([[0.1679, 0.8321]], grad_fn=<SoftmaxBackward>)
The linear layer applies a linear transformation: y=xA^T+b and you can already see that the linear layer is not aware of your labels. It 'only' has a weights matrix of size [2,768] to produce logits of size [1,2] (i.e.: first row corresponds to the first value and second row to the second):
import torch
logitsOwnCalculation = torch.matmul(o.pooler_output, classificationLayer.weight.transpose(0,1))+classificationLayer.bias
print(nn.functional.softmax(logitsOwnCalculation,dim=-1))
Output:
tensor([[0.1679, 0.8321]], grad_fn=<SoftmaxBackward>)
The BertForSequenceClassification model learns by applying a CrossEntropyLoss. This loss function produces a small loss when the logits for a certain class (label in your case) deviate only slightly from the expectation. That means the CrossEntropyLoss is the one that lets your model learn that the first logit should be high when the input does not contain adverse effect or small when it contains adverse effect. You can check this for our example with the following:
loss_fct = nn.CrossEntropyLoss()
label0 = torch.tensor([0]) #does not contain adverse effect
label1 = torch.tensor([1]) #contains adverse effect
print(loss_fct(logits, label0))
print(loss_fct(logits, label1))
Output:
tensor(1.7845, grad_fn=<NllLossBackward>)
tensor(0.1838, grad_fn=<NllLossBackward>)
| https://stackoverflow.com/questions/67190212/ |
Pytorch - Custom Dataset out of Range | I'm new to PyTorch and DL in general, so I hope this is the right place to ask questions like this.
I wanted to create my first Dataset, but my Dataset always runs out of bounds. The problem is easiest to show with the code and outputs.
class DataframeDataset(torch.utils.data.Dataset):
"""Load Pytorch Dataset from Dataframe
"""
def __init__(self, data_frame, input_key, target_key, transform=None, features=None):
self.data_frame = data_frame
self.input_key = input_key
self.target_key = target_key
self.inputs = self.data_frame[input_key]
self.targets = self.data_frame[target_key]
self.transform = transform
self.features = [input_key, target_key] if features is None else features
self.len = len(self.inputs)
def __len__(self):
return self.len
def __str__(self):
return str(self.info())
def info(self):
info = {
'features': self.features,
'num_rows': len(self)
}
return "Dataset("+ str(info) + ")"
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
data = {
self.input_key: self.inputs[idx],
self.target_key: self.targets[idx]
}
if self.transform:
return self.transform(data)
return data
@staticmethod
def collate_fn(input_key, output_key):
def __call__(batch):
speeches = [data[input_key] for data in batch]
sentences = [data[output_key] for data in batch]
return speeches, sentences
return __call__
with some mook data:
data = [("x1", "y2", "A3"), ("x1", "y2", "b3"), ("x1", "y2", "c3"), ("x1", "y2", "d3")]
df = pd.DataFrame(data, columns=['input', 'target', 'random'])
print(df.head())
input target random
0 x1 y2 A3
1 x1 y2 b3
2 x1 y2 c3
3 x1 y2 d3
ds = DataframeDataset(data_frame=df, input_key="input", target_key="target", transform=None)
print("Len:", len(ds))
print("Ds", ds)
print(ds[0])
Len: 4
Ds Dataset({'features': ['input', 'target'], 'num_rows': 4})
{'input': 'x1', 'target': 'y2'}
So the basic functions seem to work. However, if I want to iterate over the data with a foreach loop, the loop unfortunately does not know the boundaries, so I get a KeyError because PyTorch accesses indices outside the boundary.
for idx, data in enumerate(ds):
print(idx,"->",data)
0 -> {'input': 'x1', 'target': 'y2'}
1 -> {'input': 'x1', 'target': 'y2'}
2 -> {'input': 'x1', 'target': 'y2'}
3 -> {'input': 'x1', 'target': 'y2'}
Traceback (most recent call last):
File "/home/warmachine/.local/lib/python3.8/site-packages/pandas/core/indexes/range.py", line 351, in get_loc
return self._range.index(new_key)
ValueError: 4 is not in range`
If i do something like
for idx in range(0, len(ds)):
data = ds[idx]
print(idx, "->", data)
it works, but I need to be able to use the for-each style so that I can use this Dataset within the Hugging Face Trainer.
Thanks in advance
| If you want to use Foreach loops, you must implement an Iterator function. Here is an example from PyTorch:
https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset
Slightly modified, works for me.
class DataframeDataset(torch.utils.data.Dataset):
...
def __iter__(self):
worker_info = torch.utils.data.get_worker_info()
if worker_info is None:
return map(self.__getitem__, range(self.__len__()))
per_worker = int(math.ceil((self.__len__()) / float(worker_info.num_workers)))
worker_id = worker_info.id
iter_start = worker_id * per_worker
iter_end = min(iter_start + per_worker, self.__len__())
return map(self.__getitem__, range(iter_start, iter_end))
| https://stackoverflow.com/questions/67194795/ |
Remove rows from tensor by matching | I'm trying to do an operation like the following. Suppose there is a tensor in PyTorch:
a = torch.tensor([[1,0]
,[0,1]
,[2,0]
,[3,2]])
b = torch.tensor([[0,1]
,[2,0]])
I want to remove the rows [0,1], [2,0] which are the rows of b from a.
Is there any way to do this?
# result
a = torch.tensor([[1,0]
,[3,2]])
| You could do it if the tensor shapes were broadcastable.
For a tensor a of shape (?, d) and a tensor b of shape (d,), you could write something like:
cmp = a.eq(b).all(dim=1).logical_not(), i.e. compare each d-dimensional row of a with b and give me the indices where the comparison is False.
From these you can then easily your new tensor like so:
a = a[cmp]
I doubt you'll find an elegant way of doing this when b itself contains a batch dimension; your best bet would be to write a for loop.
Full example:
>>> xs = torch.tensor([[1,0], [0,1], [2,0], [3,2]])
>>> ys = torch.tensor([[0,1],[2,0]])
>>> for y in ys:
... xs = xs[xs.eq(y).all(dim=1).logical_not()]
>>> xs
tensor([[1, 0],
[3, 2]])
| https://stackoverflow.com/questions/67196226/ |
How to get multiple lines exported to wandb | I am using the library weights and biases. My model outputs a curve (a time series). I'd like to see how this curve changes throughout training. So, I'd need some kind of slider where I can select epoch and it shows me the curve for that epoch. It could be something very similar to what it's done with histograms (it shows an image of the histograms across epochs and when you hover it display the histogram corresponding to that epoch). Is there a way to do this or something similar using wandb?
Currently my code looks like this:
for epoch in range(epochs):
output = model(input)
#output is shape (37,40) (lenght 40 and I have 37 samples)
#it's enough to plot the first sample
xs = torch.arange(40).unsqueeze(dim=1)
ys = output[0,:].unsqueeze(dim=1)
wandb.log({"line": wandb.plot.line_series(xs=xs, ys=ys,title="Out")}, step=epoch)
I'd appreciate any help! Thanks!
| You can use wandb.log() with matplotlib. Create your plot using matplotlib:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 50)
for i in range(1, 4):
fig, ax = plt.subplots()
y = x ** i
ax.plot(x, y)
wandb.log({'chart': ax})
Then when you look on your wandb dashboard for the run, you will see the plot rendered as plotly plot. Click the gear in the upper left hand corner to see a slider that lets you slide over training steps and see the plot at each step.
| https://stackoverflow.com/questions/67202711/ |
How to change PyTorch sigmoid function to be steeper | My model works when I use torch.sigmoid. I tried to make the sigmoid steeper by creating a new sigmoid function:
def sigmoid(x):
return 1 / (1 + torch.exp(-1e5*x))
But for some reason the gradient doesn't flow through it (I get NaN). Is there a problem in my function, or is there a way to simply change the PyTorch implementation to be steeper (as my function)?
Code example:
def sigmoid(x):
return 1 / (1 + torch.exp(-1e5*x))
a = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.58, requires_grad=True)
c = sigmoid(a-b)
c.backward()
a.grad
>>> tensor(nan)
| The issue seems to be that when the input to your sigmoid implementation is negative, the argument to torch.exp becomes very large, causing an overflow. Using torch.autograd.set_detect_anomaly(True) as suggested here, you can see the error:
RuntimeError: Function 'ExpBackward' returned nan values in its 0th output.
If you really need to use the function you have defined, a possible workaround could be to put a conditional check on the argument (but I am not sure if it would be stable, so I cannot comment on its usefulness):
def sigmoid(x):
if x >= 0:
return 1./(1+torch.exp(-1e5*x))
else:
return torch.exp(1e5*x)/(1+torch.exp(1e5*x))
Here, the expression in the else branch is equivalent to the original function, by multiplying the numerator and denominator by torch.exp(1e5*x). This ensures that the argument to torch.exp is always negative or close to zero.
As noted by trialNerror, the exponent value is so high that except for values extremely close to zero, your gradient will evaluate to zero everywhere else since the actual slope will be extremely small and cannot be resolved by the data type. So if you plan to use it in a network you will likely find it very difficult to learn anything since gradients will almost always be zero. It might be better to select a smaller exponent, depending on your use case.
| https://stackoverflow.com/questions/67203664/ |
PyTorch loads old data when using tensorboard | In using tensorboard I have cleared my data directory and trained a new model but I am seeing images from an old model. Why is tensorboard loading old data, where is it being stored, and how do I remove it?
| TensorBoard was built to keep caches in case a long training run fails: there are "bak"-like files that your board will generate visualizations from. Unfortunately, there is no good practice for manually removing these hidden temp files, as they are not visible when listing files, including ones with the . (dot) prefix, using bash. This memory is self-managed. For best practice: (1) make your TensorBoard run name dynamic for the results of each run; this can be done using the datetime library in combination with an f-string in Python so that the name of each run is stamped with the time. (This can be done right from Python, say a Jupyter notebook, if you import the subprocess package and run your bash command straight from the script.) (2) Additionally, you are strongly advised to save your logdir (log directory) separately from where you are running the code. These two practices together should solve all the problems related to tmp files erroneously populating new results.
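A minimal sketch of practice (1), launching TensorBoard from Python on a time-stamped log directory (paths are placeholders):
import subprocess
from datetime import datetime
from torch.utils.tensorboard import SummaryWriter
logdir = f"logs/run_{datetime.now():%Y%m%d-%H%M%S}"  # unique directory per run
writer = SummaryWriter(log_dir=logdir)
# point tensorboard at this run's directory only, so old runs never mix in
subprocess.Popen(["tensorboard", "--logdir", logdir])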
How to "reset" tensorboard data after killing tensorflow instance
| https://stackoverflow.com/questions/67205281/ |
Deep learning model test accuracy unstable | I am trying to train and test a PyTorch GCN model that is supposed to identify people. But the test accuracy is quite jumpy: it gives, for example, 49% at epoch 23 and then drops to around 45% at epoch 41. So it's not increasing all the time, even though the loss seems to decrease at every epoch.
My question is not about implementation errors; rather, I want to know why this happens. I don't think there is something wrong with my code, as I have seen SOTA architectures exhibit this type of behaviour as well. The authors just pick the best result and publish it, saying that their model gives that result.
Is it normal for the accuracy to be jumpy (up-down) and am I just to take the best ever weights that produce that?
| Accuracy is naturally more "jumpy", as you put it. In terms of accuracy, you have a discrete outcome for each sample - you either get it right or wrong. This makes the results fluctuate, especially if you have a relatively low number of samples (since you then have a higher sampling variance).
On the other hand, the loss function should vary more smoothly. It is based on the probabilities for each class calculated at your softmax layer, which means that they vary continuously. With a small enough learning rate, the loss function should vary monotonically. Any bumps you see are due to the optimization algorithm taking discrete steps, with the assumption that the loss function is roughly linear in the vicinity of the current point.
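A small simulation illustrates the sampling-variance point: with the same underlying accuracy, the measured per-epoch accuracy fluctuates much more when the evaluation set is small (numbers below are made up):
import numpy as np
rng = np.random.default_rng(0)
true_acc = 0.47
for n_samples in (100, 10000):
    # each "epoch" we measure accuracy on n_samples independent predictions
    measured = rng.binomial(n_samples, true_acc, size=50) / n_samples
    print(n_samples, "samples -> std of measured accuracy:", measured.std().round(4))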
| https://stackoverflow.com/questions/67205760/ |
How to load a pre-trained PyTorch model? | I'm following this guide on saving and loading checkpoints. However, something is not right. My model would train and the parameters would correctly update during the training phase. However, there seem to be a problem when I load the checkpoints. That is, the parameters are not being updated anymore.
My model:
import torch
import torch.nn as nn
import torch.optim as optim
PATH = 'test.pt'
class model(nn.Module):
def __init__(self):
super(model, self).__init__()
self.a = torch.nn.Parameter(torch.rand(1, requires_grad=True))
self.b = torch.nn.Parameter(torch.rand(1, requires_grad=True))
self.c = torch.nn.Parameter(torch.rand(1, requires_grad=True))
#print(self.a, self.b, self.c)
def load(self):
try:
checkpoint = torch.load(PATH)
print('\nloading pre-trained model...')
self.a = checkpoint['a']
self.b = checkpoint['b']
self.c = checkpoint['c']
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(self.a, self.b, self.c)
except: #file doesn't exist yet
pass
@property
def b_opt(self):
return torch.tanh(self.b)*2
def train(self):
print('training...')
for epoch in range(3):
print(self.a, self.b, self.c)
for r in range(5):
optimizer.zero_grad()
loss = torch.square(5 * (r > 2) * (3) - model_net.a * torch.sigmoid((r - model_net.b)) * (model_net.c))
loss.backward(retain_graph=True) #accumulate gradients
#checkpoint save
torch.save({
'model': model_net.state_dict(),
'a': model_net.a,
'b': model_net.b,
'c': model_net.c,
'optimizer_state_dict': optimizer.state_dict(),
}, PATH)
optimizer.step()
model_net = model()
optimizer = optim.Adam(model_net.parameters(), lr = 0.1)
print(model_net.a)
print(model_net.b)
print(model_net.c)
This prints
Parameter containing:
tensor([0.4214], requires_grad=True)
Parameter containing:
tensor([0.3862], requires_grad=True)
Parameter containing:
tensor([0.8812], requires_grad=True)
I then run model_net.train() to see that the parameters are being updated and this outputs:
training...
Parameter containing:
tensor([0.9990], requires_grad=True) Parameter containing:
tensor([0.1580], requires_grad=True) Parameter containing:
tensor([0.1517], requires_grad=True)
Parameter containing:
tensor([1.0990], requires_grad=True) Parameter containing:
tensor([0.0580], requires_grad=True) Parameter containing:
tensor([0.2517], requires_grad=True)
Parameter containing:
tensor([1.1974], requires_grad=True) Parameter containing:
tensor([-0.0404], requires_grad=True) Parameter containing:
tensor([0.3518], requires_grad=True)
Running model_net.load() outputs:
loading pre-trained model...
Parameter containing:
tensor([1.1974], requires_grad=True) Parameter containing:
tensor([-0.0404], requires_grad=True) Parameter containing:
tensor([0.3518], requires_grad=True)
And lastly, running model_net.train() again outputs:
training...
Parameter containing:
tensor([1.1974], requires_grad=True) Parameter containing:
tensor([-0.0404], requires_grad=True) Parameter containing:
tensor([0.3518], requires_grad=True)
Parameter containing:
tensor([1.1974], requires_grad=True) Parameter containing:
tensor([-0.0404], requires_grad=True) Parameter containing:
tensor([0.3518], requires_grad=True)
Parameter containing:
tensor([1.1974], requires_grad=True) Parameter containing:
tensor([-0.0404], requires_grad=True) Parameter containing:
tensor([0.3518], requires_grad=True)
Update 1.
Following @jhso suggestion I changed my load to:
def load(self):
try:
checkpoint = torch.load(PATH)
print('\nloading pre-trained model...')
self.load_state_dict(checkpoint['model'])
self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(self.a, self.b, self.c)
except: #file doesn't exist yet
pass
This almost seems to work (the network is training now), but I don't think the optimizer is loading correctly. That is because it doesn't get past the line self.optimizer.load_state_dict(checkpoint['optimizer_state_dict']).
You can see that since it doesn't print(self.a, self.b, self.c) when I run
model_net.load()
| The way you are loading your data is not the recommended way to load your parameters because you're overwriting the graph connections (or something along those lines...). You even save the model state_dict, so why not use it!
I changed the load function to:
def load(self):
try:
checkpoint = torch.load(PATH)
print('\nloading pre-trained model...')
self.load_state_dict(checkpoint['model'])
self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(self.a, self.b, self.c)
self.train()
except: #file doesn't exist yet
pass
But note to do this, you have to add your optimizer to your model:
model_net = model()
optimizer = optim.Adam(model_net.parameters(), lr = 0.1)
model_net.optimizer = optimizer
Which then gave the output (running train, load, train):
Parameter containing:
tensor([0.2316], requires_grad=True) Parameter containing:
tensor([0.4561], requires_grad=True) Parameter containing:
tensor([0.8626], requires_grad=True)
Parameter containing:
tensor([0.3316], requires_grad=True) Parameter containing:
tensor([0.3561], requires_grad=True) Parameter containing:
tensor([0.9626], requires_grad=True)
Parameter containing:
tensor([0.4317], requires_grad=True) Parameter containing:
tensor([0.2568], requires_grad=True) Parameter containing:
tensor([1.0620], requires_grad=True)
loading pre-trained model...
Parameter containing:
tensor([0.4317], requires_grad=True) Parameter containing:
tensor([0.2568], requires_grad=True) Parameter containing:
tensor([1.0620], requires_grad=True)
training...
Parameter containing:
tensor([0.4317], requires_grad=True) Parameter containing:
tensor([0.2568], requires_grad=True) Parameter containing:
tensor([1.0620], requires_grad=True)
Parameter containing:
tensor([0.5321], requires_grad=True) Parameter containing:
tensor([0.1577], requires_grad=True) Parameter containing:
tensor([1.1612], requires_grad=True)
Parameter containing:
tensor([0.6328], requires_grad=True) Parameter containing:
tensor([0.0583], requires_grad=True) Parameter containing:
tensor([1.2606], requires_grad=True)
| https://stackoverflow.com/questions/67205948/ |
Semantic Segmentation metric - making confusion matrix | I want to get mean IoU and pixel accuracy, so I customized the metric function from the Deeplab v3+ GitHub Evaluator class: https://github.com/jfzhang95/pytorch-deeplab-xception.
Currently I am having difficulty writing the confusion matrix of the two images.
Here is my first code block:
for i, row in df.iterrows(): # dataframe have filenames of ground-truth and predicted image
print(" --- Iterrows start --- ")
print("Target Image: \n{}\n{}\n".format(row['Yt'], row['Yp']))
# gt image : aaa.png | target(predicted image) : segmented_aaa.jpg
# Read gt segmap imagefile
gt_image = Image.open(row['Yt']).convert('L')
gt_image = np.array(gt_image)
gt_image = cv2.resize(gt_image, (513, 513))
# Read target(predicted) segmap imagefile
target_image = Image.open(row['Yp']).convert('L')
target_image = np.array(target_image)
target_image = cv2.resize(target_image, (513, 513))
print(gt_image.shape) # (513, 513)
print(target_image.shape) # (513, 513)
print("gt_image:", "\n", gt_image)
print("target_image:", "\n", target_image)
print("gt_image type:", "\n", type(gt_image))
print("target_image type:", "\n", type(target_image))
print("gt_image unique:", "\n", np.unique(gt_image))
print("target_image unique:", "\n", np.unique(target_image))
print(gt_image.shape, target_image.shape)
# mIoU = get_iou(target_image, gt_image, num_classes)
# print(mIoU)
evaluator.add_batch(gt_image, target_image)
print(" --- Iterrows end --- ")
Result of first code block in console log:
(513, 513)
(513, 513)
gt_image:
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
target_image:
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
gt_image type:
<class 'numpy.ndarray'>
target_image type:
<class 'numpy.ndarray'>
gt_image unique:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 77 78 79 80 81 83 84 85 86 87 88 89 90 91 92
93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110
111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128
129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146
147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164
165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182
183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200
201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218
219 220]
target_image unique:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 160]
and Evaluator(second) code block is:
def _generate_matrix(self, gt_image, pre_image):
print(np.unique(gt_image))
mask = (gt_image >= 0) & (gt_image < self.num_class)
print("\nMASK INFO")
print(mask, mask.shape)
print(np.unique(mask))
print()
print("\nGT_IMAGE INFO")
print(gt_image[mask])
print(np.unique(gt_image[mask]))
print(gt_image[mask].astype('int'))
print(set(gt_image[mask].astype('int').tolist()))
print(sorted(set(self.num_class * gt_image[mask].astype('int'))))
print(pre_image[mask], set(pre_image[mask].tolist()))
print()
label = self.num_class * gt_image[mask].astype('int') + pre_image[mask]
# label[10] = 449
print("\nLABEL INFO")
print(label, label.shape, len(label.tolist()))
print(set(label.tolist()))
print(len(set(label.tolist())))
print()
count = np.bincount(label, minlength=self.num_class**2)
print("\nCOUNT INFO")
print(count.shape)
print()
# !!!!!!!!!! ERROR POINT !!!!!!!!!!
confusion_matrix = count.reshape(self.num_class, self.num_class)
return confusion_matrix
def add_batch(self, gt_image, pre_image):
if gt_image.shape != pre_image.shape:
print("GT_Image's shape is different with PRE_IMAGE's shape!")
exit(0)
# assert gt_image.shape == pre_image.shape
self.confusion_matrix += self._generate_matrix(gt_image, pre_image)
Result of second code block in console log:
(513, 513) (513, 513)
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 77 78 79 80 81 83 84 85 86 87 88 89 90 91 92
93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110
111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128
129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146
147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164
165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182
183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200
201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218
219 220]
MASK INFO
[[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]
...
[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]] (513, 513)
GT_IMAGE INFO
[0 0 0 ... 0 0 0]
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
[0, 21, 42, 63, 84, 105, 126, 147, 168, 189, 210, 231, 252, 273, 294, 315, 336, 357, 378, 399, 420]
[0 0 0 ... 0 0 0] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 142, 145, 146}
LABEL INFO
[0 0 0 ... 0 0 0] (175389,) 175389
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 142, 145, 146, 147, 148, 153, 157, 162, 168, 173, 189, 210, 218, 231, 240, 252, 253, 254, 273, 277, 294, 295, 315, 317, 336, 337, 357, 361, 364, 377, 378, 384, 395, 399, 407, 410, 420, 421, 424, 425, 432, 496}
183
COUNT INFO
(497,)
cannot reshape array of size 497 into shape (21,21)
In my opinion, if there is a value like 480 that is greater than 21*21(=441), it will cause an error.
21 is the number of classes in the Pascal VOC dataset.
The ground-truth and predicted image used as input is this image:
[ground-truth image]
[predicted image]
| The important thing is this: save the inference image in PNG format.
In my case the metric code block was fine, but it failed because I had saved the predictions as JPEG. Once I saved the images in PNG format, my code worked fine.
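A minimal sketch of why this matters (hypothetical file names): JPEG is lossy, so the class indices stored in the prediction map get perturbed, while PNG preserves them exactly:
import numpy as np
from PIL import Image
pred = np.random.randint(0, 21, (513, 513), dtype=np.uint8)  # fake class-index map
Image.fromarray(pred).save('pred.png')              # lossless
Image.fromarray(pred).save('pred.jpg', quality=95)  # lossy
png_back = np.array(Image.open('pred.png'))
jpg_back = np.array(Image.open('pred.jpg'))
print((png_back == pred).all())  # True  -> labels intact
print((jpg_back == pred).all())  # False -> labels corrupted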
| https://stackoverflow.com/questions/67207409/ |