instruction | input | output | source
---|---|---|---|
How do I map a tensor using a dictionary? | I'm trying to map a tensor using a dictionary, but envir_with_agent[1,2] returns tensor(4) instead of 4, so the dictionary cannot map it correctly. Minimal code is below:
envir_with_agent = b.mountain.clone()
envir_with_agent[b.position_agent[0], b.position_agent[1]] = 4
print(envir_with_agent[1,2])
print(b.dict_map_display[envir_with_agent[1,2]])
dict
self.dict_map_display={
1:'.',
2:'o',
3:'O',
4:'A',
8:'E',
9:'X'}
error
KeyError Traceback (most recent call
last) in
2 envir_with_agent[b.position_agent[0], b.position_agent[1]] = 4
3 print(envir_with_agent[1,2])
----> 4 print(b.dict_map_display[envir_with_agent[1,2]])
KeyError: tensor(4)
| You would want to first convert the scalar tensor to type int before using it as an index; like this:
envir_with_agent = b.mountain.clone()
envir_with_agent[int(b.position_agent[0]), int(b.position_agent[1])] = 4
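The same conversion applies when using the value as a dictionary key, for example (a sketch using the variable names from the question):
print(b.dict_map_display[int(envir_with_agent[1, 2])])
# or equivalently
print(b.dict_map_display[envir_with_agent[1, 2].item()])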
| https://stackoverflow.com/questions/66224817/ |
Choose 2nd GPU on server | I am running code on a server. There are 2 GPUs there, and the 1st one is busy. Yet, I can't find a way to switch between them. I am using PyTorch, if that is important. The following line of code should be modified:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
Modification may be stated only here.
Thanks.
cuda by default chooses cuda:0; switching to the other GPU can be done through cuda:1.
So, your line becomes:
device = 'cuda:1' if torch.cuda.is_available() else 'cpu'
You can read more about CUDA semantics.
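If you would rather not change device strings at all, another common option (my addition, not from the original answer) is to hide the busy GPU through the environment, so that cuda:0 inside the process maps to the second physical GPU:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before CUDA is initialized

import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'  # 'cuda' / cuda:0 now refers to the 2nd GPU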
| https://stackoverflow.com/questions/66231821/ |
pytorch previous version installation conflict in my local machine | I am trying to install PyTorch and Torchvision with the following conda command in python 2.7 environment
conda install pytorch==1.1.0 torchvision==0.3.0 -c pytorch
But getting this error,
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- torchvision==0.3.0 -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']
Your python: python=2.7
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package vs2008_runtime conflicts for:
python=2.7 -> vc=9 -> vs2008_runtime[version='>=9.0.30729.1,<10.0a0']
pytorch==1.1.0 -> ninja -> vs2008_runtime
python=2.7 -> vs2008_runtime
Package zlib conflicts for:
python=2.7 -> sqlite[version='>=3.30.1,<4.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
torchvision==0.3.0 -> pillow[version='>=4.1.1'] -> zlib[version='1.2.*|1.2.11|1.2.11.*|>=1.2.11,<1.3.0a0|1.2.8']
My conda list for the specific python 2.7 environment is
# Name Version Build Channel
ca-certificates 2021.1.19 haa95532_0
certifi 2020.6.20 pyhd3eb1b0_3
pip 19.3.1 py27_0
python 2.7.18 hfb89ab9_0
setuptools 44.0.0 py27_0
sqlite 3.30.1 h0c8e037_0
vc 9 h7299396_1
vs2008_runtime 9.00.30729.1 hfaea7d5_1
wheel 0.36.2 pyhd3eb1b0_0
wincertstore 0.2 py27hf04cefb_0
I tried to upgrade sqlite to 3.33.0 but it is showing
All requested packages already installed.
And when I tried to uninstall sqlite, the whole Python package for the environment gets uninstalled.
Could someone kindly help me with this?
| PyTorch never made a Windows build of those versions of PyTorch and TorchVision for Python 2.7. One can use conda search for this:
$ conda search 'pytorch[channel=pytorch,subdir=win-64,version=1.1]'
Loading channels: done
# Name Version Build Channel
pytorch 1.1.0 py3.5_cuda100_cudnn7_1 pytorch
pytorch 1.1.0 py3.5_cuda90_cudnn7_1 pytorch
pytorch 1.1.0 py3.6_cuda100_cudnn7_1 pytorch
pytorch 1.1.0 py3.6_cuda90_cudnn7_1 pytorch
pytorch 1.1.0 py3.7_cuda100_cudnn7_1 pytorch
pytorch 1.1.0 py3.7_cuda90_cudnn7_1 pytorch
At minimum you'd have to use Python 3.5, but if you're going to Python 3 and need those specific PyTorch versions, you may as well jump to Python 3.7.
conda create -n pytorch_1_1 -c pytorch python=3.7 pytorch=1.1 torchvision=0.3
| https://stackoverflow.com/questions/66235746/ |
Convert numpy ndarray to PIL and convert it to tensor | def camera(transform):
capture = cv2.VideoCapture(0)
while True:
ret, frame = capture.read()
cv2.imshow('video', frame)
# esc
if cv2.waitKey(1) == 27:
photo = frame
break
capture.release()
cv2.destroyAllWindows()
img = Image.fromarray(cv2.cvtColor(photo, cv2.COLOR_BGR2RGB))
img = img.resize([224, 224], Image.LANCZOS)
if transform is not None:
img = transform(img).unsqueeze(0)
return img
This is my code to get image from the camera,
image_tensor = img.to(device)
And I have an error at the line above...
Traceback (most recent call last):
File "E:/PycharmProjects/ArsElectronica/image_captioning/sample.py", line 126, in <module>
main(args)
File "E:/PycharmProjects/ArsElectronica/image_captioning/sample.py", line 110, in main
caption = Image_Captioning(args, img)
File "E:/PycharmProjects/ArsElectronica/image_captioning/sample.py", line 88, in Image_Captioning
image_tensor = img.to(device)
AttributeError: 'Image' object has no attribute 'to'
The error is like this.
If I have the image as png file and reload it with PIL, it works.
But the one I want is to use the image without saving.
Pls... Someone save me...
| You could convert your PIL.Image to torch.Tensor with torchvision.transforms.ToTensor:
if transform is not None:
    tensor = transform(img).unsqueeze(0)
else:
    tensor = T.ToTensor()(img).unsqueeze(0)
return tensor
having imported torchvision.transforms as T
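With that change, the original call site from the question should work as intended (a sketch):
img = camera(transform)          # now always returns a torch.Tensor
image_tensor = img.to(device)    # no longer raises AttributeError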
| https://stackoverflow.com/questions/66237451/ |
RuntimeError: CUDA out of memory | I got this Error:
RuntimeError: CUDA out of memory
GPU 0; 1.95 GiB total capacity; 1.23 GiB already allocated; 1.27 GiB reserved in total by PyTorch
But it does not look out of memory to me; it seems that PyTorch allocates the wrong amount of memory. I changed the batch size to 1, killed all apps that use GPU memory, and rebooted, and none of that worked.
This is how I run it, please let me know what info I need to fix it, or where should I check? Thank you.
python train.py --img 416 --batch 16 --epochs 1 \
    --data '../data.yaml' --cfg ./models/yolov4-csp.yaml \
    --weights '' --name yolov4-csp-results --cache
Using CUDA device0 _CudaDeviceProperties(name='Quadro P620', total_memory=2000MB)
Namespace(adam=False, batch_size=16, bucket='', cache_images=True, cfg='./models/yolov4-csp.yaml', data='../data.yaml', device='', epochs=1, evolve=False, global_rank=-1, hyp='data/hyp.scratch.yaml', img_size=[416, 416], local_rank=-1, logdir='runs/', multi_scale=False, name='yolov4-csp-results', noautoanchor=False, nosave=False, notest=False, rect=False, resume=False, single_cls=False, sync_bn=False, total_batch_size=16, weights='', world_size=1)
Start Tensorboard with "tensorboard --logdir runs/", view at http://localhost:6006/
Hyperparameters {'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 0.0005, 'giou': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.5, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mixup': 0.0}
Overriding ./models/yolov4-csp.yaml nc=80 with nc=1
from n params module arguments
0 -1 1 928 models.common.Conv [3, 32, 3, 1]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 20672 models.common.Bottleneck [64, 64]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 119936 models.common.BottleneckCSP [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 1463552 models.common.BottleneckCSP [256, 256, 8]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 5843456 models.common.BottleneckCSP [512, 512, 8]
9 -1 1 4720640 models.common.Conv [512, 1024, 3, 2]
10 -1 1 12858368 models.common.BottleneckCSP [1024, 1024, 4]
11 -1 1 7610368 models.common.SPPCSP [1024, 512, 1]
12 -1 1 131584 models.common.Conv [512, 256, 1, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 8 1 131584 models.common.Conv [512, 256, 1, 1]
15 [-1, -2] 1 0 models.common.Concat [1]
16 -1 1 1642496 models.common.BottleneckCSP2 [512, 256, 2]
17 -1 1 33024 models.common.Conv [256, 128, 1, 1]
18 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
19 6 1 33024 models.common.Conv [256, 128, 1, 1]
20 [-1, -2] 1 0 models.common.Concat [1]
21 -1 1 411648 models.common.BottleneckCSP2 [256, 128, 2]
22 -1 1 295424 models.common.Conv [128, 256, 3, 1]
23 -2 1 295424 models.common.Conv [128, 256, 3, 2]
24 [-1, 16] 1 0 models.common.Concat [1]
25 -1 1 1642496 models.common.BottleneckCSP2 [512, 256, 2]
26 -1 1 1180672 models.common.Conv [256, 512, 3, 1]
27 -2 1 1180672 models.common.Conv [256, 512, 3, 2]
28 [-1, 11] 1 0 models.common.Concat [1]
29 -1 1 6561792 models.common.BottleneckCSP2 [1024, 512, 2]
30 -1 1 4720640 models.common.Conv [512, 1024, 3, 1]
31 [22, 26, 30] 1 32310 models.yolo.Detect [1, [[12, 16, 19, 36, 40, 28], [36, 75, 76, 55, 72, 146], [142, 110, 192, 243, 459, 401]], [256, 512, 1024]]
Model Summary: 334 layers, 5.24994e+07 parameters, 5.24994e+07 gradients
Optimizer groups: 111 .bias, 115 conv.weight, 108 other
Scanning labels ../train/labels.cache (78 found, 0 missing, 0 empty, 0 duplicate, for 78 images): 100%| 78/78
Caching images (0.0GB): 100%| 78/78 [00:00<00:00, 305.27it/s]
Scanning labels ../valid/labels.cache (15 found, 0 missing, 0 empty, 0 duplicate, for 15 images): 100%| 15/15
Caching images (0.0GB): 100%| 15/15 [00:00<00:00, 333.01it/s]
Analyzing anchors... anchors/target = 4.64, Best Possible Recall (BPR) = 1.0000
Image sizes 416 train, 416 test
Using 8 dataloader workers
Starting training for 1 epochs...
Epoch gpu_mem GIoU obj cls total targets img_size
0%| | 0/5 [00:04<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 443, in <module>
train(hyp, opt, device, tb_writer)
File "train.py", line 256, in train
pred = model(imgs)
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ctdi/content/ScaledYOLOv4/models/yolo.py", line 109, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/home/ctdi/content/ScaledYOLOv4/models/yolo.py", line 129, in forward_once
x = m(x) # run
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ctdi/content/ScaledYOLOv4/models/common.py", line 47, in forward
return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ctdi/content/ScaledYOLOv4/models/common.py", line 31, in forward
return self.act(self.bn(self.conv(x)))
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "/home/ctdi/anaconda3/envs/scaled-yolov4.03/lib/python3.6/site-packages/torch/nn/functional.py", line 2059, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 0; 1.95 GiB total capacity; 1.23 GiB already allocated; 26.94 MiB free; 1.27 GiB reserved in total by PyTorch)
| I finally found it. The problem was that I was using the new CUDA 11.2, which did not play well with this setup. I removed it and installed CUDA 10.2, and that fixed the problem.
| https://stackoverflow.com/questions/66243852/ |
How to handle Variable Sized inputs at the end of CNN layers before sending to the Loss Function | I am dealing with variable-sized inputs to a CNN and want to know how to feed them to the last FC layer so as to satisfy the requirements of the CrossEntropy loss function. Even if I handle it for one sample, the next sample has different dimensions and can't be used in backpropagation. So I wanted to know one or more ways this can be handled.
(P.S.: Cropping the input to a fixed size is what is currently used; this question is about improving on that.)
| Just place torch.nn.AdaptiveAvgPool2d(S) right between your last conv layer and your first fully connected layer.
Note that your fully connected layer should then take an input dimension of S x S x (no. of channels in the last conv layer).
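A minimal sketch of what that looks like (the layer sizes here are made up for illustration):
import torch
import torch.nn as nn

S = 4  # fixed spatial size produced regardless of the input size

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(S),        # output is always (N, 64, S, S)
    nn.Flatten(),
    nn.Linear(64 * S * S, 10),      # input dim = S x S x channels of last conv
)

# works for variable input sizes:
print(model(torch.randn(1, 3, 100, 120)).shape)  # torch.Size([1, 10])
print(model(torch.randn(1, 3, 57, 83)).shape)    # torch.Size([1, 10])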
| https://stackoverflow.com/questions/66245946/ |
How does torch.einsum perform this 4D tensor multiplication? | I have come across a code which uses torch.einsum to compute a tensor multiplication. I am able to understand the workings for lower order tensors, but, not for the 4D tensor as below:
import torch
a = torch.rand((3, 5, 2, 10))
b = torch.rand((3, 4, 2, 10))
c = torch.einsum('nxhd,nyhd->nhxy', [a,b])
print(c.size())
# output: torch.Size([3, 2, 5, 4])
I need help regarding:
What is the operation that has been performed here (explanation for how the matrices were multiplied/transposed etc.)?
Is torch.einsum actually beneficial in this scenario?
| (Skip to the tl;dr section if you just want the breakdown of steps involved in an einsum)
I'll try to explain how einsum works step by step for this example but instead of using torch.einsum, I'll be using numpy.einsum (documentation), which does exactly the same but I am just, in general, more comfortable with it. Nonetheless, the same steps happen for torch as well.
Let's rewrite the above code in NumPy -
import numpy as np
a = np.random.random((3, 5, 2, 10))
b = np.random.random((3, 4, 2, 10))
c = np.einsum('nxhd,nyhd->nhxy', a,b)
c.shape
#(3, 2, 5, 4)
Step by step np.einsum
Einsum is composed of 3 steps: multiply, sum and transpose
Let's look at our dimensions. We have a (3, 5, 2, 10) and a (3, 4, 2, 10) that we need to bring to (3, 2, 5, 4) based on 'nxhd,nyhd->nhxy'
1. Multiply
Let's not worry about the order the n,x,y,h,d axes are in, and just worry about whether we want to keep them or remove (reduce) them. Let's write them down as a table and see how we can arrange our dimensions -
## Multiply ##
n x y h d
--------------------
a -> 3 5 2 10
b -> 3 4 2 10
c1 -> 3 5 4 2 10
To get the broadcasting multiplication between x and y axis to result in (x, y), we will have to add a new axis at the right places and then multiply.
a1 = a[:,:,None,:,:] #(3, 5, 1, 2, 10)
b1 = b[:,None,:,:,:] #(3, 1, 4, 2, 10)
c1 = a1*b1
c1.shape
#(3, 5, 4, 2, 10) #<-- (n, x, y, h, d)
2. Sum / Reduce
Next, we want to reduce the last axis 10. This will get us the dimensions (n,x,y,h).
## Reduce ##
n x y h d
--------------------
c1 -> 3 5 4 2 10
c2 -> 3 5 4 2
This is straightforward. Let's just do np.sum over axis=-1
c2 = np.sum(c1, axis=-1)
c2.shape
#(3,5,4,2) #<-- (n, x, y, h)
3. Transpose
The last step is rearranging the axis using a transpose. We can use np.transpose for this. np.transpose(0,3,1,2) basically brings the 3rd axis after the 0th axis and pushes the 1st and 2nd. So, (n,x,y,h) becomes (n,h,x,y)
c3 = c2.transpose(0,3,1,2)
c3.shape
#(3,2,5,4) #<-- (n, h, x, y)
4. Final check
Let's do a final check and see if c3 is the same as the c which was generated from the np.einsum -
np.allclose(c,c3)
#True
TL;DR.
Thus, we have implemented the 'nxhd , nyhd -> nhxy' as -
input -> nxhd, nyhd
multiply -> nxyhd #broadcasting
sum -> nxyh #reduce
transpose -> nhxy
Advantage
The advantage of np.einsum over the multiple separate steps is that you can choose the "path" that it takes to do the computation and perform multiple operations with the same function. This can be done via the optimize parameter, which will optimize the contraction order of the einsum expression.
A non-exhaustive list of these operations, which can be computed by einsum, is shown below along with examples:
Trace of an array, numpy.trace.
Return a diagonal, numpy.diag.
Array axis summations, numpy.sum.
Transpositions and permutations, numpy.transpose.
Matrix multiplication and dot product, numpy.matmul numpy.dot.
Vector inner and outer products, numpy.inner numpy.outer.
Broadcasting, element-wise and scalar multiplication, numpy.multiply.
Tensor contractions, numpy.tensordot.
Chained array operations, inefficient calculation order, numpy.einsum_path.
Benchmarks
%%timeit
np.einsum('nxhd,nyhd->nhxy', a,b)
#8.03 µs ± 495 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%%timeit
np.sum(a[:,:,None,:,:]*b[:,None,:,:,:], axis=-1).transpose(0,3,1,2)
#13.7 µs ± 1.42 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
It shows that np.einsum does the operation faster than individual steps.
| https://stackoverflow.com/questions/66255238/ |
How can I limit the range of parameters in pytorch? | So normally in pytorch, there is no strict limit to the parameters in models, but what if I wanted them to stay in the range [0,1]? Is there a way to block the update of parameters to outside that range?
| A trick used in some generative adversarial networks (some of which require the parameters of the discriminator to be within a certain range) is to clamp the values after every gradient update. For example:
model = YourPyTorchModule()
for _ in range(epochs):
    loss = ...
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for p in model.parameters():
        p.data.clamp_(-1.0, 1.0)
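An equivalent way to write the clamping step that avoids touching .data directly (my addition, not part of the original answer):
with torch.no_grad():
    for p in model.parameters():
        p.clamp_(-1.0, 1.0)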
| https://stackoverflow.com/questions/66258464/ |
obtain matrix from a tensor in pytorch with arbitrary number of trailing dimensions | I have a pytorch tensor with an arbitrary number of dimensions : ...X,Y,Z
I would like to have a function such that I give a number C, and I get the ...,C,Y,Z
my_matrix = [:,:,C,:,:]
But I don't know how many dimensions come before C. I saw an answer using tuples of slices but can't seem to get it to work.
Partial slices in pytorch / numpy with arbitrary and variable number of dimensions
| I think ellipsis will do the job:
t = torch.randn(2, 3, 6, 5, 9, 3)
t[..., 4, :, :]
u = torch.randn(11, 4, 2, 7)
u[..., 2, :, :].shape
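For reference, the shapes these produce (my annotation, given the tensors defined above):
t[..., 4, :, :].shape   # torch.Size([2, 3, 6, 9, 3])
u[..., 2, :, :].shape   # torch.Size([11, 2, 7])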
| https://stackoverflow.com/questions/66259201/ |
Casting Pytorch's tensor elements the type "float" instead of "double" | I had a matrix saved as a numpy type, call it "X_before" (for example, its shape is 100*30).
Since I want to feed it to an AutoEncoder using Pytorch library, I converted it to torch.tensor like this:
X_tensor = torch.from_numpy(X_before, dtype=torch)
Then, I got the following error:
expected scalar type Float but found Double
Next, I tried to make the elements "float" and then convert them to torch.tensor:
X_before = X_before.astype(float)
X_tensor = torch.from_numpy(X_before)
Again, the same error happens.
How should I solve this issue? How can I convert the type of elements in a torch.tensor object to another type?
Thanks in advance
| The easiest way:
X_tensor = torch.tensor(X_before, dtype=torch.float32)
You can see the list of types here:
https://pytorch.org/docs/stable/tensors.html
You can change the type:
X_tensor=X_tensor.type(torch.float64)
(Note that float64 is double precision, while float32 is the standard single-precision float.)
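If you prefer to keep torch.from_numpy, an equivalent fix (a sketch, assuming numpy is imported as np) is to cast before or after the conversion:
X_tensor = torch.from_numpy(X_before.astype(np.float32))
# or
X_tensor = torch.from_numpy(X_before).float()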
| https://stackoverflow.com/questions/66260320/ |
Pytorch: Reshape 1d to 2d target | Apologies if this has been answered already but I couldn't find it.
I have a binary classification problem which I have been using CrossEntropyLoss for which expects the following input and target tensor dimensions:
I have switched to using BCEWithLogitsLoss with pre-calculated class weights to address a label imbalance problem. The issue is BCEWithLogitsLoss expects the following dimensions:
How would I go about shaping my 1D tensor, ie. tensor([[0, 0, 0, ..., 1, 0, 0]]) to the shape of my inputs which is (No. of X samples, 2)? I have tried .unsqueeze(1) but this gives me (#X, 1). To clarify, my input shape in the prior problem was [32,2] with target shape [32] as per above documentation and I am looking for [32,2] for both input and target dims.
| I ended up figuring the problem out, didn't realise there was built-in one-hot encoding functionality in Pytorch.
I used the line torch.nn.functional.one_hot(targets) which reshaped my target variable to torch.Size([32, 2]).
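A small sketch of that (note that BCEWithLogitsLoss also expects floating-point targets, so a cast may be needed; variable names here are illustrative):
import torch.nn.functional as F

targets = torch.randint(0, 2, (32,))                     # shape [32]
targets_2d = F.one_hot(targets, num_classes=2).float()   # shape [32, 2]
loss = torch.nn.BCEWithLogitsLoss()(logits, targets_2d)  # logits: shape [32, 2]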
| https://stackoverflow.com/questions/66260394/ |
Pytorch calculate hit ratio of predictions on batch level | So I would like to calculate my hit ratio at K for a recommendation problem I'm dealing with. For those who don't know, hit ratio at K means whether the true sample class was among the top K classes predicted by the model. I would like to do this on a batch level.
So If I had the following code
import torch
import torch.nn as nn
import numpy as np
# 5 samples of 5 dimensions
yhat = torch.tensor([[4, 9, 7, 4, 0],
[8, 1, 3, 1, 0],
[9, 8, 4, 4, 8],
[0, 9, 4, 7, 8],
[8, 8, 0, 1, 4]])
# I get the top 3 classes of each sample recommendation
values, indices = torch.topk(yhat, 3)
# print(indices)
# tensor([[1, 2, 0],
# [0, 2, 1],
# [0, 1, 4],
# [1, 4, 3],
# [1, 0, 4]])
# True sample classes
ytrue = torch.tensor([[2],  # hits on top 3
                      [5],  # doesn't hit on top 3
                      [4],  # hits on top 3
                      [0],  # doesn't hit on top 3
                      [1]]) # hits on top 3
I would like to calculate the hit ratio at top 3 (HR3) here. HR3 == 1 means all my predictions contained the true sample class in their top 3, while 0 means none did.
For the given example I hit 3 out of 5. Result should be 3/5
What is the most efficient way on pytorch to do this?
| Try:
(indices == ytrue.reshape(-1,1)).any(1)
Output:
tensor([ True, False, True, False, True])
If you want the ratio, convert the tensor to float and mean:
(indices == ytrue.reshape(-1,1)).any(1).float().mean()
# tensor(0.6000)
| https://stackoverflow.com/questions/66260676/ |
Pytorch Lightning duplicates main script in ddp mode | When I launch my main script on the cluster with ddp mode (2 GPU's), Pytorch Lightning duplicates whatever is executed in the main script, e.g. prints or other logic. I need some extended training logic, which I would like to handle myself. E.g. do something (once!) after Trainer.fit(). But with the duplication of the main script, this doesn't work as I intend. I also tried to wrap it in if __name__ == "__main__", but it doesn't change behavior. How could one solve this problem? Or, how can I use some logic around my Trainer object, without the duplicates?
| I have since moved on to use the native "ddp" with multiprocessing in PyTorch. As far as I understand, PytorchLightning (PTL) is just running your main script multiple times on multiple GPU's. This is fine if you only want to fit your model in one call of your script. However, a huge drawback in my opinion is the lost flexibility during the training process. The only way of interacting with your experiment is through these (badly documented) callbacks. Honestly, it is much more flexible and convenient to use native multiprocessing in PyTorch. In the end it was so much faster and easier to implement, plus you don't have to search for ages through PTL documentation to achieve simple things.
I think PTL is going in a good direction by removing much of the boilerplate; however, in my opinion, the Trainer concept needs some serious rework. It is too closed in my opinion and violates PTL's own concept of "reorganizing PyTorch code, keep native PyTorch code".
If you want to use PTL for easy multi GPU training, I personally would strongly suggest to refrain from using it, for me it was a waste of time, better learn native PyTorch multiprocessing.
| https://stackoverflow.com/questions/66261729/ |
Finding the index of the peak point (the first True) in a boolean tensor mask | After running a Detectron2 model in PyTorch, Detectron2 gives me the object masks that it finds as a (true/false) tensor. There are 33 objects found in the image, so I have torch.Size([33, 683, 1024]).
tensor([[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]], device='cuda:0')
This is great so far. But I need the peak coordinates along the y dimension (height) for those 33 objects. (Let's say the object is a balloon; then I need the top of the balloon as an (x,y) point.)
Any idea how I can get the peak point coordinates as fast as possible?
thanks in advance
| I had iterated through each dimension and checked for the True condition, but it took minutes to find the indexes.
Then I used the torch.where method, which finds all the indexes that meet the condition instantly.
for maskCounter in range(masks.shape[0]):
print((torch.where(masks[maskCounter] == True)[0][0]).item(), (torch.where(masks[maskCounter] == True)[1][0]).item())
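As a note on what those indexes are (my annotation): torch.where on a 2-D mask returns a tuple of (row indices, column indices) in row-major order, so the two printed values are the y and x of the topmost True pixel of each mask, e.g.:
ys, xs = torch.where(masks[maskCounter])
peak_y, peak_x = ys[0].item(), xs[0].item()   # topmost (then leftmost) True pixel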
| https://stackoverflow.com/questions/66262500/ |
Embedding in PyTorch creates embedding with norm larger than max_norm | Suppose we have an embedding matrix of 10 vectors with dimension of 100, and we impose max_norm=1:
x = Embedding(num_embeddings=10, embedding_dim=100, max_norm=1)
In principle, every embedding should have norm less or equal to 1. However, when I print the vector norms, I get values much greater than 1:
for w in x.weight:
print(torch.norm(w))
> tensor(11.1873, grad_fn=<CopyBackwards>)
> tensor(10.5264, grad_fn=<CopyBackwards>)
> tensor(9.6809, grad_fn=<CopyBackwards>)
> tensor(9.7507, grad_fn=<CopyBackwards>)
> tensor(10.7940, grad_fn=<CopyBackwards>)
> tensor(11.4134, grad_fn=<CopyBackwards>)
> tensor(9.7021, grad_fn=<CopyBackwards>)
> tensor(10.4027, grad_fn=<CopyBackwards>)
> tensor(10.1210, grad_fn=<CopyBackwards>)
> tensor(10.4552, grad_fn=<CopyBackwards>)
Any particular reason why this happens and how to fix it?
| The max_norm argument bounds the norm of the embedding, but not the norm of the weights.
To better understand this, you can run the following example:
from torch import LongTensor, norm
from torch.nn import Embedding
sentences = LongTensor([[1,2,4,5],[4,3,2,9]])
embedding = Embedding(num_embeddings=10, embedding_dim=100, max_norm=1)
for sentence in embedding(sentences):
for word in sentence:
print(norm(word))
This works by rescaling, at lookup time, any embedding vector whose norm exceeds max_norm so that its norm equals max_norm. In your example max_norm=1, hence it's equivalent to dividing by the norm.
To answer the question you asked in the comment: you can obtain the embedding of a sentence (a vector containing word indexes taken from your dictionary) with embedding(sentences), and the norms using the two for loops above.
| https://stackoverflow.com/questions/66262652/ |
Crash when trying to export PyTorch model to ONNX: forward() missing 1 required positional argument | I'm trying to convert pyTorch model to onnx like this:
torch.onnx.export(
model=modnet.module,
args=example_input,
f=ONNX_PATH, # where should it be saved
verbose=False,
export_params=True,
do_constant_folding=False,
input_names=['input'],
output_names=['output']
)
modnet is a model from this repo: https://github.com/ZHKKKe/MODNet
example_input is a Tensor of shape [1, 3, 512, 512]
During converting I received that error:
TypeError: forward() missing 1 required positional argument: 'inference'
This is my clone of Colab notebook to reproduce exception: https://colab.research.google.com/drive/1AE1VAXIXkm26krIOoBaFfhoE53hhuEdf?usp=sharing
Save me please! :)
| MODNet's forward method requires a parameter called inference, which is a boolean; indeed, when the model is trained they pass it in this way:
# forward the main model
pred_semantic, pred_detail, pred_matte = modnet(image, False)
So here what you have to do is modify your example_input like this:
example_input = (example_input, True)
| https://stackoverflow.com/questions/66263259/ |
Why am I getting the error ValueError: Expected input batch_size (4) to match target batch_size (64)? | Why am I getting the error ValueError: Expected input batch_size (4) to match target batch_size (64)?
Is it something to do with an incorrect number of channels(?) in the first linear layer? In this example I have 128 * 4 * 4 as the input size.
I have tried looking online and on this site for the answer but I have not been able to find it. So, I asked here.
Here is the network:
class Net(nn.Module):
"""A representation of a convolutional neural network comprised of VGG blocks."""
def __init__(self, n_channels):
super(Net, self).__init__()
# VGG block 1
self.conv1 = nn.Conv2d(n_channels, 64, (3,3))
self.act1 = nn.ReLU()
self.pool1 = nn.MaxPool2d((2,2), stride=(2,2))
# VGG block 2
self.conv2 = nn.Conv2d(64, 64, (3,3))
self.act2 = nn.ReLU()
self.pool2 = nn.MaxPool2d((2,2), stride=(2,2))
# VGG block 3
self.conv3 = nn.Conv2d(64, 128, (3,3))
self.act3 = nn.ReLU()
self.pool3 = nn.MaxPool2d((2,2), stride=(2,2))
# Fully connected layer
self.f1 = nn.Linear(128 * 4 * 4, 1000)
self.act4 = nn.ReLU()
# Output layer
self.f2 = nn.Linear(1000, 10)
self.act5 = nn.Softmax(dim=1)
def forward(self, X):
"""This function forward propagates the input."""
# VGG block 1
X = self.conv1(X)
X = self.act1(X)
X = self.pool1(X)
# VGG block 2
X = self.conv2(X)
X = self.act2(X)
X = self.pool2(X)
# VGG block 3
X = self.conv3(X)
X = self.act3(X)
X = self.pool3(X)
# Flatten
X = X.view(-1, 128 * 4 * 4)
# Fully connected layer
X = self.f1(X)
X = self.act4(X)
# Output layer
X = self.f2(X)
X = self.act5(X)
return X
Here is the training loop:
def training_loop(
n_epochs,
optimizer,
model,
loss_fn,
train_loader):
for epoch in range(1, n_epochs + 1):
loss_train = 0.0
for i, (imgs, labels) in enumerate(train_loader):
outputs = model(imgs)
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train += loss.item()
if epoch == 1 or epoch % 10 == 0:
print('{} Epoch {}, Training loss {}'.format(
datetime.datetime.now(),
epoch,
loss_train / len(train_loader)))
| That's because you're getting the dimensions wrong. From the error and your comment, I take it that your input is of the shape (64, 1, 28, 28).
Now, the shape of X at X = self.pool3(X) is (64, 128, 1, 1), which you then reshaped on the next line to (4, 128 * 4 * 4).
Long story short, the output of your model is (4, 10), i.e. batch_size 4, which you're comparing on the line loss = loss_fn(outputs, labels) with a tensor of batch_size 64, as the error said.
I don't know what you're trying to do but I'm guessing that you'd want to change this line self.f1 = nn.Linear(128 * 4 * 4, 1000) to this self.f1 = nn.Linear(128 * 1 * 1, 1000)
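If you go that route, note (my addition) that the flatten step in forward has to be changed to match, otherwise the same mismatch reappears:
X = X.view(-1, 128 * 1 * 1)   # must agree with self.f1 = nn.Linear(128 * 1 * 1, 1000)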
| https://stackoverflow.com/questions/66264564/ |
Looking for a pytorch function to repeat a vector | I am looking for a pytorch function that is similar to tf's tile function.
I saw that PyTorch used to have a tile function, but apparently it was removed.
An example for the functionality I am looking for:
Let's say I have a tensor of dimensions (1,1,1,1000), I want to repeat it several times so I get a (1,40,40,1000) tensor.
| Torch tensors have a repeat() method, therefore:
a = torch.rand((1, 1, 1, 1000))
b = a.repeat(1, 40, 40, 1)
b.shape # Gives torch.Size([1, 40, 40, 1000])
| https://stackoverflow.com/questions/66266543/ |
How to get torch::Tensor shape | If we << a torch::Tensor
#include <torch/script.h>
int main()
{
torch::Tensor input_torch = torch::zeros({2, 3, 4});
std::cout << input_torch << std::endl;
return 0;
}
we see
(1,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
(2,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
[ CPUFloatType{2,3,4} ]
How to get the tensor shape (that 2,3,4)? I searched https://pytorch.org/cppdocs/api/classat_1_1_tensor.html?highlight=tensor for an API call but couldn't find one. And I searched for the operator<< overload code, and also couldn't find it.
| You can use the tensor's sizes() method:
IntArrayRef sizes()
It's the equivalent of shape in Python. Furthermore, you can access the size at a given axis (dimension) by invoking size(dim) on the tensor, e.g. input_torch.size(0). Both functions are on the API page you linked.
| https://stackoverflow.com/questions/66268993/ |
Extracting hidden features from Autoencoders using Pytorch | Following the tutorials in this post, I am trying to train an autoencoder and extract the features from its hidden layer.
So here are my questions:
In the autoencoder class, there is a "forward" function. However, I cannot see anywhere in the code that this function is called. So how does it get trained?
My question above is because I feel that if I want to extract the features, I should add another function ("forward_hidden") to the autoencoder class:
def forward(self, features):
#print("in forward")
#print(type(features))
activation = self.encoder_hidden_layer(features)
activation = torch.relu(activation)
code = self.encoder_output_layer(activation)
code = torch.relu(code)
activation = self.decoder_hidden_layer(code)
activation = torch.relu(activation)
activation = self.decoder_output_layer(activation)
reconstructed = torch.relu(activation)
return reconstructed
def forward_hidden(self, features):
activation = self.encoder_hidden_layer(features)
activation = torch.relu(activation)
code = self.encoder_output_layer(activation)
code = torch.relu(code)
return code
Then, after training, which means after this line in the main code:
print("AE, epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs_AE, loss))
I can put the following code to retrieve the features from the hidden layer:
hidden_features = model_AE.forward_hidden(my_input)
Is this way correct? Still, I am wondering how the "forward" function was used for training, because I cannot see anywhere in the code where it is called.
| forward is the essence of your model and actually defines what the model does.
It is implicitly called with model(input) during training.
If you are asking how to extract intermediate features after running the model, you can register a forward hook as described here, which will "catch" the values for you.
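A minimal sketch of such a hook, assuming the model_AE / encoder_output_layer names from the question:
features = {}

def hook(module, inputs, output):
    features['code'] = output.detach()

handle = model_AE.encoder_output_layer.register_forward_hook(hook)
_ = model_AE(my_input)              # forward is called implicitly here
hidden_features = features['code']  # note: the layer's raw output, before the relu applied in forward
handle.remove()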
| https://stackoverflow.com/questions/66271710/ |
how to see the data in DataLoader in pytorch | I see something like the following in the examples on Github.
How can I see the type of this data (shape and the other properties)?
train_data = MyDataset(int(1e3), length=50)
train_iterator = DataLoader(train_data, batch_size=1000, shuffle=True)
| You can inspect the data with following statements:
data = train_iterator.dataset.data
shape = train_iterator.dataset.data.shape
datatype = train_iterator.dataset.data.dtype
You can iterate the data and feed to a network as:
for nth_batch, (batch, _) in enumerate(train_iterator):
    feedable = Variable(batch)
    # here goes the neural-net part
As Ivan stated in the comments, Variable is deprecated (although it still works fine) and Tensor itself now supports autograd, so the batch can be fed to the network directly:
for nth_batch, (batch, _) in enumerate(train_iterator):
    # feed the batch forward through the network
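Another common way to quickly inspect a single batch, independent of the dataset's internals (my addition):
xb, yb = next(iter(train_iterator))
print(xb.shape, xb.dtype, yb.shape)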
| https://stackoverflow.com/questions/66272911/ |
HuggingFace - GPT2 Tokenizer configuration in config.json | The finetuned GPT2 model is uploaded to huggingface-models for inference.
The error below is observed during inference:
Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'bala1802/model_1_test'. Make sure that: - 'bala1802/model_1_test' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bala1802/model_1_test' is the correct path to a directory containing relevant tokenizer files
Below is the configuration - config.json file for the Finetuned huggingface model,
{
"_name_or_path": "gpt2",
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.3.2",
"use_cache": true,
"vocab_size": 50257
}
Should I configure the GPT2 tokenizer in the config.json file, similar to "model_type": "gpt2"?
| Your repository does not contain the required files to create a tokenizer. It seems like you have only uploaded the files for your model. Create an object of the tokenizer that you used for training the model and save the required files with save_pretrained():
from transformers import GPT2Tokenizer
t = GPT2Tokenizer.from_pretrained("gpt2")
t.save_pretrained('/SOMEFOLDER/')
Output:
('/SOMEFOLDER/tokenizer_config.json',
'/SOMEFOLDER/special_tokens_map.json',
'/SOMEFOLDER/vocab.json',
'/SOMEFOLDER/merges.txt',
'/SOMEFOLDER/added_tokens.json')
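After uploading those files to the bala1802/model_1_test repository alongside the model files, loading should then work (a sketch):
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("bala1802/model_1_test")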
| https://stackoverflow.com/questions/66276186/ |
How to read the predicted label of a Neural Network with Cross Entropy Loss? | I am using a neural network to predict the quality of the Red Wine dataset, available on the UCI Machine Learning repository, using PyTorch with cross-entropy loss as the loss function.
This is my code:
input_size = len(input_columns)
hidden_size = 12
output_size = 6 #because there are 6 classes
#Loss function
loss_fn = F.cross_entropy
class WineQuality(nn.Module):
def __init__(self):
super().__init__()
# input to hidden layer
self.linear1 = nn.Linear(input_size, hidden_size)
# hidden layer and output
self.linear2 = nn.Linear(hidden_size, output_size)
def forward(self, xb):
out = self.linear1(xb)
out = F.relu(out)
out = self.linear2(out)
return out
def training_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calcuate loss
loss = loss_fn(out,torch.argmax(targets, dim=1))
return loss
def validation_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calculate loss
loss = loss_fn(out, torch.argmax(targets, dim=1))
return {'val_loss': loss.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result, num_epochs):
# Print result every 100th epoch
if (epoch+1) % 100 == 0 or epoch == num_epochs-1:
print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))
model = WineQuality()
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result, epochs)
history.append(result)
return history
loss_value = evaluate(model, valid_dl)
#model=WineQuality()
epochs = 1000
lr = 1e-5
history = fit(epochs, lr, model, train_loader, val_loader)
I can see that the model is good and that the loss decreases. The problem is when I have to do a prediction on an example:
def predict_single(input, target, model):
inputs = input.unsqueeze(0)
predictions = model(inputs)
prediction = predictions[0].detach()
print("Input:", input)
print("Target:", target)
print("Prediction:", prediction)
return prediction
input, target = val_df[1]
prediction = predict_single(input, target, model)
This returns:
Input: tensor([0.8705, 0.3900, 2.1000, 0.0650, 4.1206, 3.3000, 0.5300, 0.2610])
Target: tensor([6.])
Prediction: tensor([ 3.6465, 0.2800, -0.4561, -1.6733, -0.6519, -0.1650])
I want to see what these logits are associated with; I know that the highest logit is associated with the predicted class, but I want to see that class. I also applied softmax to rescale these values into probabilities:
prediction = F.softmax(prediction)
print(prediction)
output = model(input.unsqueeze(0))
_,pred = output.max(1)
print(pred)
And the output is the following:
tensor([0.3296, 0.1361, 0.1339, 0.1324, 0.1335, 0.1346])
tensor([0])
I don't know what that tensor([0]) is. I expected my predicted label, a value like 6.1 if the target is 6, but I am not able to obtain this.
| First of all, Lets review the way you are calculating loss. From your code:
loss = loss_fn(out,torch.argmax(targets, dim=1))
you are using the torch.argmax function, which expects the targets to have size torch.Size([num_samples, num_classes]), or torch.Size([32, 6]) in your case. Are you sure your training labels are compatible with this size? From your writing I understand that you are reading the label class as a number (from 3 to 8). So, its size is torch.Size([32, 1]), and when you call torch.argmax on it, torch.argmax always returns 0.
That's why the model is learning to predict the class 0 whatever the input is.
Now, your class labels (for training) run from 3 to 8. Unfortunately, if we use these labels with your loss_fn or torch.nn.CrossEntropyLoss(), they would require 9 classes in total (class 0 to class 8), since the maximum class label is 8. So, you need to transform 3..8 -> 0..5. For the loss calculation use:
loss = loss_fn(out, targets - 3)
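Once the model is trained with the shifted labels, the predicted class index can be mapped back to a wine-quality score (my addition, following the same 3..8 -> 0..5 shift):
output = model(input.unsqueeze(0))
pred_class = output.argmax(dim=1)            # index in 0..5
predicted_quality = pred_class.item() + 3    # back on the original 3..8 scale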
| https://stackoverflow.com/questions/66282355/ |
Pytorch transforms.Compose usage for pair of images in segmentation tasks | I'm trying to use the transforms.Compose() in my segmentation task. But I'm not sure how to use the same (almost) random transforms for both the image and the mask.
So in my segmentation task, I have the raw picture and the corresponding mask, and I'd like to generate more randomly transformed image pairs for training purposes. Meaning, if I do some transform on my raw pictures, this transformation should also happen on my mask pictures, and then this pair can go into my CNN. My transform is something like:
train_transform = transforms.Compose([
transforms.Resize(512), # resize, the smaller edge will be matched.
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomRotation(90),
transforms.RandomResizedCrop(320,scale=(0.3, 1.0)),
AddGaussianNoise(0., 1.),
transforms.ToTensor(), # convert a PIL image or ndarray to tensor.
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) # normalize to Imagenet mean and std
])
mask_transform = transforms.Compose([
transforms.Resize(512), # resize, the smaller edge will be matched.
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomRotation(90),
transforms.RandomResizedCrop(320,scale=(0.3, 1.0)),
##---------------------!------------------
transforms.ToTensor(), # convert a PIL image or ndarray to tensor.
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) # normalize to Imagenet mean and std
])
Notice that in the code block I added a class that adds random noise to the raw image transformation, which is not in the mask transformation: I want my mask images to follow the raw image transformation but ignore the random noise. So how can these two transformations happen in pairs (with the same random parameters)?
| This seems to have an answer here: How to apply same transform on a pair of picture.
Basically, you can use the torchvision functional API to get a handle to the randomly generated parameters of a random transform such as RandomCrop. Then call torchvision.transforms.functional.crop() on both images with the same parameter values. It seems a bit lengthy but gets the job done. You can skip some transforms on some images, as per your need.
Another option that I've seen elsewhere is to re-seed the random generator with the same seed, to force generation of the same random transformations twice. I would think that such implementations are hacky and keep changing with pytorch versions (e.g. whether to re-seed np.random, random, or torch.manual_seed() ?)
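A minimal sketch of the functional-API approach for one crop plus one flip (my own illustration; extend it with the other transforms you need):
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def paired_transform(image, mask):
    # same random crop parameters for both
    i, j, h, w = T.RandomCrop.get_params(image, output_size=(320, 320))
    image, mask = TF.crop(image, i, j, h, w), TF.crop(mask, i, j, h, w)
    # same random flip decision for both
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    # image-only steps (e.g. noise, normalization) would go here, applied to image only
    return image, mask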
| https://stackoverflow.com/questions/66284850/ |
PyTorch Lightning: Multiple scalars (e.g. train and valid loss) in same Tensorboard graph | With PyTorch Tensorboard I can log my train and valid loss in a single Tensorboard graph like this:
writer = torch.utils.tensorboard.SummaryWriter()
for i in range(1, 100):
writer.add_scalars('loss', {'train': 1 / i}, i)
for i in range(1, 100):
writer.add_scalars('loss', {'valid': 2 / i}, i)
How can I achieve the same with Pytorch Lightning's default Tensorboard logger?
def training_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> Tensor:
inputs_batch, labels_batch = batch
outputs_batch = self(inputs_batch)
loss = self.criterion(outputs_batch, labels_batch)
self.log('loss/train', loss.item()) # creates separate graph
return loss
def validation_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> None:
inputs_batch, labels_batch = batch
outputs_batch = self(inputs_batch)
loss = self.criterion(outputs_batch, labels_batch)
self.log('loss/valid', loss.item(), on_step=True) # creates separate graph
| The docs describe it as self.logger.experiment.some_tensorboard_function(), where some_tensorboard_function is one of the functions provided by TensorBoard's SummaryWriter, so for your question you want to use
self.logger.experiment.add_scalars()
Tensorboard doc for pytorch-lightning can be found here
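For example, inside the steps from the question (a sketch, assuming the default TensorBoardLogger):
def training_step(self, batch, _batch_idx):
    ...
    self.logger.experiment.add_scalars('loss', {'train': loss.item()}, self.global_step)
    return loss

def validation_step(self, batch, _batch_idx):
    ...
    self.logger.experiment.add_scalars('loss', {'valid': loss.item()}, self.global_step)
Both series end up under the same 'loss' tag, so they are drawn in one graph, just like with the plain SummaryWriter example.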
| https://stackoverflow.com/questions/66287075/ |
How to implement fractionally strided convolution layers in pytorch? | Before everything, I searched Google and Stack Overflow but did not find any similar questions, so here I propose a new one.
I'm interested in this paper and want to implement this SGAN for my project. The paper mentions that its generator network is composed of "a stack of fractionally strided convolution layers". I found two different ways of implementing this in PyTorch; one is:
torch.nn.Sequential(
# other layers...
torch.nn.ConvTranspose2d(),
# other layers...
)
the other way is:
torch.nn.Sequential(
# other layers...
torch.nn.Upsample(scale_factor=2),
torch.nn.Conv2D(),
# other layers...
)
So, my question is, which is the better implementation of fractionally strided conv layer, or am I understanding something completely wrong?
Thanks in advance.
P.S, I found the second implementation here, in line 87 - 88.
| tldr; There are some shape constraints but both perform the same operations.
The output shape of nn.ConvTranspose2d is given by y = (x - 1)s - 2p + d(k - 1) + p_out + 1, where x and y are the input and output shapes, respectively, k is the kernel size, s the stride, d the dilation, and p and p_out the padding and output padding. Here we keep things simple with s=1, p=0, p_out=0, d=1.
Therefore, the output shape of the transposed convolution is:
y = x - 1 + k
If we look at an upsample (x2) with convolution. Using the same notation as before, the output of nn.Conv2d is given by: y = floor((x + 2p - d(k - 1) - 1) / s + 1). After upsampling x is sized 2x. We keep the dilation at d=1.
y = floor((2x + 2p - k) / s + 1)
If we want to match the output shape of the transposed convolution, we need to have x - 1 + k = floor((2x + 2p - k) / s + 1). This relation will define the values to choose for s and p for our convolution.
Taking a simple example for demonstration: k=2. Now x + 1 needs to be equal to floor((2x + 2p - k) / s + 1), which is solved by setting s=2 and p=1.
Here is the same example in a visual form.
transposed convolution
upsample + convolution
| https://stackoverflow.com/questions/66287552/ |
How to get step-wise validation loss curve over all epochs in PyTorch Lightning | When logging my validation loss inside validation_step() in PyTorch Lighnting like this:
def validation_step(self, batch: Tuple[Tensor, Tensor], _batch_index: int) -> None:
inputs_batch, labels_batch = batch
outputs_batch = self(inputs_batch)
loss = self.criterion(outputs_batch, labels_batch)
self.log('loss (valid)', loss.item())
Then, I get an epoch-wise loss curve:
If I want the step-wise loss curve I can set on_step=True:
def validation_step(self, batch: Tuple[Tensor, Tensor], _batch_index: int) -> None:
inputs_batch, labels_batch = batch
outputs_batch = self(inputs_batch)
loss = self.criterion(outputs_batch, labels_batch)
self.log('loss', loss.item(), on_step=True)
This results in step-wise loss curves for each epoch:
How can I get a single graph over all epochs instead? When running my training for thousands of epochs this gets messy.
| It seems that something went wrong when initializing your logger. Is it defined as follows:
logger = TensorBoardLogger("tb_logs", name="my_model")
Note that on_step will modify your tag, which is one reason why they show up as separate graphs.
Instead of using on_step you can use:
self.logger.experiment.add_scalar('name',metric)
If you want the plots x axis to show number of epochs instead of steps you can place the logger within validation_epoch_end(self, outputs).
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
self.logger.experiment.add_scalar('loss',avg_loss, self.current_epoch)
| https://stackoverflow.com/questions/66290662/ |
Is there a way to set attribute of attribute? | I am currently trying to assign an attribute, consider the following example:
net = timm.create_model(model_name='regnetx_002', pretrained=False)
net.head.fc = torch.nn.Linear(1111, 2)
This seems to work just fine, however, some models have different attributes, like the following
net.classifier = torch.nn.Linear(1111,2)
Therefore, I am trying to set the attribute dynamically. I have already managed to retrieve the names of the attributes, like "head" and "fc", but it seems that you cannot do something like:
setattr(net, "head.fc", torch.nn.Linear(1111, 2))
as it will create one new attribute "head.fc" instead. I think there is a misunderstanding in my knowledge, as I noticed that when I print the model's last layer, it looks like:
(head): ClassifierHead(
(global_pool): SelectAdaptivePool2d (pool_type=avg, flatten=True)
(fc): Linear(in_features=368, out_features=1000, bias=True)
)
)
Is there any way to achieve what I want dynamically?
| setattr(net, "head.fc", torch.nn.Linear(1111, 2))
will create head.fc attribute on net.
I guess you want to create
setattr(net.head, "fc", torch.nn.Linear(1111, 2))
or dynamically:
setattr(getattr(net, "head"), "fc", torch.nn.Linear(1111, 2)).
If you want an even more general nested dynamic attribute, you'll need to build some helper functions for this, as described here.
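One such helper could look like this (a sketch, not from the original answer):
from functools import reduce

def set_nested_attr(obj, dotted_name, value):
    *parents, last = dotted_name.split('.')
    parent = reduce(getattr, parents, obj)
    setattr(parent, last, value)

set_nested_attr(net, "head.fc", torch.nn.Linear(1111, 2))
set_nested_attr(net, "classifier", torch.nn.Linear(1111, 2))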
| https://stackoverflow.com/questions/66290691/ |
reading signed ints from openCV in/to libtorch tensor c++ | I have a cv::Mat of type CV_32SC3; it stores both positive and negative values.
When converting it to a tensor, the values are messed up:
cout << in_img << endl;
auto tensor_image = torch::from_blob(in_img.data, {1, in_img.rows, in_img.cols, 3}, torch::kByte);
The in_img has negative values, but after printing out tensor_image, the values are totally different from in_img.
The negative values are gone (it somehow seems to normalise everything into the 0-255 range). I tried converting to Long like so:
auto tensor_image = torch::from_blob(in_img.data, {1, in_img.rows, in_img.cols, 3}, torch::kLong);
but when I print the values like so, I get seg fault:
std::cout << "tensor_image: " << tensor_image << " values." << std::endl;
so, I tried looking at just the first element like so:
std::cout << "input_tensor[0][0][0][0]: " << tensor_image[0][0][0][0] << " values." << std::endl;
and the value is not the same as I see in the python implementation :((
| The type 32SC3 means that your data are 32-bit (4-byte) signed integers, i.e. ints. The PyTorch kByte type means unsigned char (1 byte, values between 0 and 255). Therefore you are actually reading a matrix of ints as if it were a matrix of uchars.
Try with
auto tensor_image = torch::from_blob(in_img.data, {1, in_img.rows, in_img.cols, 3}, torch::kInt32);
The conversion to kLong was bound to fail because long means int64, so there are just not enough bytes in your OpenCV int32 matrix to read it as an int64 matrix of the same size.
| https://stackoverflow.com/questions/66291615/ |
Translate Conv2D from PyTorch code to Tensorflow | I have the following PyTorch layer definition:
self.e_conv_1 = nn.Sequential(
nn.ZeroPad2d((1, 2, 1, 2)),
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(5, 5), stride=(2, 2)),
nn.LeakyReLU(),
)
I want to have the same exact layer declaration in Tensorflow. How can I get to that?
self.e_conv_1 = tf.keras.Sequential([
layers.Conv2D(64, kernel_size=(5, 5), activation=partial(tf.nn.leaky_relu, alpha=0.01), padding='same', strides=(1, 2))
])
Should it be something like the code above? I think that at least the strides and padding aren't the same.
Thanks in advance to anyone who helps.
| I think you can use the layers in this way, according to the TensorFlow documentation:
tf.keras.Sequential([
    layers.ZeroPadding2D(padding=((1, 2), (1, 2))),
    layers.Conv2D(64, kernel_size=(5, 5), activation=partial(tf.nn.leaky_relu, alpha=0.01),
                  padding='valid', strides=(2, 2))
])
The main difference lies in how torch and TensorFlow express their zero-padding arguments.
In torch the padding arguments are:
m = nn.ZeroPad2d((left, right, top, bottom))
in tensorflow:
tf.keras.layers.ZeroPadding2D(padding=((top,bottom),(left,right)))
| https://stackoverflow.com/questions/66293554/ |
How can I prevent PyTorch from making little changes to my assigned values | PyTorch makes little changes to my assigned values, which causes really different results in my neural network. E.g.:
a = [234678.5462495405945]
b = torch.tensor(a)
print(b.item())
The output is:
234678.546875
The little change PyTorch made to my variable a caused an entirely different result in my neural network. My neural network is a very sensitive one. How can I prevent PyTorch from making little changes to assigned values?
| Your question is pretty broad; you haven't shown us your network. That means none of us can address the real issue. But the code sample you show has a more limited scope: why is PyTorch changing my floats?
PyTorch by default uses single-precision floating point (nowadays called binary32). Python by default uses double-precision floating point (nowadays called binary64). When you convert from a Python float to a PyTorch FloatTensor, you lose precision. (This is called rounding.)
If you want, you can specify the data type, but then your entire network will have to be converted to binary64.
Just for your example:
import torch
a = 234678.5462495405945
b = torch.tensor(a, dtype=torch.float64)
print(b.item())
# 234678.54624954058
If your network is that sensitive, you probably have bigger problems. You're likely vastly overfitted, or you're too focused on one training example. A lot of work on quantizing networks and showing performance curves as you use lower-precision numbers has been done.
| https://stackoverflow.com/questions/66294138/ |
Scalar type in torch.where? | torch.where documentation states that x and y can be either a tensor or a scalar. However, it doesn't seem to support float32 scalar.
import torch
x = torch.randn(3, 2) # x is of type torch.float32
torch.where(x>0, 0, x) # RuntimeError: expected scalar type long long but found float
# torch.where(x>0, 0.0, x) # RuntimeError: expected scalar type double but found float
My question is how to use float32 scalar?
| It's not that torch doesn't support float32; it's that your system doesn't provide an easy way to spell 0 as a float32 literal. As stated in the errors, 0 is interpreted as a long long C type, i.e. int64, while 0.0 is interpreted as a double C type, i.e. float64.
I guess you need to cast 0 to the same dtype as x:
torch.where(x>0.0, torch.tensor(0, dtype=x.dtype), x)
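Another way to sidestep the scalar-type issue entirely (my addition) is to pass a tensor built from x, which automatically has the right dtype:
torch.where(x > 0, torch.zeros_like(x), x)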
| https://stackoverflow.com/questions/66299149/ |
How to convert custom Pytorch model to torchscript (pth to pt model)? | I trained a custom model with PyTorch using colab environment. I successfully saved the trained model to Google Drive with the name model_final.pth. I want to convert model_final.pth to model_final.pt so that it can be used on mobile devices.
The code I use to train the model is as follows:
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("mouse_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 1000
cfg.SOLVER.STEPS = []
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
cfg.OUTPUT_DIR="drive/Detectron2/"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
The code I used to convert the model is as follows:
from detectron2.modeling import build_model
import torch
import torchvision
print("cfg.MODEL.WEIGHTS: ",cfg.MODEL.WEIGHTS) ## RETURNS : cfg.MODEL.WEIGHTS: drive/Detectron2/model_final.pth
model = build_model(cfg)
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("drive/Detectron2/model-final.pt")
But I am getting this error IndexError: too many indices for tensor of dimension 3 :
cfg.MODEL.WEIGHTS: drive/Detectron2/model_final.pth
/usr/local/lib/python3.6/dist-packages/torch/tensor.py:593: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
'incorrect results).', category=RuntimeWarning)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-17-8e544c0f39c8> in <module>()
7 model.eval()
8 example = torch.rand(1, 3, 224, 224)
----> 9 traced_script_module = torch.jit.trace(model, example)
10 traced_script_module.save("drive/Detectron2/model_final.pt")
7 frames
/usr/local/lib/python3.6/dist-packages/detectron2/modeling/meta_arch/rcnn.py in <listcomp>(.0)
219 Normalize, pad and batch the input images.
220 """
--> 221 images = [x["image"].to(self.device) for x in batched_inputs]
222 images = [(x - self.pixel_mean) / self.pixel_std for x in images]
223 images = ImageList.from_tensors(images, self.backbone.size_divisibility)
IndexError: too many indices for tensor of dimension 3
| Detectron2 models expect a dictionary or a list of dictionaries as input by default.
So you can not directly use torch.jit.trace function. But they provide a wrapper, called TracingAdapter, that allows models to take a tensor or a tuple of tensors as input. You can find out how to use it in their torchscript tests.
The code for tracing your Mask RCNN model could be (I did not try it):
import torch
import torchvision
from detectron2.export.flatten import TracingAdapter
def inference_func(model, image):
inputs = [{"image": image}]
return model.inference(inputs, do_postprocess=False)[0]
print("cfg.MODEL.WEIGHTS: ",cfg.MODEL.WEIGHTS) ## RETURNS : cfg.MODEL.WEIGHTS: drive/Detectron2/model_final.pth
model = build_model(cfg)
example = torch.rand(1, 3, 224, 224)
wrapper = TracingAdapter(model, example, inference_func)
wrapper.eval()
traced_script_module = torch.jit.trace(wrapper, (example,))
traced_script_module.save("drive/Detectron2/model-final.pt")
More info on detectron2 deployment with tracing can be found here.
| https://stackoverflow.com/questions/66303312/ |
AttributeError: 'ConvertModel' object has no attribute 'seek' | I tried converting a MATLAB model to PyTorch using ONNX, like proposed here by Andrew Naguib:
How to import deep learning models from MATLAB to PyTorch?
I tried running the model using the following code:
import onnx
from onnx2pytorch import ConvertModel
import torch
onnx_model = onnx.load ('resnet50.onnx')
pytorch_model = ConvertModel(onnx_model)
model = torch.load(pytorch_model)
But I got this error:
AttributeError: 'ConvertModel' object has no attribute 'seek'. You can
only torch.load from a file that is seekable. Please pre-load the data
into a buffer like io.BytesIO and try to load from it instead.
How can I fix it, please? Any ideas on how can I "pre-load the data into a buffer like io.BytesIO"?
| Assuming my_data.dat is a file containing binary data, the following code loads it into an ioBytesIO buffer that is seekable:
import io
with open('my_data.dat', 'rb') as f:
buf = io.BytesIO(f.read())
You can now write things like
buf.seek(4)
and
x = buf.read(1)
Of course, in your case, you're going through an onnx.load method, and I don't know what that does. But if it does return a file object on a binary file, then the above might help you.
| https://stackoverflow.com/questions/66304260/ |
How to put a list of tensor into a new tensor in pytorch? | How can I merge a list of tensor into a single tensor?
I have a list of 64 images (128x128, RGB), and I want to create a batch of images with this size: torch.Size([64, 3, 128, 128])
I tried with torch.stack() but I can't get all elements of the list without iterating it.
How can I do it?
lfw_dataset = ImageFolder(os.path.join(root_dir), transform=test_transform)
lfw_dataset_train, lfw_dataset_val = torch.utils.data.random_split(lfw_dataset_train1, [train_values, validation_values])
train_loader = DataLoader(lfw_dataset_train, batch_size, num_workers=4, shuffle=True)
val_loader = DataLoader(lfw_dataset_val, batch_size, num_workers=4, shuffle=False)
# Define dictionary of loaders
loaders = {"train": train_loader,
"val": val_loader}
positive_list= []
negative_list= []
positive_img = []
negative_img = []
for i, (input, labels) in enumerate(loaders["train"]):
for num, x in enumerate(labels):
target = x.item()
k = [i for i, (imgs, label_pos) in enumerate(lfw_dataset.imgs) if label_pos==target]
group_pos = (target, k)
positive_list.append(group_pos)
for i, (imgs, label_neg) in enumerate(lfw_dataset.imgs):
if label_neg!=target:
j = [i]
break
group_neg = (target, j)
negative_list.append(group_neg)
anchor_img=input[num]
positive = random.choice(positive_list[num][1])
negative = random.choice(negative_list[num][1])
positive_img.append(lfw_dataset[positive][0])
negative_img.append(lfw_dataset[negative][0])
| You have to use torch.cat(tensors, dim), which takes a list or tuple of tensors as its first argument.
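A minimal sketch for your case (assuming positive_img is your list of 64 tensors of shape [3, 128, 128]):
batch = torch.cat([img.unsqueeze(0) for img in positive_img], dim=0)  # -> [64, 3, 128, 128]
# torch.stack(positive_img, dim=0) gives the same result without the unsqueeze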
| https://stackoverflow.com/questions/66306639/ |
PyTorch: Why is my dataset class giving index out of range errors? | I am trying to figure out why my data set is giving out of range index errors.
Consider this torch data set:
# prepare torch data set
class MSRH5Processor(torch.utils.data.Dataset):
def __init__(self, type, shard=False, **args):
# init configurable string
self.type = type
# init shard for sampling large ds if specified
self.shard = shard
# set seed if given
self.seed = args
# set loc
self.file_path = 'C:\\data\\h5py_embeds\\'
# set file paths
self.val_embed_path = self.file_path + 'msr_dev_bert_embeds.h5'
# if true, initialize the dev data
if self.type == 'dev':
# embeds are shaped: [layers, tokens, features]
self.embeddings = h5py.File(self.val_embed_path, 'r')["embeds"]
def __len__(self):
return len(self.embeddings)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
if self.type == 'dev':
sample = {'embeddings': self.embeddings[idx]}
return sample
# load dataset
processor = MSRH5Processor(type='dev', shard=False)
# check length
len(processor) # 22425
# iterate over the samples
count = 0
for step, batch in enumerate(processor):
count += 1
# error: Index (22425) out of range (0-22424)
with h5py.File('C:\\w266\\h5py_embeds\\msr_dev_bert_embeds.h5', 'r') as f:
print(f['embeds'].attrs['last_index']) # 22425
print(f['embeds'].shape) # (22425, 128, 768)
print(len(f['embeds'])) # 22425
If I manually change the data set length to 100 or 22424, I will still get the same error. What is telling PyTorch to look for index 22425?
If I were to make a CSV data set, with 1000 observations (where len = 1000), it would stop entering indices into the __getitem__() method at 999 and not 1000.
Edit:
It seems to be an issue with just the Dataset class and H5py files. If I use a torch dataloader, it will run to the natural length of my data set. Although, I would love to know what Torch is doing to get this figure for my H5 files that is causing it to behave differently than say a CSV.
| To use a Dataset as an iterable you must implement either the __iter__ method or __getitem__ with Sequence semantics. The iteration stops when __getitem__ raises IndexError for some index idx.
The problem with your dataset is that:
self.embeddings = h5py.File(self.val_embed_path, 'r')["embeds"]
is actually of type h5py._hl.dataset.Dataset which on out-of-index requests raises ValueError
You have to either load entire embeddings at the class constructor so that accessing numpy array on out-of-index will raise IndexError or re-throw IndexError on ValueError in __getitem__
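A minimal sketch of the second option, re-raising inside __getitem__ so iteration stops cleanly:
def __getitem__(self, idx):
    if torch.is_tensor(idx):
        idx = idx.tolist()
    try:
        return {'embeddings': self.embeddings[idx]}
    except ValueError:
        # h5py raises ValueError past the last index; IndexError is what stops iteration
        raise IndexError(idx)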
| https://stackoverflow.com/questions/66307545/ |
RuntimeError: shape '[-1, 1031]' is invalid for input of size 900 | I am trying to build a CNN but I get this error:
shape '[-1, 1031]' is invalid for input of size 900
My code is below:
class CPillModel(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(CPillModel, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden_dim, hidden_dim)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden_dim, hidden_dim)
self.relu3 = nn.ReLU()
self.fc4 = nn.Linear(hidden_dim, output_dim)
self.act4 = Sigmoid()
def forward(self, x):
....
....
return x
# instantiate ANN
input_dim = 1031
hidden_dim = 150
output_dim = 10
# Create ANN
model = CPillModel(input_dim, hidden_dim, output_dim)
# Cross Entropy Loss
error = nn.CrossEntropyLoss()
# SGD Optimizer
learning_rate = 0.02
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# ANN model training
count = 0
loss_list = []
iteration_list = []
accuracy_list = []
for epoch in range(num_epochs):
# enumerate mini batches
for i, (data_a, data_b) in enumerate(train_loader):
train = Variable(data_a.view(-1, 1031))
labels = Variable(data_b)
# clear the gradients
optimizer.zero_grad()
optimizer.zero_grad()
# compute the model output
outputs = model(data_a)
# calculate loss
loss = error(outputs, data_b)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
count += 1
I think the issue I have is with my train variable line: train = Variable(data_a.view(-1, 1031))
I have 1031 rows of features training data but they are not being converted.
Do I split these 1031 data points into a format such as: a x b to work?
Would data reshape work?
The print out of data_a is: data_a is torch.Size([100, 9]
| First, the number of features is 9 not 1031, so you would have to change input_dim to 9.
I would also suggest you reduce the value of hidden_dim.
You would also change this line train = Variable(data_a.view(-1, 1031)) to train = Variable(data_a) as there is no need for data_a to be reshaped.
I don't know what's inside the forward method so I guess that's all you need to change.
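For illustration, the changed lines could look like this (the hidden size of 64 is just an assumption you can tune):
input_dim = 9    # one input per feature, since data_a has shape [100, 9]
hidden_dim = 64  # assumption: smaller than the original 150
model = CPillModel(input_dim, hidden_dim, output_dim)
# ...
train = Variable(data_a)   # no reshape needed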
| https://stackoverflow.com/questions/66318221/ |
Torch.cuda.empty_cache() very very slow performance | I have a very slow performance problem when I execute an inference batch loop on a single GPU.
This slow behavior appears after the first batch has been processed -
that is when the GPU is already almost full and its memory needs to be recycled to accept the next batch.
At a pristine GPU state - the performance is super fast (as expected).
I hope both the following code snippet and the output illustrate the problem in a nutshell.
(I've removed the print and time measurements from the snippet for brevity)
predictions = None
for i, batch in enumerate(self.test_dataloader):
# if this line is active - the bottleneck after the first batch moves here, rather than below
# i.e. when i > 0
# torch.cuda.empty_cache()
# HUGE PERFORMANCE HIT HAPPENS HERE - after the first batch
# i.e. when i > 0
# obviously tensor.to(device) uses torch.cuda.empty_cache() internally when needed
# and it is inexplicably SLOW
batch = tuple(t.to(device) for t in batch) # to GPU (or CPU) when gpu
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
logits = outputs[0]
logits = logits.detach()
# that doesn't help alleviate the issue
del outputs
predictions = logits if predictions is None else torch.cat((predictions, logits), 0)
# nor do all of the below - freeing references doesn't help speeding up
del logits
del b_input_ids
del b_input_mask
del b_labels
for o in batch:
del o
del batch
output
start empty cache... 0.00082
end empty cache... 1.9e-05
start to device... 3e-06
end to device... 0.001179 - HERE - time is super fast (as expected)
start outputs... 8e-06
end outputs... 0.334536
logits... 6e-06
start detach... 1.7e-05
end detach... 0.004036
start empty cache... 0.335932
end empty cache... 4e-06
start to device... 3e-06
end to device... 16.553849 - HERE - time is ridiculously high - it's 16 seconds to move tensor to GPU
start outputs... 2.3e-05
end outputs... 0.020878
logits... 7e-06
start detach... 1.4e-05
end detach... 0.00036
start empty cache... 0.00082
end empty cache... 6e-06
start to device... 4e-06
end to device... 17.385204 - HERE - time is ridiculously high
start outputs... 2.9e-05
end outputs... 0.021351
logits... 4e-06
start detach... 1.3e-05
end detach... 1.1e-05
...
Have I missed something obvious or is this the expected GPU behavior?
I am posting this question before engaging in complex coding, juggling between a couple of GPUs and CPU available on my server.
Thanks in advance,
Albert
EDIT
RESOLVED The issue was: in DataLoader constructor - I changed the pin_memory to False (True was causing the issue). That cut the .to(device) time by 350%-400%
self.test_dataloader = DataLoader(
test_dataset,
sampler=SequentialSampler(test_dataset),
# batch_size=len(test_dataset) # AKA - single batch - nope! no mem for that
batch_size=BATCH_SIZE_AKA_MAX_ROWS_PER_GUESS_TO_FIT_GPU_MEM,
# tests
num_workers=8,
# maybe this is the culprit as suggested by user12750353 in stackoverflow
# pin_memory=True
pin_memory=False
)
| You should not be required to clear cache if you are properly clearing the references to the previously allocated variables. Cache is like free, is memory that your script can use for new variables.
Also notice that
a = torch.zeros(10**9, dtype=torch.float)
a = torch.zeros(10**9, dtype=torch.float)
Requires 8GB of memory, even though a uses 4GB (1B elements with 4 bytes each). This occurs because the torch.zeros will allocate memory before the previous content of a is released. This may be happening in your model in a larger scale, depending on how it is implemented.
Edit 1
One suspicious thing is that you are loading your batch to the GPU one example at a time.
Just to illustrate what I mean
import torch
device = 'cuda'
batch = torch.zeros((4500, 10));
Creating the batch as a tuple
batch_gpu = tuple(t.to(device) for t in batch)
torch.cuda.synchronize()
254 ms ± 36 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Creating the batch as a list
batch_gpu = list(t.to(device) for t in batch)
torch.cuda.synchronize()
235 ms ± 3.74 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Copying the whole batch at once
batch_gpu = batch.to(device)
torch.cuda.synchronize()
115 µs ± 2.9 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In this example it was roughly 2000x faster to copy the whole batch in a single call than to copy one example at a time.
Notice that GPU works asynchronously with the CPU. So you may keep calling functions that will return before the operation is finished. In order to make meaningful measurements you may call synchronize to make clear the time boundaries.
The code to be instrumented is this
for i, batch in enumerate(self.test_dataloader):
# torch.cuda.empty_cache()
# torch.cuda.synchronize() # if empty_cache is used
# start timer for copy
batch = tuple(t.to(device) for t in batch) # to GPU (or CPU) when gpu
torch.cuda.synchronize()
# stop timer for copy
b_input_ids, b_input_mask, b_labels = batch
# start timer for inference
with torch.no_grad():
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
torch.cuda.synchronize()
# stop timer for inference
logits = outputs[0]
logits = logits.detach()
# if you copy outputs to CPU it will be synchronized
| https://stackoverflow.com/questions/66319496/ |
Automatic Differentiation:numerical or exact? | I have a question about automatic differentiation and especially in Pytorch since i am using this library.I have seen for instance automatic differentiation give the partial derivatives of an expression with respect to a variable.
However,as far as what i have seen ,the result is always given at a specific point,meaning it is a tensor with numerical values.My question is the following: let's say we define a function of two variables:
f(x,y)= xΒ² + yΒ² .
Is Pytorch able to return a function which corresponds to the partial derivative of f with respect to x or y? That is to return the following definitions:
> def partial_f_x:
return 2*x
def partial_f_y:
return 2*y
Because even though the function f here is rather simple,it would be interesting if Pytorch could give us a formula(depending on the different variables) of the derivatives,instead of giving a numerical value at a given point,because in that case we do not know the expression of the derivatives.
So if I summarize: is Pytorch able to return formulas for derivatives of complicated functions? or does it just return a tensor with numerical values for the derivative at a given point?
Thank you so much!
| That is not how PyTorch computes derivatives. Most (probably all) computational packages evaluate the derivative numerically at a given point rather than deriving a symbolic derivative function, so they never produce the derivative as a mathematical formula.
If you are looking for something like that, you can try to use the sympy library for symbolic mathematics. Here's an example:
import sympy as sym
x = sym.Symbol('x')
y = sym.Symbol('y')
dfdx = sym.diff(x**2 + y**2, x, 1)
# => 2*x
dfdy = sym.diff(x**2 + y**2, y, 1)
# => 2*y
Then to evaluate, you can simply substitute in the values you want to use for the variable(s):
dfdx.subs(x, 1)
# => 2
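And if you want an actual Python callable for the derivative, a small sketch using sympy's lambdify:
partial_f_x = sym.lambdify(x, dfdx)  # turns the symbolic 2*x into a numeric function
partial_f_x(3)
# => 6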
| https://stackoverflow.com/questions/66322834/ |
How to run one batch in pytorch? | I'm new to AI and python and I'm trying to run only one batch to aim to overfit.I found the code:
iter(train_loader).next()
but I'm not sure where to implement it in my code. even if I did, how can I check after each iteration to make sure that I'm training the same batch?
train_loader = torch.utils.data.DataLoader(
dataset_train,
batch_size=48,
shuffle=True,
num_workers=2
)
net = nn.Sequential(
nn.Flatten(),
nn.Linear(128*128*3,10)
)
nepochs = 3
statsrec = np.zeros((3,nepochs))
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
for epoch in range(nepochs): # loop over the dataset multiple times
running_loss = 0.0
n = 0
for i, data in enumerate(train_loader, 0):
inputs, labels = data
# Zero the parameter gradients
optimizer.zero_grad()
# Forward, backward, and update parameters
outputs = net(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
# accumulate loss
running_loss += loss.item()
n += 1
ltrn = running_loss/n
ltst, atst = stats(train_loader, net)
statsrec[:,epoch] = (ltrn, ltst, atst)
print(f"epoch: {epoch} training loss: {ltrn: .3f} test loss: {ltst: .3f} test accuracy: {atst: .1%}")
please give me a hint
| If you are looking to train on a single batch, then remove your loop over your dataloader:
for i, data in enumerate(train_loader, 0):
inputs, labels = data
And simply get the first element of the train_loader iterator before looping over the epochs, otherwise next will be called at every iteration and you will run on a different batch every epoch:
inputs, labels = next(iter(train_loader))
i = 0
for epoch in range(nepochs):
optimizer.zero_grad()
outputs = net(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
# ...
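Since inputs and labels are fetched once before the loop, every epoch reuses exactly the same tensors, so there is nothing that can change between iterations. If you still want an explicit sanity check, a minimal sketch:
batch_fingerprint = inputs.sum().item()  # record once, before the epoch loop
# ... inside the loop:
assert inputs.sum().item() == batch_fingerprint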
| https://stackoverflow.com/questions/66323301/ |
AWS EC2 causing issue with Streamlit ML App | It's strange because on my local machine, this issue doesn't happen, and my app works fine.
However, when I run the app on an AWS EC2 instance, it gives me an error regarding a matplotlib import. Below the matplotlib import, I have matplotlib.use('TkAgg'). When the code is like this, the Streamlit app gives me this error (only on the EC2 instance):
ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'headless' is currently running
Traceback:
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/ubuntu/extremely_unnecessary/app.py", line 16, in <module>
matplotlib.use('TkAgg')
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/matplotlib/__init__.py", line 1171, in use
plt.switch_backend(name)
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/matplotlib/pyplot.py", line 287, in switch_backend
newbackend, required_framework, current_framework))
After doing some research, I tried changing the offending line to matplotlib.use('agg'). When I do this, the app works properly, however none of the models except one work when selected.
The app is hosted here: http://54.193.229.139:8501/ The way it works is you upload an image, then select a pretrained model from the drop down menu to apply "style transfer" to the image you uploaded.
For some bizarre reason, the 12th model in the list (chicken-strawberries-market-069_10000.pth) works, but none of the other models do. Again, this only happens on the EC2 instance - even when I use matplotlib.use('agg'), all the models work when running the streamlit app locally.
I also tried using some other variations including matplotlib.use('GTK3Agg') and matplotlib.use('WebAgg'), which give me various other error messages.
Does anyone know how to fix this so I can get all the models working on the EC2 instance?
Edit: I've started receiving a new error message, I'm working on trying to change the code now. I use CUDA through my GPU, apparently I have to make some cpu-bound changes so it'll work on the ubuntu server. Not sure why the chicken strawberries model works though...
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Traceback:
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/ubuntu/extremely_unnecessary/app.py", line 91, in <module>
main()
File "/home/ubuntu/extremely_unnecessary/app.py", line 54, in main
transformer.load_state_dict(torch.load(checkpoint))
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 774, in _legacy_load
result = unpickler.load()
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 730, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
device = validate_cuda_device(location)
File "/home/ubuntu/anaconda3/envs/streamlit/lib/python3.6/site-packages/torch/serialization.py", line 135, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
The code for the app:
import matplotlib.pyplot as plt
from PIL import Image
from torchvision.utils import save_image
import tqdm
import streamlit as st
from models import TransformerNet
from utils import *
import torch
import numpy as np
from torch.autograd import Variable
import argparse
import tkinter as tk
import os
import cv2
import matplotlib
matplotlib.use('agg')
def main():
uploaded_file = st.file_uploader(
"Choose an image", type=['jpg', 'png', 'webm', 'mp4', 'gif', 'jpeg'])
if uploaded_file is not None:
st.image(uploaded_file, width=200)
folder = os.path.abspath(os.getcwd())
folder = folder + '/models'
fnames = []
for basename in os.listdir(folder):
print(basename)
fname = os.path.join(folder, basename)
if fname.endswith('.pth'):
fnames.append(fname)
checkpoint = st.selectbox('Select a pretrained model', fnames)
os.makedirs("images/outputs", exist_ok=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# device = torch.device("cpu")
transform = style_transform()
# Define model and load model checkpoint
transformer = TransformerNet().to(device)
transformer.load_state_dict(torch.load(checkpoint))
transformer.eval()
# Prepare input
image_tensor = Variable(transform(Image.open(
uploaded_file).convert('RGB'))).to(device)
image_tensor = image_tensor.unsqueeze(0)
# Stylize image
with torch.no_grad():
stylized_image = denormalize(transformer(image_tensor)).cpu()
fn = str(np.random.randint(0, 100)) + 'image.jpg'
save_image(stylized_image, f"images/outputs/stylized-{fn}")
st.image(f"images/outputs/stylized-{fn}")
if __name__ == "__main__":
main()
| Turns out all I needed to do was implement the line in the error message - in line 53, I just had to change it from this:
transformer.load_state_dict(torch.load(checkpoint))
to this
transformer.load_state_dict(torch.load(
    checkpoint, map_location=torch.device('cpu')))
And it works!
| https://stackoverflow.com/questions/66329589/ |
mat1 and mat2 shapes cannot be multiplied | I am new to AI and python, I'm trying to build an architecture to train a set of images. and later to aim to overfit. but up till now, I couldn't understand how to get the inputs and outputs correctly. I keep seeing the error whenever I try to train the network:
mat1 and mat2 shapes cannot be multiplied (48x13456 and 16x64)
my network:
net2 = nn.Sequential(
nn.Conv2d(3,8, kernel_size=5, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(8,16, kernel_size=5, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Flatten(),
nn.Linear(16,64),
nn.ReLU(),
nn.Linear(64,10)
)
this is a part of a task I'm working on and I really don't get why it's not running. any hints!
It's because you have flattened your 2D CNN feature maps into 1D for the fully connected layers, so you have to manually calculate how the 128x128 input shrinks by the time it reaches the last MaxPool layer just before the Flatten layer. In your case that is 29*29*16.
So your code must be rewritten as
net2 = nn.Sequential(
nn.Conv2d(3,8, kernel_size=5, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(8,16, kernel_size=5, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Flatten(),
nn.Linear(13456,64),
nn.ReLU(),
nn.Linear(64,10)
)
This should work
EDIT: This is a simple formula to calculate output size :
(((W - K + 2P)/S) + 1)
Here W = Input size
K = Filter size
S = Stride
P = Padding
So 1st conv block will make your output of size 124
Then you do Maxpool which will make it half i.e 62
2nd conv block will make your output of size 58
Then your last Maxpool will make it 29...
So final flattened output would be 29*29*16 where 16 is output channels
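Plugging the numbers in as a quick check:
conv1:   (128 - 5 + 0)/1 + 1 = 124
pool1:   124 / 2             = 62
conv2:   (62 - 5 + 0)/1 + 1  = 58
pool2:   58 / 2              = 29
flatten: 29 * 29 * 16        = 13456   # matches the 13456 in the error message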
| https://stackoverflow.com/questions/66337378/ |
UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified | I was trying to run a python file named api.py. In this file, I'm loading the pickle file of the Deep Learning model that was built and trained using PyTorch.
api.py
In api.py the below-given functions are the most important ones.
def load_model_weights(model_architecture, weights_path):
if os.path.isfile(weights_path):
cherrypy.log("CHERRYPYLOG Loading model from: {}".format(weights_path))
model_architecture.load_state_dict(torch.load(weights_path))
else:
raise ValueError("Path not found {}".format(weights_path))
def load_recommender(vector_dim, hidden, activation, dropout, weights_path):
rencoder_api = model.AutoEncoder(layer_sizes=[vector_dim] + [int(l) for l in hidden.split(',')],
nl_type=activation,
is_constrained=False,
dp_drop_prob=dropout,
last_layer_activations=False)
load_model_weights(rencoder_api, weights_path)
rencoder_api.eval()
rencoder_api = rencoder_api.cuda()
return rencoder_api
The directory structure
MP1
 ┣ .ipynb_checkpoints
 ┃ ┗ RS_netflix3months_100epochs_64,128,128-checkpoint.ipynb
 ┣ data
 ┃ ┣ AutoEncoder.png
 ┃ ┣ collaborative_filtering.gif
 ┃ ┣ movie_titles.txt
 ┃ ┗ shut_up.gif
 ┣ DeepRecommender
 ┃ ┣ data_utils
 ┃ ┃ ┣ movielens_data_convert.py
 ┃ ┃ ┗ netflix_data_convert.py
 ┃ ┣ reco_encoder
 ┃ ┃ ┣ data
 ┃ ┃ ┃ ┣ __pycache__
 ┃ ┃ ┃ ┃ ┣ input_layer.cpython-37.pyc
 ┃ ┃ ┃ ┃ ┣ input_layer_api.cpython-37.pyc
 ┃ ┃ ┃ ┃ ┗ __init__.cpython-37.pyc
 ┃ ┃ ┃ ┣ input_layer.py
 ┃ ┃ ┃ ┣ input_layer_api.py
 ┃ ┃ ┃ ┗ __init__.py
 ┃ ┃ ┣ model
 ┃ ┃ ┃ ┣ __pycache__
 ┃ ┃ ┃ ┃ ┣ model.cpython-37.pyc
 ┃ ┃ ┃ ┃ ┗ __init__.cpython-37.pyc
 ┃ ┃ ┃ ┣ model.py
 ┃ ┃ ┃ ┗ __init__.py
 ┃ ┃ ┣ __pycache__
 ┃ ┃ ┃ ┗ __init__.cpython-37.pyc
 ┃ ┃ ┗ __init__.py
 ┃ ┣ __pycache__
 ┃ ┃ ┗ __init__.cpython-37.pyc
 ┃ ┣ compute_RMSE.py
 ┃ ┣ infer.py
 ┃ ┣ run.py
 ┃ ┗ __init__.py
 ┣ model_save
 ┃ ┣ model.epoch_99
 ┃ ┃ ┗ archive
 ┃ ┃ ┃ ┣ data
 ┃ ┃ ┃ ┃ ┣ 92901648
 ┃ ┃ ┃ ┃ ┣ 92901728
 ┃ ┃ ┃ ┃ ┣ 92901808
 ┃ ┃ ┃ ┃ ┣ 92901888
 ┃ ┃ ┃ ┃ ┣ 92901968
 ┃ ┃ ┃ ┃ ┣ 92902048
 ┃ ┃ ┃ ┃ ┣ 92902128
 ┃ ┃ ┃ ┃ ┣ 92902208
 ┃ ┃ ┃ ┃ ┣ 92902288
 ┃ ┃ ┃ ┃ ┣ 92902368
 ┃ ┃ ┃ ┃ ┣ 92902448
 ┃ ┃ ┃ ┃ ┗ 92902608
 ┃ ┃ ┃ ┣ data.pkl
 ┃ ┃ ┃ ┗ version
 ┃ ┣ model.epoch_99.zip
 ┃ ┗ model.onnx
 ┣ Netflix
 ┃ ┣ N1Y_TEST
 ┃ ┃ ┗ n1y.test.txt
 ┃ ┣ N1Y_TRAIN
 ┃ ┃ ┗ n1y.train.txt
 ┃ ┣ N1Y_VALID
 ┃ ┃ ┗ n1y.valid.txt
 ┃ ┣ N3M_TEST
 ┃ ┃ ┗ n3m.test.txt
 ┃ ┣ N3M_TRAIN
 ┃ ┃ ┗ n3m.train.txt
 ┃ ┣ N3M_VALID
 ┃ ┃ ┗ n3m.valid.txt
 ┃ ┣ N6M_TEST
 ┃ ┃ ┗ n6m.test.txt
 ┃ ┣ N6M_TRAIN
 ┃ ┃ ┗ n6m.train.txt
 ┃ ┣ N6M_VALID
 ┃ ┃ ┗ n6m.valid.txt
 ┃ ┣ NF_TEST
 ┃ ┃ ┗ nf.test.txt
 ┃ ┣ NF_TRAIN
 ┃ ┃ ┗ nf.train.txt
 ┃ ┗ NF_VALID
 ┃ ┃ ┗ nf.valid.txt
 ┣ test
 ┃ ┣ testData_iRec
 ┃ ┃ ┣ .part-00199-f683aa3b-8840-4835-b8bc-a8d1eaa11c78.txt.crc
 ┃ ┃ ┣ part-00000-f683aa3b-8840-4835-b8bc-a8d1eaa11c78.txt
 ┃ ┃ ┣ part-00003-f683aa3b-8840-4835-b8bc-a8d1eaa11c78.txt
 ┃ ┃ ┗ _SUCCESS
 ┃ ┣ testData_uRec
 ┃ ┃ ┣ .part-00000-4a844096-8dd9-425e-9d9d-bd9062cc6940.txt.crc
 ┃ ┃ ┣ ._SUCCESS.crc
 ┃ ┃ ┣ part-00161-4a844096-8dd9-425e-9d9d-bd9062cc6940.txt
 ┃ ┃ ┣ part-00196-4a844096-8dd9-425e-9d9d-bd9062cc6940.txt
 ┃ ┃ ┗ part-00199-4a844096-8dd9-425e-9d9d-bd9062cc6940.txt
 ┃ ┣ data_layer_tests.py
 ┃ ┣ test_model.py
 ┃ ┗ __init__.py
 ┣ __pycache__
 ┃ ┣ api.cpython-37.pyc
 ┃ ┣ load_test.cpython-37.pyc
 ┃ ┣ parameters.cpython-37.pyc
 ┃ ┗ utils.cpython-37.pyc
 ┣ api.py
 ┣ compute_RMSE.py
 ┣ load_test.py
 ┣ logger.py
 ┣ netflix_1y_test.csv
 ┣ netflix_1y_train.csv
 ┣ netflix_1y_valid.csv
 ┣ netflix_3m_test.csv
 ┣ netflix_3m_train.csv
 ┣ netflix_3m_valid.csv
 ┣ netflix_6m_test.csv
 ┣ netflix_6m_train.csv
 ┣ netflix_6m_valid.csv
 ┣ netflix_full_test.csv
 ┣ netflix_full_train.csv
 ┣ netflix_full_valid.csv
 ┣ parameters.py
 ┣ preds.txt
 ┣ RS_netflix3months_100epochs_64,128,128.ipynb
 ┗ utils.py
I am getting such an error (serialization.py). Can someone help me with this error?
D:\Anaconda\envs\practise\lib\site-packages\torch\serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
762 "functionality.")
763
--> 764 magic_number = pickle_module.load(f, **pickle_load_args)
765 if magic_number != MAGIC_NUMBER:
766 raise RuntimeError("Invalid magic number; corrupt file?")
UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
| After searching through PyTorch documentation, I ended up saving the model in the ONNX format and later loaded that ONNX model into PyTorch model and used it for inference.
import onnx
from onnx2pytorch import ConvertModel
def load_model_weights(model_architecture, weights_path):
if os.path.isfile("model.onnx"):
cherrypy.log("CHERRYPYLOG Loading model from: {}".format(weights_path))
onnx_model = onnx.load("model.onnx")
pytorch_model = ConvertModel(onnx_model)
## model_architecture.load_state_dict(torch.load(weights_path))
else:
raise ValueError("Path not found {}".format(weights_path))
def load_recommender(vector_dim, hidden, activation, dropout, weights_path):
rencoder_api = model.AutoEncoder(layer_sizes=[vector_dim] + [int(l) for l in hidden.split(',')],
nl_type=activation,
is_constrained=False,
dp_drop_prob=dropout,
last_layer_activations=False)
load_model_weights(rencoder_api, weights_path)
rencoder_api.eval()
rencoder_api = rencoder_api.cuda()
return rencoder_api
Some useful resources:
torch.save
torch.load
ONNX tutorials
| https://stackoverflow.com/questions/66337562/ |
How to pad a numpy 3D array (or torch tensor) with values from surrounding 3D arrays | I have a 3D numpy array of shape 3,3,3 to which I want to pad 2 layers of values from arrays surrounding it spatially, so that it becomes a 5,5,5 array.
What I have done so far using torch cat function (which works the same as numpy concat) to pad the y array, is the following:
x = torch.from_numpy(np.arange(1,28,1).reshape(3,3,3))
y = torch.from_numpy(np.arange(28,55,1).reshape(3,3,3))
z = torch.from_numpy(np.arange(55,82,1).reshape(3,3,3))
torch.cat((y,z[:,:2,:]), dim=1) #To concat z+ with 2 pads
torch.cat((x[:,1:,:],y), dim=1) #To concat z- with 2 pads
torch.cat((y,z[:,:,:2]), dim=2) #To concat x+ with 2 pads
torch.cat((x[:,1:,:],y), dim=1) #To concat x- with 2 pads
torch.cat((x,z[:2,:,:]), dim=2) #To concat y+ with 2 pads
torch.cat((x[1:,:,:],y), dim=1) #To concat y- with 2 pads
But it does not give me the right values. How can I acheive this?
| If I understand correctly, what you want is not a regular array since each dimension have different range depending on the observed axis (i.e., it can't be geometrically represented as a cube - your picture is not a (n,n,n) array).
Anyways, in the somewhat lengthy following snippet, we create a (5,5,5) test 3D array from which the (3,3,3) array can be sampled. Then we consecutively concatenate to obtain the original array, after which we mask the unwanted cells so the output is what your picture shows. Note that you can replace logic operations with + or * when using boolean Numpy arrays.
import numpy as np
# Define dummy 3D field
n = 5
xx, yy, zz = np.ogrid[0:n, 0:n, 0:n]
field = np.sin(xx) + np.cos(yy) + np.tan(zz)
# Indices of 3 innermost elements - to form (3,3,3) array
i1, i2 = n//2 - 1, n//2 + 1
# Inner 3D array
subfield = np.copy(field)[i1:i2+1, i1:i2+1, i1:i2+1]
# Indices of "inferior" pads
x1 = y1 = z1 = np.arange(i1 - 1, i1)
# Indices of "superior" pads
x2 = y2 = z2 = np.arange(i2 + 1, i2 + 2)
# Padding in axis 0 (x)
padded = np.concatenate((field[x1, i1:i2+1, i1:i2+1], subfield))
padded = np.concatenate((padded, field[x2, i1:i2+1, i1:i2+1]))
# Padding in axis 1 (y)
padded = np.concatenate((field[i1-1:i2+2, y1, i1:i2+1], padded), axis = 1)
padded = np.concatenate((padded, field[i1-1:i2+2, y2, i1:i2+1]), axis = 1)
# Padding in axis 2 (z)
padded = np.concatenate((field[i1-1:i2+2, i1-1:i2+2, z1], padded), axis = 2)
padded = np.concatenate((padded, field[i1-1:i2+2, i1-1:i2+2, z2]), axis = 2)
# Check padded array is equal to original 3D array
print(np.all(padded == field))
## We now mask unwanted cells
indices = np.indices(padded.shape)
idx1, idx2 = indices == 0, indices == n - 1
xi, xf = idx1[0], idx2[0]
yi, yf = idx1[1], idx2[1]
zi, zf = idx1[2], idx2[2]
# Logical operations to mask proper slices
xm = (xi + xf) * (yi + yf + zi + zf) # masking in axis 0
ym = (yi + yf) * (xi + xf + zi + zf) # masking in axis 1
zm = (zi + zf) * (yi + yf + xi + xf) # masking in axis 2
mask = xm + ym + zm
# Masked (5,5,5) array
masked_padded = np.ma.masked_where(mask, padded)
By the way, there must be more elegant ways to achieve the same result, but I have not used Numpy's advanced indexing that much :P
| https://stackoverflow.com/questions/66338144/ |
Python: append images in for loop use a lot of memory | I'm actually facing a problem while working on a python project.
I'm appending some images in a for loop and it uses a lot of RAM memory.
If you guys, have any solution to optimize this for loop, It'll help me a lot.
Thanks!
augment_img = []
augment_label = []
augment_weight = []
for i in range(4):
for j in range(len(train_dataset)):
single_img, single_label, single_weight = train_dataset[j]
augment_img.append(single_img)
augment_label.append(single_label)
augment_weight.append(single_weight)
if j % 1000==0:
print(j)
| Instead of loading all images at once, I would suggest loading them in batches. There are different ways to handle that. In both pytorch and tensorflow, you can save your weights and continue training after some point. So, you can:
iterate through your images in batches
load the model weights and train the model
save the weights for the next round
But easier way is to use default functions implemented in pytorch for example that does the same. For example using ImageFolder and DataLoader:
def load_data(data_folder, batch_size, train, kwargs):
transform = {
'train': transforms.Compose(
[transforms.Resize([256, 256]),
transforms.RandomCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])]),
'test': transforms.Compose(
[transforms.Resize([224, 224]),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
}
data = datasets.ImageFolder(root = data_folder, transform=transform['train' if train else 'test'])
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True, **kwargs, drop_last = True if train else False)
return data_loader
You can specify the folder that contains images ending with various extensions (e.g. .jpg, .png. etc.) and then training your model by passing data_loader to.
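A minimal usage sketch (the folder path, batch size and worker count here are only placeholders):
train_loader = load_data('path/to/train_images', batch_size=32, train=True, kwargs={'num_workers': 2})
for images, labels in train_loader:
    # train on one batch at a time instead of holding every image in memory
    ...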
| https://stackoverflow.com/questions/66340113/ |
Training with parametric partial derivatives in pytorch | Given a neural network with weights theta and inputs x, I am interested in calculating the partial derivatives of the neural network's output w.r.t. x, so that I can use the result when training the weights theta using a loss depending both on the output and the partial derivatives of the output. I figured out how to calculate the partial derivatives following this post. I also found this post that explains how to use sympy to achieve something similar, however, adapting it to a neural network context within pytorch seems like a huge amount of work and a recipee for very slow code.
Thus, I tried something different, which failed. As a minimal example, I created a function (substituting my neural network)
theta = torch.ones([3], requires_grad=True, dtype=torch.float32)
def trainable_function(time):
return theta[0]*time**3 + theta[1]*time**2 + theta[2]*time
Then, I defined a second function to give me partial derivatives:
def trainable_derivative(time):
deriv_time = torch.tensor(time, requires_grad=True)
fun_value = trainable_function(deriv_time)
gradient = torch.autograd.grad(fun_value, deriv_time, create_graph=True, retain_graph=True)
deriv_time.requires_grad = False
return gradient
Given some noisy observations of the derivatives, I now try to train theta. For simplicity, I create a loss that only depends on the derivatives. In this minimal example, the derivatives are used directly as observations, not as regularization, to avoid complicated loss functions that are besides the point.
def objective(train_times, observations):
predictions = torch.squeeze(torch.tensor([trainable_derivative(a) for a in train_times]))
return torch.sum((predictions - observations)**2)
optimizer = Adam([theta], lr=0.1)
for iteration in range(200):
optimizer.zero_grad()
loss = objective(data_times, noisy_targets)
loss.backward()
optimizer.step()
Unfortunately, when running this code, I get the error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I suppose that when calculating the partial derivatives in the way I do, I do not really create a computational graph through which autodiff could differentiate through. Thus, the connection to the parameters theta somehow gets lost and now it looks to the optimizer as if the loss is completely independent of the parameters theta. However, I could be totally wrong..
Does anyone know how to fix this?
Is it possible to include this type of derivatives in the loss function in pytorch?
And if so, what would be the most pytorch-style way of doing this?
Many thanks for your help and advise, it is much appreciated.
For completeness:
To run the above code, some training data needs to be generated. I used the following code, which works perfectly and has been tested against the analytical derivatives:
true_a = 1
true_b = 1
true_c = 1
def true_function(time):
return true_a*time**3 + true_b*time**2 + true_c*time
def true_derivative(time):
deriv_time = torch.tensor(time, requires_grad=True)
fun_value = true_function(deriv_time)
return torch.autograd.grad(fun_value, deriv_time)
data_times = torch.linspace(0, 1, 500)
true_targets = torch.squeeze(torch.tensor([true_derivative(a) for a in data_times]))
noisy_targets = torch.tensor(true_targets) + torch.randn_like(true_targets)*0.1
| Your approach to the problem appears overly complicated.
I believe that what you're trying to achieve is within reach in PyTorch.
I include here a simple code snippet that I believe showcases what you would like to do:
import torch
import torch.nn as nn
# Data and Function
torch.manual_seed(0)
input_dim = 1
output_dim = 2
n = 10 # batchsize
simple_function = nn.Sequential(nn.Linear(1, 2), nn.Sigmoid())
t = (torch.arange(n).float() / n).view(n, 1)
x = torch.randn(n, output_dim)
t.requires_grad = True
# Actual computation
xhat = simple_function(t)
jac = torch.autograd.functional.jacobian(simple_function, t, create_graph=True)
grad = jac[torch.arange(n),:,torch.arange(n),0]
loss = (x -xhat).pow(2).sum() + grad.pow(2).sum()
loss.backward()
| https://stackoverflow.com/questions/66341671/ |
How to create a 1 to 1 feed forward layer? | I'm familiar with a fully connected layer, but how can I create a custom layer in PyTorch that is just 1 to 1? That is, each neuron is only connected to 1 other neuron.
Example: Layer 1 neurons: a,b,c
Layer 2 neurons d,e,f
Connections:
a-d
b-e
c-f
| Linear layers are basically just describing a matrix multiplication. And since this is not what you want you can't use the Pytorch implementation nn.Linear. You want to have each weight correspond to just one input neuron and one output neuron. That would mean that the amount of output neurons must be the same as the amount of input neurons (which limits the possibilities of a neural network very much, are you sure you want that?).
So it is basically just elementwise multiplication where one factor is learnable.
An example of what this kind if linear layer could look like:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElementwiseLinear(nn.Module):
def __init__(self, input_size: int) -> None:
super(ElementwiseLinear, self).__init__()
# w is the learnable weight of this layer module
self.w = nn.Parameter(torch.rand(input_size), requires_grad=True)
def forward(self, x: torch.tensor) -> torch.tensor:
# simple elementwise multiplication
return self.w * x
Now an example of how to use it in a model:
class Model(nn.Module):
def __init__(self, input_size: int) -> None:
super(Model, self).__init__()
self.elementWiselinear1 = ElementwiseLinear(input_size)
self.elementWiselinear2 = ElementwiseLinear(input_size)
self.elementWiselinear3 = ElementwiseLinear(input_size)
def forward(self, x: torch.tensor) -> torch.tensor:
x = F.relu(self.elementWiselinear1(x))
x = F.relu(self.elementWiselinear2(x))
x = torch.sigmoid(self.elementWiselinear3(x))
return x
Again, I don't know what this could be useful for, but I hope that's what you wanted!
| https://stackoverflow.com/questions/66343862/ |
Am getting error trying to predict on a single image CNN pytorch | Error message
Traceback (most recent call last):
File "pred.py", line 134, in
output = model(data)
Runtime Error: Expected 4-dimensional input for 4-dimensional weight [16, 3, 3, 3], but got 3-dimensional input of size [1, 32, 32] instead.
Prediction code
normalize = transforms.Normalize(mean=[0.4914, 0.4824, 0.4467],
std=[0.2471, 0.2435, 0.2616])
train_set = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
])
model = models.condensenet(args)
model = nn.DataParallel(model)
PATH = "results/savedir/save_models/checkpoint_001.pth.tar"
model.load_state_dict(torch.load(PATH)['state_dict'])
device = torch.device("cpu")
model.eval()
image = Image.open("horse.jpg")
input = train_set(image)
train_loader = torch.utils.data.DataLoader(
input,
batch_size=1,shuffle=True, num_workers=1)
for i, data in enumerate(train_loader):
#input_var = torch.autograd.Variable(data, volatile=True)
#input_var = input_var.view(1, 3, 32,32)
**output = model(data)
topk=(1,5)
maxk = max(topk)
_, pred = output.topk(maxk, 1, True, True)
Am getting this error when am trying to predict on a single image
Image shape/size error message
Link to saved model
Training code repository
| Instead of using the for loop and train_loader, I solved this by passing the input directly into the model, like this:
input = train_set(image)
input = input.unsqueeze(0)
model.eval()
output = model(input)
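For pure inference you may also want to disable gradient tracking (an optional addition):
with torch.no_grad():
    output = model(input)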
More details can be found here link
| https://stackoverflow.com/questions/66348912/ |
Pytorch DDP get stuck in getting free port | I try to get a free port in DDP initialization of PyTorch. However, my code get stuck. The following snippet could repeat my description:
def get_open_port():
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
port = get_open_port()
os.environ['MASTER_PORT'] = str(port) # '12345'
# Initialize the process group.
dist.init_process_group('NCCL', rank=rank, world_size=world_size)
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 5)
def forward(self, x):
print(f'x device={x.device}')
return self.net1(x)
def demo_basic(rank, world_size):
setup(rank, world_size)
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'Running DPP on rank={rank}.')
# Create model and move it to GPU.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # optimizer takes DDP model.
optimizer.zero_grad()
inputs = torch.randn(20, 10) # .to(rank)
print(f'inputs device={inputs.device}')
outputs = ddp_model(inputs)
print(f'output device={outputs.device}')
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_func, world_size):
mp.spawn(
demo_func,
args=(world_size,),
nprocs=world_size,
join=True
)
run_demo(demo_basic, 4)
The function get_open_port is supposed to free the port after invocation. My questions are: 1. How does it happen? 2. How to fix it?
| The answer is derived from here. The detailed answer is: 1. Since each spawned process generates its own free port, the ports end up being different; 2. We can instead get a free port once at the beginning and pass it to all processes.
The corrected snippet:
def get_open_port():
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def setup(rank, world_size, port):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = str(port)
# Initialize the process group.
dist.init_process_group('NCCL', rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 5)
def forward(self, x):
print(f'x device={x.device}')
# return self.net2(self.relu(self.net1(x)))
return self.net1(x)
def demo_basic(rank, world_size, free_port):
setup(rank, world_size, free_port)
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'Running DPP on rank={rank}.')
# Create model and move it to GPU.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # optimizer takes DDP model.
optimizer.zero_grad()
inputs = torch.randn(20, 10) # .to(rank)
print(f'inputs device={inputs.device}')
outputs = ddp_model(inputs)
print(f'output device={outputs.device}')
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_func, world_size, free_port):
mp.spawn(
demo_func,
args=(world_size, free_port),
nprocs=world_size,
join=True
)
free_port = get_open_port()
run_demo(demo_basic, 4, free_port)
| https://stackoverflow.com/questions/66348957/ |
MT5ForConditionalGeneration with Pytorch-lightning gives attribute_error | Can't handle that problem for several days
I'm new to NLP and the solution is probably very simple
class QAModel(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
def forward(self, input_ids, attention_mask, labels=None):
output = model(
input_ids=input_ids,
attention_mask=attention_mask,
labels=labels
)
return output.loss, output.logits
def training_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('train_loss', loss, prog_bar=True, logger=True)
return loss
def validation_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('val_loss', loss, prog_bar=True, logger=True)
return loss
def test_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('test_loss', loss, prog_bar=True, logger=True)
return loss
def configure_optimizers(self):
return AdamW(self.parameters(), lr=0.0001)
model = QAModel()
from pytorch_lightning.callbacks import ModelCheckpoint
checkpoint_callback = ModelCheckpoint(
dirpath='/content/checkpoints',
filename='best-checkpoint',
save_top_k=1,
verbose=True,
monitor='val_loss',
mode='min'
)
trainer = pl.Trainer(
checkpoint_callback=checkpoint_callback,
max_epochs=N_EPOCHS,
gpus=1,
progress_bar_refresh_rate=30
)
trainer.fit(model, data_module)
Running this code gives me
AttributeError: 'QAModel' object has no attribute 'automatic_optimization'
after fit() function
Probably, the problem is in MT5ForConditionalGeneration, as after passing it to funtion() we've got the same error
| Try inheriting from pl.LightningModule instead of pl.LightningDataModule. It is the right base class for defining a model.
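A minimal sketch of the change (only the base class differs; the rest of your class can stay the same):
class QAModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)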
| https://stackoverflow.com/questions/66350832/ |
how do I eliminate the for-loop in this pytorch code | I have this custom pytorch module (below). It does exactly what I need; it just does it very slowly. What can I do to speed this up? I know that I'm not supposed to have a for-loop in there; it just wasn't clear how to do that divide operation without it. How do I broadcast the x tensor to the divide without that loop? I can move the back-weights to their own layer if that helps.
class StepLayer(nn.Module):
def __init__(self):
super(StepLayer, self).__init__()
w = init_weights()
self.front_weights = nn.Parameter(torch.DoubleTensor([w, w]).T, requires_grad=True)
self.back_weights = nn.Parameter(torch.DoubleTensor([w]).T, requires_grad=True)
def forward(self, x):
# x shape is batch by feature
results = []
for batch in x:
b = batch.divide(self.front_weights)
b = torch.some_math_function(b)
b = b.sum(dim=1)
b = torch.some_other_math_function(b)
b = b @ self.back_weights
results.append(b)
stack = torch.vstack(results)
return stack
| Below is a source code with shape after each step described (read code comments please).
I've assumed a few things like F=100, x=Bx2, front_weights=100x2, back_weights=100, you should be able to easily adjust it to your case.
import torch
import torch.nn as nn

class StepLayer(nn.Module):
def __init__(self):
super().__init__()
F = 100
# Notice I've added `1` dimension in front_weights
self.front_weights = nn.Parameter(torch.randn(1, F, 2), requires_grad=True)
self.back_weights = nn.Parameter(torch.randn(F), requires_grad=True)
def forward(self, x):
# x.shape == (B, 2)
x = x.unsqueeze(dim=1) # (B, 1, 2)
x = x / self.front_weights # (B, F, 2)
# I just took some element-wise math function from PyTorch
x = torch.sin(x) # (B, F, 2)
x = torch.sum(x, dim=-1) # (B, F)
x = torch.sin(x) # (B, F)
return x @ self.back_weights # (B, )
# results = []
# for batch in x:
# # batch - (1, 2)
# b = batch.divide(self.front_weights) # (F, 2)
# b = torch.some_math_function(b) # (F, 2)
# b = b.sum(dim=1) # (F, )
# b = torch.some_other_math_function(b) # (F, )
# b = b @ self.back_weights # (1, )
# results.append(b)
# stack = torch.vstack(results) # (B, )
# return stack # (B,)
layer = StepLayer()
print(layer(torch.randn(64, 2)).shape)
Main trick is to use 1 dimensions for broadcasting when necessary (division especially) and smart weight initialization so you don't have to do any transpose operations.
Other things
You might want to reconsider Double, float (as used above) is much faster, especially on CUDA and takes half the memory (and neural network should compensate for precision loss, if any).
Use half precision and mixed training if the speed is still an issue (float16 dtype instead of float32), but only on CUDA, see here to know more about automated mixed precision
| https://stackoverflow.com/questions/66358684/ |
Is there a good way to modify some values in a pytorch tensor while preserving the autograd functionality? | Sometimes I need to modify some of the values in a pytorch tensor. For example, given a tensor x, I need to multiply its positive part by 2 and multiply its negative part by 3:
import torch
x = torch.randn(1000, requires_grad=True)
x[x>0] = 2 * x[x>0]
x[x<0] = 3 * x[x<0]
y = x.sum()
y.backward()
However such inplace operations always break the graph for autograd:
Traceback (most recent call last):
File "test_rep.py", line 4, in <module>
x[x>0] = 2 * x[x>0]
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
Therefore, so far I've been using the following workaround:
import torch
x = torch.randn(1000, requires_grad=True)
y = torch.zeros_like(x, device=x.device)
y[x>0] = 2 * x[x>0]
y[x<0] = 3 * x[x<0]
z = y.sum()
z.backward()
which results in manually creating new tensors. I wonder if there is a better way to do this.
| How about like following?
import torch
x = torch.randn(1000, requires_grad=True)
x = torch.where(x>0, x*2, x)
x = torch.where(x<0, x*3, x)
y = x.sum()
y.backward()
| https://stackoverflow.com/questions/66363241/ |
Reward of Pong game - (OpenAI gym) |
I know that the Pong Game initializes to new game when one side scores 20 points.
By the way, the reward shows that it goes down below -20.
Why is that so?
One thing to expect is that after one side gets 20 points, the game is reset by playing one more time. Does the game need to get 21 points to initialize?
(Use 8 workers, A2C, PongNoFrameskip-v4)
| Pong is played to 21 points, not 20.
| https://stackoverflow.com/questions/66363245/ |
DCGANs discriminator accuracy metric using PyTorch | I am implementing DCGANs using PyTorch.
It works well in that I can get reasonable quality generated images, however now I want to evaluate the health of the GAN models by using metrics, mainly the ones introduced by this guide https://machinelearningmastery.com/practical-guide-to-gan-failure-modes/
Their implementation uses Keras which SDK lets you define what metrics you want when you compile the model, see https://keras.io/api/models/model/. In this case the accuracy of the discriminator, i.e. percentage of when it successfully identifies an image as real or generated.
With the PyTorch SDK, I can't seem to find a similar feature that would help me easily acquire this metric from my model.
Does Pytorch provide the functionality to be able to define and extract common metrics from a model?
| Pure PyTorch does not provide metrics out of the box, but it is very easy to define those yourself.
Also there is no such thing as "extracting metrics from model". Metrics are metrics, they measure (in this case accuracy of discriminator), they are not inherent to the model.
Binary accuracy
In your case, you are looking for binary accuracy metric. Below code works with either logits (unnormalized probability outputed by discriminator, probably last nn.Linear layer without activation) or probabilities (last nn.Linear followed by sigmoid activation):
import typing
import torch
class BinaryAccuracy:
def __init__(
self,
logits: bool = True,
reduction: typing.Callable[
[
torch.Tensor,
],
torch.Tensor,
] = torch.mean,
):
self.logits = logits
if logits:
self.threshold = 0
else:
self.threshold = 0.5
self.reduction = reduction
def __call__(self, y_pred, y_true):
return self.reduction(((y_pred > self.threshold) == y_true.bool()).float())
Usage:
metric = BinaryAccuracy()
target = torch.randint(2, size=(64,))
outputs = torch.randn(size=(64, 1))
print(metric(outputs, target))
PyTorch Lightning or other third party
You can also use PyTorch Lightning or other framework on top of PyTorch which defines metrics like accuracy
| https://stackoverflow.com/questions/66365566/ |
Pytorch crop images from Top Left Corner in Transforms | I'm using Pytorch's transforms.Compose and in my dataset I have 1200x1600 (Height x Width) images.
I want to crop the images starting from the Top Left Corner (0,0) so that I can have 800x800 images.
I was looking in Pytorch documentation but I didn't find anything to solve my problem, so I copied the source code of center_crop in my project and modified it as follows:
def center_crop(img: Tensor, output_size: List[int]):
# .... Other stuff of Pytorch
# ....
# Original Pytorch Code (that I commented)
crop_top = int((image_height - crop_height + 1) * 0.5)
crop_left = int((image_width - crop_width + 1) * 0.5)
# ----
# My modifications:
crop_top = crop_left = 0
return crop(img, crop_top, crop_left, crop_height, crop_width)
But basically I think this is quite an overkill, if it's possible I'd like to avoid to copy their code and modify it.
Isn't there anything that already implements the desired behaviour by default, is there?
| I used Lambda transforms in order to define a custom crop
from torchvision.transforms.functional import crop
def crop800(image):
return crop(image, 0, 0, 800, 800)
data_transforms = {
'images': transforms.Compose([transforms.ToTensor(),
transforms.Lambda(crop800),
transforms.Resize((400, 400))])}
| https://stackoverflow.com/questions/66368123/ |
How does PyTorch DataLoader interact with a PyTorch dataset to transform batches? | I'm creating a custom dataset for NLP-related tasks.
In the PyTorch custom datast tutorial, we see that the __getitem__() method leaves room for a transform before it returns a sample:
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
### SOME DATA MANIPULATION HERE ###
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
However, the code here:
if torch.is_tensor(idx):
idx = idx.tolist()
implies that multiple items should be able to be retrieved at a time which leaves me wondering:
How does that transform work on multiple items? Take the custom transforms in the tutorial for example. They do not look like they could be applied to a batch of samples in a single call.
Related, how does a DataLoader retrieve a batch of multiple samples in parallel and apply said transform if the transform can only be applied to a single sample?
|
How does that transform work on multiple items? They work on multiple items through use of the data loader. By using transforms, you are specifying what should happen to a single emission of data (e.g., batch_size=1). The data loader takes your specified batch_size and makes n calls to the __getitem__ method in the torch data set, applying the transform to each sample sent into training/validation. It then collates n samples into your batch size emitted from the data loader.
Related, how does a DataLoader retrieve a batch of multiple samples in parallel and apply said transform if the transform can only be applied to a single sample? Hopefully above makes sense to you. Parallelization is done by the torch data set class and the data loader, where you specify num_workers. Torch will pickle the data set and spread it across workers.
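A minimal sketch of that flow (hypothetical dataset and transform, just to show where the per-sample transform runs relative to batching):
import torch
from torch.utils.data import Dataset, DataLoader
class ExampleDataset(Dataset):               # hypothetical stand-in for the tutorial's dataset
    def __init__(self, transform=None):
        self.data = list(range(10))
        self.transform = transform
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        sample = self.data[idx]
        if self.transform:
            sample = self.transform(sample)  # applied to ONE sample at a time
        return sample
loader = DataLoader(ExampleDataset(transform=lambda s: s * 2), batch_size=4)
print(next(iter(loader)))   # tensor([0, 2, 4, 6]) -> 4 separate __getitem__ calls, collated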
| https://stackoverflow.com/questions/66370250/ |
Adding custom weights to training data in PyTorch | Is it possible to add custom weights to the training instances in PyTorch? More explicitly, I'd like to add a custom weight for every row in my dataset. By default, the weights are 1, which means every data is equally important for my model.
| Loss functions support class weights not sample weights. For sample weights you can do something like below (commented inline):
import torch
x = torch.rand(8, 4)
# Ground truth
y = torch.randint(2, (8,))
# Weights per sample
weights = torch.rand(8, 1)
# Add weights as a column, so that they will be passed through
# dataloaders in case you want to use one
x = torch.cat((x, weights), dim=1)
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss(reduction='none')
def weighted_loss(y, y_hat, w):
return (loss_fn(y, y_hat)*w).mean()
loss = weighted_loss(model(x[:, :-1]), y, x[:, -1])
print (loss)
| https://stackoverflow.com/questions/66374709/ |
Computing Cosine Distance with Differently shaped tensors | I have the following tensor representing a word vector
A = (2, 500)
Where the first dimension is the BATCH dimension (i.e. A contains two word vectors each with 500 elements)
I also have the following tensor
B = (10, 500)
I want to compute the cosine distance between A and B such that I get
C = (2, 10, 1)
i.e for each row in A compute the cosine distance with each row in B
I looked at using torch.nn.functional.F.cosine_similarity however this doesn't work as the dimensions must be the same.
Whats the best efficient way of achieving this in pytorch?
| Use broadcasting technique with unsqueeze
import torch.nn.functional as F
C = F.cosine_similarity(A.unsqueeze(1), B, dim=-1)
print(C.shape)
# torch.size([2,10])
| https://stackoverflow.com/questions/66374955/ |
How to get a column from a tensor? | Let's say I have a tensor consisting of 1 and 0's as shown below. How can I get the index of a specific column to replace with new values ? If I want to replace the values of column 1 with the [3.,4.,5.,6.], how do I accomplish this ?
a = torch.tensor([[[1., 0., 0., 0.]],
[[0., 1., 0., 0.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]]])
| Calling them 'columns' is a bit tricky, given that this is a 3D tensor.
This will do what you need, setting 'column' 1 to the values you gave.
a = torch.tensor([[[1., 0., 0., 0.]],
[[0., 1., 0., 0.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]]])
# Change values in 'column' 1 (zero-indexed):
# The 0 is there because of the size-1 second dimension.
a[1, 0, :] = torch.tensor([3., 4., 5., 6.])
print(a)
# tensor([[[1., 0., 0., 0.]],
# [[3., 4., 5., 6.]],
# [[1., 0., 0., 0.]],
# [[0., 0., 0., 1.]],
# [[1., 0., 0., 0.]],
# [[0., 0., 0., 1.]],
# [[1., 0., 0., 0.]]])
| https://stackoverflow.com/questions/66379373/ |
Why are embeddings necessary in Pytorch? | I (think) I understand the basic principle behind embeddings: they're a shortcut to quickly perform the operation (one hot encoded vector) * matrix without actually performing that operation (by utilizing the fact that this operation is equivalent to indexing into the matrix) while still maintaining the same gradient as if that operation was actually performed.
I know in the following example:
e = Embedding(3, 2)
n = e(torch.LongTensor([0, 2]))
n will be a tensor of shape 2, 2.
But, we could also do:
p = nn.Parameter(torch.zeros([3, 2]).normal_(0, 0.01))
p[tensor([0, 2])]
and get the same result without an embedding.
This in and of itself wouldn't be confusing since in the first example n has a grad_fn called EmbeddingBackward whereas in the 2nd example p has a grad_fn called IndexBackward, which is what we expect since we know embeddings simulate a different derivative.
The confusing part is in chapter 8 of the fastbook they use embeddings to compute movie recommendations. But then, they do it without embeddings in basically the same manner & the model still works. I would expect the version without embeddings to fail because the derivative would be incorrect.
Version with embeddings:
class DotProductBias(Module):
def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
self.user_factors = Embedding(n_users, n_factors)
self.user_bias = Embedding(n_users, 1)
self.movie_factors = Embedding(n_movies, n_factors)
self.movie_bias = Embedding(n_movies, 1)
self.y_range = y_range
def forward(self, x):
users = self.user_factors(x[:,0])
movies = self.movie_factors(x[:,1])
res = (users * movies).sum(dim=1, keepdim=True)
res += self.user_bias(x[:,0]) + self.movie_bias(x[:,1])
return sigmoid_range(res, *self.y_range)
Version without:
def create_params(size):
return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))
class DotProductBias(Module):
def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
self.user_factors = create_params([n_users, n_factors])
self.user_bias = create_params([n_users])
self.movie_factors = create_params([n_movies, n_factors])
self.movie_bias = create_params([n_movies])
self.y_range = y_range
def forward(self, x):
users = self.user_factors[x[:,0]]
movies = self.movie_factors[x[:,1]]
res = (users*movies).sum(dim=1)
res += self.user_bias[x[:,0]] + self.movie_bias[x[:,1]]
return sigmoid_range(res, *self.y_range)
Does anyone know what is going on?
| Besides the points provided in the comments:
I (think) I understand the basic principle behind embeddings: they're
a shortcut to quickly perform the operation (one hot encoded vector) *
matrix without actually performing that operation
This one is only partially correct. Indexing into matrix is fully differentiable operation, it's just taking part of the data and propagate gradient this path only. Multiplication here would be wasteful and unnecessary.
and get the same result without an embedding.
This one is true
called EmbeddingBackward whereas in the 2nd example p has a grad_fn called IndexBackward, which is what we expect since we know embeddings simulate a different derivative. (emphasis mine)
This one isn't (or often is not). Embedding also chooses part of the data, just like you did, grad_fn is different due to "extra functionalities", but in principle it is the same (choosing some vector(s) from the matrix), think of it like IndexBackward on steroids.
The confusing part is in chapter 8 of the fastbook they use embeddings
to compute movie recommendations. But then, they do it without
embeddings in basically the same manner & the model still works. I
would expect the version without embeddings to fail because the
derivative would be incorrect. (emphasis mine)
In this exact case, both approaches are equivalent (give or take different results from random initialization).
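A quick sanity check of that equivalence (a minimal sketch that reuses the same weight matrix for both approaches):
import torch
emb = torch.nn.Embedding(3, 2)
weights = torch.nn.Parameter(emb.weight.detach().clone())   # same matrix, plain Parameter
idx = torch.LongTensor([0, 2])
print(torch.allclose(emb(idx), weights[idx]))                # True: both just index the matrix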
Why nn.Embedding at all?
Easier to understand your intent when reading the code
Common utilities and functionality added if needed (as pointed out in the comments)
| https://stackoverflow.com/questions/66382790/ |
Tensor creation and destruction within a while loop in libtorch C++ | I have just started with libtorch and I am having some trouble with while loops and tensors :roll_eyes:
So, my main func looks like so:
int main()
{
auto tensor_create_options = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCPU).requires_grad(false);
torch::Tensor randn_tensor = torch::randn({10}, tensor_create_options);
int randn_tensor_size = randn_tensor.sizes()[0];
while (randn_tensor_size > 5)
{
std::cout << "==> randn_tensor shape: " << randn_tensor.sizes() << std::endl;
randn_tensor.reset(); //reset();
torch::Tensor randn_tensor = torch::randn({3}, tensor_create_options);
std::cout << "==> randn_tensor shape 3: " << randn_tensor.sizes() << std::endl;
randn_tensor_size--;
}
return 0;
}
and I get thrown this:
==> randn_tensor shape: [10]
==> randn_tensor shape 3: [3]
terminate called after throwing an instance of 'c10::Error'
what(): sizes() called on undefined Tensor
Essentially, what I want to do is recreate the tensor within the while loop and ensure that I can access it again in the whileloop.
Interestingly, it seems to have created the tensor of reduced size, but the while loop does not seem to recognise this.
Thank you!
| You have a shadowing issue here, try the loop that way:
while (randn_tensor_size > 5)
{
std::cout << "==> randn_tensor shape: " << randn_tensor.sizes() << std::endl;
randn_tensor.reset(); //reset();
randn_tensor = torch::randn({3}, tensor_create_options);
std::cout << "==> randn_tensor shape 3: " << randn_tensor.sizes() << std::endl;
randn_tensor_size--;
}
Maybe the reset of the tensor isn't necessary at all; that depends on the internals of this class. If that's not the actual intention of your code and you only want the original tensor to be deleted, then simply reset that one right before the loop. Independently of this, try to make the code clearer about its intent! I do not really understand what you want to achieve exactly. Your loop counter is misused, since you mix size and counting semantics while it depends only on the initial size. Within the loop, you simply recreate the tensor on the stack again and again, which never affects your counter.
| https://stackoverflow.com/questions/66383344/ |
How to obtain the values from each row at given position in pytorch? | How to obtain the values from a 2-d torch array based on 1-d array containing the positions at each row:
for example:
a = torch.randn((5,5))
>>> a
tensor([[ 0.0740, -0.3129, 0.7814, -0.0519, 1.3503],
[ 1.1985, 0.2098, -0.0326, 0.3922, 0.5037],
[-1.4334, 1.4047, -0.6607, -1.8024, -0.0088],
[ 1.2116, 0.5928, 1.4041, 1.0494, -0.1146],
[ 0.4173, 1.0482, 0.5244, -2.1767, 0.5264]])
b = torch.randint(0,5, (5,))
>>> b
tensor([1, 0, 1, 3, 2])
I wanted to get the elements of tensor a at the position given by tensor b
For example:
desired output:
tensor([-0.3129,
1.1985,
1.4047,
1.0494,
0.5244])
Here, each element in a given position by tensor b is chosen row-wise.
I have tried:
for index in range(b.size(-1)):
val = torch.cat((val,a[index,b[index]].view(1,-1)), dim=0) if val is not None else a[index,b[index]].view(1,-1)
>>> val
tensor([[-0.3129],
[ 1.1985],
[ 1.4047],
[ 1.0494],
[ 0.5244]])
However, is there a tensor indexing way to do it?
I tried couple of solutions using tensor indexing, but none of them worked.
| You can use torch.gather
>>> a.gather(1, b.unsqueeze(1))
tensor([[-0.3129],
[ 1.1985],
[ 1.4047],
[ 1.0494],
[ 0.5244]])
Or
>>> a[range(len(a)), b].unsqueeze(1)
tensor([[-0.3129],
[ 1.1985],
[ 1.4047],
[ 1.0494],
[ 0.5244]])
| https://stackoverflow.com/questions/66384631/ |
How can I add more neurons / filters to a neural network model after training? | I'm interested in training both a CNN model and a simple linear feed forward model in PyTorch, and after training to add more filters -- to the CNN layers, & neurons -- to the linear model layers and the outputs (e.g. from binary classification to multiclass classification) of both. By adding them I specifically mean to keep the weights that were trained constant, and to add random initialized weights to the new, incoming weights.
There's an example of a CNN model here, and an example of a simple linear feed forward model here
| This one was a bit tricky and requires slice (see this answer for more info about slice, but it should be intuitive). Also this answer for slice trick. Please see comments for explanation:
import torch
def expand(
original: torch.nn.Module,
*args,
**kwargs
# Add other arguments if needed, like different stride
# They won't change weights shape, but may change behaviour
):
new = type(original)(*args, **kwargs)
new_weight_shape = torch.tensor(new.weight.shape)
new_bias_shape = torch.tensor(new.bias.shape)
original_weight_shape = torch.tensor(original.weight.shape)
original_bias_shape = torch.tensor(original.bias.shape)
# I assume bias and weight exist, if not, do some checks
# Also quick check, that new layer is "larger" than original
assert torch.all(new_weight_shape >= original_weight_shape)
assert new_bias_shape >= original_bias_shape
# All the weights will be inputted from top to bottom, bias 1D assumed
new.bias.data[:original_bias_shape] = original.bias.data
# Create slices 0:n for each dimension
slicer = tuple([slice(0, dim) for dim in original_weight_shape])
# And input the data
new.weight.data[slicer] = original.weight.data
return new
layer = torch.nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
new = expand(layer, in_channels=32, out_channels=64, kernel_size=3)
This should work for any layer (which has weight and bias, adjust if needed). Using this approach you can recreate your neural network or use PyTorch's apply (docs here)
Also remember, that you have to explicitly pass creational *args and **kwargs for "new layer" which will have trained connections inputted.
| https://stackoverflow.com/questions/66388182/ |
How to decipher these 2 lines of code? What does it mean? | This is the code:
real_idx = torch.rand(len(real)) > 0.2
real_selected = real[real_idx][: int(len(real)*p_real)]
p_real is actually 0.2
We're supposed to mix 2 tensors, namely the real and the fake tensor.
| The first line creates a random mask the size of real. The second line applies this mask on real and slices out 20% the size of real from whatever the masking operation returned.
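A tiny concrete run (with a made-up real tensor) makes the effect visible:
import torch
real = torch.arange(10.)
p_real = 0.2
real_idx = torch.rand(len(real)) > 0.2                      # random boolean mask
real_selected = real[real_idx][: int(len(real) * p_real)]   # then keep only the first 20%
print(real_idx)
print(real_selected)                                        # at most 2 of the surviving rows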
Now if what you meant to ask was why is this being done, then you'd have to give more details.
| https://stackoverflow.com/questions/66389691/ |
can't convert pytorch tensor dtype from float64 to double | I need to convert an int to a double tensor, and I've already tried several ways including torch.tensor([x], dtype=torch.double), first defining the tensor and then converting the dtype to double with x_tensor.double(), and also defining the tensor with torch.DoubleTensor([x]) but none actually change the dtype from torch.float64. Here's the code snippet
>>> x = 6
>>> x = torch.tensor([x], dtype=torch.double)
>>> x
tensor([6.], dtype=torch.float64)
>>> x = 6
>>> x_tensor = torch.Tensor([x])
>>> x_double = x_tensor.double()
>>> x_double
tensor([6.], dtype=torch.float64)
>>> x_tensor = torch.DoubleTensor([x])
>>> x_tensor
tensor([6.], dtype=torch.float64)
Any ideas as to why it's not converting?
| It is converting; torch.float64 and torch.double are the same thing
| https://stackoverflow.com/questions/66391304/ |
TypeError: Can't convert re.compile('[A-Z]+') (re.Pattern) to Union[str, tokenizers.Regex] | I'm having issues applying a Regex expression to a Split() operation found in the HuggingFace Library. The library requests the following input for Split().
pattern (str or Regex) β A pattern used to split the string. Usually a
string or a Regex
In my code I am applying the Split() operation like so:
tokenizer.pre_tokenizer = Split(pattern="[A-Z]+", behavior='isolated')
but it's not working because [A-Z]+ is being interpreted as a string not a Regex expression. I've used the following to no avail:
pattern = re.compile("[A-Z]+")
tokenizer.pre_tokenizer = Split(pattern=pattern, behavior='isolated')
Getting the following error:
TypeError: Can't convert re.compile('[A-Z]+') (re.Pattern) to Union[str, tokenizers.Regex]
| The following solution worked by importing Regex from the tokenizers library:
from tokenizers import Regex
tokenizer.pre_tokenizer = Split(pattern=Regex("[A-Z]+"),
behavior='isolated')
| https://stackoverflow.com/questions/66391490/ |
Torch: How to inspect weights after training? | I am wondering what I am doing wrong when looking to see how the weights changed during training.
My loss goes down considerably but it appears that the initialized weights are the same as trained weights. Am I looking in the wrong location? I would appreciate any insight that you might have!
import torch
import numpy as np
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.nn.functional as F
# setup GPU/CPU processing
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# initialize model
class mlp1(torch.nn.Module):
def __init__(self, num_features, num_hidden, num_classes):
super(mlp1, self).__init__()
self.num_classes = num_classes
self.input_layer = torch.nn.Linear(num_features, num_hidden)
self.out_layer = torch.nn.Linear(num_hidden, num_classes)
def forward(self, x):
x = self.input_layer(x)
x = torch.sigmoid(x)
logits = self.out_layer(x)
probas = torch.softmax(logits, dim=1)
return logits, probas
# instantiate model
model = mlp1(num_features=28*28, num_hidden=100, num_classes=10).to(device)
# check initial weights
weight_check_pre = model.state_dict()['input_layer.weight'][0][0:25]
# optim
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# download data
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
# data loader
train_dataloader = DataLoader(dataset=train_dataset,
batch_size=100,
shuffle=True)
# train
NUM_EPOCHS = 1
for epoch in range(NUM_EPOCHS):
model.train()
for batch_idx, (features, targets) in enumerate(train_dataloader):
# send data to device
features = features.view(-1, 28*28).to(device)
targets = targets.to(device)
# forward
logits, probas = model(features)
# loss
loss = F.cross_entropy(logits, targets)
optimizer.zero_grad()
loss.backward()
# now update weights
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Loss: %.4f'
%(epoch+1, NUM_EPOCHS, batch_idx,
len(train_dataloader), loss))
# check post training
weight_check_post = model.state_dict()['input_layer.weight'][0][0:25]
# compare
weight_check_pre == weight_check_post # all equal
| That is because both variables reference the same underlying tensor in memory, so they will always be equal to each other.
You can do this to get actual copies of the state_dict.
import copy
# check initial weights
weight_check_pre = copy.deepcopy(model.state_dict()['input_layer.weight'][0][0:25])
...
# check post training
weight_check_post = copy.deepcopy(model.state_dict()['input_layer.weight'][0][0:25])
| https://stackoverflow.com/questions/66393460/ |
which object detection algorithms can extract the QR code from an image efficiently | I am new to Object Detection, for now, I want to predict the QR code in images. I want to extract the QR code from the images, and only predict the QR code without the background information, and finally predict the exact number the QR code is representing, since I am using PyTorch, is there any object detection algorithm which is compatible to PyTorch that I could apply to this task?
(for what I mean by extracting, the raw input is an image,I want to change the input of the image to the QR code in the image).
| There are two ways for this task:
Computer Vision based approach:
OpenCV library's QRCodeDetector() function can detect and read QR codes easily. It returns data in QR code and bounding box information of the QR code:
import cv2
detector = cv2.QRCodeDetector()
data, bbox, _ = detector.detectAndDecode(img)
Deep learning based approach:
Using common object detection framework - Yolo (Yolo v5 in PyTorch), you can achieve your target. However, you need data to train it. For computer vision based approach, you don't need to do training or data collection.
You may consider reading these two.
| https://stackoverflow.com/questions/66399668/ |
Pytorch Resnet model error if FC layer is changed in Colab | If I simply import the Resnet Model from Pytorch in Colab, and use it to train my dataset, there are no issues. However, when I try to change the last FC layer to change the output features from 1000 to 9, which is the number of classes for my datasets, the following error is obtained.
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
Working version:
import torchvision.models as models
#model = Net()
model=models.resnet18(pretrained=True)
# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()
# checking if GPU is available
if torch.cuda.is_available():
model = model.cuda()
criterion = criterion.cuda()
Version with error:
import torchvision.models as models
#model = Net()
model=models.resnet18(pretrained=True)
# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()
# checking if GPU is available
if torch.cuda.is_available():
model = model.cuda()
criterion = criterion.cuda()
model.fc = torch.nn.Linear(512, 9)
Error occurs in the stage where training occurs, aka
outputs = model(images)
How should I go about fixing this issue?
| Simple error, the fc layer should be instantiated before declaring model as cuda.
I.e
model=models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(512, 9)
if torch.cuda.is_available():
model = model.cuda()
| https://stackoverflow.com/questions/66401357/ |
In PyTorch, what exactly does the grad_fn attribute store and how is it used? | In PyTorch, the Tensor class has a grad_fn attribute. This references the operation used to obtain the tensor: for instance, if a = b + 2, a.grad_fn will be AddBackward0. But what does "reference" mean exactly?
Inspecting AddBackward0 using inspect.getmro(type(a.grad_fn)) will state that the only base class of AddBackward0 is object. Additionally, the source code for this class (and in fact, any other class which might be encountered in grad_fn) is nowhere to be found in the source code!
All of this leads me to the following questions:
What precisely is stored in grad_fn and how is it called during back-propagation?
How come the objects that get stored in grad_fn do not have some sort of common super class, and why is there no source code for them on GitHub?
| grad_fn is a function "handle", giving access to the applicable gradient function. The gradient at the given point is a coefficient for adjusting weights during back-propagation.
"Handle" is a general term for an object descriptor, designed to give appropriate access to the object. For instance, when you open a file, open returns a file handle. When you instantiate a class, the __init__ function returns a handle to the created instance. The handle contains references (usually memory addresses) to the data and functions for the item in question.
It appears as the generic object class because it's from the underlying implementation in another language, such that it does not map exactly to the Python function type. PyTorch handles the inter-language call and return. This hand-off is part of the pre-complied (shared-object) run-time system.
Is that enough to clarify what you see?
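A quick way to poke at these handles interactively (a minimal sketch):
import torch
a = torch.tensor(2.0, requires_grad=True)
b = a * 3
print(b.grad_fn)                 # e.g. <MulBackward0 object at 0x...>
print(b.grad_fn.next_functions)  # handles pointing back toward a's gradient accumulator
b.backward()                     # the autograd engine walks these handles to fill a.grad
print(a.grad)                    # tensor(3.)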
| https://stackoverflow.com/questions/66402331/ |
What is a good use of the intermediate hidden states of an RNN? | So I've used RNN/LSTMs in three different capacities:
Many to many: Use every output of the final layer to predict the next. Could be classification or regression.
Many to one: Use the final hidden state to perform regression or classification.
One to many: Take a latent space vector, perhaps the final hidden state of an LSTM encoder and use it to generate a sequence (I've done this in the form of an autoencoder).
In none of these cases do I use the intermediate hidden states to generate my final output. Only the last layer outputs in case #1 and only the last layer hidden state in case #2 and #3. However, PyTorch nn.LSTM/RNN returns a vector containing the final hidden state of every layer, so I assume they have some uses.
I'm wondering what some use cases of those intermediate layer states are?
| There's nothing explicitly requiring you to use the last layer only. You could feed in all of the layers to your final classifier MLP for each position in the sequence (or at the end, if you're classifying the whole sequence).
As a practical example, consider the ELMo architecture for generating contextualized (that is, token-level) word embeddings. (Paper here: https://www.aclweb.org/anthology/N18-1202/) The representations are the hidden states of a multi-layer biRNN.
Figure 2 in the paper shows how different layers differ in usefulness depending on the task. The authors suggest that lower levels encode syntax, while higher levels encode semantics.
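A minimal sketch (made-up sizes) of feeding every layer's final hidden state from PyTorch's nn.LSTM into a classifier, rather than only the top layer:
import torch
import torch.nn as nn
lstm = nn.LSTM(input_size=16, hidden_size=32, num_layers=3, batch_first=True)
x = torch.randn(8, 20, 16)                         # (batch, seq_len, features)
output, (h_n, c_n) = lstm(x)                       # h_n: (num_layers, batch, hidden)
all_layers = h_n.permute(1, 0, 2).reshape(8, -1)   # concatenate every layer's final state
classifier = nn.Linear(3 * 32, 5)                  # classify from all layers, not just the top
logits = classifier(all_layers)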
| https://stackoverflow.com/questions/66403791/ |
How to split json dataset and save it? | I have taken a json dataset. Dataset name is v2_OpenEnded_mscoco_train2014_questions.json How can I split some of the data from the dataset and save the split data into another json file?
This is the example of my dataset:
{"image_id": 426004, "question": "How many buns are on the plate?",
"question_id": 426004002}, {"image_id": 92846, "question": "What is
the color of the vase without flowers?", "question_id": 92846000},
{"image_id": 92846, "question": "Is there anything red in this
photo?", "question_id": 92846002}, {"image_id": 92846, "question":
"What does that vase represent?", "question_id": 92846003},
{"image_id": 262166, "question": "What color is the couch?",
"question_id": 262166002}, {"image_id": 262166, "question": "How many
seats are available?", "question_id": 262166003}
I have about 443,757 data in my dataset. I want to split the dataset into 400 different dataset each having 100 data. How can I automate this using python?
It will be a great help if this can be done with pytorch.
| Try this:
n_rows = 100
current_data = []
for i, e in enumerate(data):
if i % n_rows == 0 and i > 0:
with open(f'dataset_{i - n_rows}-{i}.json', 'w') as f:
json.dump(current_data, f)
current_data = []
current_data.append(e)
data is a list with jsons which you have. We iterate over it and every n_rows lines are written in a new file. The last several rows aren't written.
| https://stackoverflow.com/questions/66407955/ |
How to get the indexes of equal elements in two different size PyTorch tensors? | Let's say I have two PyTorch tensors:
t_1d = torch.Tensor([6, 5, 1, 7, 8, 4, 7, 1, 0, 4, 11, 7, 4, 7, 4, 1])
t = torch.Tensor([4, 7])
I want to get the indices of exact match intersection between the sets for
the tensor t_1d with tensor t.
Desired output of t_1d and t: [5, 12] (first index of exact intersection)
Preferably on GPU for large Tensors, so no loops or Numpy casts.
| In general, we can check where each element in t is equal to elements in t_1d.
After that, shift the mask for the second element back by its offset from the first (in this case by -1) and check where both masks are true:
intersection = (t_1d == t[0]) & torch.roll(t_1d == t[1], shifts=-1)
torch.where(intersection)[0] # torch.tensor([5, 12])
| https://stackoverflow.com/questions/66409421/ |
How do I pass an array of tensors into the criterion/loss function in PyTorch? | My loss function gives an error:
self.loss_fn = nn.MSELoss()
#### -- Snip ####
loss = self.loss_fn(predictions, targets) # Error here: 'list' object has no attribute 'size'
loss.backward()
My predictions are an array of tensors as follows:
predictions = []
for _ in range(100):
prediction = MyNeuralNet(inputs)
predictions.append(prediction)
How can I pass an array of tensors into my loss criterion function without getting the above error?
| By using torch.stack I could fix my issue:
predictions = torch.stack(predictions)
loss = self.loss_fn(predictions, targets)
| https://stackoverflow.com/questions/66411459/ |
What is torch.randn((1, 5))? | I'm confused as to why there are double parantheses instead of just torch.randn(1,5).
Is torch.randn(1,5) the same thing as torch.randn((1,5))?
| You should check the definition of this function here.
size (int...) β a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
>>> import torch
>>> a = torch.randn(1,5)
>>> b = torch.randn((1,5))
>>> a.shape == b.shape
True
Therefore, you can use either a or b since they have the same shape.
| https://stackoverflow.com/questions/66414391/ |
Target and output shape/type for binary classification using PyTorch | so I have some annotated images that I want to use to train a binary image classifier but I have been having issues creating the dataset and actually getting a test model to train. Each image is either of a certain class or not so I want to set up a binary classification dataset/model using PyTorch. I had some questions:
should labels be float or long?
what shape should my labels be?
I am using a resnet18 class from torchvision model, should my final softmax layer have one or two outputs?
what shapes should my target be, during training, if my batch size is 200?
what shape should my outputs be?
Thanks in advance
Quote
Delete
| Binary classification is slightly different from multi-class classification: for multi-class, your model predicts a vector of "logits" per sample and uses softmax to convert the logits to probabilities; in the binary case, the model predicts a scalar "logit" per sample and uses the sigmoid function to convert it to a class probability.
In pytorch the softmax and the sigmoid are "folded" into the loss layer (for numerical stability considerations) and therefore there are different cross entropy loss layers for the two cases: nn.BCEWithLogitsLoss for the binary case (with sigmoid) and nn.CrossEntropyLoss for the multi-class case (with softmax).
In your case you want to use the binary version (with sigmoid): nn.BCEWithLogitsLoss.
Thus your labels should be of type torch.float32 (same float type as the output of the network) and not integers.
You should have a single label per sample. Thus, if your batch size is 200, the target should have shape (200,1).
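A quick sketch of those shapes and dtypes (batch size 200, random values just for illustration):
import torch
logits = torch.randn(200, 1)                       # one scalar logit per sample
targets = torch.randint(0, 2, (200, 1)).float()    # float labels, shape (200, 1)
loss = torch.nn.BCEWithLogitsLoss()(logits, targets)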
I'll leave it here as an exercise to show that training a model with two outputs and CE+softmax is equivalent to binary output+sigmoid ;)
| https://stackoverflow.com/questions/66416878/ |
How to transform tensor size from [a, b] to [a, b, k] in PyTorch | I want to add one extra dimension to a tensor and set the value of this dimension as a specific value.
For examples,
print(a.size)
torch.Size([10, 5])
# after tranformation and set a to b
print(b.size)
torch.Size([10, 5, 1])
# after transformation and set a to c
print(c.size)
torch.Size([10, 2, 5])
Thansk in advances.
| torch.stack allows you to concatenate two arrays along a new dimension
value = 1.37
a = torch.normal(0, 1, size=(5, 10))
c = torch.stack([a, torch.ones(a.shape) * value], dim=1)
c.shape
Out: torch.Size([5, 2, 10])
c[:, 0, :]
Out: tensor([[-0.2944, -0.7366, 0.6882, -0.7106, 0.0182, -0.1156, -1.0394, -0.7524,
0.7587, -0.6066],
[-1.0445, -2.7990, 0.0232, 0.5246, -0.7383, 0.0306, -1.0277, -0.8969,
0.4026, 0.2006],
[-1.2622, -0.6563, -1.9218, -0.6932, -1.9633, 1.8271, 0.6753, -0.7564,
0.0107, -0.2312],
[-0.8111, -1.0776, -0.8583, 0.2782, -0.8116, 0.0984, 0.4799, 0.6854,
0.4408, -0.4280],
[-1.1083, 1.8509, 0.1209, 0.5571, -1.1472, 0.2342, 0.3912, 0.7858,
0.5879, 0.4139]])
c[:, 1, :]
Out: tensor([[1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700,
1.3700],
[1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700,
1.3700],
[1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700,
1.3700],
[1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700,
1.3700],
[1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700, 1.3700,
1.3700]])
| https://stackoverflow.com/questions/66422022/ |
How to import module only when I need to use it? | I have a project which has optional dependencies, depending on the use case it may need torch library. PyTorch takes a lot of memory, it's not optimal to install it when you don't need to use it. In my project, there are parts of the code which use torch, but most of the logic does not need it. I'm trying to implement optional import so that when I need to use torch I use it, here is my idea:
def _lazy_import_torch():
import torch
def main():
_lazy_import_torch()
print(f'You have torch: {torch.__version__}')
if __name__ == '__main__':
main()
When I run this code it appears that torch was not imported:
$ python main.py
Traceback (most recent call last):
File "main.py", line 9, in <module>
main()
File "main.py", line 6, in main
print(f'You have torch: {torch.__version__}')
NameError: name 'torch' is not defined
torch is installed in my environment.
I was wondering why it did not work and how can I make it works, please help me.
| This code makes torch a local name of the _lazy_import_torch function:
def _lazy_import_torch():
import torch
After it's executed, this name is gone, so you won't be able to access it anywhere outside this function.
You could return torch from the function and then pass it around everywhere:
def _lazy_import_torch():
import torch
return torch
def main():
torch = _lazy_import_torch()
print(f'You have torch: {torch.__version__}')
Or use a global variable:
torch = None
def _lazy_import_torch():
global torch
import torch
If the module hasn't been imported yet, you'll get an error every time you try to use it, but after you run _lazy_import_torch, the global name torch will refer to the torch module.
Here's an example with the CSV module from the standard library:
>>> csv = None
>>> def import_csv():
... global csv
... import csv
...
>>> csv
>>> import_csv()
>>> csv
<module 'csv' from '.../lib/python3.9/csv.py'>
>>>
| https://stackoverflow.com/questions/66423047/ |
How GNN layers can know which graph in a batch has the given node & node feature? | When we pass input as node features (x) and edge index (edge_index) to pytorch_geometric layer (e.g. GATConv), I am worried whether the layer can differentiate which batch sample the given node elements belong to.
x follows the shape [num of nodes, feature size] and edge_index follows shape [2, num of edges]. However, these 2 do not have the given information to know which input graph of batch size 32 have given node feature in the x.
Anyone can clarify on this ?
| PyTorch-Geometric treats all the graphs in a batch as a single huge graph, with the individual graphs disconnected from each other. The node indices correspond to nodes in this big graph. This means there is no need for a batch dimension in x or edge_index.
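A small sketch (toy graphs, assuming torch_geometric is installed) showing how the batch keeps track of graph membership via the batch vector:
import torch
from torch_geometric.data import Data, Batch
g1 = Data(x=torch.randn(3, 8), edge_index=torch.tensor([[0, 1], [1, 2]]))  # 3-node graph
g2 = Data(x=torch.randn(2, 8), edge_index=torch.tensor([[0], [1]]))        # 2-node graph
big = Batch.from_data_list([g1, g2])
print(big.x.shape)        # torch.Size([5, 8]) -> nodes of both graphs stacked
print(big.edge_index)     # node indices of g2 are offset by 3
print(big.batch)          # tensor([0, 0, 0, 1, 1]) -> which graph each node came from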
| https://stackoverflow.com/questions/66423622/ |
Convert my model predicted array to categorical using a condition | I've built a MLP neural network -> Trained -> Saved -> loaded -> now I'm testing my loaded model.
I've used y_pred=loaded_model(Variable(featuresTest)) to predict using pytorch then converted my tensor to array y_pred_arr and here's my result:
array([[-1.1663326 , 0.369073 ],
[-1.4255922 , 0.23584184],
[-2.045612 , 1.4165274 ],
...,
[ 4.327711 , -4.1331964 ],
[-1.255816 , 0.65834284],
[ 6.642277 , -7.4430957 ]], dtype=float32)
My true labels array y_test which I'm trying to compare to are categorical
array([[0., 1.],
[0., 1.],
[0., 1.],
...,
[1., 0.],
[0., 1.],
[1., 0.]], dtype=float32)
now I'm trying to use sklearn.metrics ->classification report however as my predicted array is obviously continuous I'm trying to convert it to a categorical which uses a condition converting the highest positive number in each of the array to a 1 for instance:
my first set in the predicted array is [-1.4255922 , 0.23584184], <- this would be converted to [ 0 , 1] as 0.2358 is the highest positive class in the prediction.
I've tried to use:
from keras.utils.np_utils import to_categorical
labels123 = to_categorical(y_pred_arr, dtype = "int64")
however, got this error:
IndexError: index -75 is out of bounds for axis 1 with size 74
Could I please get some assistance with this. Any help would be appreciated
| Try this instead.
from keras.utils.np_utils import to_categorical
labels123 = to_categorical(np.argmax(y_pred_arr, 1), dtype = "int64")
| https://stackoverflow.com/questions/66429758/ |
Append tensor to each element of another tensor | I have a pytorch tensor: x = torch.zeros(2, 2), and another tensor of variable values: item = torch.tensor([[1, 2], [3, 4]]), I am just giving this tensor for example.
I would like to add the item tensor as each element of the x tensor, such that
x = [[item, item],
[item, item]]
So x is a tensor with tensors inside it.
I have tried assigning item directly to x, but got an error: RuntimeError: The expanded size of the tensor must match the existing size at non-singleton dimension
| use torch.repeat(),
your target_tensor shape will be torch.Size([2, 2, 2, 2]).
item tensor shape is already torch.Size([2, 2])
use :
target_tensor = item.repeat(2, 2, 1, 1)
the first two parameters of the repeat() function are x's shape
| https://stackoverflow.com/questions/66429934/ |
quantization vector with numpy/pytorch | I have created quantized values as: {0.0, 0.5, 1.0} and would like to do quantization for a vector based on the set. For instance, given vector [0.1, 0.1, 0.9, 0.8, 0.6, 0.6] would be transferred into [0.0, 0.0, 1.0, 1.0, 0.5, 0.5].
Please guide me fastest way using python/numpy/pytorch. Thank you very much!
| use map() and lambda()
you can get the round off nearest to 0.5 with round(x * 2) / 2
a = [0.1, 0.1, 0.9, 0.8, 0.6, 0.6]
list(map(lambda x: round(x * 2) / 2, a))
output :
[0.0, 0.0, 1.0, 1.0, 0.5, 0.5]
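If you need this vectorized for large tensors, the same round-to-nearest-0.5 trick works directly on a torch tensor (a minimal sketch):
import torch
a = torch.tensor([0.1, 0.1, 0.9, 0.8, 0.6, 0.6])
quantized = torch.round(a * 2) / 2   # tensor([0.0000, 0.0000, 1.0000, 1.0000, 0.5000, 0.5000])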
| https://stackoverflow.com/questions/66431595/ |
ValueError: Expected x_max for bbox (0.65, 0.51, 1.12, 0.64, 3) to be in the range [0.0, 1.0], got 1.1234809015877545 | I want to apply data augmentations from PyTorch's Albumentations to images with bounding boxes.
When I apply the HorizontalFlip Transformation, I receive this error ValueError: Expected x_max for bbox (0.6505353259854019, 0.517013871576637, 1.1234809015877545, 0.6447916687466204, 3) to be in the range [0.0, 1.0], got 1.1234809015877545.
I use the following code
A.Compose([
A.HorizontalFlip(p=1),
ToTensorV2(p=1.0)],
p=1.0,
bbox_params=A.BboxParams(format='coco',min_area=0, min_visibility=0,label_fields=['labels'])
)
When I apply the Cutout transformation, I do not have any error regarding the bounding boxes
A.Compose([
A.Cutout(num_holes=10, max_h_size=32, max_w_size=32, fill_value=0, p=0.5),
ToTensorV2(p=1.0)],
p=1.0,
bbox_params=A.BboxParams(format='coco',min_area=0, min_visibility=0,label_fields=['labels'])
)
| The bounding box exceeds the image: after the transformation (and normalization of the coordinates) its x_max ends up greater than 1.0, i.e. the box is larger than the image. Check the bounding box dimensions carefully against the image size.
| https://stackoverflow.com/questions/66438736/ |
How to find training accuracy in pytorch | def train_and_test(e):
epochs = e
train_losses, test_losses, val_acc, train_acc= [], [], [], []
valid_loss_min = np.Inf
model.train()
print("Model Training started.....")
for epoch in range(epochs):
running_loss = 0
batch = 0
for images, labels in trainloader:
images, labels = images.to(device), labels.to(device)
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
batch += 1
if batch % 10 == 0:
print(f" epoch {epoch + 1} batch {batch} completed")
test_loss = 0
accuracy = 0
with torch.no_grad():
print(f"validation started for {epoch + 1}")
model.eval()
for images, labels in validloader:
images, labels = images.to(device), labels.to(device)
logps = model(images)
test_loss += criterion(logps, labels)
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss / len(trainloader))
test_losses.append(test_loss / len(validloader))
val_acc.append(accuracy / len(validloader))
training_acc.append(running_loss / len(trainloader))
scheduler.step()
print("Epoch: {}/{}.. ".format(epoch + 1, epochs),"Training Loss: {:.3f}.. ".format(train_losses[-1]), "Valid Loss: {:.3f}.. ".format(test_losses[-1]),
"Valid Accuracy: {:.3f}".format(accuracy / len(validloader)), "train Accuracy: {:.3f}".format(running_loss / len(trainloader)))
model.train()
if test_loss / len(validloader) <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min, test_loss / len(validloader)))
torch.save({
'epoch': epoch,
'model': model,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': valid_loss_min
}, path)
valid_loss_min = test_loss / len(validloader)
print('Training Completed Succesfully !')
return train_losses, test_losses, val_acc ,train_acc
my output is
Model Training started.....
epoch 1 batch 10 completed
epoch 1 batch 20 completed
epoch 1 batch 30 completed
epoch 1 batch 40 completed
validation started for 1
Epoch: 1/2.. Training Loss: 0.088.. Valid Loss: 0.072.. Valid Accuracy: 0.979 train Accuracy: 0.088
Validation loss decreased (inf --> 0.072044). Saving model ...
I am using dataset that is multi-set classification and getting training accuracy and training loss equal so I think there is error in training accuracy code.
training_acc.append(running_loss / len(trainloader))
"train Accuracy: {:.3f}".format(running_loss / len(trainloader))
training_acc.append(accuracy / len(trainloader))
"train Accuracy: {:.3f}".format(accuracy / len(trainloader))
is also not working fine
| this method should be followed to plot training losses as well as accuracy
for images , labels in trainloader:
#start = time.time()
images, labels = images.to(device), labels.to(device)
optimizer.zero_grad()  # Clear the gradients, because gradients accumulate across batches by default
# Forward pass - compute outputs on input data using the model
outputs = model(images) # modeling for each image batch
loss = criterion(outputs,labels) # calculating the loss
# the backward pass
loss.backward() # This is where the model learns by backpropagating
optimizer.step() # And optimizes its weights here - Update the parameters
running_loss += loss.item()
# as Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(outputs)
top_p , top_class = ps.topk(1,dim=1)
equals = top_class == labels.view(*top_class.shape)
# Convert correct_counts to float and then compute the mean
acc += torch.mean(equals.type(torch.FloatTensor))
| https://stackoverflow.com/questions/66443620/ |
Pytorch change format of tensors from BxWxH to B, N, 3 | I have a tensor A with the shape BxWxH (B=Batch size, W=Width, H=Height) and want to change it to a tensor B of shape BxNx3 (B=Batch size, N=Number of points=W*H).
Tensor A represents a depth map, e.g. tensor[0,1,2] => gives the depth value for the pixel (1,2) in batch 0.
Tensor B also represents a depth map but in a different format. Each point in tensor B has the following three dimensions: (x coord, y coord, depth value).
How can I transform tensor A into tensor B?
| You are looking for meshgrid to give you the x and y coordinates of each pixel:
b, w, h = A.shape
x, y = torch.meshgrid(torch.arange(w, dtype=A.dtype, device=A.device),
                      torch.arange(h, dtype=A.dtype, device=A.device))
# broadcast the (w, h) coordinate grids over the batch and stack with the depth values
B = torch.stack((x.expand(b, -1, -1), y.expand(b, -1, -1), A), dim=-1)
B = B.reshape(b, w*h, 3)
| https://stackoverflow.com/questions/66446332/ |
Turn off console logging for Hydra when using Pytorch Lightning | Is there a way to turn off console logging for Hydra, but keep file logging? I am encountering a problem where Hydra is duplicating all my console prints. These prints are handled by Pytorch Lightning and I want them to stay like that. However, I am fine with hydra logging them to a file (once per print), but I do not want to see my prints twice in the console.
| I struggled a bit with the hydra documentation which is why I wanted to write a detailed explanation here so that other people can have it easy. In order to be able to use the answer proposed by @j_hu, i.e.:
hydra/job_logging=none
hydra/hydra_logging=none
with hydra 1.0 (which is the stable version at the time I am writing this answer) you need to first:
Create a directory called hydra within your config directory.
Create two subdirectories: job_logging and hydra_logging.
Create two none.yaml files in both of those directories as described below.
# @package _group_
version: 1
root: null
disable_existing_loggers: false
After this is done, you can use the none.yaml configuration to either override the logging via the command line:
python main.py hydra/job_logging=none hydra/hydra_logging=none
or via the config.yaml file:
defaults:
- hydra/hydra_logging: none
- hydra/job_logging: none
| https://stackoverflow.com/questions/66446854/ |
How to use a Pytorch DataLoader for a dataset with multiple labels | I'm wondering how to create a DataLoader that supports multiple types of labels in Pytorch. How do I do this?
| You can return a dict of labels for each item in the dataset, and DataLoader is smart enough to collate them for you. i.e. if you provide a dict for each item, the DataLoader will return a dict, where the keys are the label types. Accessing a key of that label type returns a collated tensor of that label type.
See below:
import torch
from torch.utils.data import Dataset, DataLoader
import numpy as np
class M(Dataset):
def __init__(self):
super().__init__()
self.data = np.random.randn(20, 2)
print(self.data)
def __getitem__(self, i):
return self.data[i], {'label_1':self.data[i], 'label_2':self.data[i]}
def __len__(self):
return len(self.data)
ds = M()
dl = DataLoader(ds, batch_size=6)
for x, y in dl:
print(x, '\n', y)
print(type(x), type(y))
[[-0.33029911 0.36632142]
[-0.25303721 -0.11872778]
[-0.35955625 -1.41633132]
[ 1.28814629 0.38238357]
[ 0.72908184 -0.09222787]
[-0.01777293 -1.81824167]
[-0.85346074 -1.0319562 ]
[-0.4144832 0.12125039]
[-1.29546792 -1.56314292]
[ 1.22566887 -0.71523568]]
tensor([[-0.3303, 0.3663],
[-0.2530, -0.1187],
[-0.3596, -1.4163]], dtype=torch.float64)
{'label_1': tensor([[-0.3303, 0.3663],
[-0.2530, -0.1187],
[-0.3596, -1.4163]], dtype=torch.float64), 'label_2': tensor([[-0.3303, 0.3663],
[-0.2530, -0.1187],
[-0.3596, -1.4163]], dtype=torch.float64)}
<class 'torch.Tensor'> <class 'dict'>
...
| https://stackoverflow.com/questions/66446881/ |
How to use neural nets for classification with a large number of classes? | Are there techniques I could use to classify data (I'm using pytorch) using a large number of classes?
I noticed that if I try to build a multi layer perceptron network I run out of memory on the GPU given the last layer must have too many neurons, even though my GPU has 24Gb of memory. I have about 3000 classes.
Is there a way or technique do handle this type of scenario?
Note that I am NOT asking for an opinion on which technique is better. I am asking for an objective list of techniques that could be used in this scenario. This can be answered in fact-based fashion and include citations, etc if needed.
| A tricky approach that you could try to follow is to divide the model into two sub nn.Modules. You send the first one to the GPU and keep the last layer(s), the classifier in the CPU. By doing so you will lose some training speed and overall performance, but you are going to be able to handle this huge layer in the MLP.
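A minimal sketch of that split (made-up layer sizes; the huge classifier head is deliberately kept on the CPU):
import torch
import torch.nn as nn
device = 'cuda' if torch.cuda.is_available() else 'cpu'
backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU()).to(device)  # feature extractor on GPU
classifier = nn.Linear(256, 3000)                                    # huge output layer stays on CPU
x = torch.randn(8, 512)
features = backbone(x.to(device))
logits = classifier(features.cpu())   # move activations back before the big layer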
However it is not a common thing to have this large number of classes, are you doing some Computer Vision or NLP task? If so, you could use some task-specific network types such as CNNs or LSTMs, that perform better with a much more efficient number of parameters to deal with (e.g. using pooling layers in the CNNs). If you have to use a MLP try reducing the dimensionality of the penultimate layer.
| https://stackoverflow.com/questions/66449332/ |
How to get value in order with a small average value | I've got a data frame like
x y cluster
0 112 4
0 113 4
1 111 4
and I'll get locations from this code:
for n in range(0,9):
...
location = np.array(cluseter_location )
I want to sort the clusters in order of their mean 'y' values (smallest first), so I tried:
for n in range(0,9):
cluster_ = data2[data2['cluster_id']== n]
...
| Modifying your code to sort list of tuples
In your code, instead of appending the cluster_int, just append the tuple (n, cluster_int), and then later while sorting, use lambda to sort by the second value of each tuple.
for n in range(0,9):
cluster_ = data2[data2['cluster_id']== n]
cluster_list = cluster_['y'].tolist()
cluster_avg = sum(cluster_list)/len(cluster_list)
cluster_int = int(cluster_avg)
print("cluster_id : %d" %n ,"average : %d" %cluster_int)
lst.append((n,cluster_int)) #<-------
a = sorted(lst, key = lambda x:x[1]) #<-------
print(a) #<-------
ordered_average = [average for cluster, average in a] #<-------
ordered_clusters = [cluster for cluster, average in a] #<-------
print(ordered_average) #<-------
print(ordered_clusters) #<-------
#cluster and average together
[(4, 112), (8, 121), (1, 127), (6, 139), (5, 149)]
#averages sorted
[112, 121, 127, 139, 149]
#clusters sorted
[4,8,1,6,5]
Alternate way using pandas
A quicker way to do this would be directly sorting the pandas dataframe after groupby.
print(df.groupby('cluster')['y'].mean().reset_index().sort_values('y'))
| https://stackoverflow.com/questions/66451470/ |
Extracting features of the hidden layer of an autoencoder using Pytorch | I am following this tutorial to train an autoencoder.
The training has gone well. Next, I am interested to extract features from the hidden layer (between the encoder and decoder).
How should I do that?
| The cleanest and most straight-forward way would be to add methods for creating partial outputs -- this can be even be done a posteriori on a trained model.
from torch import Tensor
class AE(nn.Module):
def __init__(self, **kwargs):
...
def encode(self, features: Tensor) -> Tensor:
h = torch.relu(self.encoder_hidden_layer(features))
return torch.relu(self.encoder_output_layer(h))
def decode(self, encoded: Tensor) -> Tensor:
h = torch.relu(self.decoder_hidden_layer(encoded))
return torch.relu(self.decoder_output_layer(h))
def forward(self, features: Tensor) -> Tensor:
encoded = self.encode(features)
return self.decode(encoded)
You can now query the model for encoder hidden states by simply calling encode with the corresponding input tensor.
If you'd rather not add any methods to the base class (I don't see why), you could alternatively write an external function:
def get_encoder_state(model: AE, features: Tensor) -> Tensor:
return torch.relu(model.encoder_output_layer(torch.relu(model.encoder_hidden_layer(features))))
| https://stackoverflow.com/questions/66452650/ |
Why DataLoader return list that has a different length with batch_size | I am writting a customed dataloader, while the returned value makes me confused.
import torch
import torch.nn as nn
import numpy as np
import torch.utils.data as data_utils
class TestDataset:
def __init__(self):
self.db = np.random.randn(20, 3, 60, 60)
def __getitem__(self, idx):
img = self.db[idx]
return img, img.shape[1:]
def __len__(self):
return self.db.shape[0]
if __name__ == '__main__':
test_dataset = TestDataset()
test_dataloader = data_utils.DataLoader(test_dataset,
batch_size=1,
num_workers=4,
shuffle=False, \
pin_memory=True
)
for i, (imgs, sizes) in enumerate(test_dataloader):
print(imgs.size()) # torch.Size([1, 3, 60, 60])
print(sizes) # [tensor([60]), tensor([60])]
break
Why "sizes" returns a list of length 2? I think it should be "torch.Size([1, 2])" which indicates height and width of a image(1 batch_size).
Further more, should the length of the returned list be the same to batch_size? If I want to get the size, I have to write "sizes = [sizes[0][0].item(), sizes[1][0].item()]". And this makes me very confused.
Thank you for your time.
| It is caused by the collate_fn function and its default behaviour. Its main purpose is to ease the batch preparation process, and you can customize that process by updating this function. As stated in the collate_fn documentation, it automatically converts NumPy arrays and Python numerical values into PyTorch Tensors and it preserves the data structure. So in your case it returns [tensor([60]), tensor([60])]. In many cases, you return the image with labels as tensors (instead of the size of the image) and feed them forward to the neural net. I don't know why you return the image size while enumerating, but you can get what you need by adding a custom collate_fn as:
def collate_fn(data):
imgs, lengths = data[0][0],data[0][1]
return torch.tensor(imgs), torch.tensor([lengths])
Then you should set it to DataLoader's argument:
test_dataloader = DataLoader(test_dataset,
batch_size=1,
num_workers=4,
shuffle=False, \
pin_memory=True, collate_fn=collate_fn
)
Then you can loop as:
for i, (imgs, sizes) in enumerate(test_dataloader):
print(imgs.size())
print(sizes)
print(sizes.size())
break
and output will be as:
torch.Size([3, 60, 60])
tensor([[60, 60]])
torch.Size([1, 2])
After all, I would like to add one more thing: you should not just return self.db.shape[0] in the __len__ function. In this case your batch size is 1 and it's fine; however, when the batch size changes it will no longer return the true number of batches. You can update your class as:
class TestDataset:
def __init__(self, batch_size=1):
self.db = np.random.randn(20, 3, 60, 60)
self._batch_size = batch_size
def __getitem__(self, idx):
img = self.db[idx]
return img, img.shape[1:]
def __len__(self):
return self.db.shape[0] // self._batch_size  # integer division, since __len__ must return an int
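As a side note, if what you actually need downstream is the per-image size, a hedged alternative (a sketch, independent of the collate_fn above) is to return the shape as a single NumPy array from __getitem__, so that the default collate_fn batches it into a (batch_size, 2) tensor by itself:
def __getitem__(self, idx):
    img = self.db[idx]
    # one array per sample -> the default collate stacks them into shape (batch_size, 2)
    return img, np.array(img.shape[1:])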
| https://stackoverflow.com/questions/66452655/ |
Is there an element-map function in pytorch? | I'm new to PyTorch and I come from functional programming languages (where the map function is used everywhere). The problem is that I have a tensor and I want to do some operation on each element of the tensor. The operations may vary, so I need a function like this:
map : (Numeric -> Numeric) -> Tensor -> Tensor
e.g. map(lambda x: x if x < 255 else -1, tensor) # the example is simple, but the lambda may be very complex
Is there such a function in PyTorch? How should I implement such function?
| Most mathematical operations that are implemented for tensors (and similarly for ndarrays in numpy) are actually applied element wise, so you could write for instance
mask = tensor < 255
result = tensor * mask + (-1) * ~mask
This is a quite general approach. For the case that you have right now, where you only want to modify certain elements, you can also apply "logical indexing" that lets you overwrite the current tensor:
tensor[~mask] = -1   # equivalently: tensor[tensor >= 255] = -1
So in Python there actually is a map() function, but usually there are better ways to do it (better in Python; in other languages - like Haskell - map/fmap is obviously preferred in most contexts).
So the key take-away here is that the preferred method is taking advantage of vectorization. This also makes the code more efficient, as those tensor operations are implemented in a low-level language, while map() is nothing but a Python for-loop that is a lot slower.
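As a hedged side note, the same element-wise "map" can often be written as a single torch.where call (a sketch, assuming t is a numeric tensor):
import torch

t = torch.tensor([10, 300, 42, 255])
result = torch.where(t < 255, t, torch.full_like(t, -1))  # keep values < 255, otherwise -1
print(result)  # tensor([10, -1, 42, -1])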
| https://stackoverflow.com/questions/66458123/ |
HTTPS POST to query FastAPI using python requests | I am trying to serve a Neural Network using FastAPI.
from fastapi import Depends, FastAPI
from pydantic import BaseModel
from typing import Dict

app = FastAPI()
class iRequest(BaseModel):
arg1: str
arg2: str
class iResponse(BaseModel):
pred: str
probs: Dict[str, float]
@app.post("/predict", response_model=iResponse)
def predict(request: iRequest, model: Model = Depends(get_model)):
pred, probs = model.predict(request.arg1, request.arg2)
return iResponse(pred = pred, probs = probs)
The manual site http://localhost:8000/docs#/default/predict_predict_post works fine and translates into the following curl command:
curl -X POST "http://localhost:8000/predict" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"arg1\":\"I am the King\",\"arg2\":\"You are not my King\"}"
which also works.
When I try to query the API using python requests:
import requests
data = {"arg1": "I am the King",
"arg2": "You are not my King"}
r = requests.post("http://localhost:8000/predict", data=data)
I only get the "422 Unprocessable Entity" Errors. Where am I going wrong here?
| You provide a data argument to requests.post, which does a POST with Content-Type: application/x-www-form-urlencoded, which is not JSON.
Consider using requests.post(url, json=data) and you should be fine.
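Applied to the snippet from the question, the corrected call would look roughly like this:
import requests

data = {"arg1": "I am the King",
        "arg2": "You are not my King"}
r = requests.post("http://localhost:8000/predict", json=data)  # serialized as JSON, Content-Type: application/json
print(r.status_code, r.json())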
| https://stackoverflow.com/questions/66460864/ |
torchvision MNIST HTTPError: HTTP Error 403: Forbidden | I am trying to replicate the experiment presented on this webpage: https://adversarial-ml-tutorial.org/adversarial_examples/
I downloaded the Jupyter notebook, loaded it on my localhost, and opened it with Jupyter Notebook. When I run the following code to get the dataset:
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
mnist_train = datasets.MNIST("../data", train=True, download=True, transform=transforms.ToTensor())
mnist_test = datasets.MNIST("../data", train=False, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(mnist_train, batch_size = 100, shuffle=True)
test_loader = DataLoader(mnist_test, batch_size = 100, shuffle=False)
and I get the following error:
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../data\MNIST\raw\train-images-idx3-ubyte.gz
0/? [00:00<?, ?it/s]
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-15-e6f62798f426> in <module>
2 from torch.utils.data import DataLoader
3
----> 4 mnist_train = datasets.MNIST("../data", train=True, download=True, transform=transforms.ToTensor())
5 mnist_test = datasets.MNIST("../data", train=False, download=True, transform=transforms.ToTensor())
6 train_loader = DataLoader(mnist_train, batch_size = 100, shuffle=True)
~\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py in __init__(self, root, train, transform, target_transform, download)
77
78 if download:
---> 79 self.download()
80
81 if not self._check_exists():
~\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py in download(self)
144 for url, md5 in self.resources:
145 filename = url.rpartition('/')[2]
--> 146 download_and_extract_archive(url, download_root=self.raw_folder, filename=filename, md5=md5)
147
148 # process and save as torch files
~\Anaconda3\lib\site-packages\torchvision\datasets\utils.py in download_and_extract_archive(url, download_root, extract_root, filename, md5, remove_finished)
254 filename = os.path.basename(url)
255
--> 256 download_url(url, download_root, filename, md5)
257
258 archive = os.path.join(download_root, filename)
~\Anaconda3\lib\site-packages\torchvision\datasets\utils.py in download_url(url, root, filename, md5)
82 )
83 else:
---> 84 raise e
85 # check integrity of downloaded file
86 if not check_integrity(fpath, md5):
~\Anaconda3\lib\site-packages\torchvision\datasets\utils.py in download_url(url, root, filename, md5)
70 urllib.request.urlretrieve(
71 url, fpath,
---> 72 reporthook=gen_bar_updater()
73 )
74 except (urllib.error.URLError, IOError) as e: # type: ignore[attr-defined]
~\Anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
245 url_type, path = splittype(url)
246
--> 247 with contextlib.closing(urlopen(url, data)) as fp:
248 headers = fp.info()
249
~\Anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
220 else:
221 opener = _opener
--> 222 return opener.open(url, data, timeout)
223
224 def install_opener(opener):
~\Anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout)
529 for processor in self.process_response.get(protocol, []):
530 meth = getattr(processor, meth_name)
--> 531 response = meth(req, response)
532
533 return response
~\Anaconda3\lib\urllib\request.py in http_response(self, request, response)
639 if not (200 <= code < 300):
640 response = self.parent.error(
--> 641 'http', request, response, code, msg, hdrs)
642
643 return response
~\Anaconda3\lib\urllib\request.py in error(self, proto, *args)
567 if http_err:
568 args = (dict, 'default', 'http_error_default') + orig_args
--> 569 return self._call_chain(*args)
570
571 # XXX probably also want an abstract factory that knows when it makes
~\Anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
501 for handler in handlers:
502 func = getattr(handler, meth_name)
--> 503 result = func(*args)
504 if result is not None:
505 return result
~\Anaconda3\lib\urllib\request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: Forbidden
Any help solving this issue is much appreciated.
I can also download the dataset directly from the link, but then I don't know how to use it!
| Yes it's a known bug: https://github.com/pytorch/vision/issues/3500
A possible solution is to patch the MNIST download method.
But it requires wget to be installed.
For Linux:
sudo apt install wget
For Windows:
choco install wget
import os
import subprocess as sp
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets.mnist import MNIST, read_image_file, read_label_file
from torchvision.datasets.utils import extract_archive
def patched_download(self):
"""wget patched download method.
"""
if self._check_exists():
return
os.makedirs(self.raw_folder, exist_ok=True)
os.makedirs(self.processed_folder, exist_ok=True)
# download files
for url, md5 in self.resources:
filename = url.rpartition('/')[2]
download_root = os.path.expanduser(self.raw_folder)
extract_root = None
remove_finished = False
if extract_root is None:
extract_root = download_root
if not filename:
filename = os.path.basename(url)
# Use wget to download archives
sp.run(["wget", url, "-P", download_root])
archive = os.path.join(download_root, filename)
print("Extracting {} to {}".format(archive, extract_root))
extract_archive(archive, extract_root, remove_finished)
# process and save as torch files
print('Processing...')
training_set = (
read_image_file(os.path.join(self.raw_folder, 'train-images-idx3-ubyte')),
read_label_file(os.path.join(self.raw_folder, 'train-labels-idx1-ubyte'))
)
test_set = (
read_image_file(os.path.join(self.raw_folder, 't10k-images-idx3-ubyte')),
read_label_file(os.path.join(self.raw_folder, 't10k-labels-idx1-ubyte'))
)
with open(os.path.join(self.processed_folder, self.training_file), 'wb') as f:
torch.save(training_set, f)
with open(os.path.join(self.processed_folder, self.test_file), 'wb') as f:
torch.save(test_set, f)
print('Done!')
MNIST.download = patched_download
mnist_train = MNIST("../data", train=True, download=True, transform=transforms.ToTensor())
mnist_test = MNIST("../data", train=False, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(mnist_train, batch_size=1, shuffle=True)
test_loader = DataLoader(mnist_test, batch_size=1, shuffle=False)
| https://stackoverflow.com/questions/66467005/ |
Creating an annotation list using time stamps | I have ECG data of length 230897 and a sampling frequency of 100 Hz. I would like to create an annotation list (of length 230897) corresponding to the timestamps and durations (in seconds) of the arrhythmia events (sequences of 1's):
timestamps duration
2014-09-10T22:10:20.000000000 3.5
2014-09-10T23:10:10.000000000 4
2014-09-10T23:50:20.000000000 6
For example: the annotation list should be of length 230897, and at '2014-09-10T22:10:20.000000000' there should be 1's for 3.5 seconds; at '2014-09-10T23:10:10.000000000' 1's for 4 seconds, and so on. This is what I have tried so far and unfortunately it is not working:
start_time= '2014-09-10T21:01:10.000000000'
data_length=230897
annot_list = np.zeros(data_length)
prev=0, fs=100
for t in timestamps:
time_diff = t - prev
seconds = time_diff.astype('timedelta64[s]').astype(np.int32)
total_cov = seconds*fs
i1 = i0 + total_cov
i0, i1 = int(i0), int(i1)
annot_list[i0:i1] = 1.0
i0 = i1
prev = t
Can someone please help me here?
| Assuming that your data looks like this:
timestamps = [np.datetime64("2014-09-10T22:10:20.000000000"),
np.datetime64("2014-09-10T23:10:10.000000000"),
np.datetime64("2014-09-10T23:50:20.000000000")]
durations = [3.5,4,6]
Then, you can simply iterate over your list and calculate your start index and your stop index from your start time, your timestamp and your duration, like so:
start_time = np.datetime64("2014-09-10T21:01:10.000000000")
data_length = 2308970
annot_list = np.zeros(data_length)
freq = 100
for timestamp, duration in zip(timestamps, durations):
time_from_start = int(
freq * (np.timedelta64(timestamp - start_time, "s").astype("float32"))
)
duration = int(freq * duration)
annot_list[time_from_start : time_from_start + duration] = 1
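As a quick sanity check (a small sketch using the example values above, and assuming no events overlap or run past the end of the array), the number of ones should equal the sampling frequency times the total annotated duration:
assert annot_list.sum() == freq * sum(durations)  # 100 * (3.5 + 4 + 6) = 1350 ones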
| https://stackoverflow.com/questions/66470855/ |
Pytorch lightning metrics: ValueError: preds and target must have same number of dimensions, or one additional dimension for preds | Googling this gets you nowhere, so I decided to help future me and others by posting this as a searchable question.
def __init__(self):
...
self.val_acc = pl.metrics.Accuracy()
def validation_step(self, batch, batch_index):
...
self.val_acc.update(log_probs, label_batch)
gives
ValueError: preds and target must have same number of dimensions, or one additional dimension for preds
for log_probs.shape == (16, 4) and for label_batch.shape == (16, 4)
What's the issue?
| pl.metrics.Accuracy() expects a batch of dtype=torch.long labels, not one-hot encoded labels.
Thus, it should be fed
self.val_acc.update(log_probs, torch.argmax(label_batch.squeeze(), dim=1))
This is just the same as torch.nn.CrossEntropyLoss
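For illustration (a small sketch, assuming one-hot labels as in the question):
import torch

label_batch = torch.tensor([[0., 0., 1., 0.],
                            [1., 0., 0., 0.]])   # one-hot, shape (2, 4)
targets = torch.argmax(label_batch, dim=1)       # tensor([2, 0]), dtype torch.long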
| https://stackoverflow.com/questions/66474197/ |
UserWarning: torchaudio C++ extension is not available | can someone please help me out with this UserWarning in torchaudio?
ErrorMessage:
C:\Users\anaconda3\lib\site-packages\torchaudio\extension\extension.py:14:
UserWarning: torchaudio C++ extension is not available.
warnings.warn('torchaudio C++ extension is not available.')
Thanks in advance!
| https://pytorch.org/audio/stable/backend.html says the following:
Availability
"sox" and "sox_io" backends require C++ extension module, which is included in Linux/macOS binary distributions. These backends are not available on Windows.
So either your operating system or the selected backend is the problem - depends on your point of view :)
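If you are on Windows, one commonly suggested workaround (a sketch, assuming the soundfile package is installed and your torchaudio version still exposes backend switching) is to select the soundfile backend explicitly:
import torchaudio

torchaudio.set_audio_backend("soundfile")   # avoids the sox/sox_io backends, which need the C++ extension
print(torchaudio.get_audio_backend())       # should print "soundfile"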
| https://stackoverflow.com/questions/66475868/ |
Converting 1D tensor into a 1D array using Fastai | I have a question about how to convert a 1D tensor to a 1D array using Fastai.
For example, I have this tensor:
tensor([2.2097e-05, 2.6679e-04, 4.6098e-05, 5.5458e-01, 4.4509e-01])
| I'm assuming when you say tensor you are talking about Pytorch and when you say array you are talking about Numpy.
Then this should do it.
tensor.numpy()
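If the tensor lives on a GPU or still requires gradients, a safer variant (a common PyTorch pattern, not specific to fastai) is:
arr = tensor.detach().cpu().numpy()  # detach from the autograd graph and move to CPU first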
| https://stackoverflow.com/questions/66482542/ |
Is there a better way to multiply & sum two Pytorch tensors along the first dimension? | I have two Pytorch tensors, a & b, of shape (S, M) and (S, M, H) respectively. M is my batch dimension. I want to multiply & sum the two tensors such that the output is of shape (M, H). That is, I want to compute the sum over s of a[s] * b[s].
For example, for S=2, M=2, H=3:
>>> import torch
>>> S, M, H = 2, 2, 3
>>> a = torch.arange(S*M).view((S,M))
tensor([[0, 1],
[2, 3]])
>>> b = torch.arange(S*M*H).view((S,M,H))
tensor([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]]])
'''
DESIRED OUTPUT:
= [[0*[0, 1, 2] + 2*[6, 7, 8]],
[1*[3, 4, 5] + 3*[9, 10, 11]]]
= [[12, 14, 16],
[30, 34, 38]]
note: shape is (2, 3) = (M, H)
'''
I've found one way that sort of works, using torch.tensordot:
>>> output = torch.tensordot(a, b, ([0], [0]))
tensor([[[12, 14, 16],
[18, 20, 22]],
[[18, 22, 26],
[30, 34, 38]]])
>>> output.shape
torch.Size([2, 2, 3]) # always (M, M, H)
>>> output = output[torch.arange(M), torch.arange(M), :]
tensor([[12, 14, 16],
[30, 34, 38]])
But as you can see, it makes a lot of unnecessary computations and I have to slice the ones that are relevant for me.
Is there a better way to do this that doesn't involve the unnecessary computations?
| This should work:
(torch.unsqueeze(a, 2)*b).sum(axis=0)
>>> tensor([[12, 14, 16],
[30, 34, 38]])
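An equivalent one-liner (a sketch using Einstein summation, which also avoids the intermediate (M, M, H) tensor produced by the tensordot approach) is:
result = torch.einsum('sm,smh->mh', a, b)  # sum over s of a[s, m] * b[s, m, :]
# tensor([[12, 14, 16],
#         [30, 34, 38]])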
| https://stackoverflow.com/questions/66485409/ |
PyTorch model input shape | I loaded a custom PyTorch model and I want to find out its input shape. Something like this:
model.input_shape
Is it possible to get this information?
Update: print() and summary() don't show this model's input shape, so they are not what I'm looking for.
| PyTorch flexibility
PyTorch models are very flexible objects, to the point where they do not enforce or generally expect a fixed input shape for data.
If you have certain layers there may be constraints e.g:
a flatten followed by a fully connected layer of width N would enforce the dimensions of your original input (M1 x M2 x ... Mn) to have a product equal to N
a 2d convolution of N input channels would enforce the data to be 3 dimensionsal, with the first dimension having size N
But as you can see neither of these enforce the total shape of the data.
We might not realize it right now, but in more complex models, getting the size of the first linear layer right is sometimes a source of frustration. We've heard stories of famous practitioners putting in arbitrary numbers and then relying on error messages from PyTorch to backtrack the correct sizes for their linear layers. Lame, eh? Nah, it's all legit!
Deep Learning with PyTorch
Investigation
Simple case: First layer is Fully Connected
If your model's first layer is a fully connected one, then the first layer in print(model) will detail the expected dimensionality of a single sample.
Ambiguous case: CNN
If it is a convolutional layer however, since these are dynamic and will stride as long/wide as the input permits, there is no simple way to retrieve this info from the model itself.1 This flexibility means that for many architectures multiple compatible input sizes2 will all be acceptable by the network.
This is a feature of PyTorch's Dynamic computational graph.
Manual inspection
What you will need to do is investigate the network architecture, and once you've found an interpretable layer (if one is present e.g. fully connected) "work backwards" with its dimensions, determining how the previous layers (e.g. poolings and convolutions) have compressed/modified it.
Example
e.g. in the following model from Deep Learning with PyTorch (8.5.1):
class NetWidth(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
self.fc1 = nn.Linear(16 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 16 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
We see the model takes a 2D input image with 3 channels and:
Conv2d -> sends it to an image of the same size with 32 channels
max_pool2d(,2) -> halves the size of the image in each dimension
Conv2d -> sends it to an image of the same size with 16 channels
max_pool2d(,2) -> halves the size of the image in each dimension
view -> reshapes the image
Linear -> takes a tensor of size 16 * 8 * 8 and sends to size 32
...
So working backwards, we have:
a tensor of shape 16 * 8 * 8
un-reshaped into shape (channels x height x width)
un-max_pooled in 2d with factor 2, so height and width un-halved
un-convolved from 16 channels to 32
Hypothesis: It is likely 16 in the product thus refers to the number of channels, and that the image seen by view was of shape (channels, 8,8), and currently is (channels, 16,16)2
un-max_pooled in 2d with factor 2, so height and width un-halved again (channels, 32,32)
un-convolved from 32 channels to 3
So assuming the kernel_size and padding are sufficient that the convolutions themselves maintain image dimensions, it is likely that the input image is of shape (3,32,32) i.e. RGB 32x32 pixel square images.
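A quick empirical check of that hypothesis (a sketch - feed a dummy batch and see whether the forward pass runs without a shape error):
import torch

model = NetWidth()
dummy = torch.randn(1, 3, 32, 32)   # one RGB 32x32 image
out = model(dummy)                  # no shape error, so (3, 32, 32) is compatible
print(out.shape)                    # torch.Size([1, 2])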
Notes:
Even the external package pytorch-summary requires you provide the input shape in order to display the shape of the output of each layer.
It could however be any 2 numbers whose product equals 8*8, e.g. (64,1), (32,2), (16,4) etc.; however, since the code is written as 8*8 it is likely the authors used the actual dimensions.
| https://stackoverflow.com/questions/66488807/ |