sub (string, 4 classes) | title (string, 3-304 chars) | selftext (string, 3-30k chars) | upvote_ratio (float64, 0.07-1) | id (string, 9 chars) | created_utc (float64, 1.6B-1.65B) |
---|---|---|---|---|---|
pytorch
|
pytorch hook function does not work
|
May I know why the [pytorch hook function](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L316) does not work?
https://preview.redd.it/ymehkpym7xl71.png?width=980&format=png&auto=webp&s=ac49f72563b81a6f8f268af53e3c4dd0342c8355
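For reference, a minimal, standalone sketch of how a forward hook is typically registered and kept alive (a toy model, not the code from the gist):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def print_shape_hook(module, inputs, output):
    # runs on every forward pass of the hooked module
    print(module.__class__.__name__, "->", output.shape)

# the returned handle must be kept around (and not removed) for the hook to fire
handle = model[0].register_forward_hook(print_shape_hook)
model(torch.randn(3, 4))  # prints: Linear -> torch.Size([3, 8])
handle.remove()
```

A hook that never fires is usually registered on a module that is not actually called in `forward`, or removed/garbage-collected before the forward pass.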
| 1 |
t3_pj4i55
| 1,630,950,595 |
pytorch
|
PonderNet - Paper implementation tutorial
|
nan
| 0.92 |
t3_pj1kg2
| 1,630,942,050 |
pytorch
|
Beginner Question
|
Hi, as a part of my early projects, I wanted to find a database I could use to train on letters of the alphabet, just like how I'd done digits with MNIST. Is there a similar dataset I could use? I looked into EMNIST but realized that a lot of the images there were rotated in multiple directions.
| 0.84 |
t3_piuxyu
| 1,630,914,589 |
pytorch
|
handling batch size with custom LSTM
|
Hi All,
I am trying to implement a custom LSTM layer with a custom cell.
It works fine when I pass a single sample, but a problem appears when I pass a batch of data.
The full question with the code implementation is in the main thread:
[https://discuss.pytorch.org/t/batch-size-handling-with-lstm/129123](https://discuss.pytorch.org/t/batch-size-handling-with-lstm/129123)
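For anyone skimming, here is a minimal, self-contained sketch of the usual pattern for batching with a custom cell (names are illustrative; `nn.LSTMCell` stands in for the custom cell): the layer loops over the *time* dimension only, while every tensor keeps its leading batch dimension.

```python
import torch
import torch.nn as nn

class CustomLSTMLayer(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(input_size, hidden_size)  # stand-in for a custom cell

    def forward(self, x):  # x: (batch, seq_len, input_size)
        batch = x.size(0)
        h = x.new_zeros(batch, self.hidden_size)  # state is allocated per batch
        c = x.new_zeros(batch, self.hidden_size)
        outputs = []
        for t in range(x.size(1)):  # iterate over time steps, not samples
            h, c = self.cell(x[:, t, :], (h, c))
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (batch, seq_len, hidden_size)

out = CustomLSTMLayer(10, 20)(torch.randn(4, 7, 10))  # any batch size works
```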
| 0.81 |
t3_pfths3
| 1,630,501,506 |
pytorch
|
Neural Network for Finding Relationships and Correlations between Rows and Different Datasets
|
Essentially, I have a number of very large datasets with relationships between them (dates, locations, different value columns, etc.). I would like to be able to find relationships and correlations between columns. Is this something that could be done with some sort of unsupervised neural network? If so, does anyone have any examples? Also, does anybody know what the name of this type of research is? Lastly, many of the datasets are very large (100s of millions of rows), which is something to keep in mind. Thanks.
| 1 |
t3_pdj173
| 1,630,188,632 |
pytorch
|
Residual Networks in PyTorch
|
I wrote a tutorial on how to implement simple residual blocks or use pre-trained ResNet models for image classification.
Link: [https://taying-cheng.medium.com/building-a-residual-network-with-pytorch-df2f6937053b](https://taying-cheng.medium.com/building-a-residual-network-with-pytorch-df2f6937053b)
| 0.72 |
t3_pdd301
| 1,630,168,518 |
pytorch
|
Trying to understand the backward function
|
Hey Guys,
Say I want to call backward on some internal node in the graph (NOT the final loss). Obviously, since it is an internal node, it is non-scalar too, and the gradients (i.e., the x.grad tensors) will be 3-dimensional if we count the batch dim. Here is what the doc says: "**If the tensor is non-scalar and requires gradient, the function additionally requires specifying** `gradient`".
Gradient of what? The loss? I don't care about the loss, why should I provide that? The x.grad tensors should contain the gradient of that tensor with respect to the leaf nodes, and the `gradient` argument should not be required at all. Can someone explain please?
Best
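For context, a standalone illustration of what the `gradient` argument does (it is not the loss; it is the vector v in the vector-Jacobian product vᵀJ that autograd accumulates into the leaves):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2  # non-scalar "internal" node

# y.backward() alone raises: grad can be implicitly created only for scalar outputs.
# Supplying v makes autograd compute vᵀJ and accumulate it into x.grad:
v = torch.ones_like(y)
y.backward(v)
print(x.grad)  # tensor([2., 2., 2.])
```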
| 1 |
t3_pbijee
| 1,629,919,982 |
pytorch
|
Various size image dataset
|
I have a dataset which consists of various-size images (from 155x89 to 437x268) that are polygons (region of interest - ROI); pixels outside of the polygon/ROI are zero (see [https://prnt.sc/1qia3t5](https://prnt.sc/1qia3t5)). I want to build a deep learning model for binary classification. From my experience (e.g. the simple Dogs VS Cats dataset), what most people do is resize the images to a fixed size, e.g. 224x224, which could be ideal for transfer learning models. However, I am not sure if it's ideal. Here are my thoughts (a rough transform sketch for options 1 and 3 follows the list):
1. Resize all images to a fixed size 224x224 and apply the model (the "texture" remains the same in my opinion and the model judges based on the black-and-white composition)
2. Zero-pad all images to a "big" dimension, e.g. 600x600, and apply the model (computationally expensive, needs a huge amount of GPU memory, cannot run on my GPU, CPU takes forever)
3. Zero-pad all images to a "big" dimension, e.g. 600x600, and resize back to 224x224 (size matters)
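For options 1 and 3, a hedged transforms sketch (assuming PIL inputs no larger than the 600x600 canvas; names are illustrative):

```python
import torchvision.transforms.functional as TF
from torchvision import transforms

def pad_to_square(img, size=600):
    # zero-pad right/bottom to a fixed canvas; assumes img.size <= (size, size)
    w, h = img.size
    return TF.pad(img, [0, 0, size - w, size - h], fill=0)

resize_only = transforms.Compose([            # option 1
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
pad_then_resize = transforms.Compose([        # option 3: keeps relative sizes
    transforms.Lambda(pad_to_square),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```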
| 1 |
t3_pbg0s3
| 1,629,912,590 |
pytorch
|
A sneak peek of the upcoming features of TorchVision
|
nan
| 1 |
t3_p8qunm
| 1,629,550,068 |
pytorch
|
Train and test accuracies on my ResNet 50 implementation are 0.1. I don't understand why. Can someone take a look at my model implementation and see where the bug lies?
|
Hello PyTorch developers,
I tried to implement ResNet 50 (doing Exercise 2 from [d2l.ai book, section 7.6](http://d2l.ai/chapter_convolutional-modern/resnet.html#exercises)). You can find the ResNet architecture described [here](https://arxiv.org/pdf/1512.03385.pdf) (page 5, Table 1). However, when I train it, my train and test accuracies are 0.1. I'd really appreciate it if someone could have a look and see what may be the case here.
Here is my implementation, along with the debug output. I separate every code cell with its own code block. The outputs are also in their own code block.
I worked through the shapes of the matrices on paper and the reason why I use `self.conv4` and `self.conv5` is so that I can adjust the network output and the input so that they can be added together. Maybe I should do all of this in some other way; I'm not sure. Please do have a look at the code below.
```
import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
class Residual(nn.Module):  #@save
    """The Residual block of ResNet."""
    def __init__(self, input_channels, num_channels, strides=1):
        # I removed the use_conv_1x1 attribute since I always adjust both X and Y
        super().__init__()
        self.conv1 = nn.Conv2d(input_channels, num_channels, kernel_size=1,
                               stride=strides)
        self.conv2 = nn.Conv2d(num_channels, num_channels, kernel_size=3,
                               padding=1)
        self.conv3 = nn.Conv2d(num_channels, num_channels * 4,
                               kernel_size=1)  # no padding, doesn't change the image shape
        self.conv4 = nn.Conv2d(num_channels * 4, num_channels,
                               kernel_size=1)  # used to adjust Y
        self.conv5 = nn.Conv2d(input_channels, num_channels,
                               kernel_size=1, stride=strides)  # used to adjust X
        self.bn1 = nn.BatchNorm2d(num_channels)
        self.bn2 = nn.BatchNorm2d(num_channels)
        self.bn3 = nn.BatchNorm2d(num_channels * 4)

    def forward(self, X):
        # debug-purpose prints
        print("-----------------------------------")
        print("X.shape:")
        print(X.shape)
        Y = F.relu(self.bn1(self.conv1(X)))
        print("Y.shape:")
        print(Y.shape)
        Y = F.relu(self.bn2(self.conv2(Y)))
        print("Y.shape:")
        print(Y.shape)
        Y = F.relu(self.bn3(self.conv3(Y)))
        print("Y.shape:")
        print(Y.shape)
        Y = self.conv4(Y)
        print("Y.shape:")
        print(Y.shape)
        print("X.shape:")
        print(X.shape)
        X = self.conv5(X)
        print("X.shape:")
        print(X.shape)
        print("-----------------------------------")
        Y += X
        return F.relu(Y)
```
```
blk = Residual(3, 3)
X = torch.rand(4, 3, 6, 6)
Y = blk(X)
Y.shape
```
```
-----------------------------------
X.shape:
torch.Size([4, 3, 6, 6])
Y.shape:
torch.Size([4, 3, 6, 6])
Y.shape:
torch.Size([4, 3, 6, 6])
Y.shape:
torch.Size([4, 12, 6, 6])
Y.shape:
torch.Size([4, 3, 6, 6])
X.shape:
torch.Size([4, 3, 6, 6])
X.shape:
torch.Size([4, 3, 6, 6])
-----------------------------------
torch.Size([4, 3, 6, 6])
```
```
blk = Residual(3, 6, strides=2)
blk(X).shape
```
```
-----------------------------------
X.shape:
torch.Size([4, 3, 6, 6])
Y.shape:
torch.Size([4, 6, 3, 3])
Y.shape:
torch.Size([4, 6, 3, 3])
Y.shape:
torch.Size([4, 24, 3, 3])
Y.shape:
torch.Size([4, 6, 3, 3])
X.shape:
torch.Size([4, 3, 6, 6])
X.shape:
torch.Size([4, 6, 3, 3])
-----------------------------------
torch.Size([4, 6, 3, 3])
```
```
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.BatchNorm2d(64), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
```
```
def resnet_block(input_channels, num_channels, num_residuals,
                 first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channels, num_channels, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk
```
```
b2 = nn.Sequential(*resnet_block(64, 64, 3, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 4))
b4 = nn.Sequential(*resnet_block(128, 256, 6))
b5 = nn.Sequential(*resnet_block(256, 512, 3))
```
```
net = nn.Sequential(b1, b2, b3, b4, b5, nn.AdaptiveAvgPool2d((1, 1)),
                    nn.Flatten(), nn.Linear(512, 10))
```
```
X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    print("X.shape:")
    print(X.shape)
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:\t', X.shape)
```
```
X.shape:
torch.Size([1, 1, 224, 224])
Sequential output shape: torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
-----------------------------------
X.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 256, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 256, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 256, 56, 56])
Y.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
-----------------------------------
Sequential output shape: torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 64, 56, 56])
-----------------------------------
X.shape:
torch.Size([1, 64, 56, 56])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 512, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 64, 56, 56])
X.shape:
torch.Size([1, 128, 28, 28])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 512, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 512, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 512, 28, 28])
Y.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
-----------------------------------
Sequential output shape: torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 128, 28, 28])
-----------------------------------
X.shape:
torch.Size([1, 128, 28, 28])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 128, 28, 28])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 1024, 14, 14])
Y.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
Sequential output shape: torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 256, 14, 14])
-----------------------------------
X.shape:
torch.Size([1, 256, 14, 14])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 2048, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 256, 14, 14])
X.shape:
torch.Size([1, 512, 7, 7])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 2048, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 512, 7, 7])
-----------------------------------
-----------------------------------
X.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
Y.shape:
torch.Size([1, 2048, 7, 7])
Y.shape:
torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 512, 7, 7])
-----------------------------------
Sequential output shape: torch.Size([1, 512, 7, 7])
X.shape:
torch.Size([1, 512, 7, 7])
AdaptiveAvgPool2d output shape: torch.Size([1, 512, 1, 1])
X.shape:
torch.Size([1, 512, 1, 1])
Flatten output shape: torch.Size([1, 512])
X.shape:
torch.Size([1, 512])
Linear output shape: torch.Size([1, 10])
```
**Do you see what is the issue?**
Thank you in advance!
| 0.57 |
t3_p7b464
| 1,629,363,372 |
pytorch
|
Does torchserve in AWS scale to have equal inference time for per request for any number of parallel requests?
|
If I manually launch an ec2 server with pytorch inference, the inference time will depend on the resources I configured and the number of users. When many users request in parallel, inference time will increase (due to limited resources and waiting).
The requirement is: inference time per image per user should be less than 100 ms. Is there any way I can ensure this is met regardless of number of parallel requests? Is this possible with SageMaker?
| 1 |
t3_p6duzx
| 1,629,238,412 |
pytorch
|
can anyone send a ZIP download link for the MNIST dataset?
|
The fricking download server is down for some reason and I'm stuck without the dataset.
| 0.43 |
t3_p6296v
| 1,629,201,261 |
pytorch
|
What's wrong with my code?
|
import torch
import torchvision
from torchvision import transforms, datasets

train = datasets.MNIST("", train=True, download=True,
                       transform=transforms.Compose([transforms.ToTensor()]))
test = datasets.MNIST("", train=False, download=True,
                      transform=transforms.Compose([transforms.ToTensor()]))
| 0.16 |
t3_p5czq1
| 1,629,107,873 |
pytorch
|
FastAi article, enjoy
|
nan
| 0.38 |
t3_p561sz
| 1,629,076,756 |
pytorch
|
Can I use pytorch for detecting a hotword? How good will it be?
|
And how hard will I have to work?
| 0.2 |
t3_p4i9oi
| 1,628,982,845 |
pytorch
|
Considering getting started with machine Learning and neural networks, is pytorch good for me?
|
nan
| 0.42 |
t3_p4i93x
| 1,628,982,788 |
pytorch
|
Issue with grad_fn = None
|
Why do I get `AttributeError: 'ConvEdge' object has no attribute 'grad_fn'` for [https://github.com/promach/gdas/blob/main/gdas.py#L167](https://github.com/promach/gdas/blob/main/gdas.py#L167)?
Is there some PyTorch visualization tool that I could use to debug my current situation?
https://preview.redd.it/b4q7cqfgrch71.png?width=980&format=png&auto=webp&s=1852404cadb9eaa65bb153bd4b5409fea8edee00
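One option (an external package, not part of core PyTorch) is torchviz, which renders the autograd graph; a tensor whose `grad_fn` is None shows up disconnected from the rest of the graph. A toy sketch:

```python
import torch
import torch.nn as nn
from torchviz import make_dot  # pip install torchviz (also needs graphviz)

model = nn.Linear(4, 2)        # stand-in for the real network
y = model(torch.randn(1, 4))
make_dot(y, params=dict(model.named_parameters())).render("graph", format="png")
```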
| 0.33 |
t3_p3kmzc
| 1,628,850,110 |
pytorch
|
Nvidia Releases CUDA Python
|
nan
| 0.95 |
t3_p31uwc
| 1,628,781,210 |
pytorch
|
PyTorch to CoreML
|
Hi,
I'm looking for someone to help me with my project.
I have found a project on GitHub that builds an audio-based classifier using PyTorch.
My plan is to convert this model with Core ML so it can be used in my iOS app.
It's a simple task and I will pay you $50 if you can do it by the end of this week.
Message me for details
thanks
jk
| 0.25 |
t3_p2rlvs
| 1,628,739,134 |
pytorch
|
Mixup: Implementation + Experiments
|
nan
| 0.86 |
t3_p1rtel
| 1,628,610,281 |
pytorch
|
How to construct this network in pytorch with onnx on netron
|
Hi, I am trying to reconstruct this graph using pytorch. I am unable to generate slice and transpose, since the GRU is generating the rest of it by itself.
https://preview.redd.it/ik3putyhecg71.png?width=471&format=png&auto=webp&s=1ec7a3880b893e193ac8bc12c37faf5166d04026
How do I reconstruct the exact same network?
I am using the given code to construct the graph
x = torch.randn(batch_size, frames, 161, requires_grad=True)
torch_out = model(x)

# Export the model
torch.onnx.export(model,                     # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "super_resolution.onnx",   # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # variable length axes
                                'output': {0: 'batch_size'}})
| 0.9 |
t3_p12bt7
| 1,628,518,601 |
pytorch
|
PyTorch Dataset class as input to YOLO
|
I have searched everywhere, but I can't find an example of someone writing their own `Dataset` classes to feed data into a PyTorch YOLO implementation. Everyone just formats a dataset as a directory structure with one bounding box file per image and points the network to that.
But I want to feed my own dataset in using the `torch.utils.data.Dataset` class. How do I do that? What format do these networks expect the training examples to be in?
Or has someone already done this and somehow I just missed it?
Thanks in advance!
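In case it helps, a minimal hypothetical sketch of the usual shape of such a `Dataset`: each item is `(image, boxes)`, and a custom `collate_fn` keeps variable-length box lists intact. The exact target layout (normalized xywh vs. pixel xyxy, class-first vs. class-last) must match whatever the chosen YOLO implementation consumes, so treat the annotation format here as an assumption.

```python
import torch
from torch.utils.data import Dataset

class MyDetectionDataset(Dataset):
    def __init__(self, images, annotations):
        self.images = images            # list of (3, H, W) float tensors
        self.annotations = annotations  # list of (N, 5) tensors: class, x, y, w, h

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.annotations[idx]

def collate_fn(batch):
    # images stack into one tensor; box counts differ per image, so targets stay a list
    images = torch.stack([img for img, _ in batch])
    targets = [t for _, t in batch]
    return images, targets
```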
| 1 |
t3_p06py7
| 1,628,390,758 |
pytorch
|
How can I learn about generative deep learning? VAEs & GANs? Prereqs? DL newbie
|
Hello all, I’m a statistics student in my university and I’ve had experience with using machine learning algorithms for data mining. My background in ML has come from the approach of introduction to statistical learning, and how to use these machine learning algorithms and apply them to tabular datasets. Learning about machine learning came from my background and exposure to statistical inference/probability theory and regression/GLMs + Bayesian statistics.
I had never ventured into deep learning and neural networks yet because I had never found a reason to. I really love statistics, but I could never find a niche in deep learning which really clicked “statistically” for me. But then I found it, and I’ve heard about these things known as “Generative Networks” and architectures such as GANs and VAEs.
Just reading about it, it felt very similar to Bayesian concepts, and I just thought it was something I wanted to try and get my hands on learning. For programming experience, I know Python and R, and I’ve used packages like sklearn/tidymodels for the classical machine learning algorithms for data analysis, but my deep learning experience is quite limited.
The most I have done was built like one CNN to classify images but it wasn’t really all that great and it was in Tensorflow. However I have planned on learning pytorch, but needed a reason to and I think I will try and learn generative networks using pytorch.
I would like some help on where to go next however… I guess I should first choose what it is I want to learn within generative models, which I think GANs or VAEs. But what do I need to do to first understand how these work? How the math works? I have some lin alg background, but I feel like there’s a lot I need to do before I can start coding this in pytorch. I have read some medium articles to first learn what generative networks are and the basis of them, but nothing more. Any advice would be appreciated for a deep learning newbie like myself. Thanks!
| 0.75 |
t3_ozu9w1
| 1,628,346,554 |
pytorch
|
Model based on input/label tensor
|
Complete beginner here:
Basically I have my two tensors, one for the input and one for the output (i.e., the label).
How do I write a model that takes the input tensor (a list of strings that were substituted character for character with numbers, as in a label encoder)
and the output tensor (just one number)?
My plan is to make a text comparison which compares two or more strings (hence character substitution instead of word substitution) on minor differences. (A rough sketch of such a model is shown below.)
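A rough sketch of one way such a model could look (all sizes are hypothetical; an embedding maps the substituted character IDs to vectors, an RNN encodes the sequence, and a linear head emits the single number):

```python
import torch
import torch.nn as nn

class CharComparator(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # character IDs -> vectors
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)              # one number per sequence

    def forward(self, char_ids):              # char_ids: (batch, seq_len) of ints
        _, h = self.rnn(self.embed(char_ids))
        return self.head(h[-1]).squeeze(-1)   # (batch,)

scores = CharComparator()(torch.randint(0, 128, (2, 20)))
```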
| 1 |
t3_oz1si8
| 1,628,234,563 |
pytorch
|
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 3, 32, 32]], which is output 0 of torch::autograd::CopyBackwards, is at version 5; expected version 1 instead. Hint: enable anomaly detection
|
For [https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L394](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L394), why **RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 3, 32, 32]], which is output 0 of torch::autograd::CopyBackwards, is at version 5; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).**?
https://preview.redd.it/5n4pxjyjbif71.png?width=1922&format=png&auto=webp&s=88bc54de821a5ef6cd0fca569be555825980dfe5
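Following the hint in the message, anomaly detection can be enabled like this (a toy stand-in for the real forward pass); the resulting traceback points at the op whose saved tensor was later modified in place, which is then typically fixed by replacing the in-place op (`+=`, `copy_`, etc.) with an out-of-place one or a `.clone()`:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)   # hypothetical stand-in for the real network
inputs = torch.randn(2, 4)

with torch.autograd.set_detect_anomaly(True):
    loss = model(inputs).sum()
    loss.backward()       # on failure, the traceback names the offending op
```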
| 1 |
t3_oydvet
| 1,628,155,153 |
pytorch
|
Hands-On Workshop: Accelerate PyTorch Applications Using Intel oneAPI Toolkit
|
Analytics India Magazine, in association with Intel®, has put together a hands-on virtual workshop on August 18, 2021, to unpack Intel® Extension for PyTorch*. The participants will learn how to train a model using Intel® Extension for PyTorch* and use the PyTorch extensions for inference. Expert trainers from Intel will also demonstrate how to accelerate AI inference performance with Intel® Distribution of OpenVINO™ Toolkit through ONNX.
[https://register.gotowebinar.com/register/5241945191813665294](https://register.gotowebinar.com/register/5241945191813665294)
| 1 |
t3_oycpe9
| 1,628,149,384 |
pytorch
|
What is the PyTorch equivalent of this Keras code?
|
I am using data generated from Keras code to train my PyTorch model (using numpy as a bridge). I couldn't find a 1:1 solution in PyTorch as of now. I'm using PyTorch 1.7, which is available by default in Kaggle.
train_image_generator = ImageDataGenerator(
    rescale=1./255,
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=90,
    width_shift_range=0.15,
    height_shift_range=0.15,
    horizontal_flip=True,
    zoom_range=[0.9, 1.25],
    brightness_range=[0.5, 1.5]
)
test_image_generator = ImageDataGenerator(rescale=1./255)
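There is no single 1:1 counterpart, but a rough torchvision approximation could look like this (a sketch: the ranges are best-effort translations of the Keras arguments, and the Normalize statistics are placeholders that featurewise_center/featurewise_std_normalization would instead compute from the data):

```python
from torchvision import transforms

mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]  # placeholder dataset statistics

train_transforms = transforms.Compose([
    transforms.RandomRotation(90),                        # rotation_range=90
    transforms.RandomAffine(0, translate=(0.15, 0.15),    # width/height_shift_range=0.15
                            scale=(0.9, 1.25)),           # zoom_range=[0.9, 1.25]
    transforms.RandomHorizontalFlip(),                    # horizontal_flip=True
    transforms.ColorJitter(brightness=(0.5, 1.5)),        # brightness_range=[0.5, 1.5]
    transforms.ToTensor(),                                # also rescales to [0, 1]
    transforms.Normalize(mean, std),                      # featurewise center/std
])
test_transforms = transforms.Compose([transforms.ToTensor(),
                                      transforms.Normalize(mean, std)])
```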
| 1 |
t3_oyclse
| 1,628,148,871 |
pytorch
|
How do I train a pretrained JIT model?
|
I am trying to perform transfer learning using one of the Silero STT models located in the torch hub. The model loads as a JIT model, which I am entirely unfamiliar with.
Does anyone know how I can work with this model as I would a traditional pytorch module? I want to be able to freeze/unfreeze layers, remove layers, etc., as well as use it in a training loop.
Thank you for your help!
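ScriptModules still expose `parameters()`/`named_parameters()`, so freezing and unfreezing works the usual way; removing or replacing layers, however, is much harder on a scripted graph and is usually done on the eager model when one is available. A hedged sketch (the file path and layer-name prefix are assumptions):

```python
import torch

model = torch.jit.load("stt_model.pt")  # hypothetical path to the scripted model

for name, p in model.named_parameters():
    p.requires_grad = False             # freeze everything

for name, p in model.named_parameters():
    if name.startswith("decoder"):      # unfreeze by name prefix (model-specific)
        p.requires_grad = True

optimizer = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
```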
| 1 |
t3_oxw999
| 1,628,094,169 |
pytorch
|
FIVE WAYS TO INCREASE MODEL PERF W/ PYTORCH PROFILER!
|
nan
| 0.75 |
t3_oxl8rq
| 1,628,052,492 |
pytorch
|
Why grad_fn = None
|
Why `grad_fn = None` for [https://github.com/promach/gdas/blob/main/gdas.py#L402](https://github.com/promach/gdas/blob/main/gdas.py#L402) ?
https://preview.redd.it/mjaemqb915f71.png?width=1922&format=png&auto=webp&s=604a115065075e2e8733a5242d867cba50ca3bc9
| 0.25 |
t3_ox2lfo
| 1,627,994,280 |
pytorch
|
Why is the PyTorch model doing worse than the same model in Keras even with the same weight initialization?
|
I made the exact same shallow convolutional network (6 Conv layers, 2 fully connected layers) in PyTorch and Keras. I used glorot_uniform to initialize the weights of Conv2d in PyTorch (as they are by default in Keras). Yet the Keras model reached 97% validation set accuracy while the PyTorch model only reached 85%.
| 0.92 |
t3_ox0g4e
| 1,627,985,680 |
pytorch
|
error w torchvision.io loading video (help greatly appreciated!)
|
I'm new to pytorch and torchvision, and I keep getting this error whenever I try to run code to read a video.
RuntimeError: Not compiled with video_reader support, to enable video_reader support, please install ffmpeg (version 4.2 is currently supported) and build torchvision from source.
I'm working in an anaconda environment. These are the versions I'm working with:
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
torchvision 0.10.0
ffmpeg 4.2.2
ffmpeg-python 0.2.0
Python 3.9.4 (default, Apr 9 2021, 16:34:09)
I'm running the python file as you regularly would. I'm not sure what they mean by "build torchvision from source" or if that entails some extra commands/steps I'm missing. I've looked online but I haven't found anything helpful yet, so I thought I'd ask here. Thanks!
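For what it's worth, the error refers specifically to the `video_reader` backend, which only exists when torchvision is compiled from source against ffmpeg; a common workaround is to stay on the default `pyav` backend (installed via `pip install av`) rather than rebuilding. A sketch with an assumed file name:

```python
import torchvision
from torchvision.io import read_video

torchvision.set_video_backend("pyav")  # avoids the video_reader requirement

frames, audio, info = read_video("clip.mp4", pts_unit="sec")
print(frames.shape)  # (num_frames, height, width, channels)
```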
| 0.84 |
t3_owkij9
| 1,627,928,587 |
pytorch
|
Real-Time Training
|
Hi everyone. I am a rookie at PyTorch and I'm searching for a way to live-train (continue training) my regression models on new entries in the dataset. I've got the best checkpoint from training, but I don't know the following steps. Could you give me some advice? (A rough resume-from-checkpoint sketch is shown below.)
Cheers
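Not an authoritative answer, but the usual pattern is to reload the checkpoint's state dicts and keep training on the new rows only; a sketch assuming the checkpoint stores both model and optimizer state under these (hypothetical) keys:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)  # stand-in for the real regression model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

checkpoint = torch.load("best_checkpoint.pth")  # hypothetical file layout
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])

model.train()
for x, y in [(torch.randn(8, 3), torch.randn(8, 1))]:  # substitute a loader over the new entries
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```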
| 1 |
t3_ov9ihj
| 1,627,746,566 |
pytorch
|
Graph Attention Networks v2: Annotated implementation
|
[Implementation with side-by-side notes](https://nn.labml.ai/graphs/gatv2/index.html)
GATv2 is an improvement over Graph Attention Networks (GAT). They show GAT has static attention. i.e., the attention ranks (ordered by the magnitude of attention) for key-nodes are the same for every query-node. They introduce GATv2 that overcomes this limitation by applying the attention scoring linear layer after the activation.
* [Twitter thread](https://twitter.com/labmlai/status/1421350760638418948)
* [GAT v2 Paper](https://arxiv.org/abs/2105.14491)
* [GAT annotated implementation](https://nn.labml.ai/graphs/gat/index.html)
* [GAT Paper](https://arxiv.org/abs/1710.10903)
| 1 |
t3_ov70vc
| 1,627,737,599 |
pytorch
|
I'm trying to follow along this beginner machine learning game youtube video using pytorch and unity. How come pytorch exports models as .onnx files instead of .nn? Unity doesn't seem to accept .onnx files.
|
This is the video:
[https://youtu.be/axF_nHHchFQ](https://youtu.be/axF_nHHchFQ)
Here's what it looks like after training
https://preview.redd.it/1cfsnagu4fe71.png?width=2568&format=png&auto=webp&s=452e273874c30734299abaeab4e268b54366918b
| 0.67 |
t3_ouuelm
| 1,627,680,730 |
pytorch
|
Implementation of LR-CN
|
Hi, is there any resources about Long-Term Recurrent Convolutional Network that implemented with PyTorch?
| 1 |
t3_otzu8n
| 1,627,572,981 |
pytorch
|
New user here. Some advice or help is needed on getting LSTM working
|
Hello,
I am new to PyTorch and really need advice.
I can see plenty of tutorials that include hidden states in training `LSTM`. When I try to do the same I get an error. I was wondering if it's required to gather output value from the hidden states and then pass it as a second parameter in another iteration `model(inputs, (h, c))`?
The model is a trivial sequence "predictor", which is going to suggest a next element based on a previous one. Because the input is a sequence, I thought `RNN` and specifically `LSTM` would be a better approach than a regular net.
So far, I've got the following training loop
for _ in range(50):
    for x, c in train_loader:
        x = x.view(1, 1, -1)
        optimiser.zero_grad()
        m_out, h = model(x, h)
        loss = loss_function(m_out.view(1, -1), c)
        loss.backward()
        optimiser.step()
Inputs are tensors of `1 x 1 x C`, where `C` a is a one-hot vector with `1` at corresponding index, e.g. `[0, 0, 0, 1, 0, ...]` would indicate 4th class. I use ASCII characters as classes. Hidden states are initialised with zeros
h = (torch.zeros(1, 1, n_features), torch.zeros(1, 1, n_features))
Loss function, model, and an optimiser are also set
model = TextPredictor(n_features, n_features)
loss_function = nn.NLLLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
I am trying to test two versions of the model. One where I handle initial states and get the following error
> RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
The other where I remove hidden states
m_out = model(x)
The latter version is running through the training loop but it seems that without handling the inner state `(h, c)`, the model is not working after training; it always returns the same class regardless of the input.
I was hoping perhaps someone can give a few clues on how to improve the model.
Thanks
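For what it's worth, that retain_graph error is the classic symptom of carrying `h` from one iteration into the next: the old hidden state still references the previous iteration's (already freed) graph. A common fix, sketched against the loop above, is to detach the state each step:

```python
for _ in range(50):
    for x, c in train_loader:
        x = x.view(1, 1, -1)
        # keep the values but drop the autograd history, so backward()
        # never tries to revisit the previous iteration's freed graph
        h = tuple(state.detach() for state in h)
        optimiser.zero_grad()
        m_out, h = model(x, h)
        loss = loss_function(m_out.view(1, -1), c)
        loss.backward()
        optimiser.step()
```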
| 1 |
t3_otyn0t
| 1,627,569,186 |
pytorch
|
Help needed with RNNs!
|
Hello,
I understand Recurrent Neural Networks are used frequently in natural language processing or similar processes. For example, predicting the next word in the sentence: "I took my dog for a ___" (walk). As an analogy, I'm trying to instead use RNNs for a version of generating what can be seen as a "coherent" sentence, given multiple other sentences.
What I'm really trying to do is, given multiple sets of points, and how each set of points is partitioned, I'm trying to find how I would partition another set of points in order to match the pattern of the previous sets.
RNNs seemed like a good method to do this, purely because of their high data interdependence (nearby data affecting other data), which I want so that the decision of what cluster a specific point belongs to affects future decisions within the same set. Has anyone found an RNN implementation of this? I am really struggling to find an implementation that has continuous input, classification output, and finite sequence length.
Any help would really, really be appreciated. I believe this deals with many to many RNNs.
| 1 |
t3_otrtel
| 1,627,539,065 |
pytorch
|
Long video data loading for RNNs
|
Hey guys,
Had a lot of trouble working out how to load long videos into my model. Let's say I need to feed 200 frames into my LSTM layer, but I don't have the memory to pass the entire 200-frame segment into my model. How do I go about doing this? I could use a batch size of 1 and then run 20 forward passes, each holding 10 frames, passing a segment length of 200 to my LSTM layer? I could run all 200 frames through the convolution layers, return them, then run all those 200 frames through the LSTM layers? I think I'm being a fool and there's probably an easier way. Can anyone confirm if this is an OK way to do this or if there is an easier way?
| 0.75 |
t3_othmpg
| 1,627,502,194 |
pytorch
|
Long-term series forecasting with Query Selector -- efficient model of sparse attention
|
We would like to share with you our latest development in artificial intelligence - QuerySelector. This is the SOTA (State of the Art) in this field.
On Papers with Code, you have a link to arXiv and our Github.
[https://paperswithcode.com/paper/long-term-series-forecasting-with-query](https://paperswithcode.com/paper/long-term-series-forecasting-with-query)
Feel free to ask questions
| 0.72 |
t3_ot7giu
| 1,627,469,353 |
pytorch
|
Voice recognition - Speech to Text
|
I am looking for implementing Speech to Text system in Python 3.X. Rather than reinventing the wheel, is there some Amazon Alexa/Google or some other API which I can easily use to implement this? If yes can you point me towards it's tutorials?
Thanks!
| 1 |
t3_osov52
| 1,627,399,082 |
pytorch
|
A collection of some of the best PyTorch courses for beginners to learn PyTorch online
|
nan
| 0.94 |
t3_osmzfu
| 1,627,393,028 |
pytorch
|
Training loss stays the same
|
I managed to get [my own GDAS code implementation](https://github.com/promach/gdas) up and running.
However, the loss stays the same, which indicates the training process is still incorrect.
Could anyone advise?
https://preview.redd.it/k4czo8g58rd71.png?width=980&format=png&auto=webp&s=88db8a643e3282bac66df89f1ee2e87c12edeabb
| 1 |
t3_osmhch
| 1,627,391,246 |
pytorch
|
PyTorch batchnorm2d in numpy
|
nan
| 1 |
t3_os2al2
| 1,627,316,829 |
pytorch
|
RuntimeError: expand(torch.cuda.FloatTensor{[3, 3, 3, 3]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (4)
|
Why `[3, 3, 3, 3]` for the [variable w](https://github.com/promach/gdas/blob/main/gdas.py#L469) ?
https://preview.redd.it/yyztk1dt5ed71.png?width=1922&format=png&auto=webp&s=a0beb6008332b9978bd628eb23900628cc960874
| 0.25 |
t3_orfi7v
| 1,627,233,064 |
pytorch
|
Excessive CPU RAM being used even inside .cuda() mode
|
I am having an issue with excessive CPU RAM usage with [this coding](https://github.com/promach/gdas), even in `.cuda()` mode.
Could anyone advise?
| 0.81 |
t3_oqsxm2
| 1,627,142,936 |
pytorch
|
ValueError: optimizer got an empty parameter list (nn.parameter is not persistent across parent classes)
|
How do I make `nn.parameter()` persist across [parent classes](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L291)?
In my coding: class Graph → class Cells → class nodes → class Connections → class Edge.
The `nn.parameter()` is located inside class Edge.
https://preview.redd.it/1b5wuo6orzc71.png?width=980&format=png&auto=webp&s=bb23c9260b167e6274461774b0b9ded1de74a939
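For reference, the usual cause of parameters "disappearing" across nested classes is holding children in plain Python lists; parameters only propagate upward when every level is an `nn.Module` registered as an attribute or inside an `nn.ModuleList`. A minimal sketch (hypothetical shapes, not the GDAS code):

```python
import torch
import torch.nn as nn

class Edge(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(3))  # registered on this module

class Connections(nn.Module):
    def __init__(self, num_edges):
        super().__init__()
        # a plain Python list would hide these from the parent;
        # nn.ModuleList registers each child so .parameters() can see them
        self.edges = nn.ModuleList(Edge() for _ in range(num_edges))

conns = Connections(2)
print(len(list(conns.parameters())))  # 2, visible from the parent
```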
| 1 |
t3_oq6dkm
| 1,627,058,865 |
pytorch
|
Need help with environment setup
|
Hello everyone. I would like to apologize in advance if this is not the right community to post to, but I thought that I should start from somewhere.
TLDR: Couldn't setup the necessary packages for an environment that I want to train using PyTorch (have tried Conda and Docker). Running on Ubuntu 16.04.
I am currently new to PyTorch. Basically I am trying to setup an environment with PyTorch, CUDA, Torchvision, Tensorboard, and OpenCV. Seems like an easy task, but I am currently stuck at this for a few days. I have tried Docker, and one issue with it is not enough shared memory (kernel dying) when I try to run training via a Jupyter notebook.
I've also tried Conda, but I am currently stuck with actually installing all the necessary packages that I have listed. I've installed various versions of Anaconda, installing from both the terminal and the navigator, but I still couldn't install all of the necessary packages. The best I got so far was managing to install some packages separately using the terminal and navigator, but when I'm done, PyTorch is unable to run on the "cuda:0" device despite detecting it as a CUDA 10.2 version.
I am currently stumped, and would appreciate it if anyone could give me some pointers on how I should proceed.
P.S: I am currently using Linux Ubuntu 16.04.
| 1 |
t3_opye1w
| 1,627,030,256 |
pytorch
|
RuntimeError: The size of tensor a (5) must match the size of tensor b (32) at non-singleton dimension 3
|
For the **RuntimeError: The size of tensor a (5) must match the size of tensor b (32) at non-singleton dimension 3**, may I know why tensor b is of size 32? And what exactly does "singleton dimension 3" mean?
The code could be [found here](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L298-L300).
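As background, the message is about broadcasting: a "singleton dimension" is a dimension of size 1, which PyTorch can expand to match the other tensor; dimension 3 is simply the fourth (last) axis. A standalone illustration, not your actual tensors:

```python
import torch

a = torch.randn(4, 3, 32, 5)
b = torch.randn(4, 3, 32, 32)
# a + b fails: dimension 3 holds 5 vs 32 and neither is 1 ("non-singleton"),
# so broadcasting cannot reconcile them.

c = torch.randn(4, 3, 32, 1)  # dimension 3 is a singleton
d = c + b                     # works: the size-1 axis expands to 32
print(d.shape)                # torch.Size([4, 3, 32, 32])
```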
https://preview.redd.it/u0rjkz9uipc71.png?width=960&format=png&auto=webp&s=d2d406c5c5276f1be03f3b4cb4e0ae9dce9aefa5
| 1 |
t3_op860n
| 1,626,934,780 |
pytorch
|
Issue with MNIST dataset when applied to Echo State Network
|
Hello everyone. I'm a beginner in PyTorch and I'm kinda stuck on a project I'm working on. I've created an Echo State Network from scratch in torch, and this is what I have so far. I've tried binary classification with it, where it has to look at a block in a sequence and tell if it's a square or sine wave. It got 100% accuracy for that; however, when I try it on MNIST, it doesn't work so well. For the MNIST dataset, I follow the conventional way of interpreting it as time series: each row is an input, and all the rows in chronological order make up a sequence. Like I said, I'm inexperienced in PyTorch, and I know all this code could've been done in numpy or something, but I used PyTorch to get some practice in. Anyway, below is the code.
This is main.py
import model

input_size = 28  # row of image
hidden_size = 100
output_size = 10
density = 0.1  # sparse connectivity between reservoir units
sigma_bias = 0.01  # if > 0, then spectral radius of w_hh (hidden-to-hidden weights) are < 1
sequence_length = 28  # total number of rows in image
initial_state_forget_amt = 5  # dismiss initial transient of <initial_state_forget_amt> steps from initial zero hidden state

# load training and testing data
dataset = model.torchvision.datasets.FashionMNIST(root='data/',
                                                  train=True,
                                                  download=True,
                                                  transform=model.torchvision.transforms.ToTensor())
test_dataset = model.torchvision.datasets.FashionMNIST(root='data/',
                                                       train=False,
                                                       download=True,
                                                       transform=model.torchvision.transforms.ToTensor())

# number of training sequences to loop through
start = 10
end = 100

# initialize arrays that will record accuracy for every run of the loop
train_acc_array = model.np.zeros((end - start,))
test_acc_array = model.np.zeros((end - start,))

# create model
ESN = model.ESN(input_size=input_size,
                hidden_size=hidden_size,
                output_size=output_size,
                density=density,
                sigma_bias=sigma_bias)

# apply random (seeded) permutation on training and testing data
model.np.random.seed(0)
idxs = model.np.random.permutation(60000)
training_data = dataset.data[idxs, :, :]
training_data_targets = dataset.targets[idxs]
training_data = model.torch.permute(training_data, [1, 2, 0])
model.np.random.seed(10)
test_idxs = model.np.random.permutation(10000)
test_data = test_dataset.data[test_idxs, :, :]
test_data_targets = dataset.targets[test_idxs]
test_data = model.torch.permute(test_data, [1, 2, 0])

train_input_signal = model.torch.empty(0)
train_output_signal = model.torch.empty(0)
test_input_signal = model.torch.empty(0)
test_output_signal = model.torch.empty(0)
valid_states_amt = sequence_length - initial_state_forget_amt

for train_num_sequences in range(start, end):
    test_num_sequences = int(0.3 * train_num_sequences)
    # isolate data up to <train_num_sequences>
    # one-hot encoded output but repeated 28 times for every input of sequence (every row of image)
    train_input_signal = training_data[:, :, :train_num_sequences] / 255
    e_train = model.torch.zeros(output_size, train_num_sequences)
    e_train[training_data_targets[:train_num_sequences], range(train_num_sequences)] = 1
    train_output_signal = e_train.t().repeat(1, sequence_length).view(-1, output_size).t()
    test_input_signal = test_data[:, :, :test_num_sequences] / 255
    e_test = model.torch.zeros(output_size, test_num_sequences)
    e_test[test_data_targets[:test_num_sequences], range(test_num_sequences)] = 1
    test_output_signal = e_test.t().repeat(1, sequence_length).view(-1, output_size).t()
    # create hidden matrix whose columns correspond to reservoir states of every row of every sequence
    # <initial_state_forget_amt> steps during initial transient have been discarded within <create_hidden_states_matrix()>
    hidden_states_matrix = ESN.create_hidden_states_matrix(input_signal=train_input_signal,
                                                           sequence_length=sequence_length,
                                                           initial_state_forget_amt=initial_state_forget_amt)
    test_hidden_states_matrix = ESN.create_hidden_states_matrix(input_signal=test_input_signal,
                                                                sequence_length=sequence_length,
                                                                initial_state_forget_amt=initial_state_forget_amt)
    # create empty tensor "a," perform mode-1 unfolding on training/testing output signals and store in <a/b> respectively.
    a = model.torch.empty(0)
    b = model.torch.empty(0)
    for i in range(train_num_sequences):
        a = model.torch.hstack(
            [a, train_output_signal[:, ((i * sequence_length) + initial_state_forget_amt):((i + 1) * sequence_length)]])
    for j in range(test_num_sequences):
        b = model.torch.hstack(
            [b, test_output_signal[:, ((j * sequence_length) + initial_state_forget_amt):((j + 1) * sequence_length)]])
    # find hidden-to-output weights
    ESN.fit(hidden_states=hidden_states_matrix, target_tensor=a)
    # calculate outputs as probabilities, using softmax, based on newly found h2o weights
    calculated_probs = ESN(hidden_states_matrix)
    calculated_test_probs = ESN(test_hidden_states_matrix)
    # --------calculate accuracy----------
    # find max probability along columns
    _, max_idxs_train = model.torch.max(calculated_probs, dim=0)
    _, max_idxs_test = model.torch.max(calculated_test_probs, dim=0)
    train_output = model.torch.empty(0)
    test_output = model.torch.empty(0)
    # an image must be classified but right now, we still have 28 outputs assigned to a single image
    # take mode of every set of 28 outputs of all images in <max_idxs_train/test>
    for i in range(int(calculated_probs.shape[1] / valid_states_amt)):
        v_train, _ = model.torch.mode(max_idxs_train[(valid_states_amt * i):(valid_states_amt * (i + 1))])
        train_output = model.torch.hstack([train_output, v_train])
    for j in range(int(calculated_test_probs.shape[1] / valid_states_amt)):
        v_test, _ = model.torch.mode(max_idxs_test[(valid_states_amt * j):(valid_states_amt * (j + 1))])
        test_output = model.torch.hstack([test_output, v_test])
    # find ground truth
    _, train_ground_truth = model.torch.max(train_output_signal[:, 28 * model.np.array(range(train_num_sequences))],
                                            dim=0)
    _, test_ground_truth = model.torch.max(test_output_signal[:, 28 * model.np.array(range(test_num_sequences))], dim=0)
    # calculate accuracy
    train_accuracy = model.accuracy(calculated_output=train_output,
                                    ground_truth=train_ground_truth,
                                    num_of_elements=train_num_sequences)
    test_accuracy = model.accuracy(calculated_output=test_output,
                                   ground_truth=test_ground_truth,
                                   num_of_elements=test_num_sequences)
    # store accuracies in array
    train_acc_array[train_num_sequences - start] = train_accuracy
    test_acc_array[train_num_sequences - start] = test_accuracy
And here is the file, model.py, containing my classes and functions.
import torch
import torchvision
import numpy as np
import math
import matplotlib.pyplot as plt
import random

# ESN class
class ESN(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size, density, sigma_bias):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.i2h = torch.nn.Linear(self.input_size, self.hidden_size)
        self.reservoir = Reservoir(self.hidden_size, density, sigma_bias)
        self.h2o = torch.nn.Linear(self.hidden_size, self.output_size)
        self.i2h.bias.data = torch.zeros(tuple(self.i2h.bias.data.shape))
        self.h2o.bias.data = torch.zeros(tuple(self.h2o.bias.data.shape))
        self.tanh = torch.nn.Tanh()
        self.softmax = torch.nn.Softmax(dim=0)
        self.sigmoid = torch.nn.Sigmoid()

    # return output based on current hidden state
    def forward(self, hidden_states_n):
        output = self.softmax(self.h2o(hidden_states_n.t()).t())  # output connected to current hidden state
        return output

    # return next step hidden states
    # input and hidden tensor form: (# of features, # of units)
    def compute_next_hidden_state(self, input_tensor, hidden_states_n):
        i2h_term = self.i2h(input_tensor.t())
        h2h_term = self.reservoir(hidden_states_n)
        hidden_states_np1 = self.tanh(i2h_term + h2h_term).t()  # hidden state at time n + 1
        return hidden_states_np1

    # create initial zero hidden state
    def init_hidden(self):
        return torch.zeros(self.hidden_size, 1)

    # least squares solution of w_out (hidden-to-output) matrix
    def fit(self, hidden_states, target_tensor):
        # hidden and target tensor form: (# of features, # of units)
        w_out = torch.matmul(torch.linalg.pinv(hidden_states.t()), target_tensor.t()).t()
        self.h2o.weight.data = w_out

    # create hidden state matrix whose column n is the hidden state at time n
    def create_hidden_states_matrix(self, input_signal, sequence_length, initial_state_forget_amt):
        hidden_states_matrix = torch.empty(0)
        for i in range(input_signal.shape[2]):
            # run on training and testing data
            current_hidden_state = self.init_hidden()
            for j in range(sequence_length - 1):
                # begin recording hidden states once initial_state_forget_amt has been passed
                # to leverage the ESN's state-forgetting property
                if j == initial_state_forget_amt:
                    hidden_states_matrix = torch.hstack([hidden_states_matrix, current_hidden_state])
                elif j < initial_state_forget_amt:
                    current_hidden_state = self.compute_next_hidden_state(input_tensor=coeff_s_u(input_signal[:, j + 1, i], 'u'),
                                                                          hidden_states_n=current_hidden_state)
                    continue
                hidden_state_np1 = self.compute_next_hidden_state(input_tensor=coeff_s_u(input_signal[:, j + 1, i], 'u'),
                                                                  hidden_states_n=hidden_states_matrix[:, j - initial_state_forget_amt])
                hidden_states_matrix = torch.hstack([hidden_states_matrix, hidden_state_np1])
        return hidden_states_matrix

# Reservoir class
class Reservoir(torch.nn.Module):
    def __init__(self, hidden_size, density, sigma_bias):
        super().__init__()
        self.hidden_size = hidden_size
        self.density = density
        # create a sparse tensor w_hh (hidden-to-hidden) that satisfies constraints of largest singular/eigenvalue < 1
        nnz_vals = math.ceil(self.density * self.hidden_size * self.hidden_size)
        values = torch.normal(0, 1, (nnz_vals,))
        self.w_hh = make_sparse_square_matrix(values, self.hidden_size)
        _, s, _ = torch.svd(self.w_hh)
        s_max = s[0]
        self.w_hh = self.w_hh / (s_max + sigma_bias)

    # calculate h2h term in calculation of next step hidden states
    def forward(self, hidden_states_n):
        return coeff_s_u(torch.matmul(self.w_hh, hidden_states_n), 's')

# squeeze or unsqueeze input_tensor based on certain requirements and calculations
def coeff_s_u(input_tensor, s_u):
    dimension = input_tensor.ndim
    coeff_in = torch.max(torch.tensor([0, dimension]))
    if coeff_in != 0:
        coeff_in = coeff_in / dimension
    if s_u == 'u':
        if input_tensor.ndim < 2:
            input_tensor = torch.unsqueeze(input_tensor, dim=int(coeff_in) * dimension).float()
    elif s_u == 's':
        if input_tensor.ndim > 1:
            input_tensor = torch.squeeze(input_tensor, dim=int(coeff_in) * dimension - 1).float()
    return input_tensor

# create sparse square matrix
# will be used to create matrix representing hidden-to-hidden unit connections
def make_sparse_square_matrix(values, mode_length):
    flattened_sparse_tensor = torch.zeros(mode_length * mode_length,)
    rand_idx = random.sample(range(mode_length * mode_length), values.shape[0])
    flattened_sparse_tensor[rand_idx] = values
    return torch.reshape(flattened_sparse_tensor, (mode_length, mode_length))

# measures accuracy of calculated output based on ground truth labels
def accuracy(calculated_output, ground_truth, num_of_elements):
    return float(torch.sum(calculated_output == ground_truth) / num_of_elements)
So the problem I am encountering is this.
https://preview.redd.it/1zfzoner4pc71.png?width=640&format=png&auto=webp&s=7c68529416b1f9a254af494017e6d33afbb73d7b
For the above case, I started at 10 training sequences and ended at 100. I've spent a lot of time trying to debug the code that I have and am unsure why the training acc drops off so quickly (overfitting that early wouldn't make sense, would it?) and why the testing acc is so low. I guess I can attribute the stagnation in the testing acc to me taking 30% of the number of training sequences, which, for such low numbers, will be very close to the previous number of testing sequences. To implement this ESN, I used [this](https://www.researchgate.net/publication/215385037_The_echo_state_approach_to_analysing_and_training_recurrent_neural_networks-with_an_erratum_note%27) paper. Also, I chose not to have any feedback from the outputs to the reservoir layer, which that paper suggests you can do. I'm not sure if this is the right place to post this but I'd appreciate any help. Thank you.
| 0.88 |
t3_op7d9n
| 1,626,931,142 |
pytorch
|
Why are the embeddings of tokens multiplied by $\sqrt D$ (note not divided by square root of D) in a transformer?
|
nan
| 1 |
t3_op0z2t
| 1,626,907,580 |
pytorch
|
Cloud for Deep Learning training
|
Hi Guys, which Cloud platform should I pay for/use in order to perform deep learning experiments? Google Colab Pro doesn't meet my requirements since it doesn't let you use the service for more than 24 hours in one go. My experiments might run for approximately 4 days, give or take.
Thanks!
| 1 |
t3_omjwpb
| 1,626,583,825 |
pytorch
|
Issue with torch.load() with torch version 1.4
|
I can't resolve this issue where `torch.load()` can't read a zipfile `model.pth` and shows this:
```sh
RuntimeError: version <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED
```
Unfortunately I can't upgrade the torch version as I'm bound to Python 2. Any suggestions will be of great help.
| 1 |
t3_omafr5
| 1,626,548,475 |
pytorch
|
Is there a difference between gradient accumulation vs loss accumulation?
|
I need to minimize the mean/sum loss over a large number of samples, and I can only use a batch_size of 1. I decided to accumulate loss because I'm not sure if a batch size of 1 will result in a minimal mean loss.
To simulate accumulation, is there a difference in *loss target* between the following two snippets?
model.zero_grad()
for _ in range(batch_size):
    x, target_y = ...
    y = model(x)
    loss = loss_function(y, target_y)
    loss.backward()
optimizer.step()
vs
model.zero_grad()
loss = 0
for _ in range(batch_size):
    x, target_y = ...
    y = model(x)
    loss = loss + loss_function(y, target_y)
loss.backward()
optimizer.step()
I think the 2nd snippet is less memory efficient because it needs to keep all gradients for all samples in memory, but I'm mainly interested in trying to understand if these two implementations lead to different solutions when minimizing the loss -even if it's only a subtle difference-.
Any ideas?
| 1 |
t3_om1f7j
| 1,626,516,053 |
pytorch
|
Is there a seq2seq model in time series analysis?
|
Most of the time, I always see machine translation. I did find one but it was on TF2, would love to see and study a seq2seq PyTorch model code. Thank you
| 1 |
t3_ols9l9
| 1,626,476,584 |
pytorch
|
JAX Vs TensorFlow Vs PyTorch: A Comparative Analysis
|
nan
| 0.57 |
t3_olebus
| 1,626,432,062 |
pytorch
|
finite difference method in DARTS code
|
For my [DARTS coding](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L373-L374), how do I code *Ltrain(w+)* and *Ltrain(w-)* from [equation (8) of the DARTS paper](https://arxiv.org/pdf/1806.09055.pdf#page=4)?
Note: Upon checking the definition of the finite difference method at [https://mythrex.github.io/math_behind_darts/](https://mythrex.github.io/math_behind_darts/), it seems that the above 2 lines of code are wrong.
But the question is: how do I modify these 2 lines of code?
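For comparison, here is a toy sketch of the finite-difference term from eq. (8), where the Hessian-vector product is approximated as (grad_alpha Ltrain(w+) - grad_alpha Ltrain(w-)) / (2*eps) with w± = w ± eps*dw. Everything here is a stand-in (a linear "network", a single `alpha`, random `dw` in place of the real gradient of the validation loss w.r.t. the weights), not the GDAS code:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                    # stand-in for the weights w
alpha = nn.Parameter(torch.zeros(2))       # stand-in architecture parameter
criterion = nn.CrossEntropyLoss()
x_train, y_train = torch.randn(8, 4), torch.randint(0, 2, (8,))

def train_loss():
    return criterion(model(x_train) * alpha.softmax(0), y_train)

dw = [torch.randn_like(p) for p in model.parameters()]  # plays the role of grad_w' Lval
epsilon = 0.01 / torch.cat([d.view(-1) for d in dw]).norm()

with torch.no_grad():                      # w+ = w + eps * dw
    for p, d in zip(model.parameters(), dw):
        p.add_(epsilon * d)
grads_pos = torch.autograd.grad(train_loss(), alpha)   # grad_alpha Ltrain(w+)

with torch.no_grad():                      # w- = w - eps * dw
    for p, d in zip(model.parameters(), dw):
        p.sub_(2 * epsilon * d)
grads_neg = torch.autograd.grad(train_loss(), alpha)   # grad_alpha Ltrain(w-)

with torch.no_grad():                      # restore the original w
    for p, d in zip(model.parameters(), dw):
        p.add_(epsilon * d)

hessian_term = [(gp - gn) / (2 * epsilon) for gp, gn in zip(grads_pos, grads_neg)]
```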
https://preview.redd.it/91dieytsh7b71.png?width=1000&format=png&auto=webp&s=0ecd10bb7f13a27413fe836303f6db5edbf40242
| 1 |
t3_ok8bkr
| 1,626,281,134 |
pytorch
|
How can I use a CFD loss function with PyTorch3D?
|
I recently stumbled upon PyTorch3D from Facebook, and this use case, in which the model deforms one 3D model into another, caught my attention.
[ Deform a sphere mesh to dolphin](https://i.redd.it/glbgwlsad5b71.gif)
My idea is to develop a topology optimization AI for CFD wind tunnel simulations. Basically, starting from a sphere, I want the model to create the most lift/drag efficient shape.
So can I use as a loss function for example lift divided by drag coefficient? Or should I use a 3D scalar field for the loss function? If that's the case, I don't know how could I pull this off. Any guidelines would be very much appreciated!
Thank you!
| 0.83 |
t3_ok0u2c
| 1,626,255,222 |
pytorch
|
[Help] Change models.vgg19(pretrained = True) classification to binary classification
|
This is what I am looking at:
[https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#finetuning-the-convnet](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#finetuning-the-convnet)
I want to change the model to vgg19 and binary classification.
Thank you!
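Following the pattern from that tutorial (which edits `model_ft.fc` for ResNet), the VGG equivalent swaps the last layer of `model.classifier`; a short sketch:

```python
import torch.nn as nn
from torchvision import models

model = models.vgg19(pretrained=True)

# optionally freeze the convolutional features and train only the head
for p in model.features.parameters():
    p.requires_grad = False

# vgg19's classifier ends in Linear(4096, 1000); replace it for 2 classes
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
```

With 2 output units this pairs with `nn.CrossEntropyLoss`; alternatively, a single output unit with `nn.BCEWithLogitsLoss` also works for binary classification.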
| 0.8 |
t3_oilo21
| 1,626,071,444 |
pytorch
|
PyTorch Tutorial on Generative Adversarial Networks (GANs)
|
I have written a tutorial on building and training a GAN on MNIST using PyTorch. Feel free to check it out!
Link: [https://taying-cheng.medium.com/building-a-gan-with-pytorch-237b4b07ca9a](https://taying-cheng.medium.com/building-a-gan-with-pytorch-237b4b07ca9a)
| 0.94 |
t3_ohm5np
| 1,625,935,530 |
pytorch
|
MaskRCNN postprocessing doesn't seem to fit code in git
|
Hey folks,
I'm currently trying to "hack" a MaskRCNN so that I get the full label distributions into the output.
I was looking into the "[postprocess_detections](https://github.com/pytorch/vision/blob/25bc21dfcbb390f2f215e8f83aa5e028c77a0f24/torchvision/models/detection/roi_heads.py#L664)" function defined in roi_heads. However, when I tried to replace it with my own slightly altered version, copying it from the git and adding minor changes, it turned out that the function seems to work differently from what I find on the git.
Most importantly, it doesn't get 4 inputs - as the code in the git needs - but only 3, which seem to be head outputs, anchors and image shapes. I've found two other definitions of a function with the same name in completely different contexts ([RetinaNet](https://github.com/pytorch/vision/blob/25bc21dfcbb390f2f215e8f83aa5e028c77a0f24/torchvision/models/detection/retinanet.py#L398) and [SSD)](https://github.com/pytorch/vision/blob/25bc21dfcbb390f2f215e8f83aa5e028c77a0f24/torchvision/models/detection/ssd.py#L364) which take 3 inputs; But it makes 0 sense why they'd play any role here, and also they'd want the arguments to be different types than what I get (those functions want dict, list list; MaskRCNN internally I get inputs torch.Tensor, list, list).
Does anyone have an idea how and why the methods seem to differ from what I find in the git? I literally can't find a code version that seems to fit what the MaskRCNN actuall does internally.
If it's important, here's the scheme of how I get to the model (and the error):
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

def postprocess_detections_new(self,
                               class_logits,    # type: Tensor
                               box_regression,  # type: Tensor
                               proposals,       # type: List[Tensor]
                               image_shapes     # type: List[Tuple[int, int]]
                               ):
    # --- code 1:1 copy-pasted from the postprocess_detections function in git ---
    ...

model.roi_heads.postprocess_detections = postprocess_detections_new
output = model(example_input)
Where example_input is some random example image I put in that works perfectly well with the unaltered model. Doing only the insertion of the copy-pasted code, the model already doesn't work anymore due to a mismatching number of expected/given arguments.
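One possibility worth checking (an assumption about the symptom, not a confirmed fix): assigning a plain function to an *instance* attribute does not bind `self`, so Python passes nothing for it and every argument shifts by one, which would produce exactly this kind of expected/given mismatch. Binding explicitly avoids that:

```python
import types

# bind the replacement so `self` is supplied automatically on each call
model.roi_heads.postprocess_detections = types.MethodType(
    postprocess_detections_new, model.roi_heads)
```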
Thanks a lot in advance, would really appreciate if somebody could point me somewhere!
| 0.56 |
t3_ogy02j
| 1,625,844,705 |
pytorch
|
Can someone explain DQN
|
Hi!
I've been looking for tutorials on DQN and came across the official one from Pytorch. Can someone explain the math for me?
[Reinforcement Learning (DQN) Tutorial — PyTorch Tutorials 1.9.0+cu102 documentation](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#dqn-algorithm)
| 0.7 |
t3_og0bmv
| 1,625,720,219 |
pytorch
|
Extending kerv2d to kerv3d in https://github.com/wang-chen/kervolution/blob/unfold/kervolution.py
|
Dear friends,
I am trying to create a custom kerv3d layer from the existing custom kerv2d layer given in the repository https://github.com/wang-chen/kervolution/blob/unfold/kervolution.py. Can anyone suggest suitable modifications to the kerv2d layer to obtain kerv3d? I want to replace the conv3d layer with kerv3d for experimentation. Thank you.
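For what it's worth, for the polynomial kernel specifically, a 3D version can reuse conv3d, since k(x, w) = (x . w + cp) ** dp only needs the patch-wise dot product that conv3d already computes. A minimal sketch (cp/dp follow the repo's naming, the rest is my assumption; other kernels such as the Gaussian would need explicit patch extraction like the repo's unfold-based kerv2d):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Kerv3d(nn.Conv3d):
        """Sketch of a 3D kervolution layer with a polynomial kernel."""
        def __init__(self, *args, cp=1.0, dp=3, **kwargs):
            super().__init__(*args, **kwargs)
            self.cp, self.dp = cp, dp

        def forward(self, x):
            # the linear term <x, w> over 3D patches is exactly conv3d
            linear = F.conv3d(x, self.weight, self.bias, self.stride,
                              self.padding, self.dilation, self.groups)
            return (linear + self.cp) ** self.dp

    # drop-in usage in place of nn.Conv3d
    layer = Kerv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
    out = layer(torch.randn(2, 1, 16, 16, 16))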
| 0.8 |
t3_oeah5n
| 1,625,501,236 |
pytorch
|
what type of network should i use?
|
I'm attempting to make a self-driving car in a game. Right now I have a network taking in 5 inputs from the game, and I want 4 binary outputs, e.g. [0, 1, 0, 1]. I've tried a bunch of loss functions and calculations and couldn't get anything to work. What I want at the output is a multi-hot vector. The code I'm using right now is below. Any advice is appreciated.
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset

# allow loading pickled object arrays saved with an older numpy
np_load_old = np.load
np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k)
train_data = np.load('training_datav2.npy')
np.load = np_load_old

training_inputs = [row[0] for row in train_data]
training_outputs = [row[1] for row in train_data]

# 50/50 train/test split
split = len(training_inputs) // 2
trainX, testX = training_inputs[:split], training_inputs[split:]
trainY, testY = training_outputs[:split], training_outputs[split:]

tensor_x = torch.Tensor(np.array(trainX))  # transform to torch tensors
tensor_y = torch.Tensor(np.array(trainY))
trainDataset = TensorDataset(tensor_x, tensor_y)

tensor_testx = torch.Tensor(np.array(testX))
tensor_testy = torch.Tensor(np.array(testY))
testDataset = TensorDataset(tensor_testx, tensor_testy)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(5, 10)
        self.fc2 = nn.Linear(10, 10)
        self.fc3 = nn.Linear(10, 5)
        self.fc4 = nn.Linear(5, 4)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return self.fc4(x)  # raw logits; BCEWithLogitsLoss applies the sigmoid

net = Net()
optimizer = optim.Adam(net.parameters(), lr=0.001)
criterion = torch.nn.BCEWithLogitsLoss()

print("CUE ROCKY MUSIC")
EPOCHS = 100
for epoch in range(EPOCHS):
    for x, y in trainDataset:  # sample-by-sample; a batched DataLoader would be faster
        net.zero_grad()
        out = net(x)
        loss = criterion(out, y)
        loss.backward()
        optimizer.step()
    print(loss)

# multi-label accuracy: sigmoid the logits, threshold at 0.5, and compare the
# whole multi-hot vector (argmax is only meaningful for single-label problems)
correct = 0
total = 0
with torch.no_grad():
    for x, y in trainDataset:
        pred = (torch.sigmoid(net(x)) > 0.5).float()
        correct += int((pred == y).all())
        total += 1

guess = torch.sigmoid(net(torch.Tensor([11, 0, 1, 0, -0.18750000000000006])))
print(guess)
print((guess > 0.5).float())

guessright = torch.sigmoid(net(torch.tensor([6, 0, 1, 0, 0.3971354166666663])))
guessleft = torch.sigmoid(net(torch.tensor([6, 0, 1, 0, -0.3971354166666663])))
print("right", (guessright > 0.5).float())
print("left", (guessleft > 0.5).float())
print("right", guessright)
print("left", guessleft)
print("Accuracy:", correct / total)
| 0.8 |
t3_odqpll
| 1,625,425,094 |
pytorch
|
Please help me with this!!
|
I am new to PyTorch and the torchtext module. I haven't used TensorFlow so far, so the solution I am looking for must be in PyTorch.
I am trying to do multi-label classification with torchtext on a Stack Overflow tag-prediction dataset. I am confused about converting the CSV file to a torchtext.legacy.data.TabularDataset, specifically the label parameter, since there are thousands of labels to one-hot encode. Can anyone help me with the syntax, or point me to a tutorial or notebook where this project is done?
If this question looks confusing, please tell me and I will try to explain further.
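For context, the step that usually trips people up is the label encoding; a minimal sketch with a placeholder tag vocabulary (the torchtext Field wiring around it will vary):

    import torch

    # hypothetical pre-processing: turn each example's tag list into a multi-hot vector
    all_tags = ["python", "pandas", "numpy"]  # placeholder for the full tag vocabulary
    tag2idx = {t: i for i, t in enumerate(all_tags)}

    def encode_tags(tags):
        y = torch.zeros(len(tag2idx))
        for t in tags:
            y[tag2idx[t]] = 1.0
        return y

    print(encode_tags(["python", "numpy"]))  # tensor([1., 0., 1.]); pairs with BCEWithLogitsLoss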
| 0.67 |
t3_odcq9m
| 1,625,370,910 |
pytorch
|
Setting up a TPU and Ubuntu VM instance for use with Pytorch on Google Cloud
|
Hi!
I wrote [a gist, detailing how to setup a TPU and Ubuntu VM instance for use with Pytorch on Google Cloud](https://gist.github.com/visionscaper/91504d755ebf37bf440a24fe4b5ac84f). Since I thought it might be of use to members on this Reddit community I'm sharing it here.
If you have any comments or thoughts about it, please let me know!
| 1 |
t3_od7qln
| 1,625,351,110 |
pytorch
|
Having a hard time understanding Torch serve, any help would be appreciated
|
So I started programming back in December, learned TF and Keras in April, but immediately switched to PyTorch because they said it was more flexible.
I currently want to practice producing RESTful APIs, but doing it via Flask is always tedious, so I looked for an easier way and found TorchServe. As of now, I'm really having a hard time understanding it due to the lack of resources on the internet. I rarely use AWS SageMaker because I run most of my models locally, so this is pretty new to me.
So basically I'm pretty decent at making models in Jupyter/PyCharm, but when it comes to deployment, I'm really having a hard time.
| 1 |
t3_obin5x
| 1,625,133,960 |
pytorch
|
TFLite equivalent in Pytorch?
|
I know about Torchscript but I'm specifically interested in running exported Pytorch models in a memory-restricted Python environment, to which TFLite lends itself very well but which I don't see in Pytorch. Is there something like a lightweight Torchscript Python implementation that I don't know about (like a Pytorch Lite), or should I just try to export Pytorch -> ONNX -> TFLite? I also saw [Pytorch Mobile](https://pytorch.org/mobile/home/), which said it supports Linux, but it only ever mentions Android and iOS environments.
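For reference, the ONNX route can look like this; a minimal sketch with a stand-in model (onnxruntime is a separate, lightweight CPU runtime, installable with pip):

    import torch
    import onnxruntime as ort  # pip install onnxruntime

    model = torch.nn.Linear(10, 2)  # stand-in for the real model
    model.eval()
    dummy = torch.randn(1, 10)      # example input with the model's input shape

    torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

    sess = ort.InferenceSession("model.onnx")
    out = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})
    print(out[0].shape)             # (1, 2)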
Thanks.
| 1 |
t3_obf99j
| 1,625,118,211 |
pytorch
|
Switching Pytorch installation version to save memory
|
I am trying to run a few large computer vision models (\~700MB in total) into my app and want to minimize the memory overhead taken up by Pytorch in order to save space. From looking at [the releases page](https://pypi.org/project/torch/#files), I see many different releases with drastically different sizes (specifically [these](https://i.imgur.com/drrFzDx.png) versions).
What differences in these different releases cause the drastically different bundle sizes? Is this actually helpful when running a program in Pytorch (do smaller bundles mean smaller Pytorch memory overhead)?
I'm considering using the `manylinux2014-aarch` version but I don't know if it's missing some important components or something like that, or what platforms it supports (is it arch-only?).
Thanks.
| 1 |
t3_oabnde
| 1,624,983,720 |
pytorch
|
Everything You Need To Know About Torchvision’s SSDlite Implementation
|
nan
| 0.91 |
t3_o9he6a
| 1,624,875,473 |
pytorch
|
In-shop clothes retrieval Fine Tuning
|
I am new to the "in-shop clothes retrieval" problem. One such dataset can be found [here](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/InShopRetrieval.html).
If I want to fine tune a CNN model using a similar dataset, how can I go about it?
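One common setup is metric learning with a triplet loss; a minimal sketch with dummy batches standing in for a triplet-sampling DataLoader (all names and hyperparameters are assumptions, not a prescribed recipe):

    import torch
    import torch.nn as nn
    import torchvision

    # embedding model: pre-trained backbone, classifier replaced by an embedding head
    net = torchvision.models.resnet50(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, 128)

    criterion = nn.TripletMarginLoss(margin=0.2)
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

    # anchor/positive are two images of the same garment, negative a different one
    anchor = torch.randn(4, 3, 224, 224)
    positive = torch.randn(4, 3, 224, 224)
    negative = torch.randn(4, 3, 224, 224)

    loss = criterion(net(anchor), net(positive), net(negative))
    loss.backward()
    optimizer.step()

At retrieval time, gallery and query images are embedded with the same network and ranked by embedding distance.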
Thanks!
| 0.5 |
t3_o8t7di
| 1,624,783,281 |
pytorch
|
REDUCE MEMORY LOADED INTO GPU WHEN TRAINING MODEL
|
Hello everyone,
I'm a PyTorch beginner. When I try to train a network (not written by me) on an RTX 2060, it triggers "RuntimeError: CUDA out of memory...".
The model was built with Python 2.7 & PyTorch 0.4.0 (and trained on a GTX 1080 Ti). I adapted it to Python 3.9.5 & PyTorch 1.8.1 and successfully ran inference. I think the training failure is due to the memory limit of my GPU (6 GB). Is there any way I can solve this issue, given that I am not the author of the code?
Here is the github repository if you guys want to have a look: [https://github.com/zijundeng/BDRAR](https://github.com/zijundeng/BDRAR)
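For reference, two standard memory reducers are a smaller batch size and mixed precision; a minimal AMP sketch with a stand-in model (BDRAR itself is untouched, names are placeholders):

    import torch
    import torch.nn as nn

    model = nn.Linear(100, 10).cuda()  # stand-in for the actual network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()

    x = torch.randn(4, 100).cuda()     # a smaller batch size also cuts memory
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # float16 activations roughly halve activation memory
        loss = model(x).mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()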
Cheers.
| 0.5 |
t3_o8b51t
| 1,624,715,341 |
pytorch
|
Pie & AI: Bangalore - Getting started with PyTorch Lightning
|
nan
| 0.67 |
t3_o82vuc
| 1,624,678,110 |
pytorch
|
Stopping a running training in Jupyter without running out of memory
|
Hey Guys,
You've probably all had the experience of needing to stop training a model in Jupyter to test something and then use the same model again. The issue is that when I stop the run, the GPU memory is not released, so any further use of the model leads to a CUDA memory error. Does anyone know a solution to this problem?
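For reference, a cleanup sequence that often releases the memory, assuming `model` holds the only remaining references to the CUDA tensors:

    import gc
    import torch

    del model                    # drop the Python references first
    gc.collect()                 # break any lingering reference cycles
    torch.cuda.empty_cache()     # return cached blocks to the driver
    print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())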
Thanks
| 0.67 |
t3_o7r1f0
| 1,624,638,148 |
pytorch
|
DINO - Emerging Properties in Self-Supervised Vision Transformers | Implementation
|
nan
| 1 |
t3_o7qvap
| 1,624,637,637 |
pytorch
|
Concatenating ResNet-50 predictions PyTorch
|
I am using a pre-trained ResNet-50 model where the last dense layer is removed and the output from the average pooling layer is flattened. This is done for feature extraction purposes. The images are read from a folder after being resized to (300, 300); they are RGB images.
torch version: 1.8.1 & torchvision version: 0.9.1 with Python 3.8.
The code is as follows:
model_resnet50 = torchvision.models.resnet50(pretrained = True)
# To remove last dense layer from pre-trained model, Use code-
model_resnet50_modified = torch.nn.Sequential(*list(model_resnet50.children())[:-1])
# Using 'AdaptiveAvgPool2d' layer, the predictions have shape-
model_resnet50_modified(images).shape
# torch.Size([32, 2048, 1, 1])
# Add a flatten layer after 'AdaptiveAvgPool2d(output_size=(1, 1))' layer at the end-
model_resnet50_modified.flatten = nn.Flatten()
# Sanity check- make predictions using a batch of images-
predictions = model_resnet50_modified(images)
predictions.shape
# torch.Size([32, 2048])
I want to now feed batches of images to this model and concatenate the predictions made by the model (32, 2048) vertically.
# number of images in training and validation sets-
len(dataset_train), len(dataset_val)
# (22500, 2500)
There are a total of 22500 + 2500 = 25000 images. So the final table/matrix should have the shape: (25000, 2048) -> number of images = 25000 and number of extracted features = 2048.
I tried running a toy code using np.vstack() as follows:
x = np.random.random_sample(size = (1, 3))
x.shape
# (1, 3)
x
# array([[0.52381798, 0.12345404, 0.1556422 ]])
for i in range(5):
    y = np.random.random_sample(size = (1, 3))
    x = np.vstack((x, y))  # np.vstack returns a new array; the result must be assigned

x.shape
# (6, 3)
Solution(s)?
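For the actual feature-extraction loop, a common pattern is to collect the per-batch outputs in a Python list and concatenate once at the end (a sketch; `loader` stands for a DataLoader over all 25000 images):

    feature_batches = []
    with torch.no_grad():
        for images, _ in loader:
            feature_batches.append(model_resnet50_modified(images))

    features = torch.cat(feature_batches, dim = 0)
    features.shape
    # torch.Size([25000, 2048])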
Thanks!
| 1 |
t3_o7i5g3
| 1,624,602,952 |
pytorch
|
Google Colab Cuda RuntimeError
|
Why do I keep getting this error? RuntimeError: CUDA error: device-side assert triggered. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
I first got this while training my model, and now it appears even when I just want to load another model's weights that I have saved. I tried to look it up, but all the answers point to mistakes in the loss function caused by ground-truth data in the wrong form, whereas I got this in the middle of the forward pass. How do I "pass CUDA_LAUNCH_BLOCKING=1" to fix this?
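For reference, the variable has to be set before CUDA is initialized; a minimal sketch (in Colab, restart the runtime and run this in the first cell):

    import os
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must happen before any CUDA work

    import torch  # import torch and run CUDA code only after setting it

From a terminal, the equivalent is prefixing the launch command with CUDA_LAUNCH_BLOCKING=1. With blocking launches enabled, the stack trace points at the kernel that actually raised the assert.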
| 1 |
t3_o7bm61
| 1,624,577,420 |
pytorch
|
Same results over and over again
|
I have a custom dataset of 100 images that I use to prepare some models for an upcoming challenge. When I run a model for 10 epochs, the validation accuracy always stays the same (e.g. 40%). If I increase the epochs to 1000, the validation accuracy also stays the same for all epochs (e.g. 35%). The train loader is heavily augmented, so I did not expect this. If I run the same code 10 minutes later, the validation accuracy no longer stays constant but varies. There are times when the training accuracy also stays the same. How is this possible?
| 1 |
t3_o78vx2
| 1,624,565,211 |
pytorch
|
Get file names and file path using PyTorch dataloader
|
I am using PyTorch 1.8 and Python 3.8 to read images from a folder using the following code:
print(f"PyTorch version: {torch.__version__}")
# PyTorch version: 1.8.1

# Device configuration-
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"currently available device: {device}")
# currently available device: cpu

# Define transformations for training and test sets-
transform_train = transforms.Compose(
    [
        # transforms.RandomCrop(32, padding = 4),
        # transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ]
)

transform_test = transforms.Compose(
    [
        transforms.ToTensor(),
        # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ]
)

# Define directory containing images-
data_dir = 'My_Datasets/Cat_Dog_data/'

# Define datasets (note: the transform names must match those defined above)-
train_data = datasets.ImageFolder(data_dir + '/train', transform = transform_train)
test_data = datasets.ImageFolder(data_dir + '/test', transform = transform_test)

print(f"number of train images = {len(train_data)} & number of validation images = {len(test_data)}")
# number of train images = 22500 & number of validation images = 2500
print(f"number of training classes = {len(train_data.classes)} & number of validation classes = {len(test_data.classes)}")
# number of training classes = 2 & number of validation classes = 2

# Define data loaders-
train_loader = torch.utils.data.DataLoader(train_data, batch_size = 32)
test_loader = torch.utils.data.DataLoader(test_data, batch_size = 32)

len(train_loader), len(test_loader)
# (704, 79)

# Sanity check-
len(train_data) / 32, len(test_data) / 32
You can iterate through the train data using 'train\_loader' as follows:
for img, lab in train_loader:
    print(img.shape, lab.shape)
However, I am interested in getting the file name along with the file path from which the file was read. How can I achieve this?
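One common approach is a small ImageFolder subclass that also returns the path stored in `self.samples`; a sketch:

    import torch
    from torchvision import datasets

    class ImageFolderWithPaths(datasets.ImageFolder):
        """Also return the file path of each image alongside (img, label)."""
        def __getitem__(self, index):
            img, label = super().__getitem__(index)
            path, _ = self.samples[index]  # ImageFolder keeps (path, class) pairs here
            return img, label, path

    train_data = ImageFolderWithPaths(data_dir + '/train', transform = transform_train)
    for img, lab, paths in torch.utils.data.DataLoader(train_data, batch_size = 32):
        print(paths[0])  # full path of the first file in the batch
        break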
Thanks!
| 0.75 |
t3_o6wxug
| 1,624,524,536 |
pytorch
|
Convolutional Autoencoder CIFAR10 PyTorch - RuntimeError
|
I am using PyTorch version: 1.9.0+cu102 with Convolutional Autoencoder for CIFAR-10 dataset as follows:
# Define transformations for training and test sets-
transform_train = transforms.Compose(
[
# transforms.RandomCrop(32, padding = 4),
# transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
)
transform_test = transforms.Compose(
[
transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
)
# Load dataset-
train_dataset = torchvision.datasets.CIFAR10(
root = './data', train = True,
download = True, transform = transform_train
)
test_dataset = torchvision.datasets.CIFAR10(
root = './data', train = False,
download = True, transform = transform_test
)
print(f"len(train_dataset) = {len(train_dataset)} & len(test_dataset) = {len(test_dataset)}")
# len(train_dataset) = 50000 & len(test_dataset) = 10000
batch_size = 64
# Create training and testing loaders-
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size = batch_size,
shuffle = True
)
test_loader = torch.utils.data.DataLoader(
test_dataset, batch_size = batch_size,
shuffle = False
)
print(f"len(train_loader) = {len(train_loader)} & len(test_loader) = {len(test_loader)}")
# len(train_loader) = 782 & len(test_loader) = 157
# Sanity check-
len(train_dataset) / batch_size, len(test_dataset) / batch_size
# (781.25, 156.25)
# Get some random training images-
images, labels = next(iter(train_loader))
print(f"images.shape: {images.shape} & labels.shape: {labels.shape}")
# images.shape: torch.Size([64, 3, 32, 32]) & labels.shape: torch.Size([64])
LEARNING_RATE = 0.001
num_epochs = 20
class Reshape(nn.Module):
    def __init__(self, *args):
        super().__init__()
        self.shape = args

    def forward(self, x):
        return x.view(self.shape)


class Trim(nn.Module):
    def __init__(self, *args):
        super().__init__()

    def forward(self, x):
        return x[:, :, :32, :32]
encoder = nn.Sequential(
nn.Conv2d(
in_channels = 3, out_channels = 32,
kernel_size = 3, padding = 1,
stride = 1, bias = True
),
nn.LeakyReLU(negative_slope = 0.01),
nn.Conv2d(
in_channels = 32, out_channels = 64,
kernel_size = 3, padding = 1,
stride = 2, bias = True
),
nn.LeakyReLU(negative_slope = 0.01),
nn.Conv2d(
in_channels = 64, out_channels = 64,
kernel_size = 3, padding = 1,
stride = 2, bias = True
),
nn.LeakyReLU(negative_slope = 0.01),
nn.Conv2d(
in_channels = 64, out_channels = 64,
kernel_size = 3, padding = 1,
stride = 1, bias = True
),
nn.LeakyReLU(negative_slope = 0.01),
nn.Flatten(),
nn.Linear(
in_features = 4096, out_features = 1500,
bias = True
),
nn.Linear(
in_features = 1500, out_features = 500,
bias = True
),
nn.Linear(
in_features = 500, out_features = 100,
bias = True
)
)
# Sanity check-
x = torch.rand(size = (32, 3, 32, 32))
print(f"x.shape = {x.shape}")
encoder_op = encoder(x)
print(f"encoder_op.shape = {encoder_op.shape}")
# x.shape = torch.Size([32, 3, 32, 32])
# encoder_op.shape = torch.Size([32, 100])
decoder = nn.Sequential(
nn.Linear(
in_features = 100, out_features = 500,
bias = True),
nn.Linear(
in_features = 500, out_features = 1500,
bias = True),
nn.Linear(
in_features = 1500, out_features = 4096,
bias = True),
Reshape(-1, 64, 8, 8),
nn.ConvTranspose2d(
in_channels = 64, out_channels = 64,
kernel_size = 3, stride = 1,
padding = 1, bias = True),
# output: torch.Size([32, 64, 8, 8])
nn.ConvTranspose2d(
in_channels = 64, out_channels = 64,
kernel_size = 3, stride = 2,
padding = 1, bias = True),
# output: torch.Size([32, 64, 15, 15])
nn.ConvTranspose2d(
in_channels = 64, out_channels = 32,
kernel_size = 3, stride = 2,
padding = 0, bias = True),
# torch.Size([32, 32, 31, 31])
nn.ConvTranspose2d(
in_channels = 32, out_channels = 3,
kernel_size = 3, stride = 1,
padding = 0, bias = True),
# output: torch.Size([32, 3, 33, 33])
Trim(),
# (3, 33, 33) -> (3, 32, 32)
nn.Sigmoid()
)
# Sanity check-
decoder(encoder_op).shape
# torch.Size([32, 3, 32, 32])
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
# Initialize an autoencoder instance-
model = AutoEncoder()
# Move model to (GPU) device-
model.to(device)
# Specify optimizer and loss function-
optimizer = torch.optim.Adam(model.parameters(), lr = 0.001)
loss_fn = F.mse_loss
num_epochs = 15
# Python3 lists to hold training metrics-
trainining_loss = []
validation_loss = []
def compute_epoch_loss_autoencoder(model, data_loader, loss_fn, device):
    model.eval()
    curr_loss, num_examples = 0., 0
    with torch.no_grad():
        for features, _ in data_loader:
            features = features.to(device)
            logits = model(features)
            loss = loss_fn(logits, features, reduction='sum')
            num_examples += features.size(0)
            curr_loss += loss
        curr_loss = curr_loss / num_examples
        return curr_loss
start_time = time.time()
for epoch in range(num_epochs):
    running_loss = 0.0
    model.train()
    for batch_idx, (features, _) in enumerate(train_loader):
        features = features.to(device)

        # forward and back prop-
        logits = model(features)  # make predictions using model
        loss = loss_fn(logits, features)
        optimizer.zero_grad()

        # Perform backprop-
        loss.backward()

        # Update model parameters-
        optimizer.step()

        # Compute model's performance-
        running_loss += loss.item() * features.size(0)

    # Compute loss using training dataset-
    epoch_loss = running_loss / len(train_dataset)
    trainining_loss.append(epoch_loss)

    # Compute loss using validation dataset-
    val_loss = compute_epoch_loss_autoencoder(model, test_loader, loss_fn, device)
    validation_loss.append(val_loss)

    print(f"Epoch = {epoch + 1}: Autoencoder train loss = {epoch_loss:.4f} & val loss = {val_loss:.4f}")
end_time = time.time()
# Get some validation images-
for img, label in test_loader:
break
img.shape, label.shape
# (torch.Size([64, 3, 32, 32]), torch.Size([64]))
img = img.to(device)  # note: Tensor.to() is not in-place; without this assignment the batch stays on the CPU while the model weights are on the GPU, which is exactly what the error below complains about
# Pass batch size = 64 images through encoder to get latent space representations-
model.encoder(img)
This line gives me the error:
RuntimeError                              Traceback (most recent call last)
<ipython-input-69-14d47c831d37> in <module>()
----> 1 model.encoder(img)

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    438                             _pair(0), self.dilation, self.groups)
    439         return F.conv2d(input, weight, bias, self.stride,
--> 440                         self.padding, self.dilation, self.groups)
    441
    442     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
What's going wrong?
Thanks!
| 1 |
t3_o6w3zg
| 1,624,520,498 |
pytorch
|
Can we used matrices and lists for custom Dataset?
|
I am trying to create a dataset where the inputs are matrices and then also a list containing tuples. The label to these matrices + lists is an integer value.
Is this possible?
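Yes; a minimal sketch of such a Dataset (all names are placeholders; if the tuple lists have varying lengths you would also need a custom collate_fn for batching):

    import torch
    from torch.utils.data import Dataset

    class MatrixListDataset(Dataset):
        """Pairs a matrix and a list of tuples with an integer label."""
        def __init__(self, matrices, tuple_lists, labels):
            self.matrices = matrices        # e.g. list of 2-D arrays
            self.tuple_lists = tuple_lists  # e.g. list of lists of numeric tuples
            self.labels = labels            # list of ints

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, idx):
            m = torch.as_tensor(self.matrices[idx], dtype=torch.float32)
            t = torch.as_tensor(self.tuple_lists[idx], dtype=torch.float32)
            y = torch.tensor(self.labels[idx])
            return (m, t), y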
| 1 |
t3_o63ozl
| 1,624,415,704 |
pytorch
|
Can pytorch handle string processing?
|
Is there any way for me to perform a regular expression search on a string using pytorch? I would very much enjoy any resources surrounding such.
| 1 |
t3_o61m2r
| 1,624,409,035 |
pytorch
|
2 transforms on 1 dataset
|
I have a custom dataset which I initialize as `dataset=CustomDataset(root_dir=..., transform=None)`. Then I split it into training and testing sets with `train_set, test_set = torch.utils.data.random_split(dataset, [num_training, num_testing])`. I want to apply 2 different transformations (from `torchvision.transforms`) to each of `train_set` and `test_set`. Is it possible? If yes, how can I do that?
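Yes; one common pattern is a thin wrapper that applies a transform to a Subset lazily, which works because the base dataset was created with transform=None. A sketch (train_tf/test_tf are assumed torchvision transforms):

    from torch.utils.data import Dataset

    class TransformedSubset(Dataset):
        """Wrap a Subset from random_split and apply a transform per item."""
        def __init__(self, subset, transform=None):
            self.subset = subset
            self.transform = transform

        def __len__(self):
            return len(self.subset)

        def __getitem__(self, idx):
            x, y = self.subset[idx]
            if self.transform is not None:
                x = self.transform(x)
            return x, y

    train_set = TransformedSubset(train_set, transform=train_tf)
    test_set = TransformedSubset(test_set, transform=test_tf)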
| 1 |
t3_o5lxet
| 1,624,364,781 |
pytorch
|
How to do Linear Algebra with Probability Distributions
|
I would like to use probability distributions instead of variables in Python. I have tried a few libraries (PyMC3, TensorFlow + TensorFlow Probability, edward2, NumPyro, torch with Pyro). So far, the operations I want (adding two distributions, multiplying by a scalar or by another distribution, applying sin/cos to a distribution rather than to a sample drawn from it) I could only achieve in PyMC3.
The following code returns errors at both sums and products.
import torch
loc = 200.
scale = 1
d = torch.distributions.Normal(loc, scale)
scalar = torch.tensor(5.)
sum1 = d + d
sum2 = torch.add(d, d)
prod1 = d * scalar
prod2 = torch.mul(d, scalar)
The same concept can be implemented in PyMC3 without any problems and the resulting probability distributions are scaled and/or shifted. Can this be implemented somehow in pytorch?
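Partially. Affine cases (scaling and shifting) can be expressed with TransformedDistribution, but torch.distributions has no generic mechanism for adding two distribution objects; a sketch (the closed-form sum shown is specific to independent normals):

    import torch
    from torch.distributions import Normal, TransformedDistribution
    from torch.distributions.transforms import AffineTransform

    d = Normal(200., 1.)

    # 5 * d as a genuine distribution object (sampling and log_prob both work)
    scaled = TransformedDistribution(d, [AffineTransform(loc=0., scale=5.)])
    print(scaled.sample((3,)))

    # d + d has no generic transform; for normals the sum is itself normal:
    # Normal(loc1 + loc2, sqrt(scale1**2 + scale2**2))
    s = Normal(200. + 200., (1.**2 + 1.**2) ** 0.5)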
| 1 |
t3_o4sg7d
| 1,624,272,836 |
pytorch
|
Pre-processing audio data on GPU
|
In my new tutorial, you can learn how to pre-process audio data directly on GPU using Pytorch and torchaudio.
This video is part of the “PyTorch for Audio and Music Processing” series, which aims to teach you how to use PyTorch and torchaudio for audio-based Deep Learning projects.
Video:
[https://www.youtube.com/watch?v=3wD\_eocmeXA&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=7](https://www.youtube.com/watch?v=3wD_eocmeXA&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=7)
| 1 |
t3_o4rjei
| 1,624,268,672 |
pytorch
|
Neural Style Transfer in PyTorch
|
nan
| 1 |
t3_o4mtnc
| 1,624,248,972 |
pytorch
|
Using hooks
|
I'm learning about hooks and wanted to practice them. I'm basically trying to create a hook that, whenever there's a gradient update to the weights, takes the update value and multiplies it by 1e-1. So if the update should be:
w1 -= lr * loss_value = 1e-5 * 50
I want it to go through the hook before the update and make it 1e-5 * 50 * 1e-1.
How can I go about this? The concept of hooks is a bit confusing to me
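For this particular goal, a tensor hook is enough; a minimal sketch (the tensor returned by the hook replaces the gradient, so any later optimizer step consumes the scaled value):

    import torch

    w = torch.randn(3, requires_grad=True)
    w.register_hook(lambda grad: grad * 1e-1)  # runs whenever w.grad is computed

    loss = (w * 50).sum()   # d(loss)/dw = 50 per element
    loss.backward()
    print(w.grad)           # tensor([5., 5., 5.]): 50 scaled by 1e-1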
| 1 |
t3_o4m21c
| 1,624,246,255 |
pytorch
|
Skip connection implementation
|
How do I implement a skip connection for this [code](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L124)?
class SkipEdge(Edge):
    def __init__(self):
        super().__init__()
        # a skip connection is the identity operation, so assuming the base
        # Edge class applies self.f in its forward pass, one option is:
        self.f = nn.Identity()
| 0.71 |
t3_o3wdro
| 1,624,159,145 |
pytorch
|
How to use a custom dataset in PyTorch?
|
I'm trying to create a dataset for training a robot in a 2D world. The World is going to have labels like "free", "unknown", "obstacle" to identify different components of the map. This will most likely be with matrices using integers for the labels e.g. free=1, unknown=2 etc.
I want to use this dataset to train a CNN to learn the value of each state in the map. The current plan is to break this world down into multiple binary matrices to separate the different labels. So, eventually the neural net will have input like feeding in RGB images (but the different matrix layers here will represent the different labels).
I was looking at the DataLoader class in Pytorch and it allows us to create custom datasets. However, in my dataset I don't have .jpg files, but separate matrix layers to represent the different labels in the map at each state. Does anyone know how I can create a dataset like the one I want in PyTorch?
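One option is to build the binary label channels on the fly inside the Dataset; a sketch, assuming the labels are the integers 1..3 as described (all names are placeholders):

    import torch
    import torch.nn.functional as F
    from torch.utils.data import Dataset

    class GridWorldDataset(Dataset):
        """Label matrices (free=1, unknown=2, obstacle=3) -> one binary
        channel per label, analogous to the channels of an RGB image."""
        def __init__(self, label_maps, values, num_labels=3):
            self.label_maps = label_maps  # list of (H, W) integer matrices
            self.values = values          # per-state value targets
            self.num_labels = num_labels

        def __len__(self):
            return len(self.label_maps)

        def __getitem__(self, idx):
            m = torch.as_tensor(self.label_maps[idx], dtype=torch.long)
            x = F.one_hot(m - 1, self.num_labels).permute(2, 0, 1).float()  # (3, H, W)
            y = torch.as_tensor(self.values[idx], dtype=torch.float32)
            return x, y

    maps = [torch.randint(1, 4, (8, 8)) for _ in range(4)]  # dummy worlds
    vals = [torch.rand(8, 8) for _ in range(4)]             # dummy per-state values
    ds = GridWorldDataset(maps, vals)
    x, y = ds[0]  # x: (3, 8, 8) binary channels, y: (8, 8) value targets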
| 1 |
t3_o346c2
| 1,624,062,443 |
pytorch
|
Derivative of neural network with respect to inputs?
|
I have a simple MLP that takes an input X (shape 1, 100) and outputs Y_predict (shape 1, 100). I would like to take the derivative of each output with respect to its corresponding input, e.g.
d(Y_predict_i) / d(x_i)
Any advice on doing this? Thanks!
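One option, sketched with torch.autograd.functional.jacobian and a stand-in model; take the diagonal if each output should only be differentiated with respect to its own input:

    import torch
    from torch.autograd.functional import jacobian

    model = torch.nn.Linear(100, 100)  # stand-in for the MLP
    x = torch.randn(1, 100)

    J = jacobian(model, x)             # shape (1, 100, 1, 100)
    full = J[0, :, 0, :]               # d y_j / d x_i as a (100, 100) matrix
    diag = torch.diagonal(full)        # d y_i / d x_i only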
| 1 |
t3_o2830z
| 1,623,964,539 |
pytorch
|
Creating random linearly independent vectors
|
I can generate a random vector with
a = torch.rand(10)
However, how can I generate N more vectors (e.g. N=5: b,c,d,e,f) that will be linearly independent of the other vectors (i.e a is independent of any of b,c,d,e,f, b is independent of any of a,c,d,e,f, etc.)?
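One simple approach is rejection sampling with a rank check; a sketch (torch.linalg.matrix_rank needs a recent PyTorch; older versions have torch.matrix_rank):

    import torch

    def random_independent_vectors(n, dim):
        # Gaussian vectors are linearly independent with probability 1;
        # the rank check guards against the measure-zero failure case
        while True:
            vs = torch.randn(n, dim)
            if torch.linalg.matrix_rank(vs) == n:
                return vs

    a, b, c, d, e, f = random_independent_vectors(6, 10)

Full rank of the stacked (6, 10) matrix guarantees that every vector is independent of all the others.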
| 1 |
t3_o27bl3
| 1,623,962,538 |
pytorch
|
Pre-processing audio data with different durations
|
In real-world audio datasets, not all files have the same duration / num. of samples. This can be a problem for the majority of Deep Learning models (e.g., CNN), which expect training data with a fixed shape.
In computer vision, there’s a simple workaround when there are images with different sizes: resizing. What about audio data? The solution is more complex.
First, you should decide the number of samples you want to consider for your experiments (e.g., 22050 samples)
Then, when loading waveforms you should ensure that they have as many samples as the expected ones. To ensure this, you can do two things:
1. cut the waveforms which have more samples than the expected ones;
2. zero pad the waveforms which have fewer samples than expected (see the sketch after this list).
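A minimal sketch of both cases, assuming (channels, samples) waveforms as returned by torchaudio.load:

    import torch
    import torch.nn.functional as F

    NUM_SAMPLES = 22050

    def fix_length(waveform):
        if waveform.shape[1] > NUM_SAMPLES:
            waveform = waveform[:, :NUM_SAMPLES]      # cut
        elif waveform.shape[1] < NUM_SAMPLES:
            missing = NUM_SAMPLES - waveform.shape[1]
            waveform = F.pad(waveform, (0, missing))  # right zero-pad
        return waveform

    print(fix_length(torch.randn(1, 30000)).shape)  # torch.Size([1, 22050])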
Does this feel too abstract?
No worries, in my new video I demonstrate how you can use cutting/padding with audio data in Pytorch and torchaudio.
This video is part of the “PyTorch for Audio and Music Processing” series, which aims to teach you how to use PyTorch and torchaudio for audio-based Deep Learning projects.
Video:
[https://www.youtube.com/watch?v=WyJvrzVNkOc&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=6](https://www.youtube.com/watch?v=WyJvrzVNkOc&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=6)
| 1 |
t3_o1ufte
| 1,623,927,361 |
pytorch
|
cuda not allocating any memory but caching
|
I have added these lines to see if the GPU is being used:
if device.type == 'cuda':
    print('Memory Usage:')
    t = torch.rand(20000, 20000).cuda()  # keep a reference: without it the tensor is
    # freed immediately, so memory_allocated() reports 0 while the cache stays populated
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 10), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 10), 'GB')
The result was:
Memory Usage:
Allocated: 0.0 GB
Cached: 1.490234375 GB
My supervisor pointed out to me that my code isn't using the GPU while running on our cluster, and I am not sure why.
| 1 |
t3_o1t5az
| 1,623,922,324 |
pytorch
|
Everything You Need To Know About Torchvision’s SSD Implementation
|
nan
| 0.87 |
t3_o1sxqg
| 1,623,921,472 |
pytorch
|
Installing PyTorch fails on MacOS with brand new conda env
|
nan
| 1 |
t3_o1hwgv
| 1,623,883,912 |
pytorch
|
Any way to train models on phone using Pytorch?
|
I'm currently doing a research in federated learning which requires training a lightweight model on a mobile device.
I read about Pytorch Mobile, but it apparently cannot be used to perform backprop on the phone itself (correct me if I'm wrong).
Is there any workaround to do this task?
| 0.84 |
t3_o0zmpa
| 1,623,830,241 |
pytorch
|
PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter
|
nan
| 1 |
t3_o0wmry
| 1,623,817,666 |
pytorch
|
Performing well in training but very poor performance in testing.
|
I’m training a semantic segmentation model based on the ResNet-50 backbone. I’ve trained it on 5000 images over 50 epochs and it predicts well on the training data; however, when I provide new data for evaluation, the model fails to classify. Any suggestions on how to correct this? A larger training dataset, or more data (as in depth of the data)?
Edit: fixed 500 to 5000
| 1 |
t3_nzxfd8
| 1,623,705,818 |
pytorch
|
Performance issues with torch.norm
|
Has anyone else experienced performance issues using `torch.norm`, and how did you work around this?
| 0.33 |
t3_nzke5l
| 1,623,669,083 |
pytorch
|
Extract mel spectrograms with Pytorch + torchaudio
|
I published a new tutorial where you can learn how to extract Mel spectrograms and resample audio with torchaudio. I also review the most common torchaudio transforms and explain how you can use them.
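For reference, the core transform usage looks like this; a minimal sketch with a stand-in waveform (the hyperparameters are common choices, not prescriptive):

    import torch
    import torchaudio

    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=22050, n_fft=1024, hop_length=512, n_mels=64)

    waveform = torch.randn(1, 22050)  # stands in for torchaudio.load("clip.wav")
    spec = mel(waveform)              # (1, 64, num_frames)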
This video is part of the “PyTorch for Audio and Music Processing” series, which aims to teach you how to use PyTorch and torchaudio for audio-based Deep Learning projects.
Here's the video:
[https://www.youtube.com/watch?v=lhF\_RVa7DLE&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=5](https://www.youtube.com/watch?v=lhF_RVa7DLE&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=5)
| 1 |
t3_nzjzse
| 1,623,667,519 |