Dataset columns:

| column       | dtype   | range / values |
|--------------|---------|----------------|
| sub          | string  | 4 classes      |
| title        | string  | length 3-304   |
| selftext     | string  | length 3-30k   |
| upvote_ratio | float64 | 0.07-1         |
| id           | string  | length 9       |
| created_utc  | float64 | 1.6B-1.65B     |
pytorch
I can't debug my code
Hi everyone. I'm trying to debug my code and I can't manage to find what's wrong. I'm running a RoBERTa model with a classifier head on a PyTorch Lightning trainer, and it just gets stuck: after running the optimizer config, it loads the train data, calls `__len__` (from the dataset) a couple of times, loads the val data, calls `__len__` again a few times, and then freezes completely. What could be going on? Also, it's supposedly 100% running on the GPU, but Python is not using any GPU in Task Manager, so I don't think it's actually doing anything. Does anyone have any ideas what variables I can check in the debugger to see what's going on?
0.43
t3_thjopg
1,647,650,927
pytorch
Need some help with our college 🤞 project, it's 80% complete
This is an NLP- and Python-based project; we are trying to achieve something new. The project is almost there, but the file-connecting piece is still left over 😌. Please DM me for more information/collaboration 😊. We are open to welcoming you to this project 🤗 (**not paid work** 🙄).
0.33
t3_thcomg
1,647,630,708
pytorch
Is it possible to modify a weights file (.pt)?
I have pretrained weights for a model I coded. The network is the same; I just need to change the locations of the layer classes.
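A minimal sketch of one common approach, assuming the only change is the module paths (the `old_block`/`new_block` prefixes are placeholders):

```
import torch

# load the checkpoint, rename parameter keys to the new module paths, resave
state = torch.load("weights.pt", map_location="cpu")
renamed = {k.replace("old_block.", "new_block."): v for k, v in state.items()}
torch.save(renamed, "weights_renamed.pt")

# the renamed checkpoint should then match the reorganized model:
# model.load_state_dict(renamed)
```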
0.88
t3_tglv2a
1,647,555,341
pytorch
Cool PyTorch Guide/Wiki
PyTorch Guide/Wiki: [https://github.com/mikeroyal/PyTorch-Guide](https://github.com/mikeroyal/PyTorch-Guide)
0.83
t3_tfdaa8
1,647,421,375
pytorch
RuntimeError: The size of tensor a (32) must match the size of tensor b (16) at non-singleton dimension 3
Why, if the `NUM_OF_CELLS` variable is [increased from 8 to 16](https://github.com/buttercutter/gdas/blob/30c3694251188ad7c1d4de85efa44f90f8936f70/gdas.py#L44), does the following error pop up?

```
/home/phung/PycharmProjects/venv/py39/bin/python /home/phung/PycharmProjects/beginner_tutorial/gdas.py
Files already downloaded and verified
Files already downloaded and verified
run_num = 0
Entering train_NN(), forward_pass_only = 0
modules = <generator object Module.named_children at 0x7fa359f57e40>
c = 0 , n = 0 , cc = 0 , e = 0
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 841, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 605, in train_NN
    NN_output = graph.forward(NN_input)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 353, in forward
    self.cells[c].nodes[n].connections[
RuntimeError: The size of tensor a (32) must match the size of tensor b (16) at non-singleton dimension 3

Process finished with exit code 1
```
0.2
t3_tf8ivf
1,647,402,354
pytorch
Prevent Flask from re-loading models during prediction for different images
Hi, I am using Flask to display machine learning results. In particular, I am using YOLO object detection, and I would like to show predictions in index.html. This works, but the problem is that it reloads the model each time I select another image as input. Avoiding the reload is important, as it slows down the app dramatically. A similar question was asked before at https://stackoverflow.com/questions/61049310/how-to-avoid-reloading-ml-model-every-time-when-i-call-python-script but the answers do not work for my case. Here is the code for app.py:

```
from flask import Flask, render_template, request, redirect, url_for, make_response
import os
import io
from PIL import Image
import cv2
import torch
from werkzeug.exceptions import BadRequest
import pandas as pd
import time
import argparse

model = None

def load_model():
    global model
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='/my-directory/best.pt', force_reload=True)

app = Flask(__name__)

@app.route('/', methods=["POST"])
def upload_file():
    uploaded_file = request.files['file']
    if uploaded_file.filename != '':
        uploaded_file.save(os.path.join('static', uploaded_file.filename))
        img = Image.open(os.path.join('static', uploaded_file.filename))
        results = model(img, size=640)
        df = results.pandas().xyxy[0]
        res = round(df.iloc[0, 0], 2)
    return render_template('index.html', user_image=os.path.join('static', uploaded_file.filename), dtext=res)

if __name__ == "__main__":
    print('starting...')
    load_model()
    app.run(debug=True)
```

I have also tried to put the model loading directly in the main block as below, but it gives the error "model not defined":

```
if __name__ == "__main__":
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='/my-directory/best.pt', force_reload=True)
    app.run(debug=True)
```

Any suggestions or help would be welcome!
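One hedged pattern that is often suggested for this kind of problem (a sketch, not a confirmed fix for this exact app): load the model exactly once at import time, avoid `force_reload=True` (which forces a fresh download of the hub repo on every load), and disable Flask's debug reloader so the module is not re-imported:

```
import torch
from flask import Flask

# loaded once when the module is imported, then reused by every request
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='/my-directory/best.pt', force_reload=False)
app = Flask(__name__)

# ... routes using `model` go here ...

if __name__ == "__main__":
    # use_reloader=False keeps debug mode but stops Flask from re-importing
    # the module (and re-loading the model) when files change
    app.run(debug=True, use_reloader=False)
```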
1
t3_te92ba
1,647,295,493
pytorch
Pytorch: Weighting in BCEWithLogitsLoss, but with 'weight' instead of 'pos_weight'
I'm looking at how to do class weighting using `BCEWithLogitsLoss` (https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html). The example of how to use `pos_weight` seems clear to me: if there are 3x more negative samples than positive samples, you can set `pos_weight=3`. Does the `weight` parameter do the same thing? Say I set `weight=torch.tensor([1, 3])`. Is that the same thing as `pos_weight=3`? Also, is `weight` normalized? Is `weight=torch.tensor([1, 3])` the same as `weight=torch.tensor([3, 9])`, or do they differ in how they affect the magnitude of the loss?
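For what it's worth, the linked docs describe `weight` as a rescaling applied to each *batch element* rather than to each class, so `weight=torch.tensor([1, 3])` would not reproduce `pos_weight=3`. A minimal sketch of the `pos_weight` form, with made-up shapes:

```
import torch
import torch.nn as nn

# pos_weight rescales the positive-class term of the loss; with 3x more
# negative than positive samples, pos_weight=3 is the usual choice
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))

logits = torch.randn(8, 1)                     # raw model outputs
targets = torch.randint(0, 2, (8, 1)).float()  # 0/1 labels
loss = criterion(logits, targets)
```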
1
t3_te3g7p
1,647,280,492
pytorch
Where to find paid tutor for pytorch / CNN homework project
I'm looking for someone who can help me (paid) with a homework assignment about convolutional neural networks and image segmentation. It must be done in PyTorch. I need help with a basic implementation, but also with writing a short report and explaining choices, so I'm looking for someone with a theoretical background (ML / data science). Does anybody know a good place to look for help?
0.71
t3_tdey0r
1,647,201,262
pytorch
Article series: Deployment of Deep Learning models on Genesis Cloud - Tutorials & Benchmarks
We are proud to introduce our new article series that will guide you on how to run state-of-the-art deep learning models on Genesis Cloud infrastructure. These articles will initially be published as blog posts and will be added to our [knowledge base](https://support.genesiscloud.com/a/solutions) after their release.

Please note: the order of the articles is important, as they are written as a series and information contained in the initial articles might be required for understanding the subsequent articles.

In this series of articles we will use the 1x RTX 3080 instance type on Genesis Cloud (our recommended GPU for inference use) and showcase four (4) different deployment strategies for deep learning inference using (a) PyTorch (TorchScript), (b) TensorRT, and (c) Triton. For the models, we will focus on computer vision applications using the torchvision model collection. This collection will serve as an example and includes various pretrained versions of classic deep learning algorithms such as AlexNet, DenseNet, MobileNet, ResNet, ShuffleNet, and SqueezeNet.

## Articles

* Article 1: PyTorch, torchvision, and simple inference examples - available next week
* Article 2: Deployment techniques for PyTorch models using TorchScript - upcoming (early April 2022)
* Article 3: Deployment techniques for PyTorch models using TensorRT - upcoming (end of April 2022)
* Article 4: Using Triton for production deployment of TensorRT models - upcoming (May 2022)

## Why run deep learning inference on a GPU?

In the early days of machine learning, GPUs were mainly used for training deep learning models, while inference could still be done on a CPU. While the field of machine learning has progressed immensely in the past 10 years, the models have grown in both size and complexity, meaning that today the standard infrastructure setup for latency-sensitive deep learning applications is based on GPU cloud instances instead of CPU-only instances.

The rationale for using a GPU is not just performance but also cost. Compared to CPUs, GPUs are often two orders of magnitude more efficient at processing deep neural networks. This means that cost savings can be achieved by switching to a GPU instance, especially when operating high-throughput applications.

## How to run deep learning inference on a Genesis Cloud GPU instance?

All you need are a Genesis Cloud GPU instance, a trained deep learning model, data to be processed, and the supporting software. We will show you how to master it all. Each article will contain:

* Installation and/or build instructions for various components
* Necessary background information
* Sample code for validation of the installation and further experiments
* Annotations and explanations to help you understand the sample code
* Prebuilt models ready for deployment (when applicable)
* Benchmarking scripts and results (when applicable)

In case you aren't using Genesis Cloud yet, [get started here](https://id.genesiscloud.com/signup/).

**Now start accelerating on machine learning with Genesis Cloud** 🚀
1
t3_tbuunl
1,647,018,809
pytorch
PyTorch 1.11, TorchData, and functorch are now available
nan
1
t3_tb7cu0
1,646,942,380
pytorch
Improving topology and performance of binary classifer
I'm trying to develop a binary classifier for a university project, but I'm getting really terrible results (i.e., no better than chance/50%, on training or test data). Basically, I'm trying to build a classifier to distinguish between animated and photographic styles in film. Does anyone have any recommendations on what I could do better? I think my network topology is far from optimal, though I don't know what I can improve or how, nor even where to look for guidance on designing a network topology (anyone have any resources? Is it just trial and error?).

Bear with me, as I based this classifier *heavily* on the tutorial from PyTorch's docs. I'm pretty new to this, so I'm trying to figure it out.

Data: I used ffmpeg to scale frames from a bunch of movies and shows (either filmed or animated) to 128x128 (3 color layers/RGB). I have approximately 200k frames each of animated and photographed images, though at any one time I'm only loading about 80k frames into memory (60k for training: 30k photographic and 30k animated; and 20k for testing: 10k photographic and 10k animated). Input to the network ends up being of shape [3, 64, 64] after cropping.

    images_train, labels_train, images_test, labels_test = getFrames()
    # each of these is an [n_frames, 3, 128, 128] torch.Tensor
    # getFrames also balances the dataset, so n_class0 = n_class1

    transform_forward = torchvision.transforms.Compose([
        torchvision.transforms.RandomCrop(64),
        torchvision.transforms.Normalize([0.5] * 3, [0.5] * 3),
    ])
    transform_backward = torchvision.transforms.Compose([
        torchvision.transforms.Normalize([-1.] * 3, [2.] * 3),
        torchvision.transforms.ToPILImage(),
    ])

    class Frames(torch.utils.data.Dataset):
        def __init__(self, train, train_vs_test_files=((0, 1), (6, 7)),
                     transform=transform_forward, target_transform=None):
            self.transform = transform
            self.target_transform = target_transform
            # shuffle dataset (probably not strictly necessary if using
            # DataLoader(shuffle=True), but gives me peace of mind)
            if train is True:
                shuffled = np.random.permutation(len(images_train))
                self.images = images_train[shuffled].float()
                self.labels = labels_train[shuffled].float()
            else:
                shuffled = np.random.permutation(len(images_test))
                self.images = images_test[shuffled].float()
                self.labels = labels_test[shuffled].float()

        def __len__(self):
            return len(self.images)

        def __getitem__(self, idx):
            image = self.images[idx]
            label = self.labels[idx]
            if self.transform:
                image = self.transform(image)
            if self.target_transform:
                label = self.target_transform(label)
            return image, label

    dataset_train = Frames(train=True)
    loader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=True)

I'm fairly confident I'm loading my data properly, as if I review the images with something like

    plt.imshow(transform_backward(dataset_train[np.random.randint(len(dataset_train))]))

I get what I expect. One concern I have about my data is that many frames are very similar (i.e., from the same shot in a film/show, with only a slightly different scene/camera position/position of objects in frame). I told ffmpeg to export only 1 of every *n* frames, but that only somewhat remedies the problem.

Model and training: Input to the network ends up being of shape [3, 64, 64] after cropping.

    class Net(torch.nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.kernel = 5
            self.conv1 = torch.nn.Conv2d(3, 6, self.kernel)
            self.conv2 = torch.nn.Conv2d(6, 16, self.kernel)
            self.fc1 = torch.nn.Linear(2704, 768)
            self.fc2 = torch.nn.Linear(768, 128)
            self.fc3 = torch.nn.Linear(128, 1)
            self.activation = torch.nn.ReLU()  # easier to change activations if I want
            self.sigmoid = torch.nn.Sigmoid()
            self.pool = torch.nn.MaxPool2d(2)

        def forward(self, x):
            x = self.pool(self.activation(self.conv1(x)))
            x = self.pool(self.activation(self.conv2(x)))
            x = torch.flatten(x, 1)
            x = self.activation(self.fc1(x))
            x = self.activation(self.fc2(x))
            x = self.activation(self.fc3(x))
            x = torch.squeeze(x)
            return self.sigmoid(x)

    net = Net().cuda()
    criterion = torch.nn.BCELoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    print('Training...')
    for epoch in range(20):  # loop over the dataset multiple times
        running_loss = 0.
        for i, data in enumerate(loader_train):
            optimizer.zero_grad()
            inputs, labels = data[0].cuda(), data[1].cuda()
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f'Epoch {epoch + 1:02d} loss: {running_loss / 2000:.3f}')
    print('Finished training.')
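One hedged observation about the code above: the final `self.activation(self.fc3(x))` applies ReLU to the logit before the sigmoid, which clamps negative logits to zero, so every output lands in [0.5, 1) and the network can never confidently predict the other class; chance-level accuracy is consistent with that. A sketch of the forward pass without it:

```
def forward(self, x):
    x = self.pool(self.activation(self.conv1(x)))
    x = self.pool(self.activation(self.conv2(x)))
    x = torch.flatten(x, 1)
    x = self.activation(self.fc1(x))
    x = self.activation(self.fc2(x))
    x = self.fc3(x)            # no ReLU here: the logit must be allowed to go negative
    x = torch.squeeze(x)
    return self.sigmoid(x)
```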
0.81
t3_taicem
1,646,861,996
pytorch
Resource to learn advanced concepts
I have gone through the basics of building and training neural networks. I want to implement an algorithm from a paper that requires me to build layers with new functionality. For instance, I need to keep a copy of the weights in real-valued form but output a binarized form. For some reason, I can't seem to find any good reference on how such things are done in PyTorch. I searched the tutorials page and came across one where they implement a `forward` and a `backward` method, but the tutorial doesn't even explain why these methods are static, or some of the variable names (like `ctx`). Basically, I'm looking for a resource that explains how a layer with custom functionality is built from scratch. Is there such a thing?
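For what it's worth, the tutorial described sounds like `torch.autograd.Function`. A minimal sketch of a binarizing layer in that style, keeping real-valued weights but outputting their sign (the backward rule here is a straight-through estimator, which is one common choice, not the only one):

```
import torch

class BinarizeSTE(torch.autograd.Function):
    # the methods are static because autograd calls them on the class, not an
    # instance; `ctx` is a context object used to stash tensors for backward
    @staticmethod
    def forward(ctx, weight):
        ctx.save_for_backward(weight)
        return weight.sign()

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # straight-through estimator: pass the gradient where |w| <= 1
        return grad_output * (weight.abs() <= 1).float()

# the real-valued copy lives in the parameter; only the forward output is
# binarized, so the optimizer keeps updating the real weights
w = torch.randn(5, requires_grad=True)
out = BinarizeSTE.apply(w)
out.sum().backward()
```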
0.67
t3_ta48bz
1,646,818,579
pytorch
Are there any simple pytorch neural networks to combine pictures?
Hi, given two pictures with one common object in both, how can I get a neural network to combine the object? Something like YOLO, which can recognize it, but outputting the combined picture instead of a scalar probability. I tried DCGAN and got very bad results. I also saw https://github.com/jcjohnson/neural-style, but that is not the goal, and it's also way too complicated. Thanks a ton.
0.67
t3_t9jaft
1,646,753,410
pytorch
Video of a tree created using an AI made with pytorch
nan
0.96
t3_t6usk1
1,646,433,158
pytorch
Case Study: Amazon Ads Uses PyTorch and AWS Inferentia to Scale Models for Ads Processing
nan
0.72
t3_t5jgkx
1,646,282,826
pytorch
Dense layers in Tensorflow: what's the PyTorch equivalent?
What is the equivalent of this TF line in PyTorch?

    self.q = tf.keras.layers.Dense(conv_filters)
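A hedged sketch of the usual mapping: `nn.Linear` is the counterpart of `Dense`, but it needs the input width spelled out (`in_features` below is a made-up placeholder), while `nn.LazyLinear` infers it on the first forward pass, which is closer to Keras' behavior:

```
import torch.nn as nn

conv_filters = 64   # placeholder matching the TF snippet
in_features = 128   # assumption: Linear requires the incoming width explicitly
q = nn.Linear(in_features, conv_filters)

# LazyLinear infers in_features from the first input it sees
q_lazy = nn.LazyLinear(conv_filters)
```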
0.86
t3_t58p6q
1,646,251,095
pytorch
How to load checkpoint from batch without iterating over dataset again.
I am training a model on the fairly large CelebA dataset, and training time on Google Colab is long: the connection drops before even one epoch finishes. Is there a way to save a checkpoint at the batch where training stopped, and load it to continue training in the next session? Thanks.
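A minimal sketch of one way to do this, with stand-in model, optimizer, and loader: checkpoint inside the epoch, then skip the already-seen batches on resume:

```
import itertools
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                   # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
train_loader = [(torch.randn(4, 10), torch.zeros(4).long())] * 100  # stand-in

# during training: checkpoint every N batches, not just per epoch
for batch_idx, (x, y) in enumerate(train_loader):
    # ... forward / backward / optimizer.step() ...
    if batch_idx % 50 == 0:
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "batch_idx": batch_idx}, "checkpoint.pt")

# after a disconnect: restore state, then skip already-seen batches
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
for batch_idx, (x, y) in itertools.islice(enumerate(train_loader),
                                          ckpt["batch_idx"] + 1, None):
    pass  # continue training here
# caveat: with shuffle=True the skipped batches differ across runs unless the
# sampler's RNG state is saved and restored too
```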
0.67
t3_t4wu1s
1,646,215,017
pytorch
Custom dataset's __getitem__ calls itself indefinitely when handling exception
Hey, I'm writing a script for my custom dataset class, but I get an `Index out of range` error whenever I access data using a for loop like so:

```
cd = CustomDataset(df)
for img, target in cd:
    pass
```

I realized I might have a problem reading a few images (if they are corrupt), so I implemented a `random_on_error` feature, which chooses a random image if something is wrong with the current one. And I'm sure that's where the problem is: all 2160 images in the dataset are read without any hiccups (I print the index number on every iteration), but the loop does not stop; it reads the 2161st image, which results in an `Index out of range` exception that gets handled by reading a random image. This continues forever. Here is my class: https://pastebin.com/LkNPGrFb

I believe the problem is with the `except` block (line 27), as when I remove it the code works fine. But I cannot see what the problem is. Any help is appreciated, thanks.

EDIT: Found the mistake. I forgot to check for IndexError and raise it; without that, all exceptions are handled by the generic `except` block.
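A method sketch of the fix described in the edit (`_load` and `random_on_error` are assumed names, since the pastebin isn't reproduced here): re-raise IndexError so the for loop's stop condition works, and only fall back to a random index for genuine read errors:

```
import random

def __getitem__(self, index):
    if index >= len(self):
        raise IndexError(index)    # lets `for img, target in cd` terminate
    try:
        return self._load(index)   # assumed helper doing the actual read
    except IndexError:
        raise                      # never swallow the stop condition
    except Exception:
        if self.random_on_error:
            return self._load(random.randrange(len(self)))
        raise
```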
1
t3_t4vgto
1,646,209,119
pytorch
Beginner practice problem implementing linear regression from scratch using tensors
I'm learning PyTorch. I like to learn by challenging myself to solve toy problems, so I made up [this problem and solution](https://www.practiceprobs.com/problemsets/pytorch/tensors/screen-time/). Perhaps it will help others. (More to come.)
1
t3_t4irct
1,646,169,822
pytorch
What are the best ways to make the generator converge fast in a DCGAN?
Hi, the loss function and optimizers are defined as follows:

    lr = 0.0002
    beta1 = 0.5
    loss_fun = nn.BCELoss()
    optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999), weight_decay=0.0)
    optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999), weight_decay=0.0)

The discriminator loss at first went to around 1, then slightly decreased to around 0.8, which is quite far from 0.5; that could suggest the discriminator is working, so the problem might lie in the generator? I tried:

1. using a bigger learning rate for the generator
2. using a bigger weight_decay for the generator

Both only meant that fewer epochs produced visible noise. How do I use the loss values properly to guide the generator? Or do I need more data? Thanks a ton.
1
t3_t4fngi
1,646,161,802
pytorch
How many epochs will lead to overfitting?
Hi, for instance, in https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html there are around 220,00 data samples, D has around 2 million parameters, and G has around 3 million, so the number of network parameters seems ten times larger than the data. There were 5 epochs, and the results are very good. How can you tell whether this overfits or not, based on these steps and number ratios? Thanks a ton.
0.33
t3_t4c442
1,646,152,790
pytorch
How to take two input images, extract features, and combine them into a latent space?
Hi, specifically with one object split across two images: part of it in one image and part in the other, possibly with some overlap between the images. I want to combine them into a latent space so that a generator can later produce a single combined image. I tried concatenating at the row level (one image on the top, the other on the bottom) with conv2d(), batchnorm(), and leakyrelu(), but somehow the result contained two separate objects, not a combined one. I found something similar, https://github.com/PramuPerera/In2I, but the code is barely readable and the paper does not disclose the process in much detail. Thanks a ton.
0.33
t3_t4c0td
1,646,152,553
pytorch
index out of range when training with SubsetRandomSampler
Hey, I'm learning to use `SubsetRandomSampler`, since it shuffles and returns every image in the dataset (unlike the normal `RandomSampler`). But my training method breaks immediately after switching to `SubsetRandomSampler` and gives an index-out-of-range error; everything works fine if I use `RandomSampler`. Here is the code: https://pastebin.com/YKgdR5JR

It gives me an `index 2323 is out of bounds for axis 0 with size 2160` error. I've checked the code multiple times for typos, but I can't figure out what's going wrong. Any help is appreciated, thanks.
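For reference, a hedged sketch: `SubsetRandomSampler` yields exactly the indices it is given, so every index must be in range for the dataset handed to the `DataLoader`; an `index 2323 ... size 2160` error usually means the index list was built from a larger dataset:

```
import torch
from torch.utils.data import DataLoader, SubsetRandomSampler, TensorDataset

# stand-in dataset matching the size in the error message
dataset = TensorDataset(torch.randn(2160, 3, 64, 64), torch.zeros(2160))

indices = torch.randperm(len(dataset)).tolist()   # shuffled, all in range
loader = DataLoader(dataset, batch_size=32,
                    sampler=SubsetRandomSampler(indices))
```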
1
t3_t3jtax
1,646,066,605
pytorch
Having used TensorFlow, I wanted to try out PyTorch, but I've now spent the last couple of hours trying to pip install torch without success - what am I doing wrong?
EDIT: **Solved** by using an earlier version of Python:

    pyenv local 3.9.9
    python -m pip install torch

This worked, with a somewhat unclear error about unchecked dependencies, numpy and tensorflow. As I'm also going to use TensorFlow, I just installed it, and it seems to have taken care of the other potential dependencies.

**Original question**

Installing TensorFlow just works, but PyTorch seems to be quite picky. Long story short: I first created a local Python version with pyenv:

    pyenv local 3.10.2

Then:

    python -m venv .venv
    source .venv/bin/activate

Then various different approaches following trusty Google (of course none worked, otherwise I wouldn't be here):

1)

    python -m pip install torch

    ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
    ERROR: No matching distribution found for torch

2) Went to https://pytorch.org/get-started/locally/ and generated a command:

    python -m pip install torch==1.10.2+cpu torchvision==0.11.3+cpu torchaudio==0.10.2+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html

    note: This is an issue with the page at the URL mentioned above.
    hint: You might need to reach out to the owner of that package index, to get this fixed. See https://github.com/pypa/pip/issues/10825 for context.
    ERROR: Could not find a version that satisfies the requirement torch==1.10.2+cpu (from versions: none)
    ERROR: No matching distribution found for torch==1.10.2+cpu

3)

    python -m pip install --use-deprecated=html5lib torch==1.10.2+cpu torchvision==0.11.3+cpu torchaudio==0.10.2+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html

    Looking in links: https://download.pytorch.org/whl/cpu/torch_stable.html
    ERROR: Could not find a version that satisfies the requirement torch==1.10.2+cpu (from versions: none)
    ERROR: No matching distribution found for torch==1.10.2+cpu
0.89
t3_t3hu1z
1,646,061,221
pytorch
What is PyTorch Mobile?
nan
0.25
t3_t3db04
1,646,047,050
pytorch
Pytorch C++
Should I use Linux for PyTorch C++?
1
t3_t2t2d7
1,645,982,710
pytorch
Constant validation metrics during training
I am training a neural network in PyTorch Lightning. The learning curves for the training metrics change over time, but the validation metrics are always constant. Originally I thought it might be overfitting, but I changed the learning rate and they are still constant. Any advice for fixing this issue would be appreciated.
1
t3_t2qq5w
1,645,976,194
pytorch
Get image quality score form single class images
Hello community, I come to you with a query: I am trying to find a model to calculate or predict the quality of an image of a single class. That is, from a set of images, all with the object of interest already located, I want to know how much quality each one has, so that I can apply it to fingerprints and guarantee good image quality before the image is captured. I studied a little how the Onyx camera works; they use a "quality net" model in TensorFlow, but I don't know exactly what it is, because I have not had access to the file to pass it through Netron. Does anyone know of a network that gives a quality value for an image (in this case fingerprints) that I can train? I already have my dataset with two classes: good-quality images and poor-quality images.
0.5
t3_t1q0mb
1,645,856,123
pytorch
Link prediction using PyTorch-geometric? Lessons learned, etc?
I’m working to get some link prediction models developed and have been starting to work with PyTorch Geometric. It looks like it’s pretty capable and has the ability to scale reasonably well. I have a heterogeneous graph and am expecting the total nodes to be a few million, with several edge types. The current approach I’m developing is to use Spark for creating training datasets, because Spark is already set up in my environment and does similar tasks already. So with the automation to generate training sets already in place, the next step is to begin iterating on features/embeddings/model architecture. I’m really interested in others’ experience with heterogeneous graphs and lessons on scaling/integration/automation/etc. PyTorch Geometric looks pretty promising from reading the docs, and initial exploration with some exemplar data (on a very small scale) seems to indicate good potential. I’m working in AWS with a lot of Lambda automation and use of Step Functions with EMR for executing the data pipelines. What do you all have to share on this? Thank you.
1
t3_t1jbv4
1,645,836,248
pytorch
Can the kernel of conv2d stride with multiple channels?
Hi, suppose the input is 8x66x66. Can the kernel extract features from the 1st and 5th channels together, then on the next iteration from the 2nd and 6th channels, and so on? Or is there some similar way to achieve this? Thanks a ton.
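A hedged sketch of one way to get this effect by regrouping channels explicitly before a shared 2-channel convolution (the pairing rule, channel i with channel i+4, is an assumption based on the example):

```
import torch
import torch.nn as nn

x = torch.randn(1, 8, 66, 66)

# gather the (i, i+4) channel pairs, then run one conv over every pair so the
# same filters are shared across all pairings
pairs = torch.stack([torch.cat([x[:, i:i + 1], x[:, i + 4:i + 5]], dim=1)
                     for i in range(4)], dim=1)   # [1, 4, 2, 66, 66]
pairs = pairs.flatten(0, 1)                       # [4, 2, 66, 66]

conv = nn.Conv2d(2, 16, kernel_size=3, padding=1)
out = conv(pairs)                                 # [4, 16, 66, 66]
```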
0.67
t3_t1it5y
1,645,834,733
pytorch
Can I use pytorch to extend a 3D array in a direction of my choice?
This 3D array (https://imgur.com/a/BCuCJNM) is a cross-section of a forest. I would like to predict additional forest slices moving in the x or y direction. I found this tutorial online, but this tutorial is for a 1D data set (https://www.codespeedy.com/predict-next-sequence-using-deep-learning-in-python/).
0.67
t3_t1dnoh
1,645,820,926
pytorch
How do I improve my unit tests?
Hey, I'm pretty new to PyTorch. I like it very much but find it quite verbose, so I'm writing a framework for personal use that I plan on putting on my CV, complete with unit tests, static typing, etc. But it's my first time writing tests, so I'm quite clueless about which aspects I'm supposed to test and what I should improve. Here is the link to the GitHub repo: https://github.com/default-303/easyTorch

It's pretty small for now, with only a dataset and a trainer class for classification, but I do intend to expand it with localization and segmentation eventually. Until then, I'd like input on how I can improve my tests and make them better. Any help is appreciated, thanks.
0.86
t3_t0y8s4
1,645,774,804
pytorch
Do you use cloud GPU platforms?
Hey everyone! I'm working with a group of people who are developing a new platform for cloud GPU rental, and we're still in the conception stage at the moment. I understand that not all of you use GPU clouds, but for those of you who do: are there any features you think current platforms are missing? Do you think there's much room for improvement in the platforms you've used so far? What's your favourite platform? It would be great to get some insight from people who know what they're talking about :) TIA! [View Poll](https://www.reddit.com/poll/szfb49)
1
t3_szfb49
1,645,615,896
pytorch
In DCGANs, before the last layer activation for both G and D, is there a need to place a BatchNorm2d()?
Hi, in the official example (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html), neither G nor D has a `BatchNorm2d()` before the last activation. Is the following warning associated with the (missing) usage of `BatchNorm2d()`?

    Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).

It appears after getting the result from G manually:

    results = netG(...)
    plt.imshow(results.detach().cpu().numpy())  # some transpose operations omitted here

Thanks a ton.
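On the warning itself: it comes from matplotlib, not from a missing `BatchNorm2d()`; a Tanh generator outputs values in [-1, 1], which `imshow` clips, so rescaling before plotting is the usual fix. A small sketch, with a stand-in for the generator output:

```
import torch
import matplotlib.pyplot as plt

fake = torch.tanh(torch.randn(3, 64, 64))   # stand-in for netG(noise)[0]
img = fake.permute(1, 2, 0).cpu().numpy()   # CHW -> HWC for imshow
plt.imshow((img + 1) / 2)                   # map [-1, 1] -> [0, 1]
plt.show()
```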
1
t3_sywgqj
1,645,559,192
pytorch
hookandlook - a library helping to gather stats and run checks for Pytorch models
nan
1
t3_syq1o6
1,645,543,110
pytorch
Typing and testing for torch
Hey, I'm making my first major project that I plan on putting on my CV, so I want to make it as complete as possible, including typing and testing. The problem is I haven't seen anyone use either of those in the wild. So I'm confused: do people even write testing modules for PyTorch projects? If yes, what do they usually test? It would be very helpful if someone who has done this in their projects could let me know how they achieved it.
1
t3_sy6pyc
1,645,483,047
pytorch
D loss doesn't start at around 1 and G generates noise
Hi, I followed https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html

[D and G error over iterations](https://preview.redd.it/dlocq362hfj81.png?width=619&format=png&auto=webp&s=83b8433ef1eeab90c3388d34a66a6d45321693d6)

The loss is calculated with BCE and the optimizer is Adam:

    ...
    criterion = nn.BCELoss()
    optimizerD = optim.Adam(netD.parameters(), weight_decay=0.3)
    ...

The input is 200x3x240x320.

D code:

    nn.Conv2d(nc, nc*4, 4, 2, 1, bias=False),
    nn.BatchNorm2d(nc*4),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Dropout(0.2, inplace=False),  # DROP!!
    nn.Conv2d(nc*4, nc*6, 3, 2, 1, bias=False),
    nn.BatchNorm2d(nc*6),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(nc*6, nc*8, 3, 2, 1, bias=False),
    nn.BatchNorm2d(nc*8),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(nc*8, nc*4, 3, 2, 1, bias=False),
    nn.BatchNorm2d(nc*4),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(nc*4, nc*2, 3, 2, 1, bias=False),
    nn.BatchNorm2d(nc*2),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(nc*2, 3, 3, 2, 1, bias=False),
    nn.BatchNorm2d(3),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(3, 3, 3, 2, 1, bias=False),
    nn.BatchNorm2d(3),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(3, 1, (2, 3), 1, 0, bias=False),
    nn.BatchNorm2d(1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Sigmoid(),

G code (one difference between this G and the official one is the BatchNorm2d(3) before the `Tanh()`):

    # latent space is (6, 15, 10)
    nn.ConvTranspose2d(nz, nz*6, (4, 4), 2, 1, bias=False),
    nn.BatchNorm2d(nz*6),
    nn.ReLU(True),
    nn.Dropout(0.5, inplace=False),  # DROP!!
    nn.ConvTranspose2d(nz*6, nz*6, 4, 2, 1, bias=False),
    nn.BatchNorm2d(nz*6),
    nn.ReLU(True),
    nn.Dropout(0.5, inplace=False),  # DROP!!
    nn.ConvTranspose2d(nz*6, nz*4, 4, 2, 1, bias=False),
    nn.BatchNorm2d(nz*4),
    nn.ReLU(True),
    nn.Dropout(0.5, inplace=False),  # DROP!!
    nn.ConvTranspose2d(nz*4, nz*2, (3, 4), (1, 2), 1, bias=False),
    nn.BatchNorm2d(nz*2),
    nn.ReLU(True),
    nn.Dropout(0.5, inplace=False),  # DROP!!
    nn.ConvTranspose2d(nz*2, nc, 4, 2, 1, bias=False),
    nn.BatchNorm2d(3),
    nn.Tanh()  # last gate

In the first epoch the result is noise; with more epochs, the results are just either black or white. So I'm stuck here. Not enough data? Will `dropout()` help fix it? Are D and G too small, should I add more kernels? Or is the loss function or the optimizer not good? Is the weight_decay messing up the results? Thanks a ton.
1
t3_sy2wzs
1,645,473,815
pytorch
Parallel convolutions
Hello everybody, I have a model with 3 inputs X1, X2 and X3; each input goes into a different convolution layer:

    def forward(X1, X2, X3):
        F1 = conv1(X1)
        F2 = conv2(X2)
        F3 = conv3(X3)
        return F1 + F2 + F3

The convolution layers are executed sequentially. Is there a way to execute them in parallel to decrease the model's execution time? Thank you.
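One hedged option, assuming the three inputs share shape and channel count: fuse the three convolutions into a single grouped convolution, so one kernel launch does the work of all three:

```
import torch
import torch.nn as nn

C, F = 16, 32   # assumed channel counts per input / per output

# groups=3 gives each input its own independent set of filters, exactly like
# three separate convs, but in one fused operation
fused = nn.Conv2d(3 * C, 3 * F, kernel_size=3, padding=1, groups=3)

x = torch.cat([torch.randn(2, C, 64, 64) for _ in range(3)], dim=1)
out = fused(x)                       # [2, 3F, 64, 64]
f1, f2, f3 = out.chunk(3, dim=1)
result = f1 + f2 + f3
```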
1
t3_sxpv4p
1,645,437,276
pytorch
Filter instances in loss calculation
Is there a way to filter out classes in a loss function? Is it necessary to build a custom loss? For example, assume losses shouldn't be calculated for class 0; in a binary setting, ignore that class and compute losses for the rest.
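Two common patterns, sketched with made-up shapes: `ignore_index` (built into `CrossEntropyLoss`) or masking the instances manually, which works with any loss:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))

# option 1: CrossEntropyLoss can drop a target class outright
loss_a = nn.CrossEntropyLoss(ignore_index=0)(logits, targets)

# option 2: filter the instances manually (generalizes to other losses)
mask = targets != 0
loss_b = F.cross_entropy(logits[mask], targets[mask])
```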
1
t3_sx6m6w
1,645,378,713
pytorch
Time Series Forecasting Using LSTM
nan
0.67
t3_sx3qxv
1,645,371,194
pytorch
Best way to skip error-inducing images in a custom dataset in PyTorch
Hey, I'm learning PyTorch and decided to make a simple cat breed detector. I'm using a public dataset from Kaggle consisting of 120k images. I made my own dataset class, but I'm getting a few different errors when training the model. Here is my dataset class: https://pastebin.com/jmWiFFGf

A few errors I can recall are:

```
if image.shape[2] > 3:
IndexError: tuple index out of range
```

```
Could not load ""
Reason: "broken data stream when reading image file"
```

Is there a way to just `catch` the error and skip the problematic image altogether? I thought of a solution:

```
def __getitem__(self, index):
    try:
        target = self.targets[index]
        image = io.imread(self.image_paths[index])
        ### to handle rgba images
        if image.shape[2] > 3:
            image = color.rgba2rgb(image)
    except:
        print('found error')
        np.delete(self.image_paths, obj = index, axis = 0)
        np.delete(self.targets, obj = index, axis = 0)
        self.__getitem__(index)
    ### other code continues...
```

but I feel like it's not a good solution, since my `DataLoader` spits out multiprocessing errors. How do you handle errors in your image datasets? Any help is appreciated, thanks.
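One hedged, multiprocessing-friendly alternative to deleting entries mid-iteration: have `__getitem__` return `None` for unreadable images and filter them out in a custom collate function:

```
import torch
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate

def skip_none_collate(batch):
    # drop samples whose __getitem__ returned None (unreadable images)
    batch = [item for item in batch if item is not None]
    return default_collate(batch)

# inside the dataset, instead of mutating the arrays:
#     def __getitem__(self, index):
#         try:
#             image = io.imread(self.image_paths[index])
#         except Exception:
#             return None          # filtered out by the collate_fn
#         ...
#
# loader = DataLoader(dataset, batch_size=32, num_workers=2,
#                     collate_fn=skip_none_collate)
```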
1
t3_sx07lt
1,645,360,029
pytorch
How to register a dynamic backward hook on tensors in Pytorch?
I'm trying to register a backward hook on **each neuron's** weights in a network. By dynamic I mean that it will take a value and multiply the associated gradients by that value. From [here](https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html) it seems it's possible to register a hook on a tensor with a fixed value (though note that I need it to take a value that will change). From [here](https://medium.com/the-dl/how-to-use-pytorch-hooks-5041d777f904) it also seems possible to register a hook on all of the parameters; they use it to do gradient clipping (though note that I'm trying to do it only on each neuron's weights). If my network is as follows:

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.fc1 = nn.Linear(3, 5)
            self.fc2 = nn.Linear(5, 10)
            self.fc3 = nn.Linear(10, 1)

        def forward(self, x):
            x = torch.relu(self.fc1(x))
            x = torch.relu(self.fc2(x))
            x = torch.relu(self.fc3(x))
            return x

The first layer has 5 neurons with 3 associated weights each. Hence, this layer should have 5 hooks that modify (i.e., change the current gradient by multiplying it) their 3 associated weight gradients during the backward step. Training pseudo-code example:

    net = Model()
    for epoch in epochs:
        out = net(data)
        loss = criterion(out, target)
        optimizer.zero_grad()
        loss.backward()
        for hook in list_of_hooks:  # not sure if there's a more "pytorch" way of doing this without a for loop
            hook(random_value)
        optimizer.step()
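A minimal sketch of one way to get the dynamic behavior with a single tensor hook per layer: the hook closes over a mutable per-neuron scale vector (row i of the weight matrix corresponds to neuron i), so changing the vector between steps changes the multiplier:

```
import torch
import torch.nn as nn

model = nn.Linear(3, 5)
scale = torch.ones(model.out_features)   # one mutable multiplier per neuron

# fires on every backward; row-wise broadcast scales neuron i's 3 weight grads
model.weight.register_hook(lambda g: g * scale.unsqueeze(1))
model.bias.register_hook(lambda g: g * scale)

loss = model(torch.randn(4, 3)).sum()
loss.backward()                          # gradients scaled per neuron

scale[2] = 0.5                           # takes effect on the next backward
```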
1
t3_swmqof
1,645,312,711
pytorch
dataset script does not run in colab
Hey, I've decided to make reusable scripts for frequently used classes, so I made one for my image dataset and imported it in Colab. I can create the dataset object successfully but can't get data from it. Here is my dataset code: https://pastebin.com/XfAiTe3A

Here is how I use the script in Google Colab:

```
from imageDataset import customDataset

dataset = customDataset(train_data)
dataset[0]
```

Here is the error:

```
     16         target = self.targets[index]
---> 17         image = io.imread(self.image_paths[index])
     18
     19         if self.augmentations is not None:

SystemError: <built-in function imread> returned NULL without setting an error
```

But if I copy-paste the code into a Jupyter cell, I can use the class like I normally do. What am I doing wrong? Any help is appreciated, thanks.

Edit: shit started working after I reran the notebook from the start. That said, I hated coding 10 minutes ago, but I think I'm loving it again...
1
t3_sw7ytk
1,645,269,934
pytorch
When importing PyTorch, I get an error like "?stride@tensor@at@qeba_j_j@z could not be located"
Hi, while importing the libs:

    import torch
    import torch.nn as nn
    import torch.nn.parallel
    import torch.backends.cudnn as cudnn
    import torch.optim as optim
    import torch.utils.data
    import torchvision.datasets as dset
    import torchvision.transforms as transforms
    import torchvision.utils as vutils

I got an error like this: [error screenshot](https://preview.redd.it/sz6cye8ffgi81.png?width=419&format=png&auto=webp&s=0a7b21098f6a61c6ffe1d2427c3ce069d56ffd54)

Any idea how to fix it? Thanks a ton.
0.6
t3_suyork
1,645,130,319
pytorch
RuntimeError: CUDA out of memory.
Hi,

1. All the network parameters together number fewer than 50,000, which is literally smaller than the official example (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#data).
2. Each data sample is 3x640x480, and there are about 900 of them together.
3. The network is deep, about 40 layers in total.

```
RuntimeError: CUDA out of memory. Tried to allocate 1.03 GiB (GPU 0; 8.00 GiB total capacity; 6.34 GiB already allocated; 0 bytes free; 6.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory
```

What may cause the memory overflow? The data size, e.g., each sample's resolution being too big? Or the layers being too deep (when data passes through, is a lot of memory allocated for parameters or intermediate results)? Or something else? Thanks a ton.
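For scale, a quick back-of-the-envelope sketch (assuming float32 and the sizes quoted above): the parameters themselves (50k floats, about 0.2 MB) are negligible; what fills memory are the input batch and the activations of every layer, which are kept for the backward pass, so 40 layers of comparably sized feature maps easily exceed 8 GiB. Smaller batches or lower resolution are the usual remedy:

```
# rough memory estimate, assuming float32 (4 bytes per element)
n, c, h, w = 900, 3, 640, 480
input_gib = n * c * h * w * 4 / 1024**3
print(f"{input_gib:.2f} GiB")  # ~3.09 GiB for the raw input batch alone
```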
0.67
t3_suyk8c
1,645,129,981
pytorch
How to calculate class weights for token level classification problem?
For each of my sentences, the 0 labels are much rarer than the 1s (for token-level classification). I use batches and calculate the loss (cross-entropy) after each batch. How should I create the class-weight vector and use it in the loss calculation? Please advise!
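A hedged sketch of the usual recipe: count the token labels once over the training set (the counts below are placeholders), turn inverse frequencies into a weight vector, and pass it to the loss:

```
import torch
import torch.nn as nn

counts = torch.tensor([120., 880.])              # placeholder label counts
weights = counts.sum() / (len(counts) * counts)  # rarer class gets a bigger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 2)                      # (tokens, classes)
labels = torch.randint(0, 2, (16,))
loss = criterion(logits, labels)
```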
1
t3_stypkl
1,645,025,621
pytorch
waymo dataset
Has anyone used the Waymo dataset? I would like to use their data for object detection; what is the best way to read the TFRecords?
0.86
t3_stwz2x
1,645,021,060
pytorch
Adding node and cell names for tensorboard graph
nan
0.81
t3_stvycz
1,645,018,240
pytorch
Efficient neural network
Hello everybody, I'm taking part in a challenge on efficient neural networks; the aim is to get the best score within a maximum of 400 GFLOPs. My network has 40k parameters and uses 240 GFLOPs. It's a really light convnet, and I can't really make it lighter. Some participants proposed a network with 20 million parameters using only 140 GFLOPs, with a better score. I thought the number of parameters was related to GFLOPs; I don't understand how they can propose networks with so many parameters but so few GFLOPs. Do you have an idea? Thank you.
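A hedged back-of-the-envelope illustration of why this is possible (the layer sizes are made up): a convolution reuses its weights at every spatial position, while a fully connected layer uses each weight once, so a 20M-parameter network whose parameters sit in FC layers or low-resolution stages can cost fewer FLOPs than a small convnet running at high resolution:

```
# conv: params = k*k*Cin*Cout, MACs = params * Hout * Wout
k, cin, cout, hout, wout = 3, 64, 64, 56, 56
conv_params = k * k * cin * cout       # ~37k parameters
conv_macs = conv_params * hout * wout  # ~116M MACs per image

# fully connected: params == MACs
fc_params = 20_000_000
fc_macs = fc_params                    # 20M MACs, fewer than the single conv
```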
0.67
t3_stvx7c
1,645,018,148
pytorch
after adding a nn.Dropout() layer, the number of parameters won't change?
Hi,

    class MLP(nn.Module):
        '''Multilayer Perceptron.'''
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Flatten(),
                ...
                nn.Dropout(p=0.5),
                ...
            )

        def forward(self, x):
            '''Forward pass'''
            return self.layers(x)

THANKS A TON
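That is expected: `nn.Dropout` only masks activations at training time and has no learnable weights, so the parameter count is unchanged. A quick check:

```
import torch.nn as nn

drop = nn.Dropout(p=0.5)
print(sum(p.numel() for p in drop.parameters()))  # 0: dropout is parameter-free
```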
0.75
t3_st969r
1,644,948,455
pytorch
How to create a dynamic learning rate per neuron in PyTorch?
I know it's possible to have a learning rate per layer ([link](https://stackoverflow.com/questions/51801648/how-to-apply-layer-wise-learning-rate-in-pytorch)). I also found how to change the learning rate dynamically (in the middle of training, without a scheduler) ([link](https://stackoverflow.com/questions/48324152/pytorch-how-to-change-the-learning-rate-of-an-optimizer-at-any-given-moment-no/64453694)). How can I create an optimizer with a dynamic learning rate **per neuron**, so that I can change the learning rate of specific neurons during training?
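A minimal sketch of one workaround, assuming plain SGD: scaling each row of the weight gradient before `step()` acts like a per-neuron learning rate (this equivalence does not carry over to adaptive optimizers such as Adam, which renormalize per coordinate):

```
import torch
import torch.nn as nn

model = nn.Linear(3, 5)
opt = torch.optim.SGD(model.parameters(), lr=1.0)  # base lr folded into scales
lr_per_neuron = torch.full((model.out_features,), 0.01)

loss = model(torch.randn(4, 3)).sum()
opt.zero_grad()
loss.backward()
with torch.no_grad():
    model.weight.grad *= lr_per_neuron.unsqueeze(1)  # row i <-> neuron i
    model.bias.grad *= lr_per_neuron
opt.step()

lr_per_neuron[0] = 0.1   # changes neuron 0's effective lr for later steps
```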
0.78
t3_srwbk0
1,644,795,481
pytorch
How do I read an image tensor?
I'm kind of new to deep learning and I'm trying to understand image tensors. I have a `tensor.size()` output of `[1, 3, 48, 48]`. I already know how to read that: it's 1 image with 3 RGB channels and a size of 48 by 48 pixels. But how am I supposed to read the actual tensor output? Can anyone help me out, or at least point me to a source where I can find some information about it?
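Each entry is one channel's intensity at one pixel, indexed `[batch, channel, row, column]`. A small sketch of slicing into it:

```
import torch

x = torch.rand(1, 3, 48, 48)    # [batch, channel, height, width]
red = x[0, 0]                   # the full 48x48 channel-0 (e.g. red) plane
pixel = x[0, 0, 10, 20]         # channel-0 intensity at row 10, column 20
print(red.shape, pixel.item())  # torch.Size([48, 48]) and a float in [0, 1)
```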
0.57
t3_srrebc
1,644,782,273
pytorch
Model loss not behaving correctly
[image: loss curve] ... What is causing this effect?
0.67
t3_sqwuyt
1,644,687,761
pytorch
RuntimeError: expand(torch.LongTensor{[512, 16, 16]}, size=[512]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (3)
Does anyone here have experience coding with hypergraph convolution in PyTorch? I'm getting this error but I don't know where it's coming from. Can anyone help me? Thank you.

```
Traceback (most recent call last):
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 140, in summary
    _ = model.to(device)(*x, *args, **kwargs)  # type: ignore[misc]
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/huynth/Hypergraph-Inpainting/models/generator.py", line 222, in forward
    x11 = self.graph1(x11, hyperedge_index=refine_conv.squeeze().long())
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch_geometric/nn/conv/hypergraph_conv.py", line 147, in forward
    B = scatter_add(x.new_ones(hyperedge_index.size(1)),
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch_scatter/scatter.py", line 29, in scatter_add
    return scatter_sum(src, index, dim, out, dim_size)
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch_scatter/scatter.py", line 11, in scatter_sum
    index = broadcast(index, src, dim)
  File "/home/huynth/miniconda3/envs/inpainting/lib/python3.8/site-packages/torch_scatter/utils.py", line 12, in broadcast
    src = src.expand_as(other)
RuntimeError: expand(torch.LongTensor{[512, 16, 16]}, size=[512]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (3)
```
1
t3_sqr04p
1,644,670,781
pytorch
How to allocate batches for a manually initialized tensor?
Hi, I did not use a `DataLoader`; I initialize the tensor with `torch.from_numpy`. During training, how do I set up batches for the tensor `data`?

    for epoch in range(num_epochs):
        # is there something needed for allocating the batches?
        netD.zero_grad()
        output = netD(data)

Thanks a ton.
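A minimal sketch of manual mini-batching by shuffling an index permutation and slicing (the array contents are stand-ins):

```
import numpy as np
import torch

arr = np.random.rand(1000, 3, 64, 64).astype("float32")  # stand-in data
data = torch.from_numpy(arr)

batch_size = 64
for epoch in range(3):
    perm = torch.randperm(len(data))          # reshuffle every epoch
    for i in range(0, len(data), batch_size):
        batch = data[perm[i:i + batch_size]]
        # netD.zero_grad(); output = netD(batch); ...
```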
1
t3_sq5doc
1,644,602,230
pytorch
optimizer.zero_grad()
Do we actually need to run optimizer.zero_grad() in normal training? Or are the gradients already cleared when I call optimizer.step()?
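For reference, a hedged sketch: `step()` does not clear gradients, and `backward()` accumulates into `.grad`, so omitting `zero_grad()` makes each update use the sum of all previous gradients:

```
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 2), torch.randn(4, 1)

for _ in range(3):
    optimizer.zero_grad()                 # without this, grads accumulate
    loss = ((model(x) - y) ** 2).mean()   # across iterations
    loss.backward()
    optimizer.step()
```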
1
t3_sps9i5
1,644,560,033
pytorch
How to use the saved model to predict another image?
I am using the keypointrcnn model for body-joint detection. When I visualize the model's predictions, it takes images from the test folder, which already has the labels and annotations. How can I use a random image to get a prediction from the model? Whenever I use random images, it asks for the annotations and labels, but I want to get a prediction on a new random image. Here is the code where the test images are loaded with the help of an iterator:

    iterator = iter(data_loader_test)
    images, targets = next(iterator)
    images = list(image.to(device) for image in images)
    with torch.no_grad():
        model.to(device)
        model.eval()
        output = model(images)
    print("Predictions: \n", output)

The code below is used for visualizing the result, but it only visualizes the test image:

    image = (images[0].permute(1, 2, 0).detach().cpu().numpy() * 255).astype(np.uint8)
    scores = output[0]['scores'].detach().cpu().numpy()
    high_scores_idxs = np.where(scores > 0.85)[0].tolist()  # indexes of boxes with scores > 0.85
    post_nms_idxs = torchvision.ops.nms(output[0]['boxes'][high_scores_idxs], output[0]['scores'][high_scores_idxs], 0.3).cpu().numpy()  # indexes of boxes left after applying NMS (iou_threshold=0.3)
    # Below, in output[0]['keypoints'][high_scores_idxs][post_nms_idxs] and output[0]['boxes'][high_scores_idxs][post_nms_idxs]:
    # firstly, we choose only those objects which have a score above the predefined threshold
    # (the [high_scores_idxs] indexes); secondly, only those objects left after NMS is applied
    # (the [post_nms_idxs] indexes).
    keypoints = []
    for kps in output[0]['keypoints'][high_scores_idxs][post_nms_idxs].detach().cpu().numpy():
        keypoints.append([list(map(int, kp[:2])) for kp in kps])
    bboxes = []
    for bbox in output[0]['boxes'][high_scores_idxs][post_nms_idxs].detach().cpu().numpy():
        bboxes.append(list(map(int, bbox.tolist())))
    visualize(image, bboxes, keypoints)

I want to ask how, or where, I should modify my code so that I can use any random image for prediction with the saved model.
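A hedged sketch of running a detection model on an arbitrary image: in `eval()` mode, torchvision detection models take only a list of image tensors and need no targets or annotations (the image path is a placeholder):

```
import torch
import torchvision
import torchvision.transforms.functional as TF
from PIL import Image

device = "cpu"
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True).to(device)
model.eval()                                 # eval mode: no targets required

img = Image.open("any_photo.jpg").convert("RGB")  # placeholder path
x = TF.to_tensor(img).to(device)             # [3, H, W], floats in [0, 1]
with torch.no_grad():
    out = model([x])                         # list of images in, list of dicts out
print(out[0]["keypoints"].shape)
```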
1
t3_spr7mg
1,644,556,428
pytorch
Attempting to rewrite BERT codebase into RoBERTA, running into shape issue.
[SOLVED] Biting off more than I can chew here, but trying anyway. I'm attempting to replace the BERT tokenization/model in this tutorial with RoBERTa throughout, and I'm running into an issue with tensor shape when executing my training function. Tutorial: https://github.com/curiousily/Getting-Things-Done-with-Pytorch/blob/master/08.sentiment-analysis-with-bert.ipynb

Teaching moment here for someone, but I thought the shapes of BERT and RoBERTa would be the same? Perhaps I messed up somewhere? Copy of my attempt: https://colab.research.google.com/drive/1TWL1MYrN2xZ9_lp0ID3uJoOTzIOOGLzk?usp=sharing

Edit: solved, my initial input included an extra column, thus increasing the length of the resulting tensor. Classic.

https://preview.redd.it/efvjgqxph3h81.png?width=1454&format=png&auto=webp&s=4d364f301b60c44ff7cd9ef7fdc4f4e9a1e4de54
0.76
t3_spkpcq
1,644,537,997
pytorch
Is there a way to lock seed so training a network will always return same results?
nan
0.66
t3_spi5g8
1,644,530,020
pytorch
Is zero_grad() supposed to be invoked every time a data point is passed? How does a scalar's .backward() from a loss function affect the model's parameters?
Hi, according to https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html:

1. `zero_grad()` is called on every iteration as data passes through. Is it instead supposed to be called only once, at the beginning of training? What is the point here? Also, in the official tutorial it is the **neural network model instance** that calls `zero_grad()`, while in another example from the internet (https://github.com/himanshu-dutta/dcgan-pytorch/blob/master/DCGAN.ipynb) the author calls `zero_grad()` on **the optimizer instance**. I'm confused by the difference.
2. `.backward()` updates the gradients for the output tensors; how does this affect the model parameters, or does it not need to? It isn't hooked to an optimizer for the model when calling `.step()`.

Thanks a ton.
1
t3_spgsgh
1,644,526,433
pytorch
Is image data supposed to be normalized to 0-1? Is 0-255 in float bad form or something?
Hi, it seems PyTorch cannot work with uint8 0-255 data. So is data normalized to 0-1 better than 0-255 in float? Thanks a ton.
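For reference, floats in [0, 1] (often further mean/std-normalized) are the usual convention; most layers reject uint8 inputs, and un-normalized floats in [0, 255] tend to train poorly. A small sketch of the conversion (the uint8 tensor is a stand-in for a loaded image):

```
import torch

img_uint8 = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)
img = img_uint8.float() / 255.0   # float in [0, 1], ready for conv layers
```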
0.67
t3_spf9d1
1,644,522,528
pytorch
[P] What we learned by accelerating by 5X Hugging Face generative language models
nan
0.8
t3_sp2r0k
1,644,485,826
pytorch
How to get specific classes from torchvision.datasets?
Hi, how do I save the result as a new variable rather than overriding the original one? And how do I initialize an empty or null variable?

    import torchvision.datasets as dset
    dataset = dset.ImageFolder(root=dataroot)

I tried:

    idxl = dataset.targets == 0
    DATAL = dataset.data[idxl]

and the error goes:

    ~\Anaconda3\lib\site-packages\torch\utils\data\dataset.py in __getattr__(self, attribute_name)
         81             return function
         82         else:
    ---> 83             raise AttributeError
         84
         85     @classmethod
    AttributeError

Also,

    DATAR = dataset.__getitem__(2)

is wrong too. I referred to https://pytorch.org/vision/stable/datasets.html and https://discuss.pytorch.org/t/how-to-use-one-class-of-number-in-mnist/26276/8

Thanks a ton.
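A hedged sketch of one way to do this with `torch.utils.data.Subset`, which builds a filtered view without overriding the original dataset (`dataroot` is assumed to be defined as in the post; note `ImageFolder` has no `.data` attribute, hence the AttributeError, but it does keep labels in `.targets`):

```
import torchvision.datasets as dset
from torch.utils.data import Subset

dataroot = "./images"   # assumed path
dataset = dset.ImageFolder(root=dataroot)

idx = [i for i, t in enumerate(dataset.targets) if t == 0]
class0 = Subset(dataset, idx)   # new dataset containing only class 0
```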
0.67
t3_sooe9e
1,644,441,919
pytorch
How to use dilation and kernel size in conv2d to reduce the size of a tensor properly?
Hi,

    class Trial(nn.Module):
        def __init__(self):
            super().__init__()
            self.main = nn.Sequential(
                # args: in_ch, out_ch, kernel_size, stride (must be >= 1), padding, dilation
                nn.Conv2d(3, 3, (2, 1), 1, 0, (60, 1))
            )

        def forward(self, input):
            return self.main(input)

    y = torch.randn(4, 3, 120, 60)
    T = Trial()
    result = T(y)

The result size is expected to be (4, 3, 60, 60). Do the kernel and dilation not work together? Thanks a ton.
1
t3_sojbkt
1,644,428,618
pytorch
How to Classify Images with Unsupervised Learning in Pytorch
Hi guys. I'm trying to classify images into two classes using Unsupervised Learning in Pytorch. Could anyone point me to any beginner-friendly tutorials that could help? Thanks!
0.87
t3_so5q03
1,644,384,688
pytorch
Basic mistake? nn is not defined
How is the following possible? I am importing torch.nn:

    import torch.nn as nn

Thus, `nn` is defined, which I have confirmed. However, I am getting `NameError: name 'nn' is not defined`:

    PS C:\VTCProject\yolov5> python
    Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:57:54) [MSC v.1924 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import numpy
    >>> import torch
    >>> import torch.nn as nn
    >>> nn
    <module 'torch.nn' from 'C:\\Python38\\lib\\site-packages\\torch\\nn\\__init__.py'>
    >>> import yaml
    >>> from models.ylmt import Model
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\VTCProject\yolov5\models\ylmt.py", line 1, in <module>
        class Model(nn.Module):
    NameError: name 'nn' is not defined
    >>> exit
    Use exit() or Ctrl-Z plus Return to exit
    >>> exit()

The file models\ylmt.py should inherit the modules imported by the parent script, right? I ran into this running the Ultralytics training example with custom data:

    python train.py --data .\data\roadometry.yaml --cfg yolov5s --weights '' --batch-size 16

Edit: It seems that my understanding was wrong: https://stackoverflow.com/questions/3106089/python-import-scope

So I guess my question now is, how did this ever work in the Ultralytics repo? The contents of ylmt.py are:

    class Model(nn.Module):
        def __init__(self, cfg='ylmt.yaml', ch=3, frames=2, nc=36):  # model, input channels, number of classes
            super(Model, self).__init__()
            print("init")

        def forward(self, x, augment=False, profile=False):
            print("ylmt: forward")
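A sketch of the fix implied by the linked Stack Overflow answer: each module file needs its own imports, since names imported in the REPL or the parent script do not leak into imported files:

```
# models/ylmt.py (sketch): the file must import torch.nn itself
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, cfg='ylmt.yaml', ch=3, frames=2, nc=36):
        super().__init__()
```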
1
t3_snh6lq
1,644,316,526
pytorch
How to extract features from 2 tensors into one? What layer should be used?
Hi,

    class Combine(nn.Module):
        def __init__(self, ngpu):
            self.main = nn.Sequential(
                ...
            )

        def forward(self, t1, t2):
            return self.main(t1, t2)

I'm confused about what kind of layer to use.
1
t3_sna82o
1,644,292,192
pytorch
Recoloring images with pytorch!
nan
0.97
t3_smeykx
1,644,201,354
pytorch
PyTorch-Lightning trainer.fit() "Validation sanity check" fails (full error trace in post) - any ideas?
**EDIT: Never mind, I found the error, I am very dumb, sorry for wasting your time.** *Solution:* in the `TestDataset`, [`self.data`](https://self.data) is not defined; thus calling dataset.\_\_len\_\_ returns the AttributeError. The solution is to not mindlessly copy-paste tutorial code. [`self.data`](https://self.data) should be replaced with `self.encodings.input_ids`. *Incidental Solution:* I also did not set up my `forward()` method correctly for the LightningModule. `forward()` should instead be: def forward(self, input_ids, attention_mask, labels): return self.model(input_ids, attention_mask, labels).logits If you have any other comments / "by the way, this is a better way to do this," or want to laugh at my mistakes, please feel free to comment anyway. Thank you for your time! \---- Hi all, I am having a little trouble getting a LightningModule off the ground. When I call [`trainer.fit`](https://trainer.fit)`(model, data_module)`, I receive the following (very long) error trace: >>> trainer.fit(model, data_module) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] | Name | Type | Params --------------------------------------------- 0 | model | RobertaForMaskedLM | 355 M --------------------------------------------- 355 M Trainable params 0 Non-trainable params 355 M Total params 1,421.648 Total estimated model params size (MB) Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 460, in fit self._run(model) File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 758, in _run self.dispatch() File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 799, in dispatch self.accelerator.start_training(self) File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 96, in start_training self.training_type_plugin.start_training(trainer) File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 144, in start_training self._results = trainer.run_stage() File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 809, in run_stage return self.run_train() File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 844, in run_train self.run_sanity_check(self.lightning_module) File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1112, in run_sanity_check self.run_evaluation() File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 925, in run_evaluation dataloaders, max_batches = self.evaluation_loop.get_evaluation_dataloaders() File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\evaluation_loop.py", line 63, in get_evaluation_dataloaders self.trainer.reset_val_dataloader(model) File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\data_loading.py", line 409, in reset_val_dataloader self.num_val_batches, self.val_dataloaders = self._reset_eval_dataloader(model, 'val') File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\trainer\data_loading.py", line 370, in _reset_eval_dataloader num_batches = len(dataloader) if has_len(dataloader) else float('inf') File "C:\Program Files\Python38\lib\site-packages\pytorch_lightning\utilities\data.py", line 32, in has_len if 
    len(dataloader) == 0:
      File "C:\Program Files\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 404, in __len__
        return len(self._index_sampler)
      File "C:\Program Files\Python38\lib\site-packages\torch\utils\data\sampler.py", line 245, in __len__
        return (len(self.sampler) + self.batch_size - 1) // self.batch_size  # type: ignore[arg-type]
      File "C:\Program Files\Python38\lib\site-packages\torch\utils\data\sampler.py", line 69, in __len__
        return len(self.data_source)
      File "C:\Users\E\Documents\RobertaTest.py", line 59, in __len__
        return len(self.data)
      File "C:\Program Files\Python38\lib\site-packages\torch\utils\data\dataset.py", line 83, in __getattr__
        raise AttributeError
    AttributeError

I'm trying to run a masked language model using `roberta-large`. Here is that LightningModule:

    class TestMLM(pl.LightningModule):
        def __init__(self, model_name_or_path, learning_rate, adam_beta1, adam_beta2, adam_epsilon, n_training_steps=None, n_warmup_steps=None):
            super().__init__()
            self.save_hyperparameters()
            self.n_training_steps = n_training_steps
            self.n_warmup_steps = n_warmup_steps
            self.learning_rate = learning_rate
            config = RobertaConfig.from_pretrained(model_name_or_path, return_dict=True)
            self.model = RobertaForMaskedLM.from_pretrained(model_name_or_path, config=config)

        def forward(self, x):
            return self.model(x).logits

        def training_step(self, batch, batch_idx):
            input_ids = batch["input_ids"]
            attention_mask = batch["attention_mask"]
            labels = batch["labels"]
            loss = self(input_ids, attention_mask, labels)
            self.log('train_loss', loss, on_epoch=True, prog_bar=True, logger=True)
            return {"Training Loss": loss}

        def validation_step(self, batch, batch_idx):
            input_ids = batch["input_ids"]
            attention_mask = batch["attention_mask"]
            labels = batch["labels"]
            loss = self(input_ids, attention_mask, labels)
            self.log('val_loss', loss, on_epoch=True, prog_bar=True, logger=True)
            return {"Validation Loss": loss}

        def test_step(self, batch, batch_idx):
            input_ids = batch["input_ids"]
            attention_mask = batch["attention_mask"]
            labels = batch["labels"]
            loss = self(input_ids, attention_mask, labels)
            self.log('test_loss', loss, on_epoch=True, prog_bar=True, logger=True)
            return {"Test Loss": loss}

        def configure_optimizers(self):
            optimizer = AdamW(self.parameters(),
                              lr=self.learning_rate,
                              betas=(self.hparams.adam_beta1, self.hparams.adam_beta2),
                              eps=self.hparams.adam_epsilon,)
            scheduler = get_linear_schedule_with_warmup(
                optimizer,
                num_warmup_steps=self.n_warmup_steps,
                num_training_steps=self.n_training_steps
            )
            return dict(
                optimizer=optimizer,
                lr_scheduler=dict(
                    scheduler=scheduler,
                    interval='step'
                )
            )

here's the LightningDataModule:

    class TestDataModule(pl.LightningDataModule):
        def __init__(self, train_data, test_data, tokenizer, batch_size=BATCH_SIZE, max_token_len=MAX_TOKEN_COUNT):  # max 512
            super().__init__()
            self.batch_size = batch_size
            self.train_data = train_data
            self.test_data = test_data
            self.tokenizer = tokenizer
            self.max_token_len = max_token_len

        def setup(self, stage=None):
            self.train_dataset = TestDataset(self.train_data)
            self.test_dataset = TestDataset(self.test_data)

        def train_dataloader(self):
            return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True, num_workers=0)

        def val_dataloader(self):
            return DataLoader(self.test_dataset, batch_size=self.batch_size, num_workers=0)

        def test_dataloader(self):
            return DataLoader(self.test_dataset, batch_size=self.batch_size, num_workers=0)

and the Dataset:

    class TestDataset(Dataset):
        def __init__(self, encodings):
            self.encodings = encodings

        def __len__(self):
            return len(self.data)

        def __getitem__(self, idx):
            return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

The type of data that are passed in to instantiate a Dataset is of class `<class 'transformers.tokenization_utils_base.BatchEncoding'>`. Those data look like:

    {'input_ids': tensor([[    0,  1437,  1437,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1],
            [    0, 50264,  8845,  ...,    55,    87,     2],
            ...,
            [    0, 10127,  8845,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1]]),
     'attention_mask': tensor([[1, 1, 1,  ..., 0, 0, 0],
            [1, 1, 1,  ..., 0, 0, 0],
            [1, 1, 1,  ..., 1, 1, 1],
            ...,
            [1, 1, 1,  ..., 0, 0, 0],
            [1, 1, 1,  ..., 0, 0, 0],
            [1, 1, 1,  ..., 0, 0, 0]]),
     'labels': tensor([[    0,  1437,  1437,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1],
            [    0, 10127,  8845,  ...,    55,    87,     2],
            ...,
            [    0, 10127,  8845,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1],
            [    0,  1437,  1437,  ...,     1,     1,     1]])}

Any ideas what I'm doing wrong here? I'm sorry for the long post & code; I figure it's important to see the full picture. Thank you for any help you may be able to provide!
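Edit: a closer read of the traceback suggests the bug is in `TestDataset` itself: `__init__` stores the tokenized batch as `self.encodings`, but `__len__` reads `self.data`, which was never set, so `Dataset.__getattr__` raises `AttributeError` the moment the DataLoader asks for the length. A minimal sketch of the likely fix (assuming the encodings are the `BatchEncoding` shown above):

    class TestDataset(Dataset):
        def __init__(self, encodings):
            self.encodings = encodings

        def __len__(self):
            # The dataset length is the number of encoded examples;
            # self.data never existed, hence the AttributeError.
            return len(self.encodings["input_ids"])

        def __getitem__(self, idx):
            return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

Once that's fixed, the next failure will likely be `forward(self, x)`, which takes a single argument while the `*_step` methods call `self(input_ids, attention_mask, labels)`; since those steps treat the return value as a loss, the intent is presumably something like `return self.model(input_ids, attention_mask=attention_mask, labels=labels).loss` (an assumption about what was meant, given the logging code).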
1
t3_sm2iuk
1,644,169,235
pytorch
How to parallelize a training loop over samples of a batch when only CPU is available in PyTorch?
nan
0.84
t3_sm073v
1,644,163,554
pytorch
Trying to write an image denoising network with unexpectedly poor results. Any advice appreciated.
Hi folks, I've been working on image denoising for a couple of months but I can't seem to solve the poor performance of my denoiser. Below is my model; please refer to the class DnCNN at the bottom.

    class FCN(nn.Module):
        def __init__(self):
            super(FCN, self).__init__()
            self.fcn = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, 3, padding=1),
                nn.ReLU(inplace=True)
            )

        def forward(self, x):
            return self.fcn(x)

    class resblock(nn.Module):
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.strided_conv = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, stride=1, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True)
            )

        def forward(self, x):
            conv_block = self.strided_conv(x)
            if conv_block.size() == x.size():
                out = x + conv_block
                return out
            else:
                return conv_block

    class dncnn_block(nn.Module):
        def __init__(self, in_channels, nc, out_channels):
            super().__init__()
            self.resblock1 = resblock(in_channels, nc)
            self.resblock2 = resblock(nc, nc)
            self.resblock3 = resblock(nc, nc)
            self.resblock4 = resblock(nc, nc)
            self.resblock5 = resblock(nc, nc)
            self.resblock6 = resblock(nc, nc)
            self.resblock7 = resblock(nc, nc)
            self.resblock8 = resblock(nc, nc)
            self.resblock9 = resblock(nc, out_channels)

        def forward(self, x):
            layer1 = self.resblock1(x)
            layer2 = self.resblock2(layer1)
            layer3 = self.resblock3(layer2)
            layer4 = self.resblock4(layer3)
            layer5 = self.resblock5(layer4)
            layer6 = self.resblock6(layer5)
            layer7 = self.resblock7(layer6)
            layer8 = self.resblock8(layer7)
            layer9 = self.resblock9(layer8)
            return layer9

    class DnCNN(nn.Module):
        def __init__(self, in_nc=6, out_nc=3, nc=64, nb=20, act_mode='BR'):
            super(DnCNN, self).__init__()
            assert 'R' in act_mode or 'L' in act_mode
            bias = False

            self.fcn = FCN()
            dncnn1 = dncnn_block(nc, nc, nc)
            head1 = conv(in_nc, nc, mode='C'+act_mode[-1], bias=bias)
            tail1 = conv(nc, out_nc, mode='C', bias=bias)
            self.model1 = sequential(head1, dncnn1, tail1)

        def forward(self, x, train_mode=True):
            noise_level = self.fcn(x)
            concat_img = torch.cat([x, noise_level], 1)
            level1_out = self.model1(concat_img) + x
            return noise_level, level1_out

I call DnCNN as my network. It is made up of different blocks. I have excluded the conv() function, but it is just a function that returns the specified layers. The input is the noisy image x. The idea is that a small ancillary network, fcn(), produces an output that is merged with the noisy image x and passed through the main network. The original image is then added back onto the predicted residual image (`level1_out = self.model1(concat_img) + x`) to return a denoised image. I am returning the noise map (`noise_level` in the code) because I have ground-truth noise maps that represent the severity of the noise; being able to estimate the noise map is a means of knowing how much noise is in the image prior to denoising.

My problem is that when the network is done training, the result is not very smooth on very fine Gaussian noise. This is unexpected because I am using training parameters identical to networks that perform well in this regard. I have also tried many different datasets, including the same ones used in other state-of-the-art denoising solutions.
I've run out of experiments I can think of to isolate the problem, so I'm posting here in case someone sees something improper in the way the network is built. Thanks folks. Edit: I forgot to say I'm training it on 3x 2080 Tis with torch.nn.DataParallel().
1
t3_slx5qr
1,644,155,598
pytorch
Respect format text: fairseq m2m_100
Hello, I use fairseq and the m2m_100 model to translate text, but the formatting of the text is not preserved: line breaks are dropped. I found a workaround for the line breaks (split the text at each line, translate each line individually, and rejoin everything), but some content cannot be translated, like emojis or characters rendered as unknown (e.g. "▬"), and I also have some style elements (e.g. __ for highlighting). How can I make the translation preserve these elements? For the emojis I tried replacing them with a number, but then the sentence loses meaning, and it doesn't work reliably.
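Edit: in case it helps anyone, here is a rough sketch of the line-by-line approach using the Hugging Face port of the same M2M100 checkpoint (the fairseq CLI makes this loop awkward). The source language and the symbols-only heuristic are assumptions, and inline emojis in the middle of a sentence would still need a placeholder scheme:

    import re
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer.src_lang = "fr"  # source language is an assumption

    SYMBOLS = re.compile(r"^[\W_]+$")  # lines made only of emojis/punctuation/bars

    def translate_text(text, tgt="en"):
        out_lines = []
        for line in text.split("\n"):            # translate line by line to keep breaks
            if not line.strip() or SYMBOLS.match(line):
                out_lines.append(line)           # pass decorative lines through untouched
                continue
            enc = tokenizer(line, return_tensors="pt")
            gen = model.generate(**enc, forced_bos_token_id=tokenizer.get_lang_id(tgt))
            out_lines.append(tokenizer.batch_decode(gen, skip_special_tokens=True)[0])
        return "\n".join(out_lines)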
1
t3_sliib5
1,644,099,865
pytorch
How can I choose the right optimizer?
I'm pretty new to coding neural networks. I'm currently working on emotion detection in images, and I'm wondering what the best optimizer is, or how I can find the right one for the project.
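Edit: for anyone searching later — there is no universally "right" optimizer; the usual approach is to pick a reasonable default, tune the learning rate first, and only then compare optimizers. A rough sketch of the common starting points (the model here is a stand-in for your own CNN):

    import torch
    import torch.nn as nn

    model = nn.Linear(512, 7)  # stand-in for an emotion CNN; 7 classes assumed

    # Adam with its defaults is the common "just works" baseline for image models;
    # SGD + momentum often matches or beats it once the learning rate is tuned.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)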
0.86
t3_slfgta
1,644,091,707
pytorch
Tutorial: An app for Fashion Search using DocArray and Torchvision
Hey everyone, I've built a tutorial for a fashion search prototype. It uses DocArray and Torchvision, ResNet50 model. Right now it's still in the basic stages. Check it out here: [https://colab.research.google.com/github/alexcg1/neural-search-notebooks/blob/main/fashion-search/1\_build\_basic\_search/basic\_search.ipynb](https://colab.research.google.com/github/alexcg1/neural-search-notebooks/blob/main/fashion-search/1_build_basic_search/basic_search.ipynb) Would love to know what you guys think. If you can point me to more suitable models, that would be great!
1
t3_sl5492
1,644,061,424
pytorch
Probe PyTorch models
We wrote a simple class to extract hidden layer outputs, gradients, and parameters of PyTorch models. Here's an example:

    probe = ModelProbe(model)
    out = model(inp)
    attn_queries_filter = probe.forward_output['*.attention.query']
    attn_queries = torch.stack(attn_queries_filter.get_list())

* 🧑‍🏫 [Demo that extracts attention maps of BERT](https://github.com/labmlai/labml/blob/master/guides/model_probe.ipynb)
* 📚 [Documentation](https://docs.labml.ai/api/analytics.html#probing)
* 💻 [Github](https://github.com/labmlai/labml)
0.91
t3_sl433i
1,644,057,497
pytorch
Torchrun command not found, does it need to be installed separately?
nan
1
t3_skwhz1
1,644,030,386
pytorch
Showing the pre-trained hyper-parameters of a PyTorch model
Hello guys, I'm a data science student and a newbie with PyTorch (I usually use TF). I've trained a PyTorch model that saved its last checkpoint. It's been a while and I don't remember which hyperparameters I trained the model with; is there a way to display them from the checkpoint? Thanks all.
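Edit: a sketch of how to poke around the file, in case others land here — whether the hyperparameters are recoverable depends on what was put in the checkpoint (the file name below is hypothetical):

    import torch

    ckpt = torch.load("last_checkpoint.pt", map_location="cpu")
    print(type(ckpt))

    # Checkpoints saved as dicts usually carry more than the weights:
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))  # e.g. 'model_state_dict', 'optimizer_state_dict', 'epoch'
        opt_state = ckpt.get("optimizer_state_dict")
        if opt_state is not None:
            # param_groups embed lr, betas, weight_decay, etc.
            print(opt_state["param_groups"])

If only the bare `state_dict` was saved, the training hyperparameters (learning rate, batch size, ...) simply aren't in the file.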
0.88
t3_sjgnnt
1,643,886,788
pytorch
What is the official implementation of first order MAML using the higher PyTorch library?
nan
1
t3_sixdqd
1,643,829,908
pytorch
Torch Datasets Tutorial
Hi all, I'm looking for a comprehensive tutorial on PyTorch datasets and how they're used by various projects. The documentation on them seems a bit sparse and doesn't impose much structure. There are also lots of other wrappers like `TensorDataset` and `IterableDataset` without much documentation. Thanks
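While waiting for pointers, here's my current mental model as a runnable sketch — a map-style `Dataset` is just `__len__` plus `__getitem__`, and `TensorDataset` merely zips pre-built tensors along dim 0:

    import torch
    from torch.utils.data import Dataset, TensorDataset, DataLoader

    # A map-style dataset: implement __len__ and __getitem__; the DataLoader
    # does all the batching, shuffling, and multiprocessing around it.
    class SquaresDataset(Dataset):
        def __len__(self):
            return 100

        def __getitem__(self, idx):
            x = torch.tensor([float(idx)])
            return x, x ** 2

    # TensorDataset just zips pre-built tensors along dim 0.
    xs = torch.arange(10.0).unsqueeze(1)
    ys = torch.arange(10.0) ** 2
    td = TensorDataset(xs, ys)

    for batch_x, batch_y in DataLoader(td, batch_size=4, shuffle=True):
        print(batch_x.shape, batch_y.shape)

`IterableDataset` is the streaming counterpart: you implement `__iter__` instead of `__getitem__`, for data that has no cheap random access.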
0.67
t3_siozrw
1,643,809,354
pytorch
Would making the gradient "data" by detaching them implement first order MAML using PyTorch's higher library?
nan
0.75
t3_si5xv1
1,643,750,372
pytorch
[Question] Pytorch with databases
I've been thinking about developing a tool that would help to transform the graph DB data into a Pytorch dataset and use it with a custom model. Are there some red flags that I should have in mind regarding this tight coupling between ML and DB?
1
t3_shtulo
1,643,718,593
pytorch
Tried to allocate less than there is free memory. CUDA out of memory
RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 15.90 GiB total capacity; 14.72 GiB already allocated; 81.75 MiB free; 14.83 GiB reserved in total by PyTorch) How?
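For context, the numbers in the message mostly answer the question: of the 15.90 GiB total, 14.83 GiB is already reserved by PyTorch's caching allocator, so only ~80 MiB is genuinely free and the 72 MiB request cannot find a contiguous block. A sketch of things to check (not a guaranteed fix):

    import torch

    print(torch.cuda.memory_allocated() / 2**30, "GiB allocated by tensors")
    print(torch.cuda.memory_reserved() / 2**30, "GiB reserved by the allocator")

    # Usual culprits: batch size too large, or the training loop keeping the
    # autograd graph alive across iterations:
    # total_loss += loss.item()   # good: detaches to a Python float
    # total_loss += loss          # bad: retains the whole graph every step

    torch.cuda.empty_cache()  # returns cached-but-unused blocks to the driver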
1
t3_shqlgf
1,643,706,306
pytorch
[Question] Reorder/select elements of tensor based on index tensor
Greetings, I have the following two tensors:

    input shape:  16 32 32 3
    index shape:  16 32 32 2
    output shape: 16 32 32 3

The formula for the output would be: `output[b, h, w] = input[b, index[b, h, w, 0], index[b, h, w, 1]]`. I tried to use `torch.gather`, but I was not able to express the previous assignment with it. Does anyone know how to do this in an efficient manner? Thanks!

*For context: input contains a batch of 16 elements, each a 32x32 grid of 3D points; index is a mapping from position to 3D point.*
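Edit: solved it — plain advanced indexing does this without `torch.gather`; only the batch index needs to be materialized so it broadcasts against the two index maps. A minimal sketch with the shapes from above:

    import torch

    B, H, W = 16, 32, 32
    inp = torch.randn(B, H, W, 3)
    index = torch.randint(0, H, (B, H, W, 2))

    b = torch.arange(B).view(B, 1, 1)           # broadcasts against the [B, H, W] index maps
    out = inp[b, index[..., 0], index[..., 1]]  # out[b, h, w] = inp[b, index[b, h, w, 0], index[b, h, w, 1]]
    assert out.shape == (B, H, W, 3)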
1
t3_sh96dz
1,643,655,911
pytorch
GPT2 from scratch | Video tutorial
nan
1
t3_sh6vkz
1,643,650,408
pytorch
Can anybody explain how to use the weight and pos_weight params in BCEWithLogitsLoss?
I am trying to create a multi-label classifier on an unbalanced dataset. To tackle the imbalance, someone suggested I use weights, but after applying them my model's accuracy decreased. To compute the weights I have used two formulas:

1. (total_elements) / (num_classes * label_count)
2. 1 / label_count

Suppose my dataset has 1000 entries with 2 one-hot-encoded classes, where 0 is present 700 times and 1 is present 400 times. Here total_elements is 1000, num_classes is 2, and label_count is 700 for class 0 and 400 for class 1. Before using weights my model's accuracy was around 60 percent; after using weights it dropped to 30 percent. I know I am doing something wrong with the weights, and I don't know how to calculate pos_weight either. Can somebody explain what I am doing wrong?

Info about the model that I'm working on: I am building a multi-label movie genre detector with 12 genres, using a basic convolutional network with sigmoid as the final layer.
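Edit: for anyone with the same question, here is a sketch of how I understand `pos_weight` (the numbers are made up): it is a per-class ratio of negatives to positives. Crucially, `BCEWithLogitsLoss` applies the sigmoid itself, so the model must output raw logits — if your network already ends in a sigmoid, you're effectively applying it twice, which alone can tank accuracy:

    import torch
    import torch.nn as nn

    # Hypothetical label matrix: 1000 samples x 12 genres, multi-hot encoded.
    labels = torch.randint(0, 2, (1000, 12)).float()

    pos_counts = labels.sum(dim=0)                        # how often each genre is 1
    neg_counts = labels.shape[0] - pos_counts             # how often each genre is 0
    pos_weight = neg_counts / pos_counts.clamp(min=1)     # >1 boosts rare positive genres

    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    logits = torch.randn(8, 12)                           # raw model outputs, no sigmoid!
    target = torch.randint(0, 2, (8, 12)).float()
    loss = criterion(logits, target)

Also note that re-weighting deliberately trades overall accuracy for recall on the rare classes, so a drop in plain accuracy after adding weights isn't necessarily a regression — check per-class precision/recall instead.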
1
t3_sh3tb4
1,643,642,868
pytorch
A tensor-slicing question
I'm trying to assemble a tensor based on the contents of two other tensors, like so: I have a 2D tensor called A, with shape \[I, J\], and another 2D tensor called B, with shape \[M, N\], whose elements are indices into the 1st dimension of A.I want to obtain 3D tensor C with shape \[M, N, J\] such that `C[m,n,:] == A[B[m,n],:]`. I \*could\* do this using nested for-loops to iterate over all indices in M and N, assigning the right values to C at each one, but M and N are large so this may be slow. I suspect there's some faster way of doing this using some clever slicing or some pytorch function, but I don't know what it would be. It looks a bit like somewhere one would use `torch.gather()`, but that requires all tensors to have the same number of dimensions. Can anyone help me? EDIT: I got an answer over on StackOverflow: [https://stackoverflow.com/questions/70926905/how-to-build-a-tensor-from-one-tensor-of-contents-and-another-of-indices](https://stackoverflow.com/questions/70926905/how-to-build-a-tensor-from-one-tensor-of-contents-and-another-of-indices)
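For the record, plain integer-array indexing appears to do it in one shot — a minimal sketch, no loops needed:

    import torch

    I, J, M, N = 5, 7, 3, 4
    A = torch.randn(I, J)
    B = torch.randint(0, I, (M, N))

    C = A[B]                      # advanced indexing: C[m, n, :] == A[B[m, n], :]
    assert C.shape == (M, N, J)
    assert torch.equal(C[1, 2], A[B[1, 2]])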
0.8
t3_sgnyt2
1,643,590,642
pytorch
Concatenate different datasets on different channels
Hi everyone! I'm quite new to PyTorch and I'm getting a weird problem when I try to concatenate a dataset of filtered images with the original ones ON DIFFERENT CHANNELS. First I load both with `ImageFolder()` and then use `torch.cat()` to concatenate the filtered dataset onto the "raw" dataset (on another channel, of course). But I get this error (trying one of the images as an example):

https://preview.redd.it/ybjuwim8ove81.png?width=831&format=png&auto=webp&s=257f91b899c8398218b7feac7379b19b00ec203c

`full_dataset_orig[0][0]` is one of the greyscaled images from the raw dataset and it is a [1, 126, 126] tensor; `full_dataset_aug1[0][0]` is from the filtered dataset and it has the same TYPE AND DIMENSIONS.

**Why is Python pointing out that it's a tuple?**

**Is there an easier way to concatenate two PyTorch datasets on different channels?**

Thank you in advance!
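Edit: one workaround that sidesteps the indexing confusion entirely — `ImageFolder` items are `(image, label)` tuples, so concatenating dataset *elements* directly trips over the tuple. Wrapping both datasets and doing the `torch.cat` inside `__getitem__` keeps things tidy (a sketch, assuming both folders contain the same files in the same order):

    import torch
    from torch.utils.data import Dataset

    class ChannelConcatDataset(Dataset):
        """Pairs two ImageFolder datasets sample-by-sample and stacks the
        images on the channel dimension; assumes identical ordering/labels."""
        def __init__(self, ds_orig, ds_aug):
            assert len(ds_orig) == len(ds_aug)
            self.ds_orig = ds_orig
            self.ds_aug = ds_aug

        def __len__(self):
            return len(self.ds_orig)

        def __getitem__(self, idx):
            img_o, label = self.ds_orig[idx]   # each item is an (image, label) tuple
            img_a, _ = self.ds_aug[idx]
            return torch.cat([img_o, img_a], dim=0), label  # -> [2, 126, 126]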
1
t3_sgh0mp
1,643,571,678
pytorch
Torchscript throws *args and *kwargs error while converting a model
Hi, I'm pretty new to torchscript and was trying to convert a model to JIT but it throws this error:

    torch.jit.frontend.NotSupportedError: Compiled functions can't take a variable number of arguments or use keyword-only arguments with defaults:
      File "/home/anushka/airborne-detection-starter-kit/seg_tracker/models_transformation.py", line 60
        def updated_forward(*args, **kwargs):
                            ~~~~~~~ <--- HERE
            a = (tsm(args[0], duration=duration, dilation=dilation), ) + args[1:]
            return orig_forward(*a, **kwargs)

Here's the function which is giving this error:

    def add_tsm_to_module(obj, duration, dilation=1):
        orig_forward = obj.forward

        def updated_forward(*args, **kwargs):
            a = (tsm(args[0], duration=duration, dilation=dilation), ) + args[1:]
            return orig_forward(*a, **kwargs)

        obj.forward = updated_forward
        return obj

Any help would be really appreciated. Thanks
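Edit: per the error, TorchScript simply refuses to compile `*args`/`**kwargs`, so the wrapper needs an explicit signature. A sketch assuming the wrapped `forward` takes a single tensor (extend the signature if it takes more; `tsm` is the function from my codebase above):

    def add_tsm_to_module(obj, duration, dilation=1):
        orig_forward = obj.forward

        # TorchScript cannot compile variadic functions, so spell the
        # arguments out explicitly instead of *args/**kwargs.
        def updated_forward(x):
            return orig_forward(tsm(x, duration=duration, dilation=dilation))

        obj.forward = updated_forward
        return obj

Note that monkey-patching `forward` at runtime is itself fragile under `torch.jit.script`; if this still fails, a small subclass whose `forward` applies `tsm` before calling the parent's `forward` is the more script-friendly shape.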
0.86
t3_sgbxph
1,643,557,707
pytorch
CUDA Versions on Pytorch
I'll try to make it as short as possible. I have CUDA 11.2 installed on my PC, but I downloaded the PyTorch build for CUDA 11.3 (from the official website, where they ask for your platform and CUDA version and give you an install command). When I set my device to my GPU, the model still trains on my CPU, so I figured it's probably because of the mismatched CUDA versions.

Is there any way to fix it without having to download CUDA 11.3?
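Edit: for anyone else hitting this — the pip/conda PyTorch binaries ship with their own CUDA runtime, so the system-wide CUDA 11.2 install shouldn't matter; what does matter is that the NVIDIA driver is new enough for CUDA 11.3. A quick sanity check:

    import torch

    print(torch.__version__)          # should end in +cu113 for the CUDA 11.3 wheel
    print(torch.version.cuda)         # CUDA runtime bundled with this build
    print(torch.cuda.is_available())  # False usually points at the driver, not CUDA

If `is_available()` is True and training still runs on the CPU, double-check that both the model and every input batch were moved with `.to(device)`.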
0.8
t3_sffuyy
1,643,454,054
pytorch
Training Large Datasets
Hi guys. I'm new to deep learning and until now I've used 1gb - 5gb datasets in model training. Now I have a 50gb dataset. How do you train a model with a dataset that large? It can't fit comfortably on my local computer by the way.
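The usual trick is to never load the 50 GB at once: keep the samples on disk and let a map-style `Dataset` read one item per `__getitem__`, with `DataLoader` workers streaming batches in the background. A sketch (the file layout is hypothetical):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class LazyFileDataset(Dataset):
        """Reads one sample from disk per __getitem__, so the full 50 GB
        never has to sit in RAM; DataLoader workers stream it in batches."""
        def __init__(self, file_paths):
            self.file_paths = file_paths

        def __len__(self):
            return len(self.file_paths)

        def __getitem__(self, idx):
            # One .pt file per sample is an assumption about the layout;
            # swap in np.load / PIL.Image.open as appropriate.
            return torch.load(self.file_paths[idx])

    paths = [f"data/sample_{i}.pt" for i in range(1_000_000)]  # hypothetical
    loader = DataLoader(LazyFileDataset(paths), batch_size=64, num_workers=4)

For data without cheap random access (one giant file, a remote store), `IterableDataset` is the streaming alternative.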
0.88
t3_serim1
1,643,378,575
pytorch
Difference in type between Input type and Weight type with ModuleList
Hello guys, I have a problem with a list of modules. I have coded a module called BranchRoutingModule and I would like to create a list of them. I have the following code:

    def _branch_routings(self):
        # structure = [nn.ModuleList([BranchRoutingModule(in_channels=self.in_channels) for j in range(int(pow(2, i)))]) for i in range(self.tree_height - 1)]  # [[None], [None, None]] for tree height = 3
        structure = [[None for j in range(int(pow(2, i)))] for i in range(self.tree_height - 1)]  # [[None], [None, None]] for tree height = 3
        cur = 0
        for i in range(self.tree_height - 1):
            for j in range(int(pow(2, i))):
                self.__setattr__('branch_routing_module' + str(cur), BranchRoutingModule(in_channels=self.in_channels))
                structure[i][j] = self.__getattr__('branch_routing_module' + str(cur))
                cur += 1
        return structure

I first tried using nn.ModuleList (commented out at the top) but I get the following error: "Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same". However, if I use `__setattr__` and `__getattr__`, I get no errors and my model works fine. Why is that? I don't understand why `__setattr__` and `__getattr__` fix the problem. I am using CUDA.

Thank you and regards,

Antoine
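Edit: figured it out, I think — it's not ModuleList that's broken, it's the *outer* container. In the commented-out version the outer comprehension builds a plain Python list, and plain lists are invisible to `.cuda()`/`.parameters()`, so those submodules stayed on the CPU while the input was on the GPU. `__setattr__` works because `nn.Module.__setattr__` registers each module directly. Nesting ModuleLists should behave the same (a sketch, assuming the result is assigned to an attribute of the parent module):

    def _branch_routings(self):
        # Register the outer list too: a plain Python list hides its contents
        # from .cuda()/.parameters(); only nn.ModuleList containers are tracked.
        return nn.ModuleList(
            nn.ModuleList(
                BranchRoutingModule(in_channels=self.in_channels)
                for j in range(int(pow(2, i)))
            )
            for i in range(self.tree_height - 1)
        )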
1
t3_se4l8m
1,643,306,842
pytorch
Problem calculating accuracy on multiclass classification
Hi, I am facing a problem when I try to compute accuracy on a multiclass classification problem with the Accuracy() function from torchmetrics. Here is the training function I created:

    from torchmetrics import Accuracy

    accuracy = Accuracy()

    def training(epoch, model, train_loader, optimizer, criterion):
        "Training over an epoch"
        metric_monitor = MetricMonitor()
        model.train()
        for batch in train_loader:
            images = batch['t1'][tio.DATA].cuda()
            labels = batch['label'].cuda()
            output = F.softmax(model(images), dim=0)
            loss = criterion(output, labels)
            output = output.data.max(dim=1, keepdim=True)[1]
            acc = accuracy(output, labels)
            metric_monitor.update("Loss", loss.item())
            metric_monitor.update("Accuracy", acc)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print("[Epoch: {epoch:03d}] Train | {metric_monitor}".format(epoch=epoch, metric_monitor=metric_monitor))
        return metric_monitor.metrics['Loss']['avg'], metric_monitor.metrics['Accuracy']['avg']

When I try to call this function to train the model I get the following error:

**ValueError: If preds have one dimension more than target, preds should be a float tensor.**

Does anybody know what I am doing wrong?
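Edit: solved, I believe — two things were off, if I read the error right. `torchmetrics.Accuracy` wants either float probabilities or integer class indices of the *same* rank as the target, but `output.data.max(dim=1, keepdim=True)[1]` produces an integer tensor with an extra dimension. Also, `F.softmax(..., dim=0)` normalizes over the batch rather than over the classes. A sketch of the fixed inner loop (assuming `criterion` is `nn.CrossEntropyLoss`, which wants raw logits):

    logits = model(images)                 # raw logits, shape [B, num_classes]
    loss = criterion(logits, labels)       # CrossEntropyLoss applies log-softmax itself
    probs = F.softmax(logits, dim=1)       # dim=1: normalize over classes, not the batch
    acc = accuracy(probs, labels)          # float preds with one extra dim are accepted
    # or: acc = accuracy(logits.argmax(dim=1), labels)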
0.75
t3_se1tly
1,643,299,656
pytorch
PyTorch Object Detection
I recently switched from classification to object detection for a project. Does anyone have any tutorials on how to implement

* R-CNN (not mandatory, would be helpful for me)
* Fast R-CNN (not mandatory, would be helpful for me)
* Faster R-CNN
* Mask R-CNN
* YOLOv1–v4

in pure PyTorch?
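In the meantime, if "pure PyTorch" can include torchvision, Faster R-CNN and Mask R-CNN ship pretrained on COCO, which makes a useful reference while reimplementing (a sketch; YOLO is not in torchvision):

    import torch
    import torchvision

    # Faster R-CNN / Mask R-CNN with a ResNet-50 FPN backbone, COCO weights.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    img = [torch.rand(3, 300, 400)]   # list of CHW tensors scaled to [0, 1]
    with torch.no_grad():
        preds = model(img)            # per-image dicts of boxes, labels, scores
    print(preds[0]["boxes"].shape)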
0.92
t3_sdtoka
1,643,272,118
pytorch
Is there an annotation tool for instance segmentation on iPad? [Discussion]
**Is there an annotation tool that produces multi-layer (for overlapping objects within the image), pixel-exact image annotations on iPad?**

*Background to my question:* There are a lot of annotation tools for Linux and Windows (e.g. the ones listed here: https://www.v7labs.com/blog/best-image-annotation-tools or here: https://humansintheloop.org/10-of-the-best-open-source-annotation-tools-for-computer-vision-2021/). I haven't tried all of them, but none of them seem to be available for the iPad. I use the iPad to make image annotations because it is faster for me to annotate with a stylus than with a mouse on the PC (and I can annotate when I am not in the office). Further, most annotation tools feel clunky and overloaded with bureaucracy (this is only my subjective opinion). I am currently using Adobe Fresco (its only downsides being that it's not open source and a little expensive), which works well in combination with a small script I wrote to convert the .psd files into torch tensors. My workflow with Fresco is fast and the annotations are very precise. However, I was bashed by a reviewer when submitting a paper that mentioned the annotations were produced with Fresco. The paper was rejected because the reviewer thought annotating images with Fresco was ridiculous and that there are supposedly much better alternatives (which he did not name)... and which I am still too dumb to find.
1
t3_sdlitl
1,643,245,053
pytorch
PyTorch/LibTorch in Unreal Engine 4
Hi, most of the online discussion I can find about integrating LibTorch into Unreal Engine 4 is quite old. I'd love to be able to run a CNN inside UE4, but it seems increasingly unlikely this can be done, given the lack of available resources. Many thanks for any help you can give, it is greatly appreciated.
1
t3_scgruu
1,643,125,897
pytorch
Help changing mat1 and mat2 shapes for CNN
Hi, I'm making a CNN to classify images for a project, and given that I'm quite new to this type of stuff, I figured using the image classifier example from the PyTorch docs would be a good start. However, I wanted to use the Caltech256 dataset instead of CIFAR10, as it is more complex and useful for this scenario.

As the datasets use different image sizes, the NN class no longer functions properly: it is being passed tensors of the wrong shape. After looking online, I believe the line at fault is `self.fc1 = nn.Linear(16 * 5 * 5, 120)`, as those values are no longer correct, which gives me the error `RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x44944 and 400x120)`. Can anyone help me get this solved? I've tried to work out what those numbers mean and what values I need in my case, but I haven't got anywhere. Sorry if this is a silly or obvious question!! [Here's a link with all of my project code if that helps with things](https://github.com/legendgamer800/CaltechModel/blob/main/main.py). Thank you!!
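Edit: posting what I found for posterity. The error says the flattened conv output has 44944 features per image while `fc1` expects 400, and 44944 = 16 × 53 × 53 — exactly what the tutorial's two conv/pool stages produce from a 224×224 input (224 → 220 → 110 → 106 → 53). So, assuming you kept the tutorial's conv layers and resize to 224×224:

    import torch
    import torch.nn as nn

    conv = nn.Sequential(nn.Conv2d(3, 6, 5), nn.MaxPool2d(2, 2),
                         nn.Conv2d(6, 16, 5), nn.MaxPool2d(2, 2))
    x = torch.randn(4, 3, 224, 224)
    print(conv(x).flatten(1).shape)     # torch.Size([4, 44944])

    fc1 = nn.Linear(16 * 53 * 53, 120)  # matches the 44944 flattened features
    # or sidestep the arithmetic entirely (PyTorch >= 1.8):
    lazy_fc1 = nn.LazyLinear(120)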
0.8
t3_sb785o
1,642,980,654
pytorch
Help installing PyTorch on macOS Monterey
[FIXED, INSTALLED PYTORCH WITH ANACONDA VERSION 3.8 INSTEAD OF 3.10] Hey y'all, I am trying to install and set up an environment for PyTorch on Mac. I am fairly new to most technical interfaces, pardon me if this is a silly question!

https://preview.redd.it/uvvlolk9agd81.png?width=1455&format=png&auto=webp&s=06fc938f40e62847017ec6f43363db872bc7d681

I am troubleshooting and I am at this point: towards the bottom it seems that Python is incompatible with bzip, and torchvision conflicts with ffmpeg and bzip2. Does anyone know how I should proceed? If anyone has any resources to help, it would be greatly appreciated. Thanks everyone.
0.88
t3_savagt
1,642,949,506
pytorch
Help understanding the RNN tutorial from the PyTorch site
I'm following along with the RNN tutorial found on [PyTorch's tutorial site](https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html) and have tried implementing it myself, but I run into an error which I don't understand and can't google-fu my way out of. The tutorial lists no declaration of an optimizer, and in the train() function we find this:

    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)

However, when I run this I get a NoneType error, basically saying that rnn.parameters() returns nothing. The tutorial mentions nothing regarding tunable parameters, so how is this training of the network supposed to work?
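Edit: in case others hit this — the `NoneType` is almost certainly `p.grad`, not `parameters()`: `.grad` stays `None` until `loss.backward()` has run, and in the tutorial that manual update loop only appears *after* the backward call inside `train()`. A sketch of the tutorial's full step for context (`rnn`, `criterion`, and `learning_rate` defined as in the tutorial):

    def train(category_tensor, line_tensor):
        hidden = rnn.initHidden()
        rnn.zero_grad()

        for i in range(line_tensor.size(0)):
            output, hidden = rnn(line_tensor[i], hidden)

        loss = criterion(output, category_tensor)
        loss.backward()  # populates p.grad; without this, p.grad is None

        # Manual SGD step -- this is why no optimizer is declared:
        for p in rnn.parameters():
            p.data.add_(p.grad.data, alpha=-learning_rate)

        return output, loss.item()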
0.89
t3_sacq54
1,642,887,230
pytorch
Best way to train with independent inputs and outputs?
Hi there, I'm just starting with ML and I have a problem like this: I'm training an ML skeletal mesh deformer whose inputs are skeleton bone angles and whose outputs are deformations to apply to vertices. Some (not all) bones/vertices are totally independent; e.g. toes should have no influence on the fingers. I train on a set of random poses, and sometimes the network decides that when I move a toe, the skin on a finger should also move. Is there a way to construct the training set to make the network better understand that some parts are independent?
1
t3_sa1pkl
1,642,855,333
pytorch
My first training epoch takes about 1 hour, whereas every epoch after that takes about 25 minutes. I'm using AMP, gradient accumulation, grad clipping, torch.backends.cudnn.benchmark=True, the Adam optimizer, a scheduler with warmup, and ResNet+ArcFace. Is setting benchmark=True the main reason for this?
nan
0.86
t3_s9zttf
1,642,848,002