Dataset columns:
- sub: string (4 classes)
- title: string (3 to 304 chars)
- selftext: string (3 to 30k chars)
- upvote_ratio: float64 (0.07 to 1)
- id: string (9 chars)
- created_utc: float64 (1.6B to 1.65B)
pytorch
Converting a trained model to a callable function object with different function signature (and datatype)
I’m usually not the one to post “syntax, please help!” questions on here, but this one has me stumped. I parametrized a derivative function with a PyTorch nn.Module object and now I want to pass it to a scipy function (`solve_ivp`). The problem is that `solve_ivp` expects a function of two parameters (t and y) while I only use one, and it passes numpy arrays and expects them back. My first thought was an anonymous function, like this:

    ode_scipy = lambda t, y: func(torch.from_numpy(y).to(device)).cpu().detach().numpy()

where `func` is my trained network. This gave the error `TypeError: forward() missing 1 required positional argument: 'y'`, which doesn’t make sense to me because I am definitely passing a parameter. As far as I know, the `forward` function only takes one parameter. Any thoughts? I’m not against ditching the anonymous function altogether if someone has a smarter way to do this.
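One thing worth checking (a guess based on the error, not a verified diagnosis): modules written as neural-ODE derivative functions are often defined with `forward(self, t, y)`, i.e. two arguments, which would produce exactly this "missing 1 required positional argument: 'y'" message when called with one. A minimal sketch of a wrapper under that assumption:

```
import numpy as np
import torch
from scipy.integrate import solve_ivp

def ode_scipy(t, y):
    t_t = torch.tensor(t, dtype=torch.float32, device=device)
    y_t = torch.from_numpy(y).float().to(device)
    with torch.no_grad():                        # inference only, no graph needed
        dydt = func(t_t, y_t)                    # pass BOTH t and y to forward
    return dydt.cpu().numpy().astype(np.float64)

# sol = solve_ivp(ode_scipy, (t0, t1), y0)      # t0, t1, y0 as in your problem
```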
1
t3_nxuo0m
1,623,458,954
pytorch
How PyTorch Is Challenging TensorFlow Lately
nan
0.91
t3_nx9vkx
1,623,395,683
pytorch
CNN - Apple Classification
I have the following problem statement in which I only need to predict whether a given image is an apple or not. For training, only 8 images are provided, with the following details:

1. apple_1 image - 2400x1889 PNG
2. apple_2 image - 641x618 PNG
3. apple_3 image - 1000x1001 PNG
4. apple_4 image - 500x500 PNG, contains a sticker on top of the fruit
5. apple_5 image - 2400x1889 PNG
6. apple_6 image - 1000x1000 PNG
7. apple_7 image - 253x199 JPG
8. apple_8 image - 253x199 JPG

I am thinking about using transfer learning: either VGG or ResNet-18/34/50. Maybe ResNet is overkill for this problem statement? How do I deal with such varying image sizes and different file extensions (PNG, JPG)? Any online code tutorial will be helpful. Thanks!
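A minimal sketch of the transfer-learning route, assuming the images are arranged as data/apple and data/not_apple (ImageFolder needs a negative class too). Resize deals with the varying dimensions, and PIL reads PNG and JPG alike:

```
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfms = transforms.Compose([
    transforms.Resize((224, 224)),             # one size for every input image
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
ds = datasets.ImageFolder('data', transform=tfms)
loader = torch.utils.data.DataLoader(ds, batch_size=4, shuffle=True)

model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                    # freeze the backbone; 8 images is tiny
model.fc = nn.Linear(model.fc.in_features, 2) # new trainable classification head
```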
0.75
t3_nx8a95
1,623,389,359
pytorch
How do you save/load models for later use in your PyTorch workflow?
A persistent problem for me is that when I am still developing a model, I often need to do lots of tweaking of activations, dense layer size/number, etc., and it’s a common headache to load a model from a few days back and try to evaluate it, only to see that I tweaked something and now it won’t load right.

I use torch.save(self.state_dict(), …) and self.load_state_dict(torch.load(…)) for this, since sometimes I’ll run little tests on CPU, sometimes on GPU(s), and this has worked the best for me across all devices.

I end up having to either have 1) tons of command line arguments or 2) tons of separate files calling different model parameters if I want the same training architecture for all models (which I really do).

Any advice on how to manage using a large set of varied models? Is there a smart way I’m not seeing for managing how to load or save models where I don’t need to worry about model tweaks during testing?
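One pattern that helps (a minimal sketch, not a standard API): save the constructor arguments in the same file as the weights, so every checkpoint can rebuild exactly the architecture it was trained with:

```
import torch

def save_checkpoint(model, config, path):
    torch.save({'config': config, 'state_dict': model.state_dict()}, path)

def load_checkpoint(model_cls, path, device='cpu'):
    ckpt = torch.load(path, map_location=device)
    model = model_cls(**ckpt['config'])        # rebuild with the saved hyperparameters
    model.load_state_dict(ckpt['state_dict'])
    return model.to(device)

# save_checkpoint(net, {'hidden': 256, 'layers': 4}, 'run42.pt')
# net = load_checkpoint(MyNet, 'run42.pt')
```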
1
t3_nvg52f
1,623,190,480
pytorch
Minimum requirements for loading gpt2-xl
Hey! Quick question. First of all, sorry if this is not the right place for this question; I'm a beginner. I'm wondering what the minimum requirements would be for loading the [gpt2-xl](https://huggingface.co/gpt2-xl) (6 GB) PyTorch model. RAM is the key thing here, I guess. Note that I am not talking about fine-tuning, but rather just loading it. I am using [this](https://github.com/graykode/gpt-2-Pytorch) repo to load the [large](https://huggingface.co/gpt2-large/tree/main) (3 GB) version via Google Colab, and it works after changing a few parameters (Google Colab has 13 GB RAM). But when trying to load the xl into RAM, the python process is just killed. For example, would this VPS configuration be enough? **6 vCPU Cores** **16 GB RAM** **400 GB SSD** **400 Mbit/s Port** **thanks!!**
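A rough back-of-the-envelope estimate (my own reasoning, assuming fp32 weights and that torch.load materializes the whole checkpoint dict before the module copies it in):

```
n_params = 1_500_000_000              # gpt2-xl is roughly 1.5B parameters
weights_gb = n_params * 4 / 1024**3   # 4 bytes/param in fp32 -> ~5.6 GB resident
peak_gb = 2 * weights_gb              # checkpoint dict + module copy -> ~11 GB peak
print(f"~{weights_gb:.1f} GB resident, ~{peak_gb:.1f} GB peak while loading")
```

That would explain why 13 GB of Colab RAM dies on the xl, and suggests 16 GB should just fit for loading and inference.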
1
t3_nv2ta4
1,623,154,968
pytorch
Variant Size Input Image to Convolution Layers
Many problems in computer vision have images as inputs, but the region of interest (ROI) is not square or orthogonal; it's a polygon. As a result, the image the computer "sees" is an orthogonal image with "0" outside the ROI. An example is a plaque in a carotid ultrasound. Moreover, the dimensions of these images are not standard. How do you guys approach these problems? I will start with some of my ideas:

* zero padding to a maximum height N1 and maximum width N2, so the input to the convolution layer is N1 x N2 (too many "0"s)
* resize the image to a standard size M1 x M2 (loses information)
* apply the above but with N1=N2 or M1=M2

Note that, especially in medical applications, any loss of information might be crucial.
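A minimal sketch of the first idea, but padding per batch instead of to a global maximum, which keeps the number of "0"s down (my own variation; assumes the dataset yields (image, label) pairs with images shaped (C, H, W)):

```
import torch
import torch.nn.functional as F

def pad_collate(batch):
    # zero-pad every image in the batch to the batch's max height/width
    images, labels = zip(*batch)
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    padded = [F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1]))
              for img in images]
    return torch.stack(padded), torch.tensor(labels)

# loader = DataLoader(dataset, batch_size=8, collate_fn=pad_collate)
```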
1
t3_num4c2
1,623,096,299
pytorch
Pay Attention to MLPs - Annotated implementation
[https://nn.labml.ai/transformers/gmlp/index.html](https://nn.labml.ai/transformers/gmlp/index.html)

gMLP uses Multilayer Perceptrons (MLP) with gating instead of attention. It does pretty well compared to BERT on NLP tasks and achieves the same accuracy as ViT on vision tasks.

* [Github](https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/transformers/gmlp)
* [Paper](https://arxiv.org/abs/2105.08050)
1
t3_nud24p
1,623,073,926
pytorch
Create a custom audio PyTorch dataset using torchaudio
I published a new tutorial in my "Pytorch for Audio + Music Processing" series called "Custom audio PyTorch dataset with torchaudio".

In the video, you can learn how to create a custom audio dataset with PyTorch, loading audio files with torchaudio. In the process, you’ll also learn basic I/O functions in torchaudio.

This video is part of the “PyTorch for Audio and Music Processing” series, which aims to teach you how to use PyTorch and torchaudio for audio-based Deep Learning projects.

Video: [https://www.youtube.com/watch?v=88FFnqt5MNI&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=4](https://www.youtube.com/watch?v=88FFnqt5MNI&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=4)
1
t3_nuam0c
1,623,067,044
pytorch
Forward pass computation for GDAS NAS coding
For [https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L143-L169](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L143-L169), how do I do the forward pass training computation for the architecture `weights` of each and every edge?

[https://github.com/D-X-Y/AutoDL-Projects/issues/99#issuecomment-835802887](https://github.com/D-X-Y/AutoDL-Projects/issues/99#issuecomment-835802887)

https://preview.redd.it/ei8qje4cqr371.png?width=1920&format=png&auto=webp&s=efc30910d6664081e05a6fc52e7ae7e8862bdfa6
1
t3_nu3jl6
1,623,038,975
pytorch
Convergence Issues with autograd
I am converting a tensorflow custom training loop to PyTorch, but autograd is having issues with convergence, while GradientTape does not. What am I doing wrong?

### with PyTorch

```
def train_fn(X, Y, epsilon=5, epochs=1000):
    b = ((-epsilon - epsilon) * torch.rand(X.shape) + epsilon).clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([b], lr=5, betas=(0.1, 0.1))
    for epoch in range(epochs):
        x = box(X + b)
        loss = loss_fn(x, Y)
        loss.backward()
        print(f'{epoch} :: {loss}')
        opt.step()
    return x, b
```

### with TensorFlow

```
def train_fn(X, Y, epsilon=5, epochs=1000):
    b = tf.Variable(np.random.uniform(-epsilon, epsilon, X.shape).astype('float32'))
    opt = tf.keras.optimizers.Adam(learning_rate=5, beta_1=0.1, beta_2=0.1)
    for epoch in range(epochs):
        with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad:
            x = box(X + b)
            loss = loss_fn(x, Y)
        print(f'{epoch} :: {loss}')
        opt.minimize(loss, [b], tape=grad)
    return x, b
```
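One likely culprit in the PyTorch version (an observation, not a tested fix): PyTorch accumulates gradients across backward() calls unless they are zeroed, so each step here applies the sum of all previous gradients. A sketch of the corrected loop body:

```
for epoch in range(epochs):
    opt.zero_grad()          # clear the gradient accumulated on b last iteration
    x = box(X + b)
    loss = loss_fn(x, Y)
    loss.backward()
    print(f'{epoch} :: {loss}')
    opt.step()
```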
1
t3_ntt98s
1,623,007,110
pytorch
[DL] Validation step: metrics remain unchanged after each epoch (PyTorch Lightning)
I’m running a DL model with PyTorch Lightning to try and classify some data (2 categories: 1/0). I don’t understand why the validation score remains identical after each epoch.

Batch size = 1024
Train data = 900_000 rows
Val data = 100_000 rows

    ...
    self.layers = nn.Sequential(
        nn.Linear(100, 1024*16),
        nn.LeakyReLU(),
        nn.Linear(1024*16, 1024*8),
        nn.LeakyReLU(),
        nn.Linear(1024*8, 1024*8),
        nn.LeakyReLU(),
        nn.Linear(1024*8, 1024*8),
        nn.LeakyReLU(),
        nn.Linear(1024*8, 1024*4),
        nn.LeakyReLU(),
        nn.Linear(1024*4, 1024*4),
        nn.LeakyReLU(),
        nn.Linear(1024*4, 256),
        nn.LeakyReLU(),
        nn.Linear(256, 1),
        nn.Sigmoid(),
    )

    def forward(self, x):
        return self.layers(x.float())

    def training_step(self, batch, batch_idx):
        x, y = batch
        preds = self.layers(x.float())
        loss = self.criterion(preds, y.float())  # nn.BCELoss()
        acc = FM.accuracy(preds > 0.5, y)
        metrics = {'train_acc': acc.item(), 'train_loss': loss.item()}
        self.log_dict(metrics)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x.float())
        loss = self.criterion(preds, y.float())  # nn.BCELoss()
        acc = FM.accuracy(preds > 0.5, y)
        metrics = {'val_acc': acc.item(), 'val_loss': loss.item()}
        self.log_dict(metrics)
        return metrics

The val_loss remains stable at 48.79 after each and every epoch (tested for up to 10 epochs; the same is true for val_acc, which doesn’t change), which is weird. I would expect some slight variation even if the model doesn’t have much to learn from the data. At least some overfitting should be possible (the model has 300 million+ parameters in total). However, the train_loss does vary from batch to batch:

https://preview.redd.it/zdbf4x4qio371.png?width=355&format=png&auto=webp&s=02a35bd2f3061cc6ab9397469272706e932759cc

Am I missing something? Why doesn't the validation loss change?

*Edit* if anyone wants to try this, here is the Notebook and data file (.pt) with 110k rows in "x" and "y": [https://we.tl/t-VfV34UqXK5](https://we.tl/t-VfV34UqXK5)
0.75
t3_ntqrju
1,623,000,288
pytorch
Transforming an AI Project Into an Application
Hello everyone. I have completed a reinforcement learning project using Python and PyTorch for the neural networks. Basically, I have an AI agent that uses MCTS and PyTorch neural networks to play a game (Mancala), but it only works in my terminal. How can I turn it into a mobile application or a website? I would like people to play against my AI agent on a website or in a mobile app. Any help is appreciated. I just want to know whether it is possible and what the ways to do it are.
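A minimal sketch of the website route (hypothetical names; assumes the agent exposes something like agent.choose_move(board), so adapt it to the actual interface): wrap the agent in a small HTTP API, and a website or mobile app can then POST the game state to it:

```
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/move', methods=['POST'])
def move():
    board = request.get_json()['board']   # e.g. the list of pit counts
    pit = agent.choose_move(board)        # MCTS + network pick a move
    return jsonify({'move': pit})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```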
1
t3_nsd65m
1,622,834,582
pytorch
I made a bunch of Vision Models with PyTorch and deployed it over Heroku :)
nan
0.85
t3_nsbihc
1,622,830,156
pytorch
Why Pytorch chose C++ backend instead of Rust?
nan
0.29
t3_ns53d3
1,622,813,195
pytorch
How to reduce pytorch download size?
nan
1
t3_nroboj
1,622,755,232
pytorch
Checking errors in Multi-class 3D segmentation.
I am performing 3D multi-class segmentation of medical images (T1w MR), but U-Net with MSDL (multi-sourced Dice loss) never reaches beyond 74% dice, and the performance is not satisfactory.

1. To check whether there are any bugs/errors in the code, I ran training with one single data point, using that same data point as the validation set, to overfit the U-Net. After training for 200 epochs, both the train and validation dice scores seem to reach 80% (*I expected them to be 100%*), and the validation loss curve follows the training curve very closely (first image).
2. With the same loss, learning rate, and weight initialization, I swapped in a validation set with completely different image patches. The validation curves seem to behave in a random way (*as they should*), and for some reason the dice goes beyond 1 on train samples (all samples are the same image/mask) (second image). The dice overshoot may be due to the MSDL I have used (*which doesn't take the background into account*).

[Fig. 1. ](https://preview.redd.it/9v3sa9p1p3371.png?width=4000&format=png&auto=webp&s=4c2675e467aa08ae6cb85ecd0023a61b74df82aa)

[Fig. 2. ](https://preview.redd.it/p6eww0d4p3371.png?width=4000&format=png&auto=webp&s=ae5178e0aa7a1a54070d606159bfd23df09cf3a7)

From these tests' results, is it confirmed that the current framework works? Are there any other checks that need to be performed? If the setup is good, shouldn't it overfit in the first case, with the dice metric reaching nearly 1?
1
t3_nrlgo4
1,622,747,827
pytorch
Attention Weights in the Encoder Layer
I use multiple [TransformerEncoderLayers](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) on input sequences for self-attention. Since the size of the sequences differs, I use src_key_padding_mask:

    x = some input
    mask = give_mask(x)
    for encoderlayer in self.encoderlayers:
        x = encoderlayer(x, src_key_padding_mask=mask)

After training, I extracted the attention weights of each layer. Here I have two questions:

1. Do my attention weights look right? My picture shows the weights of one layer (the others look similar). The sequence length here is nine, so I expected a squared shape of 9x9.
2. How can I put the weights of multiple, stacked layers together? Just add them up to one weight matrix?

Thanks in advance

https://preview.redd.it/bd54f1lfo2371.png?width=380&format=png&auto=webp&s=e72f3c9193a8f69e47a427bfd88b7a95fc66a11d
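On question 2: one published recipe is attention rollout (Abnar & Zuidema, 2020), which multiplies the per-layer matrices rather than adding them, after mixing in the residual connection. A sketch:

```
import torch

def attention_rollout(attn_per_layer):
    # attn_per_layer: list of (seq_len, seq_len) attention matrices, first layer first
    n = attn_per_layer[0].size(-1)
    rollout = torch.eye(n)
    for a in attn_per_layer:
        a = 0.5 * a + 0.5 * torch.eye(n)      # account for the residual connection
        a = a / a.sum(dim=-1, keepdim=True)   # re-normalize the rows
        rollout = a @ rollout                 # propagate attention through the stack
    return rollout
```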
0.83
t3_nrgjtp
1,622,735,083
pytorch
PyTorch and Binary Classification
I recently implemented some PyTorch models (CNNs) for a binary classification problem, and then I asked myself whether the output should be 1 value (True/False thresholded at 0.5) or 2 (Class 1/Class 2). I found both in the literature, but I have to ask: what do you use, and why? Let me start first: I use 2 outputs because I want the probabilities of each class after applying a softmax function to the output. Moreover, I find a 2-output model more generalised, making it easier to apply a similar model to another problem. Note that I write my models like this: `model = <model>(img_width=28, img_height=28, in_channels=1, num_classes=2)`
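For reference, the two conventions side by side (a sketch; feat_dim stands in for whatever the last feature layer produces). For binary problems they are effectively equivalent, but option B generalizes directly to more classes, which matches the argument above:

```
import torch.nn as nn

feat_dim = 512                      # placeholder feature size

# Option A: 1 output + BCEWithLogitsLoss (sigmoid folded into the loss)
head_a = nn.Linear(feat_dim, 1)
loss_a = nn.BCEWithLogitsLoss()

# Option B: 2 outputs + CrossEntropyLoss (log-softmax folded into the loss)
head_b = nn.Linear(feat_dim, 2)
loss_b = nn.CrossEntropyLoss()
```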
1
t3_nrc0ta
1,622,722,485
pytorch
Annotated implementation of Attention Free Transformer (AFT)
This is a PyTorch implementation of the paper "An Attention Free Transformer" with side-by-side notes. [https://nn.labml.ai/transformers/aft/index.html](https://nn.labml.ai/transformers/aft/index.html) Attention Free Transformer (AFT) replaces dot-product self-attention with a new operation that has lower memory complexity. They also introduce AFT-local and AFT-conv. * [Github](https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/transformers/aft) * [Paper](https://arxiv.org/abs/2105.14103) * [Twitter Thread](https://twitter.com/labmlai/status/1400126844997750785)
0.95
t3_nqrsyh
1,622,656,376
pytorch
Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
Hello - I built a model (for ANPR) using TensorFlow and EasyOCR. Over the past week or so, getting TensorFlow to install on the Jetson Nano has been next to impossible. Tons of issues with it (some are documented), and overall I found only one person who was able to get it running well, and it took them over 50 hrs to install on the Jetson Nano. Maybe I should try PyTorch instead of TensorFlow on the Jetson Nano? I read on the NVIDIA forums that it works better with the Jetson Nano, but I am not completely sure why (I can explore, but thought someone here might know). This all said, there has to be a better way to get a TensorFlow model to run on a Jetson Nano. Is this where ONNX comes in? (If so, any ideas/resources you might be able to point me to that show how?) Any/all thoughts/help are much appreciated. Thanks!
1
t3_nqmo3h
1,622,643,009
pytorch
Quantization in pytorch
I tried to load a model onto CPU (map_location was set to cpu) that had been trained on GPU, and then I got an error about a parameter size mismatch that I couldn't get rid of for a long time. I was able to remove it by adding:

    torch.quantization.prepare_qat(net, inplace=True)
    model = torch.quantization.convert(model.eval(), inplace=False)

After that, the model loaded successfully onto CPU and works. I've been through the pytorch documentation but couldn't understand what exactly was happening. Can someone help me understand it, please?
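My reading of it (an interpretation, not from the docs verbatim): convert() swaps the float layers for quantized ones whose state_dict holds int8 weights plus scale/zero-point tensors, so a checkpoint saved after conversion only matches a module that has been prepared and converted the same way; that is why the parameter sizes mismatched until those two steps were reproduced. A sketch of the usual order (the qconfig/backend line is an assumption):

```
import torch

net = Net()                                   # same architecture as at training time
net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(net, inplace=True)              # insert fake-quant observers
model = torch.quantization.convert(net.eval(), inplace=False)  # swap in quantized ops
model.load_state_dict(torch.load('ckpt.pth', map_location='cpu'))
```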
1
t3_nqi05c
1,622,626,946
pytorch
Anyone use the torchdiffeq neural ODE library?
I’ve started using this library recently, and it took me a while to learn the tricks of working with it. If anyone else has experience with this library or uses it now, I’d love to do a knowledge/code swap or keep in touch to have someone to bounce ideas off of. In my research group, I’m the only one who uses this library, and only a couple others do anything related to deep learning. I’d love to share some ideas with someone!
0.84
t3_npx0to
1,622,562,362
pytorch
GPU out of memory
I am using PyTorch to build some CNN models. My dataset is some custom medical images around 200 x 200. However, my 3070 8GB GPU runs out of memory every time. I tried to use `.detach()` after each batch but the problem still appears. I attach my code:

    def training(epoch, model, data_loader):
        model.train()
        running_loss = 0.0
        running_correct = 0
        for batch_idx, (data, target) in enumerate(data_loader):
            if IS_CUDA:
                data, target = data.cuda(), target.cuda()
            data, target = Variable(data, False), Variable(target)
            optimizer.zero_grad()
            output = model(data.float())
            loss = criterion(output, target)
            running_loss += criterion(output, target)
            preds = output.data.max(dim=1, keepdim=True)[1]
            running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
            #data.detach()
            #target.detach()
            loss.backward()
            optimizer.step()
        accuracy = 100. * running_correct / len(data_loader.dataset)
        print(f'> training loss is {loss:{5}.{4}} and training accuracy is {accuracy:{5}.{4}} %')
        return loss, accuracy

    #%% Testing
    def validation(epoch, model, data_loader):
        model.eval()
        running_loss = 0.0
        running_correct = 0
        for batch_idx, (data, target) in enumerate(data_loader):
            if IS_CUDA:
                data, target = data.cuda(), target.cuda()
            data, target = Variable(data, True), Variable(target)
            output = model(data.float())
            loss = criterion(output, target)
            running_loss += criterion(output, target)
            preds = output.data.max(dim=1, keepdim=True)[1]
            running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
            #data.detach()
            #target.detach()
        loss = running_loss / len(data_loader.dataset)
        accuracy = 100. * running_correct / len(data_loader.dataset)
        print(f'> validation loss is {loss:{5}.{4}} and validation accuracy is {accuracy:{5}.{4}} %')
        return loss, accuracy

What should I do to train my model on the GPU for faster computation? Images shall NOT be resized to a lower height/width due to loss of information. Batch size is kept to a minimum. Epochs will be much larger in the main training run. This is just a test on 40 images out of 150, which will become 2000+ after augmentation.
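One pattern in this code that commonly causes exactly this (a hypothesis, not a certain diagnosis): `running_loss += criterion(output, target)` accumulates a tensor that is still attached to the autograd graph, so every batch's graph is kept alive across the epoch. Accumulating a plain float frees it, and the validation pass can also run under torch.no_grad():

```
loss = criterion(output, target)
running_loss += loss.item()      # .item() yields a detached Python float

# in validation:
with torch.no_grad():
    output = model(data.float())
```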
1
t3_npu26a
1,622,554,114
pytorch
Hey guys could you help me solve this problem ?
I have a dataset with over 1000 classes and just 10 images in each class. In the end, I have to predict whether any new image is part of the dataset or not. I tried implementing my own neural network and tried Siamese networks. My model goes up to 72% val accuracy but shows wrong outputs. I've been on this for days. Could you guys help me out?
0.33
t3_np28qh
1,622,463,981
pytorch
Text to Image generation using path file
I trained a text-to-image generation model based on [https://github.com/aelnouby/Text-to-Image-Synthesis](https://github.com/aelnouby/Text-to-Image-Synthesis). Now I have 2 checkpoint (.pth) files (one for the generator, another for the discriminator). How do I generate images using these files?
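A hypothetical sketch of the sampling side (the real class name and inputs depend on that repo's generator definition, so treat every name here as a placeholder; only the generator checkpoint is needed to synthesize images):

```
import torch

gen = Generator()                                  # must match the trained architecture
gen.load_state_dict(torch.load('gen.pth', map_location='cpu'))
gen.eval()
with torch.no_grad():
    fake_images = gen(text_embedding, noise)       # repo-specific inputs
```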
1
t3_np14e6
1,622,460,163
pytorch
Best book to learn Pytorch?
Hello, I know a bit of Pytorch but I would like to become an expert. Is there any book that dives very deep into Pytorch? Topics I would like to know more about:

+ A bit of review of the basics, e.g. data types in Pytorch.
+ A review of building trivial neural networks (I have experience with MLPs and CNNs, not so much with RNNs such as LSTMs).
+ Building nontrivial neural networks, for example with strange connections.
+ Autograd. How does Autograd work on non-trivial neural networks?
+ Best practices.
+ Pytorch distributed.
+ A bit of explanation of CUDA and how it is used in Pytorch.
+ Emphasis on code.

Thanks.
1
t3_nozzl2
1,622,455,915
pytorch
Pytorch machine learning deployment on Heroku getting R14 Memory Quota Exceeded warnings
I created a machine learning and object recognition web application. The Heroku memory quota is 512mb; my application fluctuates between 700mb and 950mb. Wondering what the best approach to reducing application memory usage is? Dependencies listed below.

    absl-py==0.12.0
    astroid==2.5.3
    attrs==20.3.0
    backcall==0.2.0
    cachetools==4.2.1
    certifi==2020.12.5
    chardet==4.0.0
    click==7.1.2
    clickclick==20.10.2
    cycler==0.10.0
    Cython==0.29.23
    decorator==5.0.5
    Flask==1.1.2
    google-auth==1.28.1
    google-auth-oauthlib==0.4.4
    grpcio==1.37.0
    idna==2.10
    inflection==0.5.1
    ipython==7.22.0
    ipython-genutils==0.2.0
    isodate==0.6.0
    isort==5.8.0
    itsdangerous==1.1.0
    jedi==0.18.0
    Jinja2==2.11.3
    jsonschema==3.2.0
    kiwisolver==1.3.1
    lazy-object-proxy==1.6.0
    Markdown==3.3.4
    MarkupSafe==1.1.1
    matplotlib==3.4.1
    mccabe==0.6.1
    numpy==1.20.2
    oauthlib==3.1.0
    openapi-schema-validator==0.1.5
    openapi-spec-validator==0.3.0
    opencv-python==4.5.1.48
    pandas==1.2.4
    parso==0.8.2
    pexpect==4.8.0
    pickleshare==0.7.5
    Pillow==8.2.0
    prompt-toolkit==3.0.18
    protobuf==3.15.8
    ptyprocess==0.7.0
    pyasn1==0.4.8
    pyasn1-modules==0.2.8
    Pygments==2.8.1
    pylint==2.7.4
    pyparsing==2.4.7
    pyrsistent==0.17.3
    python-dateutil==2.8.1
    pytz==2021.1
    PyYAML==5.4.1
    requests==2.25.1
    requests-oauthlib==1.3.0
    rsa==4.7.2
    scipy==1.6.2
    seaborn==0.11.1
    six==1.15.0
    tensorboard==2.4.1
    tensorboard-plugin-wit==1.8.0
    thop==0.0.31.post2005241907
    toml==0.10.2
    torch==1.8.1
    torchaudio==0.8.1
    torchvision==0.9.1
    tqdm==4.60.0
    traitlets==5.0.5
    typing-extensions==3.7.4.3
    urllib3==1.26.4
    watchdog==2.0.3
    wcwidth==0.2.5
    Werkzeug==1.0.1
    wrapt==1.12.1
1
t3_nofx6u
1,622,395,537
pytorch
How to make good looking network diagrams for publication?
I’ve found a few tools online that make pretty sharp-looking flow charts, but they are typically geared towards convolutional models. I just have a few dense layers, and I want to make a good-looking diagram showing what the model looks like; it feels incomplete just describing it. Any tips/packages that are useful?
1
t3_nnxbaj
1,622,325,735
pytorch
Tutorials/walkthroughs of torchtext 0.9 anywhere?
Saw a bunch of stuff from torchtext that is now in legacy, and the documentation for 0.9 is pretty poor, especially the example code. Any help with resources would be great.
0.84
t3_nnvyrt
1,622,321,170
pytorch
Can someone please help me navigate through the following error message I am getting trying to install pytorch.
I got the following command from PyTorch's website:

    conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

I get the following error:

    Collecting package metadata (current_repodata.json): done
    Solving environment: failed with initial frozen solve. Retrying with flexible solve.
    Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
    Collecting package metadata (repodata.json): done
    Solving environment: failed with initial frozen solve. Retrying with flexible solve.
    Solving environment: | Found conflicts! Looking for incompatible packages.
    This can take several minutes.  Press CTRL-C to abort.
    failed

    UnsatisfiableError: The following specifications were found to be incompatible with each other:

    Output in format: Requested package -> Available versions

    Package cudatoolkit conflicts for:
    cudatoolkit=11.1
    pytorch -> cudatoolkit[version='10.0.*|8.*|>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=9.0,<9.1|>=8.0,<8.1|9.*|>=10.1.243,<10.2.0a0|>=9.2,<9.3.0a0|>=10.0.130,<10.1.0a0|9.2.*|>=9.0,<9.1.0a0|>=8.0,<8.1.0a0|9.0.*|8.0.*|7.5.*']
    torchvision -> pytorch==1.4.0 -> cudatoolkit[version='10.0.*|>=10.1.243,<10.2.0a0|9.2.*|>=8.0,<8.1|>=8.0,<8.1.0a0|8.*|9.*|9.0.*|8.0.*|7.5.*']
    torchvision -> cudatoolkit[version='>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=9.0,<9.1|>=10.0.130,<10.1.0a0|>=9.2,<9.3.0a0|>=9.0,<9.1.0a0']
    torchaudio -> pytorch==1.8.1 -> cudatoolkit[version='10.0.*|>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=10.1.243,<10.2.0a0|>=9.2,<9.3.0a0|>=10.0.130,<10.1.0a0|9.2.*|>=9.0,<9.1|>=9.0,<9.1.0a0']

    Package libstdcxx-ng conflicts for:
    torchvision -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=7.2.0|>=9.3.0']
    torchaudio -> python[version='>=3.9,<3.10.0a0'] -> libstdcxx-ng[version='>=5.4.0|>=7.2.0|>=7.3.0']
    cudatoolkit=11.1 -> libstdcxx-ng[version='>=7.3.0|>=9.3.0']
    pytorch -> libstdcxx-ng[version='>=5.4.0|>=7.3.0']
    pytorch -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=7.2.0|>=9.3.0']
    python=3.9 -> libstdcxx-ng[version='>=7.3.0']
    torchvision -> libstdcxx-ng[version='>=5.4.0|>=7.3.0']

    Package pytorch conflicts for:
    pytorch
    torchvision -> pytorch[version='1.1.*|1.2.0+cu92|1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0|>=1.0.0|>=0.4|>=0.3|1.3.1.*|1.2.0.*']
    torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0']

    Package _libgcc_mutex conflicts for:
    cudatoolkit=11.1 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
    torchvision -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
    python=3.9 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
    pytorch -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]

    Package libgcc-ng conflicts for:
    python=3.9 -> libgcc-ng[version='>=7.3.0']
    python=3.9 -> zlib[version='>=1.2.11,<1.3.0a0'] -> libgcc-ng[version='>=7.2.0']

    Package six conflicts for:
    pytorch -> mkl-service[version='>=2,<3.0a0'] -> six
    torchvision -> six

    The following specifications were found to be incompatible with your system:

      - feature:/linux-64::__glibc==2.33=0
      - feature:|@/linux-64::__glibc==2.33=0
      - cudatoolkit=11.1 -> __glibc[version='>=2.17,<3.0.a0']
      - pytorch -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']
      - torchvision -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']

    Your installed version is: 2.33

I am not able to understand exactly which package is creating the issue. I am trying to install this in a virtual env through conda.
0.67
t3_nmswap
1,622,189,367
pytorch
Digging into TorchVision's MobileNetV3 implementation
nan
1
t3_nmjge0
1,622,154,711
pytorch
Lego generator. You could use it to generate any lego-based images or videos
nan
0.48
t3_nm69pr
1,622,118,319
pytorch
I published a new tutorial in "PyTorch for Audio + Music Processing": "Making Predictions with PyTorch Deep Learning Models"
In my new tutorial, you can learn how to make inferences with an already trained PyTorch model. This video is part of the “PyTorch for Audio and Music Processing” series, which aims to teach you how to use PyTorch and torchaudio for audio-based Deep Learning projects. Have fun! [https://www.youtube.com/watch?v=0Q5KTt2R5w4&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=3](https://www.youtube.com/watch?v=0Q5KTt2R5w4&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=3)
0.94
t3_nm4o2q
1,622,112,352
pytorch
Tutorial: Pruning and Quantizing PyTorch YOLOv3 for Real-time Laptop Performance
nan
0.97
t3_nll7h1
1,622,046,735
pytorch
Data Structure coding for GDAS NAS
[https://github.com/D-X-Y/AutoDL-Projects/issues/99#issuecomment-835713058](https://github.com/D-X-Y/AutoDL-Projects/issues/99#issuecomment-835713058)

How should I code the following graph structure? Which kind of data structure is suitable in this case, especially for both forward inference and backward propagation? Would [https://networkx.org/documentation/stable/tutorial.html#multigraphs](https://networkx.org/documentation/stable/tutorial.html#multigraphs) be suitable?

[https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L106-L107](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L106-L107)

https://preview.redd.it/kivfxaw68a171.png?width=976&format=png&auto=webp&s=70d37159a15ecdab923487bad05cc4fba4e15425
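A hedged sketch of how DARTS/GDAS-style code usually stores such a cell (not that repo's actual classes): the DAG lives implicitly in an nn.ModuleDict keyed by edge, so autograd handles backward propagation for free and no graph library such as networkx is needed. For brevity it returns only the last node, whereas DARTS concatenates intermediate nodes:

```
import torch.nn as nn

class Cell(nn.Module):
    def __init__(self, num_nodes, C, candidate_ops):
        super().__init__()
        self.num_nodes = num_nodes
        self.edges = nn.ModuleDict()    # one ModuleList of candidate ops per edge (i, j)
        for j in range(1, num_nodes):
            for i in range(j):
                self.edges[f"{i}->{j}"] = nn.ModuleList([op(C) for op in candidate_ops])

    def forward(self, x, weights):      # weights: dict edge -> op-mixing vector
        states = [x]
        for j in range(1, self.num_nodes):
            s = sum(
                sum(w * op(states[i])
                    for w, op in zip(weights[f"{i}->{j}"], self.edges[f"{i}->{j}"]))
                for i in range(j)
            )
            states.append(s)
        return states[-1]

# e.g. candidate_ops = [lambda C: nn.Conv2d(C, C, 3, padding=1), lambda C: nn.Identity()]
```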
0.66
t3_nkrydk
1,621,955,286
pytorch
Unable to run torch on GPU
Hi, I am new to pytorch. Every time I run:

1. torch.cuda.is_available() I get FALSE
2. torch.__version__ shows 1.8.1+cpu
3. The CUDA GPU check shows MX130 [supported]

I have Windows, Python 3.7 IDLE, and an Nvidia MX 130 whose compute capability is 5.0. Here are the steps I performed:

1. I installed CUDA 11.0.2 from the nvidia website
2. I extracted cudnn 8.0 and replaced the CUDA toolkit lib, include and bin files with cudnn's files
3. I did pip install torch==1.8.1

nvidia-smi displays NVIDIA SMI 462.3, CUDA VERSION: 11.2
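The +cpu in the version string is the tell here: plain pip install torch==1.8.1 fetches the CPU-only wheel on Windows, no matter what CUDA toolkit is installed system-wide (the pip wheels bundle their own CUDA libraries, so the local 11.0 install isn't used). The CUDA build has to be requested explicitly; the standard command for 1.8.1 is:

```
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
```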
0.67
t3_nkmyjw
1,621,940,343
pytorch
Trouble importing pytorch once installed
I have successfully installed torch-1.8.1+cpu on windows 10 using pip3, but when I try "import torch as T" in cmd I get the error: 'Error loading "(rest of path) cudnn_adv_infer64_8.dll" or one of its dependencies.' Any help is greatly appreciated! If there is a better place to ask this, please let me know; I haven't found this exact issue anywhere else online.
0.6
t3_nklixy
1,621,934,697
pytorch
I published a new tutorial in my "Pytorch for Audio + Music" series: "Implementing and Training a Neural Network with PyTorch"
I’m excited to publish the first tutorial in the “Pytorch for Audio + Music” series. In this installment, I start from the basics. What’s better than getting our hands dirty training a simple neural network on a toy dataset? In this video, you can get a first approach to the PyTorch framework, learn its fundamental components, while working on a self-contained project. Have fun! [https://www.youtube.com/watch?v=4p0G6tgNLis&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=2&ab\_channel=ValerioVelardo-TheSoundofAIValerioVelardo-TheSoundofAI](https://www.youtube.com/watch?v=4p0G6tgNLis&list=PL-wATfeyAMNoirN4idjev6aRu8ISZYVWm&index=2&ab_channel=ValerioVelardo-TheSoundofAIValerioVelardo-TheSoundofAI)
0.97
t3_nk0pgo
1,621,870,450
pytorch
Re-evaluate gradient in step method on new data point?
Hi folks, I am trying to implement a GD-based optimizer where I need to re-evaluate the gradient at the step that just got calculated. I.e. imagine we have the basic

    x_next = x_prev - learning_rate * grad(x_prev)

Once I have x_next, I need to do a calculation on the next line of code based on grad(x_next) (basically in the same step of the optimizer). Any ideas on how one can achieve this using pytorch?

EDIT: never mind, I think I need to use "closure" here, no?
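A sketch of doing it without a closure (f and x are placeholders for the objective and the optimized tensor): take the step, then run a fresh forward/backward so the gradient at the new point is available in the same iteration. The "closure" mechanism (as used by optimizers like LBFGS) packages exactly this re-evaluation pattern:

```
opt.zero_grad()
loss = f(x)          # x is the parameter tensor being optimized
loss.backward()      # x.grad = grad(x_prev)
opt.step()           # x now holds x_next

opt.zero_grad()
f(x).backward()      # x.grad = grad(x_next), usable on the next line
```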
1
t3_nk0cym
1,621,869,523
pytorch
How to calculate Flops for pixel shuffle in Pytorch?
I have tried the ptflops, thop, and torchscan libraries to calculate the FLOPs of my model (it contains a pixel shuffle operation). However, these libraries don't support the pixel shuffle operation and treat it as zero FLOPs. Also, I cannot find any equation to calculate FLOPs for pixel shuffle. Anyone have an idea? Thanks.
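For what it's worth, my own reasoning rather than anything from those libraries' docs: nn.PixelShuffle only rearranges elements from the channel dimension into spatial blocks, performing no multiplications or additions, so 0 FLOPs is arguably the correct count; the cost is memory movement, not arithmetic. A quick check:

```
import torch
import torch.nn as nn

ps = nn.PixelShuffle(2)                      # upscale factor r = 2
x = torch.arange(16.).reshape(1, 4, 2, 2)    # (N, C*r^2, H, W)
y = ps(x)                                    # (N, C, H*r, W*r) = (1, 1, 4, 4)
print(torch.equal(x.flatten().sort().values, y.flatten().sort().values))
# True: the same 16 values, only rearranged, so zero multiply-adds
```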
1
t3_njtvoq
1,621,848,560
pytorch
ResNet-18 magnitude based pruning
ResNet-18 global, unstructured, magnitude-based and iterative pruning with the CIFAR-10 dataset. The pruning goes on till 99.08% sparsity. This is based on the research papers:

1. "Learning both Weights and Connections for Efficient Neural Networks" by Song Han et al.
2. "Deep Compression" by Song Han et al.
3. "The Lottery Ticket Hypothesis" by Frankle et al.
4. "What is the State of Neural Network Pruning?" by Blalock et al.

The original, unpruned model has val_accuracy = 88.990%. Original model size = 42.7 MB, zipped model size = 40 MB.

The pruned model with sparsity = 99.063% has val_accuracy = 91.260%. Pruned, trained and zipped model size = 3.5 MB. This results in a compression ratio of 11.43x.

You can refer to the code [here](https://github.com/arjun-majumdar/Neural_Network_Pruning/blob/main/ResNet18_Global_Magnitude_Custom_Pruning.ipynb).

*NOTE:* Post pruning, PyTorch doesn't cast tensors to a sparse format. Therefore, the tensors have the same dimensions as before, but with 0s in them to denote pruned connections.

Thoughts?
1
t3_nj9pvc
1,621,782,807
pytorch
Difficulty in using LSTMs for text generation
I am currently trying quote generation (character level) with LSTMs using Pytorch. I am currently facing some issues understanding exactly how the hidden state is implemented in Pytorch.

**Some details:**

I have a list of quotes from a character in a TV series. I am converting those to a sequence of integers, with each character corresponding to a certain integer via a dictionary *char2idx*. I also have the inverse of this, *idx2char*, where the mapping is reversed.

After that, I am using a sliding window, say of size *window_size*, and a step of size *step* to prepare the data. As an example, let's say the sequence is *[1, 2, 3, 4, 5, 0]* where 0 stands for the EOS character. Then using window_size = 3 and step = 2, I get the sequences for x and y as:

    x1 = [1, 2, 3], y1 = [2, 3, 4]
    x2 = [3, 4, 5], y2 = [4, 5, 0]
    x = [x1, x2], y = [y1, y2]

The next step is to train the model. I have attached the code I am using to train the model.

**NOTE:** I am not passing hidden states from one batch to the other, as the ith sequence of the (j+1)th batch is probably not the next step to the ith sequence from the jth batch. (This is why I am using a sliding window, to help the model remember.) Is there a better way to do this?

My main question occurs at testing time. There are two methods by which I am testing.

**Method 1:** I take the initial seed string, pass it into the model and get the next character as the prediction. Now, I add that to the starting string and pass this whole sequence into the model, without passing the hidden state. That is, I input the whole sequence to the model, with the LSTM having the initial hidden state as 0, get the output, append the output to the sequence, and repeat till I encounter the EOS character.

**Method 2:** I take the initial seed string, pass it into the model and get the next character as the prediction. Now, I just pass the character and the previous hidden state as the next input and continue doing so until an EOS character is encountered.

**Question**

1. According to my current understanding, the outputs of both methods should be the same because the same thing should be happening in both.
2. What's actually happening is that both methods are giving completely different results. Why is this happening?
3. The second one gets stuck in an infinite loop for most inputs (e.g. it gives "back to back to back to ....") and on some inputs, the first one also gets stuck. How do I prevent and avoid this?
4. Is this related in some way to the training?

I have tried multiple different approaches (using bidirectional LSTMs, using one-hot encoding instead of embedding, changing the batch sizes, not using a sliding window approach (using padding and feeding the whole quote at once)). I cannot figure out how to solve this issue. Any help would be greatly appreciated.
**CODE**

Code for the Model Class:

    class RNN(nn.Module):
        def __init__(self, vocab_size, hidden_size, num_layers, dropout=0.15):
            super(RNN, self).__init__()
            self.vocab_size = vocab_size
            self.hidden_size = hidden_size
            self.num_layers = num_layers
            self.embedding = nn.Embedding(vocab_size, hidden_size)
            self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, dropout=dropout, batch_first=True)
            self.dense1 = nn.Linear(hidden_size, hidden_size*4)
            self.dense2 = nn.Linear(hidden_size*4, hidden_size*2)
            self.dense3 = nn.Linear(hidden_size*2, vocab_size)
            self.drop = nn.Dropout(dropout)

        def forward(self, X, h=None, c=None):
            if h is None:
                h, c = self.init_hidden(X.size(0))
            out = self.embedding(X)
            out, (h, c) = self.lstm(out, (h, c))
            out = self.drop(out)
            out = self.dense1(out.reshape(-1, self.hidden_size))  # Reshaping it into (batch_size*seq_len, hidden_size)
            out = self.dense2(out)
            out = self.dense3(out)
            return out, h, c

        def init_hidden(self, batch_size):
            num_l = self.num_layers
            hidden = torch.zeros(num_l, batch_size, self.hidden_size).to(DEVICE)
            cell = torch.zeros(num_l, batch_size, self.hidden_size).to(DEVICE)
            return hidden, cell

Code for training:

    rnn = RNN(VOCAB_SIZE, HIDDEN_SIZE, NUM_LAYERS).to(DEVICE)
    optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    rnn.train()
    history = {}
    best_loss = 100

    for epoch in range(EPOCHS):  # EPOCH LOOP
        counter = 0
        epoch_loss = 0
        for x, y in train_loader:  # BATCH LOOP
            optimizer.zero_grad()
            counter += 1
            o, h, c = rnn(x)
            loss = criterion(o, y.reshape(-1))
            epoch_loss += loss.item()
            loss.backward()
            nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # Clipping Gradients
            optimizer.step()
            if counter % print_every == 0:
                print(f"[INFO] EPOCH: {epoch+1}, BATCH: {counter}, TRAINING LOSS: {loss.item()}")
        epoch_loss = epoch_loss / counter
        history["train_loss"] = history.get("train_loss", []) + [epoch_loss]
        print(f"\nEPOCH: {epoch+1} COMPLETED!\nTRAINING LOSS: {epoch_loss}\n")

Method 1 Code:

    with torch.no_grad():
        w = None
        start_str = "Hey, "
        x1 = quote2seq(start_str)[:-1]

        while w != EOS_TOKEN:
            x1 = torch.tensor(x1, device=DEVICE).unsqueeze(0)
            o1, h1, c1 = rnn(x1)
            p1 = F.softmax(o1, dim=1).detach()
            q1 = np.argmax(p1.cpu(), axis=1)[-1].item()
            w = idx2char[q1]
            start_str += w
            x1 = x1.tolist()[0] + [q1]

    quote = start_str.replace("<EOS>", "")

Method 2 Code:

    with torch.no_grad():
        w = None
        start_str = "Are we back"
        x1 = quote2seq(start_str)[:-1]
        h1, c1 = rnn.init_hidden(1)

        while w != EOS_TOKEN:
            x1 = torch.tensor(x1, device=DEVICE).unsqueeze(0)
            h1, c1 = h1.data, c1.data
            o1, h1, c1 = rnn(x1, h1, c1)
            p1 = F.softmax(o1, dim=1).detach()
            q1 = np.argmax(p1.cpu(), axis=1)[-1].item()
            w = idx2char[q1]
            start_str += w
            x1 = [q1]

    quote = start_str.replace("<EOS>", "")
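One thing worth checking (an observation on the code above, not a guaranteed answer to all four questions): rnn.train() is set before training and never switched off, and torch.no_grad() does not disable dropout, so both sampling methods run with dropout active. That alone makes their outputs stochastic and therefore different between methods and between runs:

```
rnn.eval()    # switches nn.Dropout and the LSTM's inter-layer dropout off;
              # torch.no_grad() alone only disables gradient tracking, not dropout
```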
1
t3_nj6xbl
1,621,773,465
pytorch
PyTorch, Tensorflow Structure
Hi all, Going through PyTorch and Tensorflow implementations, every author has their own structure and way of writing the code. It is very easy to write crappy and confusing code, since there isn't a guideline for writing models in PyTorch. In comparison, when working on web development, a standard approach is to use the Model, View, Controller structure. Would a framework that creates a basic structure and the appropriate dataloaders be useful? There is [this](https://github.com/victoresque/pytorch-template) project that tries to achieve that, but it is not universal and has limited options. One example of the framework would be writing a command to start a project with a basic structure, downloading models, or building data loaders depending on the task, and explaining with comments on the templates how things should be done, such as building data loaders that return dictionaries. Do you think a project like that would be helpful? Do you know of any projects like that?
1
t3_nj5qcl
1,621,768,634
pytorch
StyleGAN2 implementation in PyTorch with side-by-side notes
Implemented the StyleGAN2 model and training loop from the paper "Analyzing and Improving the Image Quality of StyleGAN". Code with annotations: [https://nn.labml.ai/gan/stylegan/index.html](https://nn.labml.ai/gan/stylegan/index.html) This is a minimalistic implementation with only 425 lines of code and lots of documentation and diagrams explaining the model. * [Github](https://github.com/lab-ml/annotated_deep_learning_paper_implementations/tree/master/labml_nn/gan/stylegan) * [Paper on arXiv](https://arxiv.org/abs/1912.04958) * [Twitter Thread](https://twitter.com/labmlai/status/1396298504872423425)
0.95
t3_nj3z5v
1,621,761,042
pytorch
How can I create a Pytorch Dataloader from a hdf5 file with multiple groups/datasets?
Say that from an image folder with 9k images I have 4k images of size (100, 400), 2k images of size (150, 350), and the rest have a size of (200, 500). I can use a single hdf5 file to store all three subsets of data as separate groups/datasets using create_group/create_dataset.

What I want to learn is how I would modify collate_fn, BatchSampler, and the Dataset class when I have a single hdf5 file with three groups, each holding a dataset of images of the same size. With a single dataset/group in an hdf5 file it's simple and straightforward to get each item with an index, but with more groups/datasets I can't understand how to get batches from the data.
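A minimal sketch under the assumption that each group stores its images in a dataset named 'images' (adapt the names): a flat Dataset over all groups, plus a batch sampler that never mixes groups, so every batch stacks cleanly and no custom collate_fn is needed:

```
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, Sampler, DataLoader

class MultiGroupH5(Dataset):
    def __init__(self, path, groups):
        self.path, self.groups = path, groups
        with h5py.File(path, 'r') as f:
            sizes = [len(f[g]['images']) for g in groups]
        self.offsets = np.cumsum([0] + sizes)   # flat index -> group boundaries
        self.h5 = None

    def __len__(self):
        return int(self.offsets[-1])

    def __getitem__(self, idx):
        if self.h5 is None:                      # open lazily, once per worker
            self.h5 = h5py.File(self.path, 'r')
        g = int(np.searchsorted(self.offsets, idx, side='right')) - 1
        local = idx - self.offsets[g]
        return torch.from_numpy(self.h5[self.groups[g]]['images'][local])

class GroupBatchSampler(Sampler):
    def __init__(self, offsets, batch_size):
        self.offsets, self.bs = offsets, batch_size

    def __iter__(self):
        for g in range(len(self.offsets) - 1):   # shuffle within each group only
            idxs = np.random.permutation(np.arange(self.offsets[g], self.offsets[g + 1]))
            for i in range(0, len(idxs), self.bs):
                yield idxs[i:i + self.bs].tolist()

    def __len__(self):
        return sum(-(-n // self.bs) for n in np.diff(self.offsets))

# ds = MultiGroupH5('images.h5', ['g100x400', 'g150x350', 'g200x500'])
# loader = DataLoader(ds, batch_sampler=GroupBatchSampler(ds.offsets, 32))
```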
1
t3_nj134s
1,621,748,479
pytorch
Why is pytorch take up 2gb during pip install
I installed pytorch 1.7.1+cu110. Is there a way to lessen the install size?
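Most of those 2 GB are the bundled CUDA libraries in the +cu110 wheel; if the GPU isn't needed, the CPU-only wheel is a fraction of the size (same torch_stable.html pattern as the official install commands):

```
pip install torch==1.7.1+cpu torchvision==0.8.2+cpu -f https://download.pytorch.org/whl/torch_stable.html
```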
0.4
t3_niip01
1,621,690,270
pytorch
Torchvision Object Detection
Is someone using the Torchvision Object Detection API for Pascal VOC with Faster-RCNN who has some tricks for how to reach the 70% mAP that is SOTA using this architecture? And does anyone know if the Torchvision detection API is feasible to compare against the available Pytorch implementations of Faster-RCNN?
1
t3_ngylaw
1,621,512,807
pytorch
Is there a way to build Pipeline like Scikit-learn that bounds data transformation and model?
I believe it's a common requirement for many machine learning projects: after the model is fine-tuned, create a pipeline that does data preprocessing and model inference sequentially.

In Scikit-learn, the Pipeline module does this task, but I couldn't find that kind of function in pytorch. The Pipeline in pytorch seems to serve multiprocessing, not combining different workflows.

I've found examples online that use Scikit-learn combined with a Pytorch model, but now I need to convert the whole workflow into ONNX, and using more than one framework is something I want to avoid.

Is scikit-learn the only way to build a pipeline? Thank you!
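One way that stays inside a single framework (a sketch, assuming image-style normalization; `model` stands for the fine-tuned network): express the preprocessing as tensor ops in an nn.Module, chain it with nn.Sequential, and the whole pipeline exports to ONNX as one graph:

```
import torch
import torch.nn as nn

class Preprocess(nn.Module):
    def __init__(self, mean, std):
        super().__init__()
        # buffers travel with the module and end up inside the ONNX graph
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

pipeline = nn.Sequential(
    Preprocess([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    model,
)
# torch.onnx.export(pipeline, dummy_input, "pipeline.onnx")
```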
1
t3_ngtb17
1,621,494,053
pytorch
PyTorch YOLOv5 - Microsoft C++ Build Tools
I am trying to install PyTorch YOLOv5 from ultralytics from [here](https://pytorch.org/hub/ultralytics_yolov5/) on a Windows 10 x86_64 system. The instructions seem pretty straightforward, and after having installed PyTorch for GPU, I am attempting to install the requirements by using the command:

    pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

to which I get the following ERROR log:

    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\arjun\anaconda3\envs\pytorch_object_detection\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\arjun\\AppData\\Local\\Temp\\pip-install-7kbo300l\\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\\setup.py'"'"'; __file__='"'"'C:\\Users\\arjun\\AppData\\Local\\Temp\\pip-install-7kbo300l\\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\arjun\AppData\Local\Temp\pip-wheel-kc1jnk9w'
     cwd: C:\Users\arjun\AppData\Local\Temp\pip-install-7kbo300l\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\
    Complete output (16 lines):
    running bdist_wheel
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.8
    creating build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\coco.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\mask.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\__init__.py -> build\lib.win-amd64-3.8\pycocotools
    running build_ext
    cythoning pycocotools/_mask.pyx to pycocotools\_mask.c
    C:\Users\arjun\anaconda3\envs\pytorch_object_detection\lib\site-packages\Cython\Compiler\Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\Users\arjun\AppData\Local\Temp\pip-install-7kbo300l\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\pycocotools\_mask.pyx
      tree = Parsing.p_module(s, pxd, full_module_name)
    building 'pycocotools._mask' extension
    error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
    ----------------------------------------
    ERROR: Failed building wheel for pycocotools

    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\arjun\anaconda3\envs\pytorch_object_detection\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\arjun\\AppData\\Local\\Temp\\pip-install-7kbo300l\\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\\setup.py'"'"'; __file__='"'"'C:\\Users\\arjun\\AppData\\Local\\Temp\\pip-install-7kbo300l\\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\arjun\AppData\Local\Temp\pip-record-l60dglwi\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\arjun\anaconda3\envs\pytorch_object_detection\Include\pycocotools'
     cwd: C:\Users\arjun\AppData\Local\Temp\pip-install-7kbo300l\pycocotools_e5774d8d59d14fa9b3baece40c2b7248\
    Complete output (14 lines):
    running install
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.8
    creating build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\coco.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\mask.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\__init__.py -> build\lib.win-amd64-3.8\pycocotools
    running build_ext
    skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
    building 'pycocotools._mask' extension
    error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
    ----------------------------------------
    Check the logs for full command output.

I have installed Microsoft C++ Build Tools and get the following output in CMD:

    **********************************************************************
    ** Visual Studio 2019 Developer Command Prompt v16.9.6
    ** Copyright (c) 2021 Microsoft Corporation
    **********************************************************************

I am trying to reinstall requirements.txt, but the error about Microsoft C++ Build Tools still persists.

What should I do?
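A guess at the fix (based on the error text, not verified on this exact setup): the Developer Command Prompt banner alone doesn't prove the C++ compiler is present; the "Desktop development with C++" workload has to be ticked in the Build Tools installer. After adding it and opening a fresh shell, the pycocotools build usually goes through:

```
pip install pycocotools
pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt
```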
1
t3_ngrhrx
1,621,487,492
pytorch
Aggregating batched multi-label predictions
Hi everybody, I have a pytorch question and I hope it’s a simple answer. Thanks in advance for your help! I have a model which makes multi-label predictions. So on a batch input, the output is a tensor of shape B x L (where B is the batch size, L is the number of labels). I’m trying to use Monte Carlo dropout to generate a distribution of N predictions for each instance (ie basically just activate the dropout layers at prediction time to get a population of N probabilistic predictions). So I use a loop to get N B x L tensors, and I want to convert these into 1 B x L x N tensor. I can’t figure out this operation though. Any help with useful resources or if you know the syntax to do so, please let me know. Thanks!
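If I follow correctly, a sketch of the whole pattern; torch.stack on a new trailing dimension is the operation being looked for:

```
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=50):
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                    # keep dropout stochastic at prediction time
    with torch.no_grad():
        samples = [model(x) for _ in range(n_samples)]   # N tensors, each (B, L)
    return torch.stack(samples, dim=-1)                  # one (B, L, N) tensor
```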
1
t3_nga60s
1,621,441,299
pytorch
loss_fn expected scalar type Long but found Float
Apologies in advance, I'm an absolute beginner at neural networks. I'm trying to adapt [this](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html) tutorial to work with some csv data in the following format:

    xlabel1      xlabel2      ylabel
    88.00788879  54.17111206  90.29861259
    88.00788879  54.17111206  89.44630686
    88.00788879  54.17111206  89.73772812
    ...          ...          ...

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor, Lambda, Compose
    import matplotlib.pyplot as plt
    import pandas as pd
    import numpy as np
    import torch as T

    # Get cpu or gpu device for training.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print("Using {} device".format(device))

    class PeopleDataset(T.utils.data.Dataset):
        def __init__(self, src_file, num_rows=None):
            x_tmp = np.loadtxt(src_file, max_rows=num_rows, usecols=range(0, 2), delimiter=",", skiprows=1, dtype=np.float32)
            y_tmp = np.loadtxt(src_file, max_rows=num_rows, usecols=2, delimiter=",", skiprows=1, dtype=np.float32)
            self.x_data = T.tensor(x_tmp, dtype=T.float32).to(device)
            self.y_data = T.tensor(y_tmp, dtype=T.float32).to(device)

        def __len__(self):
            return len(self.x_data)  # required

        def __getitem__(self, idx):
            if T.is_tensor(idx):
                idx = idx.tolist()
            preds = self.x_data[idx, 0:2]
            pol = self.y_data[idx]
            sample = {'predictors': preds, 'airflow': pol}
            # print(sample)
            return sample

    train_ds = PeopleDataset('airflow-airflow_setpoint.csv', num_rows=43224)
    train_dataloader = DataLoader(train_ds, batch_size=64)
    test_dataloader = DataLoader(train_ds, batch_size=64)

    # Define model
    class NeuralNetwork(nn.Module):
        def __init__(self):
            super(NeuralNetwork, self).__init__()
            self.flatten = nn.Flatten()
            self.linear_relu_stack = nn.Sequential(
                nn.Linear(1*2, 2),
                nn.ReLU(),
                nn.Linear(2, 2),
                nn.ReLU(),
                nn.Linear(2, 1),
                nn.ReLU()
            )

        def forward(self, x):
            x = self.flatten(x)
            logits = self.linear_relu_stack(x)
            return logits

    model = NeuralNetwork().to(device)
    print(model)

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    def train(dataloader, model, loss_fn, optimizer):
        size = len(dataloader.dataset)
        for batch, myDic in enumerate(dataloader):
            # X, y = X.to(device), y.to(device)
            X = myDic.get('predictors').to(device)
            y = myDic.get('airflow').to(device)
            print(X)
            print(y)

            # Compute prediction error
            pred = model(X)
            loss = loss_fn(pred, y)

            # Backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if batch % 100 == 0:
                loss, current = loss.item(), batch * len(X)
                print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")

    def test(dataloader, model):
        size = len(dataloader.dataset)
        model.eval()
        test_loss, correct = 0, 0
        with torch.no_grad():
            for X, y in dataloader:
                X, y = X.to(device), y.to(device)
                pred = model(X)
                test_loss += loss_fn(pred, y).item()
                correct += (pred.argmax(1) == y).type(torch.float).sum().item()
        test_loss /= size
        correct /= size
        print(f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

    epochs = 5
    for t in range(epochs):
        print(f"Epoch {t + 1}\n-------------------------------")
        train(train_dataloader, model, loss_fn, optimizer)
        test(test_dataloader, model)
    print("Done!")

Any help/advice/direction would be appreciated
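A possible explanation (reading the error against this code, not a certain diagnosis): nn.CrossEntropyLoss is for classification and requires integer class-index targets (dtype long), which is why it rejects the float targets here. Since "ylabel" is a continuous value, this looks like a regression problem, where nn.MSELoss accepts float targets directly:

```
loss_fn = nn.MSELoss()

# in train()/test():
pred = model(X)                        # shape (batch, 1)
loss = loss_fn(pred.squeeze(1), y)     # y keeps its float dtype
```

(With regression there is also no argmax accuracy to compute; tracking the average loss is the usual metric.)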
1
t3_nfvm42
1,621,397,490
pytorch
Flatten 3D tensor
I have a tensor of the shape T x B x N (training data for a RNN, T is max seq length, B is number of batches, and N number of features) and I'd like to flatten all the features across timesteps, such that I get a tensor of the shape B x TN. Any ideas? Thanks!
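If I'm reading the layout right, a sketch: move the batch dimension first, then flatten the rest (permute makes the tensor non-contiguous, which reshape handles):

```
import torch

T_, B, N = 10, 4, 8                              # placeholder sizes
x = torch.randn(T_, B, N)
flat = x.permute(1, 0, 2).reshape(B, T_ * N)     # (B, T*N), timestep-major per row
```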
1
t3_nfppv8
1,621,380,747
pytorch
Self-published book on PyTorch
Hi, I'm Daniel, and I've been teaching machine learning at a bootcamp in Berlin for more than three years. I've just finished self-publishing my book, "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide" ([https://leanpub.com/pytorch](https://leanpub.com/pytorch)). If you're looking for a book where you can learn about Deep Learning and PyTorch without having to spend hours deciphering cryptic text and code, and that's easy and enjoyable to read, this is it :-) It is a very comprehensive work, and it covers from the basics of gradient descent all the way up to fine-tuning large NLP models (BERT and GPT-2) using HuggingFace. The book is divided into four parts: \- Part I: Fundamentals (gradient descent, training linear and logistic regressions in PyTorch) \- Part II: Computer Vision (deeper models and activation functions, convolutions, transfer learning, initialization schemes) \- Part III: Sequences (RNN, GRU, LSTM, seq2seq models, attention, self-attention, transformers) \- Part IV: Natural Language Processing (tokenization, embeddings, contextual word embeddings, ELMo, BERT, GPT-2) My writing style is very informal, and I write as if I were having a conversation with you, the reader. I'd imagine which questions my students would ask if I were teaching them a given topic, and then I'd raise (and answer) these questions in the text. And, in case you're wondering "who is this guy, and why should I care about his book", here's a bit of background on me: As a teacher, I've helped more than 150 students advance their careers. My professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail, and mobility. I've written some popular blog posts, like: \- Understanding binary cross-entropy / log loss: a visual explanation ([https://bit.ly/2S0VSok](https://bit.ly/2S0VSok)) - more than 400k views \- Understanding PyTorch with an example: a step-by-step tutorial ([https://bit.ly/3uRfbPn](https://bit.ly/3uRfbPn)) - more than 220k views I've also been invited to give my talk "PyTorch 101: Building a Model Step-by-Step" at the Open Data Science Conference (ODSC) in 2019, 2020, and 2021. And I've just finished publishing my first book, focusing on Deep Learning and PyTorch :-)
1
t3_nfdx4p
1,621,352,405
pytorch
Is there an easy way to visualize pytorch vision transforms
I am looking for a library, or something hosted by streamlit, that takes an image I upload or a path I give, takes the transforms I want to apply as input, and outputs what the images look like. This helps in quickly ideating what transforms are required.
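In the meantime, a minimal DIY sketch with matplotlib (assumes a local 'sample.jpg'; swap in whatever transforms are being ideated on):

```
import matplotlib.pyplot as plt
from PIL import Image
import torchvision.transforms as T

img = Image.open('sample.jpg')
tfms = [T.RandomHorizontalFlip(p=1.0), T.ColorJitter(0.5, 0.5), T.RandomRotation(30)]

fig, axes = plt.subplots(1, len(tfms) + 1, figsize=(12, 4))
axes[0].imshow(img)
axes[0].set_title('original')
for ax, t in zip(axes[1:], tfms):
    ax.imshow(t(img))                 # each panel shows one transform applied
    ax.set_title(type(t).__name__)
plt.show()
```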
1
t3_nf8eet
1,621,337,801
pytorch
MLP-Mixer in Flax and PyTorch
nan
1
t3_neki4a
1,621,268,305
pytorch
Weird problem with the loss- looking for suggestions
The weirdest thing is happening in my model and I can't figure out why. So, TLDR: if my input has 100 points (in a batch), my loss at every output of my network will be 50 (half of that):

    y - y_predict = tensor([50.1337, 49.6812, 50.0728, 50.3125, 49.8369, 50.1456, 50.0340, 50.0508,
            50.1231, 50.2237, 49.4339, 49.7306, 49.8324, 49.6193, 49.6949, 49.8605,
            50.0395, 50.1412, 50.3325, 50.4694, 50.2878, 50.2262, 50.1926, 50.1933,
            50.2082, 50.2194, 50.1791, 50.1530, 50.1509, 50.1562, 50.1650, 50.1777,
            50.1900, 50.2044, 50.2156, 50.2246, 50.2187, 50.2143, 50.2464, 50.2789,
            50.3046, 50.3081, 50.3118, 50.3157, 50.3172, 50.3146, 50.3121, 50.3128,
            50.3165, 50.3203, 50.3240, 50.3278, 50.3316, 50.3353, 50.3391, 50.3429,
            50.3468, 50.3506, 50.3545, 50.3584, 50.3622, 50.3660, 50.3699, 50.3738,
            50.3777, 50.3815, 50.3854, 50.3892, 50.3931, 50.3970, 50.4009, 50.4047,
            50.4086, 50.4125, 50.4163, 50.4183, 50.4201, 50.4219, 50.4236, 50.4253,
            50.4270, 50.4288, 50.4306, 50.4323, 50.4341, 50.4358, 50.4375, 50.4471,
            50.4729, 50.4986, 50.5243, 50.5499, 50.5756, 50.6001, 50.6285, 50.6592,
            50.6900, 50.7367, 50.8411, 51.0187], dtype=torch.float64, grad_fn=<SubBackward0>)

If my input size is 50, the loss at every output will be 25. And if the input size is 25, the loss is 12.5. I have no idea why it's happening, but I know it has something to do with the number of points I'm using (as the difference between the actual and prediction always seems to be half of that). I'm trying to figure out if it has anything to do with the backwards pass/gradients, but so far I'm lost.

My input is in a CSV file and I'm changing it to a batch as follows:

    x = torch.tensor(x, requires_grad=True)
    x_float = x.float()
    x_batch = torch.reshape(x_float, (len(x), 1))  # reshape for batch
    x_batch.shape  # torch.Size([100, 1])

`x_batch` goes into the model, and the output is 100 values (see above). My code is a bit convoluted and long, otherwise I would share it. Any suggestions/ideas will be gladly appreciated.

**Update:** Attaching the loss function:

    loss = (torch.mean(torch.square(self.y_actual - y_predict)))

Adding a plot to show that it learns the function perfectly, except for the difference:

https://preview.redd.it/7gt3vazqqpz61.png?width=1264&format=png&auto=webp&s=9f9dfca1d43c740311ce5119f673d2a043275cd8

Last update: Can't believe I missed it, but the optimizer.zero_grad was misplaced by one line. I spent most of my time trying to figure out the strange ratio of input_size/2 and missed that silly thing. Thanks everyone for all the help.
1
t3_negegz
1,621,258,445
pytorch
Is relative error feasible as a loss function? Also is there any benefit?
I'm performing regression where my outputs can vary widely, often between 50,000 and 0.01. Does it make sense to use relative error as a loss function, meaning the percentage by which the estimate differs from the target? My issue is that losses like MSE cause my large values to dominate the loss, resulting in an overestimation of small values by several orders of magnitude. Root scaling to spread out my outputs helps, but with diminishing returns, and it eventually hampers prediction of large values. Does a relative error loss make sense in this case?
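For reference, a relative (percentage-style) error is straightforward to write as a custom loss; this is a sketch, and the epsilon that guards against division by near-zero targets is an arbitrary choice:

    import torch

    def relative_error_loss(pred, target, eps=1e-8):
        # mean absolute percentage-style error; eps avoids blow-ups near zero targets
        return torch.mean(torch.abs(pred - target) / (torch.abs(target) + eps))

A common alternative for strictly positive targets spanning many orders of magnitude is to regress log(target) with plain MSE instead, which has a similar scale-equalizing effect.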
0.84
t3_nd9tzx
1,621,116,770
pytorch
ResNet-50 PyTorch Pruning
Used **Global**, **Absolute Magnitude Weight**, **Unstructured** and **Iterative** pruning on ResNet-50 with *Transfer Learning* on the CIFAR-10 dataset. Surprisingly, a **sparsity of 99.078%** has been achieved with an increase in performance! The code can be found [here](https://github.com/arjun-majumdar/Neural_Network_Pruning/blob/main/ResNet50_Global_Absolute_Magnitude_Pruning.ipynb). Original, unpruned model's val_accuracy = 92.58%, original model size = 90 MB, zipped model size = 83.5 MB. Pruned model's (sparsity = 99.078%) val_accuracy = 92.94%, original model size = 90 MB, zipped model size = 7.1 MB. **This results in a compression ratio of 11.76x.** Thoughts?
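For readers who want the core recipe without opening the notebook, the built-in torch.nn.utils.prune API covers this style of pruning; the following is a generic sketch, not the notebook's exact code:

    import torch.nn as nn
    import torch.nn.utils.prune as prune
    from torchvision import models

    model = models.resnet50(pretrained=True)
    parameters_to_prune = [
        (m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)
    ]
    # one iterative round: zero out the globally smallest 20% of remaining weights
    prune.global_unstructured(
        parameters_to_prune, pruning_method=prune.L1Unstructured, amount=0.2
    )

Repeating the global_unstructured call (with fine-tuning in between) is what makes the scheme iterative; each round removes a fraction of the weights that are still alive.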
0.78
t3_nczcbi
1,621,086,668
pytorch
Beginner : Object (shape) detection in binary images
(By binary I mean white on black; no greyscale, no RGB.) I appreciate that this may be a stupid question, but I have been struggling for a while, so I would appreciate any general feedback. I wanted to build a model that could identify and label multiple simple shapes (e.g. square, triangle) in an image (similar to OpenCV's matchShapes, etc.). I wanted to try machine learning so I could introduce some distortion/noise into my training data, hoping that would make detection more flexible. I thought the simplicity of the images might make this easier, but in reality it seems to be harder (since most tutorials are focused on photos). The closest tutorial I could find was on MNIST data (i.e. simple shapes), here: https://pythonprogramming.net/training-deep-learning-neural-network-pytorch/ but my results have been unusable. I have also experimented with SSD300 models from this example: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection but again, I think the lack of RGB/greyscale data makes this largely useless? My broad question is: is there a model architecture I can read about that may be suited to this type of application? I.e., draw an approximate square shape [255,255,255] on a black background [0,0,0] and have it correctly identified and labelled in the source image.
1
t3_ncx6sk
1,621,079,695
pytorch
Gumbel-Max implementation
Could anyone explain how this [Gumbel-Max](https://medium.com/swlh/on-the-gumbel-max-trick-5e340edd1e01) PyTorch [code implementation](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/models/cell_searchs/search_model_gdas.py#L115-L129) works? https://preview.redd.it/5qhqajfos9z61.png?width=936&format=png&auto=webp&s=c3810642488bd6800ad414446efc4fa6a8d4b550
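The core trick the linked code builds on can be written in a few lines; this is a hand-rolled sketch, not the repo's exact implementation:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 10)

    # Gumbel-Max: add Gumbel(0,1) noise to the logits, then take the argmax.
    # The argmax is distributed exactly like a sample from softmax(logits).
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    sample = (logits + gumbels).argmax(dim=-1)

    # The differentiable relaxation used in architecture search (e.g. GDAS):
    # hard=True returns a one-hot sample in the forward pass while the backward
    # pass uses the soft probabilities (straight-through estimator).
    soft_one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)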
1
t3_ncwtkv
1,621,078,345
pytorch
Is the GPU accelerated version for Mac M1 released?
Hey r/pytorch, has the GPU-accelerated version for the Mac M1 been released? If so, it would be very helpful if someone could share the link or help me with the installation. Thanks.
0.89
t3_nc9s03
1,621,002,134
pytorch
How to slice a tensor using segments of indices
Hello, let's assume that we have a tensor x = [1,2,3,4,5,6,7,8,9] and another tensor s = [(0,2),(4,8)], where 0, 2, 4, 8 are indices into x. The goal is to slice the two segments from x using those indices, i.e., from 0 to 2 and from 4 to 8. What's the most efficient way to do it?
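For a handful of segments, a comprehension plus torch.cat is usually fine; a sketch, assuming the end indices are meant inclusively as in the example:

    import torch

    x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9])
    s = [(0, 2), (4, 8)]
    out = torch.cat([x[start:end + 1] for start, end in s])
    # tensor([1, 2, 3, 5, 6, 7, 8, 9])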
1
t3_nc5ude
1,620,989,792
pytorch
Basic Auto Encoder project - Generating poorly written digits [PyTorch]
Hi! If anyone is looking to play in the latent domain, check out this side project that I hosted on Streamlit. The aim was to transform an input image into something that looks like it sits somewhere between two digits. The repository below will give you practical exposure to autoencoders, the latent domain, PyTorch, and hosting on Streamlit. GitHub Repository - [LINK](https://github.com/vdivakar/mnistMuddle) | Streamlit App Demo - [PAGE](https://share.streamlit.io/vdivakar/mnistmuddle/br_streamlit/app.py) https://i.redd.it/2ghrmj6rj1z61.gif
0.81
t3_nc33cf
1,620,978,495
pytorch
How to use "backward" with a tensor?
I'm following [this](https://stackoverflow.com/questions/56111340/how-to-calculate-gradients-on-a-tensor-in-pytorch) question and trying to modify it. The following code works:

    import torch
    x = torch.full((5,4), 2.0, requires_grad=True)
    y = (2*x**2+3)
    print(x.shape)
    print(y.shape)
    y.backward(torch.ones_like(x))
    x.grad
    >>>torch.Size([5, 4])
    torch.Size([5, 4])
    tensor([[8., 8., 8., 8.],
            [8., 8., 8., 8.],
            [8., 8., 8., 8.],
            [8., 8., 8., 8.],
            [8., 8., 8., 8.]])

But when I try to do the same thing with a neural network:

    import torch
    from torch.autograd import grad
    import torch.nn as nn

    class model(nn.Module):
        def __init__(self):
            super(model, self).__init__()
            self.fc1=nn.Linear(1, 20)
            self.fc2=nn.Linear(20, 20)
            self.out=nn.Linear(20, 4)

        def forward(self, x):
            x=self.fc1(x)
            x=self.fc2(x)
            x=self.out(x)
            return x

    net = model()
    batch = 10
    x = torch.rand(batch, requires_grad = True)
    x = torch.reshape(x, (batch,1))
    y = net(x)
    y.backward(torch.ones_like(x))
    x.grad

I get the following error: "RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([10, 1]) and output[0] has a shape of torch.Size([10, 4])." **Update 1:** Trying to make y and x be the same shape, as u/ablindelephant was saying, I changed this line: y[:,0].unsqueeze(1).backward(torch.ones_like(x)) So I only use the first dimension of y. However, this gives me the following error: "ipykernel_launcher:7: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations."
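For reference, the shape in backward's tensor argument has to match the *output* y, not the input x; that is exactly what the mismatch error is complaining about. A sketch against the code above:

    # y has shape (10, 4), so the "vector" in the vector-Jacobian product
    # must also be (10, 4):
    y.backward(torch.ones_like(y))
    x.grad  # shape (10, 1): the gradient w.r.t. the (leaf) input

The first snippet only worked because x and y happened to share the same shape there.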
0.67
t3_nbldyk
1,620,925,103
pytorch
How to solve the error: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2
Currently, I'm working on an image motion deblurring problem in PyTorch. I have two kinds of images: blurry images (variable = blur_image), which are the input, and the sharp versions of the same images (variable = sharp_image), which should be the output. Now I wanted to try out transfer learning, but I can't get it to work.

Here is the code for my dataloaders:

    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle = True)
    validation_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=batch_size, shuffle = False)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle = False)

Their shapes:

    Trainloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128])
    Trainloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32
    Validationloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128])
    Validationloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32
    Testloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128])
    Testloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32

The way I use transfer learning (I thought that for the 'in_features' I have to put in the number of pixels):

    model = models.alexnet(pretrained=True)
    model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 128)
    device_string = "cuda" if torch.cuda.is_available() else "cpu"
    device = torch.device(device_string)
    model = model.to(device)

The way I define my training process:

    # Define the loss function (MSE was chosen due to the comparison of pixels
    # between blurred and sharp images)
    criterion = nn.MSELoss()

    # Define the optimizer and learning rate
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Learning rate schedule - If the loss value does not improve after 5 epochs
    # back-to-back then the new learning rate will be: previous_rate*0.5
    #scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer,
        mode='min',
        patience=5,
        factor=0.5,
        verbose=True
    )

    def training(model, trainDataloader, epoch):
        """
        Function to define the model training

        Args:
            model (Model object): The model that is going to be trained.
            trainDataloader (Dataloader object): Dataloader object of the trainset.
            epoch (Integer): Number of training epochs.
        """
        # Switch the model into training mode
        model.train()
        # Supporting variables to display the loss for each epoch
        running_loss = 0.0
        running_psnr = 0.0
        for i, data in tqdm(enumerate(trainDataloader), total=int(len(train_dataset)/trainDataloader.batch_size)):
            blur_image = data[0]
            sharp_image = data[1]

            # Transfer the blurred and sharp image instances to the device
            blur_image = blur_image.to(device)
            sharp_image = sharp_image.to(device)

            # Set the gradients of the tensors to zero
            optimizer.zero_grad()
            outputs = model(blur_image)
            loss = criterion(outputs, sharp_image)

            # Perform backpropagation
            loss.backward()
            # Update the weights
            optimizer.step()

            # Add the loss that was calculated during the training run
            running_loss += loss.item()

            # calculate batch psnr (once every `batch_size` iterations)
            batch_psnr = psnr(sharp_image, blur_image)
            running_psnr += batch_psnr

        # Display training loss
        training_loss = running_loss/len(trainDataloader.dataset)
        final_psnr = running_psnr/int(len(train_dataset)/trainDataloader.batch_size)
        final_ssim = ssim(sharp_image, blur_image, data_range=1, size_average=True)
        print(f"Training loss: {training_loss:.5f}")
        print(f"Train PSNR: {final_psnr:.5f}")
        print(f"Train SSIM: {final_ssim:.5f}")
        return training_loss, final_psnr, final_ssim

And here is how I start the training:

    train_loss = []
    val_loss = []
    train_PSNR_score = []
    train_SSIM_score = []
    val_PSNR_score = []
    val_SSIM_score = []

    start = time.time()
    for epoch in range(nb_epochs):
        print(f"Epoch {epoch+1}\n-------------------------------")
        train_epoch_loss = training(model, train_loader, nb_epochs)
        val_epoch_loss = validation(model, validation_loader, nb_epochs)
        train_loss.append(train_epoch_loss[0])
        val_loss.append(val_epoch_loss[0])
        train_PSNR_score.append(train_epoch_loss[1])
        train_SSIM_score.append(train_epoch_loss[2])
        val_PSNR_score.append(val_epoch_loss[1])
        val_SSIM_score.append(val_epoch_loss[2])
        scheduler.step(train_epoch_loss[0])
        scheduler.step(val_epoch_loss[0])
    end = time.time()
    print(f"Took {((end-start)/60):.3f} minutes to train")

But every time I want to run the training, I receive the following error:

    0%| | 0/249 [00:00<?, ?it/s]Epoch 1
    -------------------------------
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([16, 3, 128, 128])) that is different to the input size (torch.Size([16, 128])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
      return F.mse_loss(input, target, reduction=self.reduction)
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-195-ff0214e227cd> in <module>()
          9 for epoch in range(nb_epochs):
         10     print(f"Epoch {epoch+1}\n-------------------------------")
    ---> 11     train_epoch_loss = training(model, train_loader, nb_epochs)
         12     val_epoch_loss = validation(model, validation_loader, nb_epochs)
         13     train_loss.append(train_epoch_loss[0])

    <ipython-input-170-dfa2c212ad23> in training(model, trainDataloader, epoch)
         25         optimizer.zero_grad()
         26         outputs = model(blur_image)
    ---> 27         loss = criterion(outputs, sharp_image)
         28
         29         # Perform backpropagation

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        887             result = self._slow_forward(*input, **kwargs)
        888         else:
    --> 889             result = self.forward(*input, **kwargs)
        890         for hook in itertools.chain(
        891                 _global_forward_hooks.values(),

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
        526
        527     def forward(self, input: Tensor, target: Tensor) -> Tensor:
    --> 528         return F.mse_loss(input, target, reduction=self.reduction)
        529
        530

    /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction)
       2926         reduction = _Reduction.legacy_get_string(size_average, reduce)
       2927
    -> 2928     expanded_input, expanded_target = torch.broadcast_tensors(input, target)
       2929     return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
       2930

    /usr/local/lib/python3.7/dist-packages/torch/functional.py in broadcast_tensors(*tensors)
         72     if has_torch_function(tensors):
         73         return handle_torch_function(broadcast_tensors, tensors, *tensors)
    ---> 74     return _VF.broadcast_tensors(tensors)  # type: ignore
         75
         76

    RuntimeError: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2

I'm a newbie in terms of using PyTorch (and image deblurring in general), and so I'm rather confused about the meaning of the error message and how to fix it. I tried to change my parameters and nothing worked. Does anyone have any advice for me on how to solve this problem? I would appreciate any input :)
1
t3_nbg9q0
1,620,911,402
pytorch
How you do efficiently parameterize a batch of multivariate Gaussians in PyTorch
Hi all. This is my first post here, and I am sorry if this isn't the place for questions (I didn't find any rule against it). I want to output a batch of multivariate Gaussians (for likelihood-based learning), but I don't know how to do that. I have a neural network that outputs a mean vector of shape batch_size × obs_dim, and another output that returns a tensor of shape batch_size × (obs_dim × obs_dim). My problem is that I ideally want to return a batch of multivariate Gaussian distributions, and for this I need to make the covariance positive definite, which I don't know how to do. If I didn't have the batch dimension, I would probably have tried torch.matmul(a, a.T). I was also thinking of parametrizing a lower-triangular matrix, but again, I have the problem of how to ensure that it has a positive diagonal and is lower triangular. I can imagine multiplying by a mask, for example, but I was hoping for a more straightforward way.

Thanks
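One common pattern is to have the network emit the d*(d+1)/2 free entries of a batched Cholesky factor and hand that to MultivariateNormal via scale_tril. A sketch under these shapes (the softplus floor is an arbitrary stabilizing choice, and the random tensors stand in for network outputs):

    import torch
    import torch.nn.functional as F
    from torch.distributions import MultivariateNormal

    batch, d = 32, 4
    mean = torch.randn(batch, d)                 # stand-in for the mean head
    raw = torch.randn(batch, d * (d + 1) // 2)   # stand-in for the covariance head

    # scatter the raw values into a lower-triangular matrix per batch element
    idx = torch.tril_indices(d, d)
    L = torch.zeros(batch, d, d)
    L[:, idx[0], idx[1]] = raw

    # force a strictly positive diagonal so L is a valid Cholesky factor
    diag = torch.diagonal(L, dim1=-2, dim2=-1)
    L = L - torch.diag_embed(diag) + torch.diag_embed(F.softplus(diag) + 1e-5)

    dist = MultivariateNormal(mean, scale_tril=L)
    log_prob = dist.log_prob(torch.randn(batch, d))  # shape: (batch,)

Passing scale_tril (rather than covariance_matrix) also avoids a Cholesky decomposition inside the distribution, since L L^T is the covariance by construction.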
1
t3_nb1tvr
1,620,859,968
pytorch
ResNet for big size images
I am using a ResNet architecture for images of size (1280, 720), which are pretty big compared to the standard input size of (224, 224). So I am wondering: should I add more "layers" (actually, combinations of residual blocks) to downsample? By the way, I am using the ResNet as an encoder; do you think I should leave the AdaptiveAvgPool2d layer there? If so, what is the corresponding transposed layer to upsample in the decoder? Thanks
1
t3_na43yf
1,620,758,234
pytorch
Anyone aware of Pytorch implementation of Resnet which lists all modules in forward method?
The official PyTorch ResNet code has the following forward function in its implementation. It's short and sweet, but I would like the forward method to contain all the convolution modules listed out explicitly, instead of some function (layer1/layer2 in the pic) doing it. It would certainly make the code long and difficult to read, but my research demands it. In case anyone knows of any such implementation, or something similar for ResNet, kindly let me know. Thanks! https://preview.redd.it/3tzwjtz6xiy61.png?width=286&format=png&auto=webp&s=073ccdc27af205261a785c2ef15026224284ead2
1
t3_na23bw
1,620,753,242
pytorch
Importing the numpy C-extensions failed
Hello, I'm new to PyTorch and the installation process is driving me crazy. I am using anaconda3 and CUDA 11.1.0, and used `conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch` (the 11.1 nvidia one does not work) to install PyTorch in a conda environment with python=3.8.5 (the default Python that comes with conda). Importing torch works fine in cmd, but not in VS Code. It gives me this error when importing either numpy or torch:

    Exception has occurred: ImportError
    IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
    Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed.
    We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html
    Please note and check the following:
    * The Python version is: Python3.8 from "C:\Users\22468\anaconda3\envs\pytorch\python.exe"
    * The NumPy version is: "1.20.1"
    and make sure that they are the versions you expect.
    Please carefully study the documentation linked above for further help.
    Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
    During handling of the above exception, another exception occurred:
    File "D:\Workspace\pytorch\notes\notes.py", line 1, in <module>
        import numpy

The paths should be set, since I installed conda with the "add to PATH" tickbox checked. I've also tried modifying settings.json within the VS Code project ("python.terminal.activateEnvironment": true) and some other things from Stack Overflow, but nothing works. Should I use pip instead?
1
t3_n9s3qr
1,620,721,314
pytorch
PyTorch Transfer Learning
Used Transfer Learning with ResNet-50 on CIFAR-10 in PyTorch to achieve val_accuracy = 92.58%. You can see the code [here](https://github.com/arjun-majumdar/CNN_Classifications/blob/master/ResNet50_Transfer_Learning_CIFAR10_Finetuning_entire_model.ipynb). **Key takeaway:** Change the first conv layer to use the hyper-parameters kernel_size = (3, 3), stride = (1, 1) and padding = (1, 1) instead of the original ones, since the CIFAR-10 dataset has much smaller images, and the original conv layer hyper-parameters shrink the feature maps so aggressively that the resulting model performs noticeably worse, according to my experiments. Thoughts?
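Concretely, the change amounts to something like this (the maxpool removal is an extra tweak often paired with it on 32x32 inputs, not something claimed above):

    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(pretrained=True)
    # adapt the stem for 32x32 CIFAR-10 images instead of 224x224 ImageNet images
    model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    model.maxpool = nn.Identity()  # optional extra tweak (assumption, see above)
    model.fc = nn.Linear(model.fc.in_features, 10)  # 10 CIFAR-10 classes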
0.81
t3_n9p2j6
1,620,708,384
pytorch
Remove pruned connections
One of the most common pruning techniques is "unstructured, iterative, global magnitude pruning", which prunes the smallest-magnitude p% of weights in each iterative pruning round; p is typically between 10% and 20%. However, after the desired sparsity is reached, say 96% (meaning that 96% of the weights in the neural network are 0), how can I remove these 0s so as to actually remove, say, filters/neurons? This pruning technique produces a lot of 0s which still participate in forward propagation via out = W.out_prev + b. Therefore, it helps with compression but not with reducing inference time. Thanks!
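For reference, the built-in prune.remove only bakes the mask into the weight tensor; it does not shrink anything, which is exactly the limitation described above. A sketch (parameters_to_prune stands for the same (module, name) list fed to global_unstructured):

    import torch.nn.utils.prune as prune

    # after pruning, each module holds weight_orig and weight_mask;
    # this folds them into a plain (still dense, still same-shape) weight
    for module, name in parameters_to_prune:
        prune.remove(module, name)

Actually reducing inference time requires structured pruning (zeroing whole filters/neurons, e.g. via prune.ln_structured) and then rebuilding the architecture with smaller layers, or relying on sparse inference kernels.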
1
t3_n9mdq8
1,620,698,958
pytorch
7 PyTorch Tips You Should Know
nan
1
t3_n9fao8
1,620,678,732
pytorch
Using PyTorch's autograd efficiently with tensors by calculating the Jacobian
In my previous [question](https://stackoverflow.com/questions/67320792/how-to-use-pytorchs-autograd-efficiently-with-tensors/67334809#67334809) I found how to use PyTorch's autograd with tensors:

    import torch
    from torch.autograd import grad
    import torch.nn as nn
    import torch.optim as optim

    class net_x(nn.Module):
        def __init__(self):
            super(net_x, self).__init__()
            self.fc1=nn.Linear(1, 20)
            self.fc2=nn.Linear(20, 20)
            self.out=nn.Linear(20, 4) #a,b,c,d

        def forward(self, x):
            x=torch.tanh(self.fc1(x))
            x=torch.tanh(self.fc2(x))
            x=self.out(x)
            return x

    nx = net_x()

    #input
    t = torch.tensor([1.0, 2.0, 3.2], requires_grad = True) #input vector
    t = torch.reshape(t, (3,1)) #reshape for batch

    #method
    dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)
    dx = torch.diagonal(torch.diagonal(dx, 0, -1), 0)[0] #first vector
    #dx = torch.diagonal(torch.diagonal(dx, 1, -1), 0)[0] #2nd vector
    #dx = torch.diagonal(torch.diagonal(dx, 2, -1), 0)[0] #3rd vector
    #dx = torch.diagonal(torch.diagonal(dx, 3, -1), 0)[0] #4th vector
    dx
    >>> tensor([-0.0142, -0.0517, -0.0634])

The issue is that `grad` only knows how to propagate gradients from a scalar tensor (which my network's output is not), which is why I had to calculate the Jacobian. However, this is not very efficient and a bit slow, as my matrix is large and calculating the entire Jacobian takes a while (and I'm also not using the entire Jacobian matrix). Is there a way to calculate only the diagonals of the Jacobian (to get the 4 vectors in this example)? There appears to be an [open feature request](https://github.com/pytorch/pytorch/issues/41530), but it doesn't appear to have gotten much attention. I tried setting vectorize=True in torch.autograd.functional.jacobian; however, this seems to be slower.
1
t3_n98j4a
1,620,662,217
pytorch
I'm publishing a free course that teaches PyTorch for audio/music processing.
I’ve received numerous requests from The Sound of AI community to cover PyTorch in my tutorials. For this reason, I’m starting a new series which will teach you PyTorch with an eye on audio and music processing! Among other aspects of PyTorch, I’ll be covering torchaudio, the GPU-accelerated audio processing library for PyTorch. Ready to start this cool journey with me? Check out the course overview: [https://www.youtube.com/watch?v=gp2wZqDoJ1Y](https://www.youtube.com/watch?v=gp2wZqDoJ1Y)
1
t3_n92mw8
1,620,647,614
pytorch
Changing learning rate after loading scheduler's state dict
I have my model:

    import torch
    import torch.nn as nn
    import torch.optim as optim

    class net_x(nn.Module):
        def __init__(self):
            super(net_x, self).__init__()
            self.fc1=nn.Linear(2, 20)
            self.fc2=nn.Linear(20, 20)
            self.out=nn.Linear(20, 4)

        def forward(self, x):
            x=self.fc1(x)
            x=self.fc2(x)
            x=self.out(x)
            return x

    nx = net_x()
    r = torch.tensor([1.0,2.0])
    optimizer = optim.Adam(nx.parameters(), lr = 0.1)
    scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-2, max_lr=0.1, step_size_up=1, mode="triangular2", cycle_momentum=False)

    path = 'opt.pt'
    for epoch in range(10):
        optimizer.zero_grad()
        net_predictions = nx(r)
        loss = torch.sum(torch.randint(0,10,(4,)) - net_predictions)
        loss.backward()
        optimizer.step()
        scheduler.step()
        print('loss:' , loss)

    torch.save({
        'epoch': epoch,
        'net_x_state_dict': nx.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'scheduler': scheduler.state_dict(),
    }, path)

    checkpoint = torch.load(path)
    nx.load_state_dict(checkpoint['net_x_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    scheduler.load_state_dict(checkpoint['scheduler'])

It trains and loads just fine, but I'm interested in lowering the base_lr of the scheduler from 1e-2 to 1e-3 after training. Is there a way to do this without creating a completely new scheduler? From my understanding, if I do that I will lose the state that the current scheduler is using, and training will get worse / more or less start from scratch.
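One lightweight option, relying on the scheduler's base_lrs attribute (treat this as an assumption to verify on your version), is to overwrite the loaded values in place so the rest of the scheduler state is kept:

    scheduler.load_state_dict(checkpoint['scheduler'])
    # base_lrs holds one base learning rate per parameter group
    scheduler.base_lrs = [1e-3 for _ in scheduler.base_lrs]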
1
t3_n8inio
1,620,580,398
pytorch
One liner question
I have two tensors:

    a = torch.nn.Parameter(torch.rand(7, requires_grad=True))
    b = torch.randint(0,60, (20,))

Is there a one-liner (or a quick & short way) that can create a tensor (call it "x") of size 20 (similar to "b") with conditions? I.e., [b<4 use a[0], 4<=b<12 use a[1], 12<=b<22 use a[2], <28, <38, <50, >=50] for every b. So if: b = [12, 93, 54, 0...] I want my new tensor "x" to be: x = [a[2], a[6], a[6]...] I'm going to use this "x" tensor to train and need the values to be backpropped and learnable, i.e.

    loss = torch.rand(20) * x
    loss.backward()
    ...

So if one of the a's is not in x, I want it to not change. **Update** Apologies, I thought I hid the post so I won't get answers, but for some reason it didn't seem to work. Anyway, I found an answer: x = a[0]*(b<4) + a[1]*((4<=b)&(b<12)) + a[2]*((12<=b)&(b<22)) + a[3]*((22<=b)&(b<28)) + a[4]*((28<=b)&(b<30)) + a[5]*((30<=b)&(b<50)) + a[6]*(b>=50)
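An equivalent one-liner, assuming the bucket edges listed first in the post, uses torch.bucketize; indexing a Parameter keeps gradients flowing to it, and unused entries of a simply receive zero gradient:

    edges = torch.tensor([4, 12, 22, 28, 38, 50])
    # right=True gives edges[i-1] <= b < edges[i], matching the conditions above
    x = a[torch.bucketize(b, edges, right=True)]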
1
t3_n7qiyr
1,620,484,548
pytorch
Is there any way I can install PyTorch without using the command prompt or compiling it myself?
I have a limited internet plan so I’d like to use my phone’s unlimited data to just download a .exe or .msi file to transfer to my pc. Is this possible? I assume not but want to make sure.
0.71
t3_n7dlp5
1,620,435,243
pytorch
[N] PyTorch Lightning 1.3 - Lightning CLI, PyTorch Profiler, Improved Early Stopping
Announcing the release of Lightning 1.3! This new release includes: * a new Lightning CLI * PyTorch profiler integration * improved TPU support * new early stopping strategies * predict and validate trainer routines Install: [https://bit.ly/2PTVZBa](https://bit.ly/2PTVZBa) Release notes: [https://bit.ly/3b5acTk](https://bit.ly/3b5acTk) Read more via #PyTorch's Medium blog: [https://bit.ly/3b62N6f](https://t.co/xyqAx6G94W?amp=1) ​ https://preview.redd.it/aqt9znfohrx61.png?width=1360&format=png&auto=webp&s=820bf7e92ba9a29258ab0adff8f2484a178f8d3e Big thanks to everyone who contributed to this release! [**@akihironitta**](https://github.com/akihironitta), [**@alessiobonfiglio**](https://github.com/alessiobonfiglio), [**@amisev**](https://github.com/amisev), [**@amogkam**](https://github.com/amogkam), [**@ananthsub**](https://github.com/ananthsub), [**@ArvinZhuang**](https://github.com/ArvinZhuang), [**@ashleve**](https://github.com/ashleve), [**@asnorkin**](https://github.com/asnorkin), [**@awaelchli**](https://github.com/awaelchli), [**@BloodAxe**](https://github.com/BloodAxe), [**@bmahlbrand**](https://github.com/bmahlbrand), [**@Borda**](https://github.com/Borda), [**@borisdayma**](https://github.com/borisdayma), [**@camruta**](https://github.com/camruta), [**@carmocca**](https://github.com/carmocca), [**@ceshine**](https://github.com/ceshine), [**@dbonner**](https://github.com/dbonner), [**@dhkim0225**](https://github.com/dhkim0225), [**@EdwardJB**](https://github.com/EdwardJB), [**@EliaCereda**](https://github.com/EliaCereda), [**@EricCousineau-TRI**](https://github.com/EricCousineau-TRI), [**@ethanwharris**](https://github.com/ethanwharris), [**@FlorianMF**](https://github.com/FlorianMF), [**@hemildesai**](https://github.com/hemildesai), [**@ifsheldon**](https://github.com/ifsheldon), [**@kaushikb11**](https://github.com/kaushikb11), [**@mauvilsa**](https://github.com/mauvilsa), [**@maxfrei750**](https://github.com/maxfrei750), [**@mesejo**](https://github.com/mesejo), [**@ramonemiliani93**](https://github.com/ramonemiliani93), [**@rohitgr7**](https://github.com/rohitgr7), [**@s-rog**](https://github.com/s-rog), [**@sadiqj**](https://github.com/sadiqj), [**@scart97**](https://github.com/scart97), [**@SeanNaren**](https://github.com/SeanNaren), [**@shuyingsunshine21**](https://github.com/shuyingsunshine21), [**@SkafteNicki**](https://github.com/SkafteNicki), [**@SpontaneousDuck**](https://github.com/SpontaneousDuck), [**@stllfe**](https://github.com/stllfe), [**@tchaton**](https://github.com/tchaton), [**@THasthika**](https://github.com/THasthika), [**@vballoli**](https://github.com/vballoli) \**If we forgot someone due to not matching commit email with GitHub account, let us know :\]*
1
t3_n78qtk
1,620,420,918
pytorch
ResNet-18 Pruning PyTorch
I have coded "Global, unstructured & iterative" pruning using ResNet-18 trained from scratch on the CIFAR-10 dataset in PyTorch. You can refer to the code [here](https://github.com/arjun-majumdar/Neural_Network_Pruning/blob/main/ResNet18_Global_Pruning.ipynb). Let me know your comments/thoughts. Cheers!
1
t3_n6v0m3
1,620,381,793
pytorch
How is EMA different from the use of momentum
As far as I have seen, an EMA is applied to the weights, while momentum is applied to the gradients. I want to know more about them. Or is my understanding wrong?
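A minimal sketch of the distinction (illustrative only, not from any particular paper):

    import copy
    import torch

    model = torch.nn.Linear(10, 2)      # stand-in model
    ema_model = copy.deepcopy(model)    # smoothed copy of the parameters

    # Momentum lives inside the optimizer and smooths the *gradients*
    # used to update the weights:
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # A weight EMA smooths the *parameters* themselves, after each optimizer
    # step, and the smoothed copy is typically the one used for evaluation:
    decay = 0.999
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1 - decay)

So momentum changes the trajectory of training itself, while a weight EMA leaves training untouched and just maintains a second, smoothed set of weights.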
1
t3_n6stqm
1,620,372,053
pytorch
How is the embedding layer output calculated?
I have this model with its default parameters

    class Model(nn.Module):
        def __init__(self, vocab_size=59, embedding_dim=10, context_size=2, hidden_dim=256):
            super(Model,self).__init__()
            self.embeddings = nn.Embedding(vocab_size, embedding_dim)
            self.linear1 = nn.Linear(context_size * embedding_dim, hidden_dim)
            self.linear2 = nn.Linear(hidden_dim, vocab_size)

        def forward(self, inputs):
            # inputs.shape = torch.Size([2])
            embeds = self.embeddings(inputs)
            print(embeds.shape) # torch.Size([2, 10])
            embeds = embeds.view(1,-1)
            out = F.relu(self.linear1(embeds))
            out = self.linear2(out)
            log_probs = F.log_softmax(out, dim=1)
            return log_probs

    model = Model()
    model(torch.randint(0, 59, (2,))) # forward prediction (embedding indices must be integers)

I know that `self.embeddings.weight` has a shape of `torch.Size([59, 10])`, and after the forward prediction `print(embeds.shape)` shows the shape as `torch.Size([2, 10])`. How is the output shape of embeds (2, 10)?
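A quick way to see what is happening (a standalone sketch, not part of the original post's code): an embedding layer is just an index lookup into its (vocab_size, embedding_dim) weight matrix, so 2 indices select 2 rows of 10 values each.

    import torch
    import torch.nn as nn

    emb = nn.Embedding(59, 10)          # a (59, 10) lookup table
    idx = torch.tensor([3, 17])         # two integer indices
    out = emb(idx)                      # rows 3 and 17 of the table -> shape (2, 10)
    assert torch.equal(out, emb.weight[idx])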
1
t3_n6b4ta
1,620,318,135
pytorch
Mindblown 🤯🤯: Bring your Minecraft creation into the real world - generate photorealistic images of large 3D block worlds such as those created in Minecraft! (GANcraft)
nan
0.64
t3_n6125m
1,620,283,172
pytorch
How to “learn” pytorch
Hello all, I'm brand new to PyTorch. I have been learning deep learning for close to a year now, and only managed to learn CNNs for vision and implement a very trash one in TensorFlow. Anyway, I decided I wanted to switch to PyTorch since it feels more like Python. The issue is, I don't know how to "learn" PyTorch. I don't want to make the same mistake I did at the beginning of my deep learning journey, where I was stuck in tutorial hell taking TensorFlow courses and doing the same vision problems as practice. In fact, I want to learn deep learning for natural language processing, but I feel like learning PyTorch and a bit of RNN theory isn't enough for me to jump right into a project. How would you all recommend I start getting good with it? I was planning on watching the getting-started videos on the PyTorch website and then maybe implementing basic stuff, like linear regression and logistic regression, and then working my way up to RNNs, but skipping vision because that's not really what I'm interested in. I have an extensive background in working with pandas, NumPy, and Python for data science, so I know how to program effectively; I'm just new to PyTorch. Also, I know a bit of DL theory already.
0.88
t3_n566qd
1,620,185,281
pytorch
What are the advantages and disadvantages of PyTorch Geometric vs Deep Graph Library (DGL)?
nan
1
t3_n4xdy0
1,620,160,183
pytorch
Ultimate Guide to Machine Learning with Python (e-books bundle)
This bundle of e-books is specially crafted for beginners. Everything from Python basics to the deployment of Machine Learning algorithms to production in one place. What's included? • Ultimate Guide to Machine Learning with Python e-book (PDF) • Full Source Code with all examples from the book (Jupyter Notebooks) Six additional bonus materials: • Bonus #1: Python for Data Science (PDF + Full Source Code) • Bonus #2: Mathematics for Machine Learning (PDF) • Bonus #3: Guide to Data Analysis (PDF + Full Source Code) • Bonus #4: Neural Networks Zoo (PDF) • Bonus #5: Access to a private Discord Community Visit the page to learn more: [https://rubikscode.net/ultimate-guide-to-machine-learning-with-python/](https://rubikscode.net/ultimate-guide-to-machine-learning-with-python/) ​ https://reddit.com/link/n4ll8a/video/e50kztpr43x61/player
0.25
t3_n4ll8a
1,620,126,253
pytorch
Latest from Baidu researchers: Automatic video generation from audio or text
nan
1
t3_n4fmag
1,620,101,311
pytorch
Tesla Road Detection
Does anyone have a clue how Tesla does its road detection? I can't think of much, but I've started working through some projects to get something interesting up. More importantly, I've been losing sleep over this because it intrigues me. The internet doesn't give me anything interesting to read about, and I've started to build my own robot using ROS. More concretely, the real questions are: how do I draw on pictures using Torch, how does Tesla do its edge detection, how would I go about using Python to stitch that top view, and how do I do distance estimation? Any information will be helpful; this is my second post. Sorry if this confused anyone; as mentioned earlier, this is my second post. [PyTorch - Andrej Karpathy](https://www.youtube.com/watch?v=oBklltKXtDE&t=271s)
0.78
t3_n3t4a3
1,620,039,054
pytorch
Run Python Code In Parallel Using Multiprocessing
nan
0.36
t3_n3nck0
1,620,014,259
pytorch
Can nn.RNN handle variable length inputs?
I'm building a simple RNN to classify text. Do I have to pad every sentence I send into the RNN to make every tensor the same size? I'm using GloVe embeddings, if that makes any difference. At the moment I'm feeding in index values of variable-length sentences, one sequence of indices representing each sentence.
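nn.RNN itself expects rectangular tensors, but the standard workaround is padding plus PackedSequence, which lets the RNN skip the padded steps. A sketch with made-up sizes (embedding dim 50 to match GloVe-50):

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

    seqs = [torch.randn(5, 50), torch.randn(3, 50)]  # two sentences of embedded tokens
    lengths = torch.tensor([5, 3])

    padded = pad_sequence(seqs, batch_first=True)    # (2, 5, 50), zero-padded
    packed = pack_padded_sequence(padded, lengths, batch_first=True,
                                  enforce_sorted=False)

    rnn = nn.RNN(input_size=50, hidden_size=64, batch_first=True)
    out, h = rnn(packed)                             # padded positions are skipped
    out, _ = pad_packed_sequence(out, batch_first=True)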
1
t3_n3f3ve
1,619,987,093
pytorch
Solving multidimensional PDEs in pytorch
nan
1
t3_n21g4m
1,619,810,943
pytorch
Differentiable augmentation for GANs (using Kornia)
nan
1
t3_n1wh16
1,619,797,072
pytorch
Understanding Lambda function
I'm calculating the Jacobian matrix using a lambda function:

    #network
    import torch
    from torch.autograd import grad
    import torch.nn as nn
    import torch.optim as optim

    class net_x(nn.Module):
        def __init__(self):
            super(net_x, self).__init__()
            self.fc1=nn.Linear(1, 20)
            self.fc2=nn.Linear(20, 20)
            self.out=nn.Linear(20, 4)

        def forward(self, x):
            x=torch.tanh(self.fc1(x))
            x=torch.tanh(self.fc2(x))
            x=self.out(x)
            return x

    nx = net_x()
    t = torch.tensor([1.0, 2.0, 3.2], requires_grad = True) #input vector
    t = torch.reshape(t, (3,1)) #reshape for batch

    #Jacobian
    dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)

My issue is that I also want to save the output of the network, nx(t_), without rerunning it through the network. I tried saving it to a variable: out = (lambda t_: nx(t_))(t) But then I can't pass that through the Jacobian calculation, because it needs a function.
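One workaround sketch, relying on the assumption that jacobian (with the default vectorize=False) evaluates the function once on the given input, so the stashed value really is a single extra reference rather than a second forward pass; worth verifying on your version:

    cache = {}

    def f(t_):
        y = nx(t_)
        cache['out'] = y.detach()   # stash the forward output
        return y

    dx = torch.autograd.functional.jacobian(f, t)
    out = cache['out']              # network output, no rerun needed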
0.5
t3_n1v2qd
1,619,793,060
pytorch
programming PyTorch for Deep Learning book
Did anyone finish the book without issues in Google Colab? Thanks
0.4
t3_n1oiz7
1,619,766,685
pytorch
How do I load a local model with torch.hub.load/
Hi, I need to avoid downloading the model from the web (due to restrictions on the machine it's installed on). This works, but downloads the model from the net: model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet101', pretrained=True) I have placed the `.pth` file and the `hubconf.py` file in the /tmp/ folder and changed my code to: model = torch.hub.load('/tmp/', 'deeplabv3_resnet101', pretrained=True, source='local') But to my surprise it still downloads the model from the internet. What am I doing wrong? How can I load the model locally? Just to give you a bit more detail, I'm doing all this in a Docker container which has a read-only volume at runtime, so that's why downloading new files fails. Thanks, John
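If the download is being triggered by pretrained=True (a plausible assumption: source='local' only controls where hubconf.py comes from, not where the weights come from), one workaround is to build the architecture locally and load the weights yourself. A sketch; the .pth file name below is hypothetical:

    import torch

    # source='local' reads hubconf.py from /tmp/; pretrained=False avoids any download
    model = torch.hub.load('/tmp/', 'deeplabv3_resnet101', pretrained=False, source='local')
    state = torch.load('/tmp/deeplabv3_resnet101.pth', map_location='cpu')  # hypothetical file name
    model.load_state_dict(state)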
1
t3_n0ew93
1,619,616,795
pytorch
Can I change the value of a variable using arithmetic operators and detach and still let gradients flow through it?
Hi. So I need to implement something, and I was wondering if I can implement it the following way. There is an encoder and a decoder block. Let the output of the encoder be x and the output of the decoder be y. For some examples, I need to replace the value of x with a value from a file. So if I just implement x = FILE_VALUE, I will understandably break the graph and gradients won't flow all the way back. However, if I do something like x -= (x.clone().detach() - FILE_VALUE) I believe I should essentially get the same effect while also allowing the appropriate gradients to flow all the way back. Is my understanding correct, or am I doing something wrong? Is there an easier way to do this?
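For reference, the same effect is usually written as an out-of-place "straight-through" style update, which avoids in-place autograd pitfalls; file_value below stands for the loaded tensor:

    # forward value becomes file_value; the gradient path through x is preserved,
    # since d(x_new)/d(x) = 1 (the detached term contributes no gradient)
    x = x + (file_value - x).detach()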
1
t3_n0e8j9
1,619,614,741
pytorch
Why is Pytorch code slower than libraries like fairseq and OpenNMT-py?
I have made a Transformer model using torch.nn.Transformer; it has around 18 million parameters, but its training is very slow, orders of magnitude slower than libraries like fairseq and OpenNMT-py. Are there any important points that need to be taken care of to get optimal training speed, or is plain PyTorch code slower than these libraries?
0.81
t3_n0dgm1
1,619,612,101
pytorch
Avalanche-Python Library for Continual Learning - Analytics India Magazine
nan
1
t3_n0c9yo
1,619,607,679
pytorch
No CUDA capable device is detected
Hello, apologies if this is not the correct place to be posting this. I am trying to run a GitHub repo for training that is set up using miniconda3. I am using PyTorch version 1.3, and the CUDA toolkit in the environment is set to 9.2. I keep getting this error: "No CUDA capable device is detected /opt/conda...", and the error lists a path to a file. Moreover, it also states the following line: "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure the latest NVIDIA driver is installed and running." Please note that I am a complete and utter noob with GPUs and machine learning, and am currently just trying to execute a GitHub repo that already has the commands to set everything up. [https://github.com/amazon-research/datatuner](https://github.com/amazon-research/datatuner) I do not know how I can resolve this error. Any help or guidance will be highly appreciated.
0.5
t3_n0866v
1,619,589,354
pytorch
How to train a tensor of tensors?
I have a class model: class Model(nn.Module) that has 2 learnable parameters:

    self.a = torch.nn.Parameter(torch.rand(1, requires_grad=True))
    self.b = torch.nn.Parameter(torch.rand(1, requires_grad=True))

It also has a neural network class inside: class Net_x(nn.Module). So in order to train all the parameters, I combine the learnable parameters "a, b" and the NN's params (under the __init__ of class Model):

    self.net_x = self.Net_X()
    self.params = list(self.net_x.parameters())
    self.params.extend(list([self.a, self.b]))

This works well and the network is training just fine. My issue is that now I'm trying to change one of the parameters to be a tensor of tensors of parameters:

~~self.a = torch.nn.Parameter(torch.rand(1, requires_grad=True))~~

    self.a = torch.tensor( [[ torch.nn.Parameter(torch.rand(1, requires_grad=True)) ] for i in range(5)] )

That is because at every time step (not epoch), I need to use a different parameter from self.a. Example:

    for epoch in range(n_epochs):
        if timestep<5:
            val = 50 - a[0] * b
            loss = 10 - val
        elif timestep >=5 and timestep < 10:
            val = 50 - a[1] * b
            loss = 10 - val

The model runs without issues, but the parameters are not being updated (i.e., they stay the same at every epoch). P.S. I would have added my code, but it's really long. I'm hoping the answer is simple (and if not, I'll try to reduce my code and attach it)
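A likely culprit worth ruling out: torch.tensor(...) copies the values out of the Parameters and produces a new, non-leaf tensor, detaching them from the optimizer. The usual fix is one leaf Parameter holding all the entries and indexing into it, which keeps the graph intact. A sketch (the index arithmetic mirrors the example above and is an assumption about the intended schedule):

    self.a = torch.nn.Parameter(torch.rand(5))   # 5 learnable scalars, one leaf tensor

    # later, per time step:
    val = 50 - self.a[timestep // 5] * self.b    # indexing a Parameter is differentiable

Since self.a is now a single Parameter, it is also picked up by self.net_x-style parameter lists without any extra wrapping, and entries that are never indexed simply receive zero gradient and stay unchanged.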
0.75
t3_mzsqzl
1,619,541,977
pytorch
Pytorch 2-D and 3-D Table Interpolation
I am trying to get 2-D and 3-D interpolation table lookups running in PyTorch, but I don't believe [torch.lerp](https://pytorch.org/docs/stable/generated/torch.lerp.html) supports them, and I haven't been able to find any other PyTorch-native solution. Has anyone done a neural network approximation of a 2-D or 3-D linear interpolation table in PyTorch? I am wondering if it is a worthwhile endeavor for speed at inference time (especially if the required size of the NN to approximate a table is large), or if there is a simpler/more standard way that I missed. For some background: I am trying to encode a known physical relationship, given in the form of a linear interpolation table, into an LSTM time-series forecasting system I am designing. The interpolation table queries would be done during training, so this isn't simply a pre-processing step.
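One native option worth knowing about: F.grid_sample does batched bilinear (and, with 5-D inputs, trilinear) lookups, provided the table is laid out as an image/volume and the query coordinates are normalized to [-1, 1]. A sketch for the 2-D case (sizes are arbitrary):

    import torch
    import torch.nn.functional as F

    table = torch.randn(1, 1, 20, 30)           # the 2-D table as an (N, C, H, W) "image"
    queries = torch.rand(1, 1, 100, 2) * 2 - 1  # 100 (x, y) points, normalized to [-1, 1]

    vals = F.grid_sample(table, queries, mode='bilinear', align_corners=True)
    # vals: (1, 1, 1, 100) -- one interpolated value per query point

It is fully differentiable with respect to both the table and the queries, so it can sit inside the LSTM training loop; the main bookkeeping is mapping physical coordinates into the normalized [-1, 1] grid convention.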
1
t3_mz3lwd
1,619,459,653