sub: stringclasses (4 values)
title: stringlengths (3 to 304)
selftext: stringlengths (3 to 30k)
upvote_ratio: float64 (0.07 to 1)
id: stringlengths (9 to 9)
created_utc: float64 (1.6B to 1.65B)
pytorch
Tool for Complex Data Labelling Tasks
Hi /r/pytorch readers! We have created a [labelling tool](https://humanlambdas.com/solutions/data-labelling) that can be customized to display all sorts of data models and tasks. Here are a couple of examples for [NLP](https://humanlambdas.com/templates/nlp-news-article-annotation) and [CV](https://humanlambdas.com/templates/computer-vision-annotation). I hope some of you will find this useful, and if you have any thoughts I would love to hear your feedback!
0.67
t3_l67u11
1,611,763,986
pytorch
How can I decrease my test loss?
My model is training and while the training loss is decreasing, my test loss is increasing. I understand that this is because my model is overfitting. There is already a dropout layer in my architecture, so how can I decrease my test loss more?
0.75
t3_l4qpws
1,611,590,417
pytorch
Transformer and attention mechanism
Hello community, I'm reading the famous "Attention Is All You Need" paper and was wondering: is multi-head attention with only one head equivalent to a classical/basic attention layer?
0.88
t3_l455fn
1,611,513,599
pytorch
How to easily deploy any PyTorch model to the web?!
[https://medium.com/towards-artificial-intelligence/deploy-deep-learning-models-using-streamlit-and-heroku-22f6efae9141](https://medium.com/towards-artificial-intelligence/deploy-deep-learning-models-using-streamlit-and-heroku-22f6efae9141) Deploying Deep Learning models with an interactive UI isn't easy. In this hands-on tutorial blog, an NLP model with a very minimal frontend is deployed using Streamlit. The last part of the blog covers the steps to deploy the frontend and the backend to the internet using Heroku. A live version is available at [https://classifyquestions.herokuapp.com.](https://classifyquestions.herokuapp.com/?fbclid=IwAR3ly66YyJ0mZNW_omXLcODQSGcr8P_ARhWfydKaYZl0u4Xk1M3aT0IthwU) Have a nice read. If you have any feedback or questions, comment below. https://reddit.com/link/l3t9f4/video/f9fq25llt7d61/player
0.45
t3_l3t9f4
1,611,465,794
pytorch
Want to learn how to train a neural network to classify images?
nan
0.38
t3_l29aqn
1,611,267,600
pytorch
Trying to create a batch generator, sorry for the mess. More info in the comments.
nan
1
t3_l25mvl
1,611,257,166
pytorch
Wanting to make age detector, but valid loss is high with low accuracy, not more than 46%
I am planning to make an age detector with 10 classes, each covering a range such as 2-6 years old, 7-12 years old, and so on. I use a pretrained ResNet18. During training, I did not freeze the layers; I just let it update all the parameters. The loss function I am using is cross-entropy, with the Adam optimizer and an LR scheduler. The dataset contains 10,000 images for training and about 3,000 images for validation. The problem keeping me stuck is that although the training loss keeps decreasing, the validation loss does not. No matter how I change the hyper-parameters, it drops to a minimum of 33 and bounces back to 60+, which is the initial value when I started training. The validation accuracy is at most 46%. This is my code. Please have a look at what is causing this problem. Or is there a problem with the dataset, such as not enough data for training?

    if train_mode:
        train_loader, test_loader = data_load()
        for iteration in range(iteration_start, epoch):
            time_start = time.time()
            for current_mode in ['train', 'valid']:
                total_loss = 0
                total_accuracy = 0
                if current_mode == 'train':
                    loader = train_loader
                    model.train()
                else:
                    loader = test_loader
                    model.eval()
                for batch in loader:
                    images, labels = batch
                    images = images.to(device)
                    labels = labels.to(device)
                    with torch.set_grad_enabled(current_mode == 'train'):
                        output = model(images)
                        loss = loss_function(output, labels)
                        total_loss += loss.item()
                    if current_mode == 'train':
                        optimizer.zero_grad()
                        loss.backward()
                        optimizer.step()
                    else:
                        accuracy = calculate_accuracy(output, labels)
                        total_accuracy += accuracy
                record_loss(total_loss, current_mode, iteration)
                if current_mode == 'valid':
                    scheduler.step(total_loss)  # total_loss or total_accuracy, based on which you want to enhance
                    record_accuracy(total_accuracy / len(loader), iteration)

This is my model:

    def custom_model():
        my_model = models.resnet18(pretrained=True)
        # for paras in my_model.parameters():
        #     paras.requires_grad = False
        num_fc_layer = my_model.fc.in_features
        custom_fc_layers = nn.Sequential(
            nn.BatchNorm1d(num_fc_layer),
            nn.Dropout(0.5),
            nn.Linear(num_fc_layer, num_class)
        )
        my_model.fc = custom_fc_layers
        return my_model

And this is my optimizer, loss function, and scheduler:

    if __name__ == '__main__':
        model = custom_model()
        model.to(device)
        optimizer = optim.Adam(model.parameters(), lr=learning_rate)
        scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=1, verbose=True)
        loss_function = nn.CrossEntropyLoss()
        main()
1
t3_l20uv2
1,611,243,737
pytorch
How to keep GPU from getting full?
(Apologies if this is repetitive.) I have an RL pipeline, and the GPU RAM is exceeded after a certain number of epochs. I have already tried using Python's del() as well as torch.cuda.empty_cache().
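A common cause worth ruling out (an assumption on my part, not something stated in the post): tensors stored across steps that still carry the autograd graph keep every step's activations alive on the GPU, and del/empty_cache cannot release them. A minimal sketch of storing detached scalars instead:

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    policy = nn.Linear(8, 2).to(device)            # stand-in for the real RL model
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    history = []
    for step in range(100):
        obs = torch.randn(32, 8, device=device)   # stand-in for a batch of observations
        loss = policy(obs).pow(2).mean()           # stand-in for the RL loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        # appending `loss` itself would keep each step's graph alive on the GPU;
        # store a detached Python float instead
        history.append(loss.item())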
0.83
t3_l20955
1,611,241,833
pytorch
Why does Neural Style Transfer work on images with range [0,255] if pytorch models are trained on images with range [0,1]?
[Original paper](https://arxiv.org/abs/1508.06576). The [PyTorch docs](https://pytorch.org/docs/stable/torchvision/models.html) state that all models were trained using images in the range of `[0, 1]`. However, there seem to be better results when using images in the range `[0, 255]`: Consider this output, which uses the `style loss` described in the original paper. Both sets of results use an identical process, but the results on the bottom transform the tensor into the range of `[0, 255]` before applying backpropagation. https://preview.redd.it/5dhpfkpo4ic61.png?width=1776&format=png&auto=webp&s=690d509eb074e27d7a2dbfc0e86cd89ba62823c8 The results are more visually appealing for `[0, 255]`, and the behavior of the loss is better as well - images in the range of `[0, 1]` reach a nonzero convergence limit, whereas images in the range of `[0, 255]` do not reach this limit for 1000+ epochs. Why does the range of `[0, 255]` work at all? If these models were trained in the range of `[0, 1]`, wouldn't any pixel above `1` be interpreted as purely white?
0.78
t3_l1av1b
1,611,154,955
pytorch
Any additional books to level up my skill in pytorch?
I'm currently working through the official PyTorch tutorials and want something I can use as a supplement, but there are so many books that I want to narrow it down.
1
t3_l12yg1
1,611,121,765
pytorch
Ways to save your neural network
nan
0.5
t3_l0x61e
1,611,101,883
pytorch
RecBole: A unified, comprehensive and efficient recommendation library.
nan
1
t3_l0m1fj
1,611,070,094
pytorch
RecBole: A unified, comprehensive and efficient recommendation library.
​ https://preview.redd.it/m9kajptoz8c61.png?width=432&format=png&auto=webp&s=9625b414178899503c944f66b40f391ab72b2a24
1
t3_l0fiw6
1,611,044,125
pytorch
I want to remove the mosaic from a picture using Python. What should a beginner do first?
I found some libraries, called GAN or TecoGAN. I found some tutorials about GANs, but there are not many TecoGAN tutorials. What should I do first?
0.25
t3_l046bs
1,611,004,993
pytorch
A bit of code to implement binary masking for arbitrary models a la Lottery Ticket Hypothesis
I felt that the packages available online to do this were getting ahead of themselves, so I whipped this up. It's a simple wrapper that goes around whatever module you want, keeps track of masks, lets you apply whatever function you want to change masks over time, can load the parameters from another model with the same architecture, can be saved and loaded, trained, etc. No fancy work, just use it to wrap an existing model, and away you go.

    def mask_fn(param, old_mask, percentile):
        if len(param.shape) > 1:
            absparam = abs(param)
            new_mask = (absparam > torch.quantile(absparam, percentile)).to(param.dtype)
            return(new_mask * old_mask)
        else:
            return(old_mask)

    class binary_mask_model(torch.nn.Module):
        # Becomes a copy of a 'parent model,' with the extra attribute that it holds onto
        # binary element-wise masks of each parameter, and can apply or modify them at will.
        # This allows you to easily prune models according to methodologies like the one
        # presented in "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
        # (Frankle, J. and Carbin, M., 2019)
        def __init__(self, parent_model):
            super(binary_mask_model, self).__init__()
            # Copies the existing model, and adds it to itself
            self.model = copy.deepcopy(parent_model)
            self.mask_set = []
            # Adds, to every parameter in the model, another 1-valued mask tensor of equal shape
            for param in self.model.parameters():
                self.mask_set.append(torch.ones(param.shape, dtype = param.dtype, device = param.device))

        def apply_mask(self):
            # Multiplies all parameters by their mask
            for mask, param in zip(self.mask_set, self.model.parameters()):
                param.data = mask * param.data

        def forward(self, *args, **kwargs):
            # Re-masks the tensors in case training changed some 0-valued entries, then
            # executes the exact same forward pass as the parent model
            if self.training:
                self.apply_mask()
            return(self.model(*args, **kwargs))

        def mod_mask(self, mask_fn = mask_fn, *args, **kwargs):
            # Applies a function of one's own design to update the masks, and then
            # masks the parameters again
            for i, param in zip(range(len(self.mask_set)), self.model.parameters()):
                self.mask_set[i] = mask_fn(param, self.mask_set[i], *args, **kwargs)
            self.apply_mask()

        def load_parameters(self, source_model):
            # Pulls in the parameters from another model with the same architecture, in case you
            # want to reload the parent model, or pull from a pretrained model
            for param, source_param in zip(self.model.parameters(), source_model.parameters()):
                param.data = source_param.data
            self.apply_mask()

    # Example
    import torchvision.models as models

    parent_model = models.resnet18().to('cuda')
    masked_model = binary_mask_model(parent_model)
    masked_model.mod_mask(percentile = 0.9)

    torch.save(masked_model, 'mask.torch')
    loaded_model = torch.load('mask.torch')

    opt = torch.optim.Adam(loaded_model.parameters(), lr = 0.0005)
    loss = loaded_model(torch.randn(5, 3, 128, 128, device = 'cuda')).mean()
    loss.backward()
    opt.step()
1
t3_l03udy
1,611,004,015
pytorch
Detectron2 Installation | 2021 Tutorial
nan
1
t3_l01sx4
1,610,998,247
pytorch
The right way to make custom losses
I noticed that some codebases implement their custom losses as a plain function, while others subclass nn.Module and define its forward and backward methods. What's the right way to do it?
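For context, a minimal sketch of the two styles mentioned above (my own illustration, assuming a simple weighted MSE as the custom loss); in both cases autograd derives the backward pass automatically as long as only differentiable torch ops are used:

    import torch
    import torch.nn as nn

    # Style 1: a plain function
    def weighted_mse(pred, target, weight=2.0):
        return (weight * (pred - target) ** 2).mean()

    # Style 2: an nn.Module subclass -- only forward() is needed; a hand-written
    # backward only comes into play when writing a torch.autograd.Function
    class WeightedMSE(nn.Module):
        def __init__(self, weight=2.0):
            super().__init__()
            self.weight = weight

        def forward(self, pred, target):
            return (self.weight * (pred - target) ** 2).mean()

    pred = torch.randn(4, 3, requires_grad=True)
    target = torch.randn(4, 3)
    weighted_mse(pred, target).backward()      # both styles backpropagate the same way
    WeightedMSE()(pred, target).backward()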
1
t3_kxw9y0
1,610,723,088
pytorch
Is the amount of GPU on each machine expected to be identical when multi-machine multi-gpu training using distributed.launch ?
I have 8 GPUs on machine 1 and 4 GPUs on machine 2, and I'd like to perform multi-GPU training on all 12 GPUs. Is it OK to train on 8 GPUs and 4 GPUs across the two machines, despite using a different number of GPUs on each one?
1
t3_kxr2cr
1,610,702,448
pytorch
Error/Question about NSPopoverTouchBarItemButton
When I try to run OpenAI Gym's CartPole I got this warning: Expected min height of view: (<NSPopoverTouchBarItemButton: 0x7fcd151c7b10>) to be less than or equal to 30 but got a height of 32.000000. This error will be logged once per view in violation. Does anyone have an idea what this error is?
1
t3_kvfudr
1,610,410,349
pytorch
Plateau-ing training loss and LOW train_accuracy/test_accuracy
With regards to the thread topic, could anyone advise what is wrong with the [loss_function() computation logic](https://gist.github.com/promach/52c5199e98647c694163abd0b3af3dae#file-net-py-L149-L173) (especially `policy_output_discrete` and `value_output_discrete`) in my NN?

    # Forward Pass
    policy_output, value_output = net(_board_features_and_turn)

    # Since both policy_output and value_output are of continuous probability nature,
    # we need to change them to discrete numbers for the loss_function() computation
    policy_output_discrete = torch.zeros(len(_score), NUM_OF_POSSIBLE_MOVES, requires_grad=True)

    if USE_CUDA:
        policy_output_discrete = policy_output_discrete.cuda()

    for topk_index in range(len(_score)):
        # functionally equivalent to softmax()
        policy_output_discrete[topk_index][policy_output.topk(1).indices[topk_index]] = 1

    # subtract 1 because score is one of these [-1, 0, 1] values
    value_output_discrete = torch.topk(value_output, 1).indices - 1

    # Loss at each iteration by comparing to target (moves)
    loss1 = loss_function(policy_output_discrete, move)
    # Loss at each iteration by comparing to target (score)
    loss2 = loss_function(value_output_discrete, _score)

    loss = loss1 + loss2

    # Backpropagating gradient of loss
    optimizer.zero_grad()
    loss.backward()
1
t3_kusuln
1,610,331,111
pytorch
Feedback Transformer PyTorch implementation
Added a Feedback Transformer implementation/guide to our collection of neural network architectures/algorithms. The Feedback Transformer uses recurrent attention to previous steps, and therefore can give fast predictions. GitHub repo: [https://github.com/lab-ml/nn](https://github.com/lab-ml/nn) Source code with side-by-side notes: [https://lab-ml.com/labml_nn/transformers/feedback/](https://lab-ml.com/labml_nn/transformers/feedback/)
1
t3_kufh9e
1,610,289,317
pytorch
How implicit registration of modules works
Hey guys, I have a question which is more about Python than PyTorch. When we assign a module to a member field in the constructor (e.g. self.linear = nn.Linear(5, 10)) it gets registered implicitly. How does that work?
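For reference, the mechanism is that nn.Module overrides __setattr__, so every attribute assignment is intercepted and submodules (and parameters/buffers) are stored in internal registries. A toy sketch of the idea (my own illustration, not PyTorch's actual code):

    import torch.nn as nn

    class TinyModuleLike:
        def __init__(self):
            object.__setattr__(self, "_modules", {})

        def __setattr__(self, name, value):
            if isinstance(value, nn.Module):
                self._modules[name] = value        # "registration" happens here
            object.__setattr__(self, name, value)

    m = TinyModuleLike()
    m.linear = nn.Linear(5, 10)
    print(list(m._modules))                        # ['linear']

    # The real nn.Module does the same (plus parameter/buffer handling), which is why
    # self.linear = nn.Linear(5, 10) shows up in model.parameters() and state_dict().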
1
t3_ku4ngk
1,610,243,168
pytorch
Reproduced YOLOv3 based on Pytorch (darknet)
I reproduced YOLOv3 in a single short script. It loads the pre-trained parameters provided by the official darknet website directly, without conversion. This means that a model trained with Darknet can be converted to a PyTorch model using this script. The required weight file and test picture are automatically downloaded from the official website, and nothing else is needed. Apart from the Python standard library, it only depends on OpenCV and PyTorch 1.7 (including TorchVision). The forward pass does not use advanced PyTorch features and can be directly scripted or traced for further deployment. [https://gist.github.com/devymex/1f76224b2428d0ddbf92b93def6c587c](https://gist.github.com/devymex/1f76224b2428d0ddbf92b93def6c587c)
1
t3_kts9zp
1,610,204,148
pytorch
PyTorch grid_sample to TensorRT with or without ONNX
Is there a way to convert this layer to TensorRT? ONNX opset doesn’t seem to support it
1
t3_ktmj24
1,610,177,676
pytorch
2-hour tutorial on PyTorch Basics & Gradient Descent | Deep Learning with PyTorch: Part 1 of 6
nan
1
t3_kt17aw
1,610,107,552
pytorch
PyTorch Conv-6 CIFAR-10
Hey guys, I have implemented [Conv-6 CNN CIFAR-10 classification](https://github.com/arjun-majumdar/CNN_Classifications/blob/master/PyTorch_Conv6_CIFAR10.ipynb) in PyTorch. I will be happy to hear your feedback.
0.17
t3_kt0nsa
1,610,105,250
pytorch
PyTorch convolutional block - CIFAR10 - RuntimeError
I am using PyTorch 1.7 and Python 3.8 with the CIFAR-10 dataset. I am trying to create a block with: conv -> conv -> pool -> fc. The fully connected layer (fc) has 256 neurons. The code for this is as follows:

    # Testing-
    conv1 = nn.Conv2d(in_channels = 3, out_channels = 64, kernel_size = 3, stride = 1, padding = 1, bias = True)
    conv2 = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 3, stride = 1, padding = 1, bias = True)
    pool = nn.MaxPool2d(kernel_size = 2, stride = 2)
    fc1 = nn.Linear(in_features = 64 * 16 * 16, out_features = 256, bias = True)

    images.shape
    # torch.Size([32, 3, 32, 32])

    x = conv1(images)
    x.shape
    # torch.Size([32, 64, 32, 32])

    x = conv2(x)
    x.shape
    # torch.Size([32, 64, 32, 32])

    x = pool(x)
    x.shape
    # torch.Size([32, 64, 16, 16])

    # This line of code gives error-
    x = fc1(x)

> RuntimeError: mat1 and mat2 shapes cannot be multiplied (32768x16 and 16384x256)

What's going wrong?
0.67
t3_ksxp6o
1,610,091,009
pytorch
Custom Regularization in PyTorch
Hello PyTorch enthusiasts, has anyone tried doing custom regularization using PyTorch? Do you have any recommendations or links to share on how to implement this?
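As one point of reference (a generic sketch, not tied to any specific recipe): because losses are just tensors, a custom regularizer is usually an extra differentiable term added to the loss before backward(), e.g. an L1 penalty on the weights:

    import torch
    import torch.nn as nn

    model = nn.Linear(20, 2)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
    l1_lambda = 1e-4

    out = model(x)
    loss = criterion(out, y)
    # custom regularization: any differentiable function of the parameters works here
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    loss = loss + l1_lambda * l1_penalty

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()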
0.81
t3_kss820
1,610,070,141
pytorch
What is a good cca, cka library for pytorch that works (ideally with GPU)?
nan
0.6
t3_ksocf8
1,610,058,015
pytorch
Pytorch for beginners
PyTorch is a deep learning framework and a scientific computing package; this is how the PyTorch team defines it. Originally, Torch was built on the Lua programming language, and for ease of use it was re-implemented in Python by the Facebook AI Research team and many others. [Learn PyTorch chapter 1](https://www.dataspoof.info/post/pytorch-for-beginners-basics )
0.29
t3_ksfsjx
1,610,034,219
pytorch
Tutorial: How to accelerate training using PyTorch with CUDA (notebook, code)
nan
1
t3_kseohr
1,610,030,879
pytorch
what is the difference between BertModelLMHeadModel and BertForMaskedLM
In huggingface [https://huggingface.co/transformers/model\_doc/bert.html](https://huggingface.co/transformers/model_doc/bert.html), there are two models: BertModelLMHeadModel and BertForMaskedLM. What is the difference between these two models? Thanks.
1
t3_ks7tcs
1,610,002,509
pytorch
Issue with train_test_split()
nan
1
t3_krn4pm
1,609,936,843
pytorch
Hands-On Guide To Imaginaire: Nvidia Recently Launched GAN Library
nan
0.79
t3_krl351
1,609,927,788
pytorch
How do I set up the fully connected layers for a Seq2Seq LSTM?
I have a simple LSTM layer and a fully connected layer (n_hidden, n_outputs); however, I want to build a Seq2Seq model, where the model takes in a sequence and outputs a sequence. The model architecture is like:

    self.lstm = nn.LSTM(n_inp, n_hidden)
    self.fc = nn.Linear(n_hidden, n_output)

with a ReLU in between. But I understand this gives me a 1 x n_output vector, whereas I want a 1 x sequence_length x n_output. How would I set up the linear layers?
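One relevant detail (standard PyTorch behaviour, shown here as a small sketch with made-up sizes): nn.Linear is applied to the last dimension of its input, so feeding it the full LSTM output sequence already yields a per-timestep output:

    import torch
    import torch.nn as nn

    n_inp, n_hidden, n_output, seq_len, batch = 10, 32, 5, 7, 1

    lstm = nn.LSTM(n_inp, n_hidden)           # default layout: (seq_len, batch, n_inp)
    fc = nn.Linear(n_hidden, n_output)

    x = torch.randn(seq_len, batch, n_inp)
    out, _ = lstm(x)                          # (seq_len, batch, n_hidden): all timesteps
    y = fc(torch.relu(out))                   # Linear acts on the last dim: (seq_len, batch, n_output)
    print(y.shape)                            # torch.Size([7, 1, 5])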
1
t3_krii24
1,609,915,909
pytorch
How to perform spline interpolation of zeroth order?
I have a 3D model, where the authors have used scipy to rescale the image as follows:

    from scipy import ndimage
    # input_image shape = [D x H x W]
    out = ndimage.interpolation.zoom(input_image, scale, order=0)

I don't have much knowledge about spline interpolation. I've searched online to understand the topic, and in some places it is hinted that zeroth order is equivalent to 'nearest neighbour' interpolation, but nothing concrete has been written. I tried using nn.functional.interpolate with modes 'nearest' and 'trilinear', but the output image is not the same as the one obtained with scipy. Is there any way I can perform the above operation in PyTorch?
1
t3_krgjxy
1,609,908,296
pytorch
A Simple Emotion Analysis Application with Pytorch
nan
1
t3_kqxves
1,609,850,928
pytorch
Coding Attention is All You Need in PyTorch for Question Classification
Hi Guys, Recently, I have posted a series of blogs on medium regarding Self Attention networks and how can one code those using PyTorch and build and train a Classification model. In the series, I have shown various approaches to train a classification model for the dataset available [here](https://cogcomp.seas.upenn.edu/Data/QA/QC/). Part - 1: [https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-1-33e990636e76](https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-1-33e990636e76) Part - 1.1: [https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-1-1-3b4224cd4757](https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-1-1-3b4224cd4757) Part - 2: [https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-2-910b89c7116a](https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-2-910b89c7116a) Part - 3: [https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-3-74efbda22451](https://thevatsalsaglani.medium.com/question-classification-using-self-attention-transformer-part-3-74efbda22451) Have a nice read. Share if you like the content. Comment for any discussions. Thanks
0.92
t3_kqxten
1,609,850,729
pytorch
Theory to pytorch
Hi everyone, I want to learn how to program in PyTorch while learning the theory of artificial intelligence. Does anyone know any courses or tutorials that explain the fundamentals of artificial intelligence and also cover the applications of PyTorch? For example, the definition of a neural network and everything that comes with it, and afterwards how to code a neural network in PyTorch.
0.6
t3_kqq208
1,609,819,303
pytorch
3-D Reconstruction of a moving person from a video!
nan
1
t3_kqpbez
1,609,816,810
pytorch
Autocomplete Python code with transformers
This is a small project we created to train a character level autoregressive transformer (or LSTM) model to predict Python source code. We trained it on GitHub repositories found on awesome pytorch list. Github repo: [https://github.com/lab-ml/python\_autocomplete](https://github.com/lab-ml/python_autocomplete) You can try training on Google Colab: [https://colab.research.google.com/github/lab-ml/python\_autocomplete/blob/master/notebooks/train.ipynb](https://colab.research.google.com/github/lab-ml/python_autocomplete/blob/master/notebooks/train.ipynb) Here are some sample evaluations/visualizations of the trained model: [https://colab.research.google.com/github/lab-ml/python\_autocomplete/blob/master/notebooks/evaluate.ipynb](https://colab.research.google.com/github/lab-ml/python_autocomplete/blob/master/notebooks/evaluate.ipynb) Working on a simple VSCode extension to test this out. Will open source it soon on the same repository.
1
t3_kq84hs
1,609,764,547
pytorch
MeanSquaredError() troubles
Hello all. New here! I've been messing around with this error and I can't seem to get Ignite to give me the MSError after each batch. Here are a few blocks of my code that I'm trying to get to work. Most of it is copied from this link [https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/FashionMNIST.ipynb](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/FashionMNIST.ipynb)

    trainer = create_supervised_trainer(model, optimizer, criterion, device=device)

    metrics = {
        'accuracy': Accuracy(),
        'mse': MeanSquaredError(),
        'cm': ConfusionMatrix(num_classes=10)
    }

    train_evaluator = create_supervised_evaluator(model, metrics=metrics, device=device)
    val_evaluator = create_supervised_evaluator(model, metrics=metrics, device=device)

    training_history = {'accuracy': [], 'loss': []}
    validation_history = {'accuracy': [], 'loss': []}
    last_epoch = []

    # creating model, and defining optimizer and loss
    model = CNN()
    # moving model to gpu if available
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.MSELoss()
    # criterion = nn.NLLLoss()

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_training_results(trainer):
        train_evaluator.run(train_loader)
        metrics = train_evaluator.state.metrics
        accuracy = metrics['accuracy'] * 100
        loss = metrics['mse']
        last_epoch.append(0)
        training_history['accuracy'].append(accuracy)
        training_history['loss'].append(loss)
        print("Training Results - Epoch: {}  Avg accuracy: {:.2f}  Avg MSE loss: {:.2f} "
              .format(trainer.state.epoch, accuracy, loss))

    def log_validation_results(trainer):
        val_evaluator.run(val_loader)
        metrics = val_evaluator.state.metrics
        accuracy = metrics['accuracy'] * 100
        loss = metrics['mse']
        validation_history['accuracy'].append(accuracy)
        validation_history['loss'].append(loss)
        print("Validation Results - Epoch: {}  Avg accuracy: {:.2f}  Avg MSE loss: {:.2f} "
              .format(trainer.state.epoch, accuracy, loss))

    trainer.add_event_handler(Events.EPOCH_COMPLETED, log_validation_results)

And my model looks like this:

    Layer (type)    Output Shape        Param #
    ----------------------------------------------------------------
    Conv2d-1        [64, 64, 28, 28]    576
    ReLU-2          [64, 64, 28, 28]    0
    MaxPool2d-3     [64, 64, 14, 14]    0
    Linear-4        [64, 6000]          75,270,000
    Linear-5        [64, 1200]          7,201,200
    Linear-6        [64, 120]           144,120
    Linear-7        [64, 10]            1,210

But I get this error:

    RuntimeError: The size of tensor a (10) must match the size of tensor b (64) at non-singleton dimension 1

And I have no idea how to fix it. My main goal is to find the MSError after each batch, and this works if my criterion is NLLLoss but not with MSE. I'm not sure if it's a PyTorch or an Ignite issue. Any help would be greatly appreciated!
0.5
t3_kq34be
1,609,742,076
pytorch
How do I perform transfer learning on a model with different number of outputs?
I have a model which is trained on 14 labels. I have to fine-tune the model on a new set of data with 19 labels. The weights from the initial training are there, but I'm unable to load them, as the output size is different. Do I just save the weights of the previous model without the output layer, load that into the new model, and then explicitly attach a new output layer for training? I'm still new to this and a little confused.
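For what it's worth, a common pattern (sketched here with ResNet18 and hypothetical file names, not the poster's actual model) is exactly that: drop the old head's weights from the state dict and load the rest with strict=False, then train the freshly attached head:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # hypothetical checkpoint trained with 14 output labels
    old_model = models.resnet18()
    old_model.fc = nn.Linear(old_model.fc.in_features, 14)
    torch.save(old_model.state_dict(), "old_14_labels.pt")

    # new task: 19 labels -- load everything except the mismatched head
    new_model = models.resnet18()
    new_model.fc = nn.Linear(new_model.fc.in_features, 19)

    state = torch.load("old_14_labels.pt")
    state = {k: v for k, v in state.items() if not k.startswith("fc.")}  # drop old head weights
    missing, unexpected = new_model.load_state_dict(state, strict=False)
    print(missing)   # only the new fc.weight / fc.bias remain randomly initialized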
1
t3_kpot4i
1,609,695,487
pytorch
HyperLSTM PyTorch implementation
Added HyperLSTM (introduced in paper HyperNetworks by Ha et al.) implementation with explanations to our collection of implementations of neural network architectures/algorithms. HyperLSTM uses a smaller LSTM network (hyper network) to alter (row-wise scale) parameters of the actual LSTM. That is, the parameters of the LSTM change at each step. Source code with side-by-side notes: [https://lab-ml.com/labml\_nn/hypernetworks/hyper\_lstm.html](https://lab-ml.com/labml_nn/hypernetworks/hyper_lstm.html) Github Repo: [https://github.com/lab-ml/nn](https://github.com/lab-ml/nn)
1
t3_kpjvzc
1,609,677,242
pytorch
Gradient backpropagation over transformation operations
I feed an image as input to a pre-trained CNN model after applying the following transformation operations to the image:

    self.transform = transforms.Compose([
        transforms.Resize(352, 352),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

I want to calculate the gradients over these operations. One method that has been suggested is to use the [kornia](https://kornia.readthedocs.io/en/latest/introduction.html) library. But is there a way I can do this by adding another layer at the beginning of the pre-trained UNet model that performs the same transformations? So far I have tried the following methods to create a custom layer and then add it using nn.Sequential().

1) By modifying the source code of the [kornia.enhance.normalize](https://kornia.readthedocs.io/en/latest/_modules/kornia/enhance/normalize.html#normalize) method:

    class MyModel(nn.Module):
        def __init__(self):
            super().__init__()

        def forward(self, input):
            image = input.view(3, -1)
            mean = image.mean(1)
            std = image.std(1)
            if mean.shape:
                mean = mean[..., :, None, None].to(input.device)
            if std.shape:
                std = std[..., :, None, None].to(input.device)
            out = (input - mean) / std
            return out.unsqueeze(0)

    myModel = MyModel()
    new_model = nn.Sequential(myModel, unet)

2) By simply adding a BatchNorm layer and modifying the mean and standard deviation parameters of that layer:

    new_model = nn.Sequential(nn.BatchNorm2d(3), unet)

3)

    class MyModel(nn.Module):
        def __init__(self, mean, std):
            super().__init__()
            self.mean = mean
            self.std = std

        def forward(self, image):
            img = transforms.Normalize(self.mean, self.std)(image)
            img = img.unsqueeze(0)
            return img

In all the methods, the image wasn't even transformed in the first place (the unnormalized input was passed on to the second layer). How can I 1) transform the image inside the model and 2) back-propagate the gradients during inference?
1
t3_kpjinv
1,609,675,395
pytorch
Neural Network for tic tac toe
The [inference code](https://github.com/promach/mcts/blob/main/play.py) gave out repeated wrong results; it seems the model produced by the [training code](https://github.com/promach/mcts/blob/main/Net.py) is wrong. Any idea why? Line 64 of the inference code, `next_move = np.binary_repr(next_move_probabilities.argmax())`, always gives the same result, which means the trained model is definitely wrong. Could anyone advise?
0.67
t3_ko9u7p
1,609,500,503
pytorch
Issue of recent updates with RL algorithms.
It seems the newer versions of PyTorch are giving errors with certain deep RL implementations, especially those involving a common network stem branched to give two different outputs (like in Actor-Critic methods). I'd appreciate any help, and please ask for further details if required. This is the error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation. Downgrading to PyTorch <1.4.0 doesn't give the same error, hence the issue, since all the implementations seem to be based on earlier versions.
1
t3_knoaf3
1,609,413,450
pytorch
why using log_sum_exp in calculating forward features in BLSTM-CRF
In the PyTorch implementation of the BiLSTM-CRF tutorial ([https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html#advanced-making-dynamic-decisions-and-the-bi-lstm-crf](https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html#advanced-making-dynamic-decisions-and-the-bi-lstm-crf)), the log-sum-exp operation is used to calculate the forward features, as listed below:

    def _forward_alg(self, feats):
        # Do the forward algorithm to compute the partition function
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop
        forward_var = init_alphas

        # Iterate through the sentence
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # broadcast the emission score: it is the same regardless of
                # the previous tag
                emit_score = feat[next_tag].view(1, -1).expand(1, self.tagset_size)
                # the ith entry of trans_score is the score of transitioning to
                # next_tag from i
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the
                # scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha

The log_sum_exp operation on the calculated features is in the line:

    alphas_t.append(log_sum_exp(next_tag_var).view(1))

My question is: why is the log_sum_exp operation needed? Thanks.
1
t3_knnpj0
1,609,410,463
pytorch
Guide To GluonTS and PytorchTS For Time-Series Forecasting (With Python Implementation)
nan
0.91
t3_kn74yw
1,609,351,217
pytorch
Building pytorch on rocm
Hi everyone, I am trying to build PyTorch from the ROCm GitHub. The script that transpiles CUDA to ROCm works, but compilation fails when linking libtorch_hip.so, and c++ tells me that -E or -x is required when the input is from the standard input. Is there anyone who can help me?
0.75
t3_kn6xqa
1,609,350,598
pytorch
How, specifically, does the pre-fetching in DataLoaders work?
I'm putting something together where, in one process, I am training a model, and in another process, I am loading samples from disk, performing pre-processing, and assembling my input and label tensors. I currently have code that will do this, so that the moment my model finishes with one batch, bam, the next batch is ready to go. The thing is, I'm wondering whether or not this pre-fetching is something that is already implemented in the dataloaders? I saw some things about "pre-fetch factors" in the source code, but I'm not super certain how that works when it comes to actually enumerating the dataloader, if it does all the pre-fetching right when you enumerate it, if each individual batch is being pre-fetched while the model runs, and is delivered when needed, etc. I went through some of the source code, but I couldn't really make heads or tails of it. So if somebody could explain to me what's actually going on under the hood with dataloaders, I would appreciate it greatly.
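For reference (standard DataLoader behaviour as of PyTorch 1.7, sketched with made-up data): when num_workers > 0, loading runs in background worker processes, and each worker keeps up to prefetch_factor batches assembled ahead of the training loop, so the next batch is usually ready the moment the current one is consumed:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    ds = TensorDataset(torch.randn(10000, 32), torch.randint(0, 2, (10000,)))

    # Each of the 4 workers prefetches up to 2 batches in the background
    # (prefetch_factor defaults to 2 and only applies when num_workers > 0).
    loader = DataLoader(ds, batch_size=256, num_workers=4, prefetch_factor=2,
                        pin_memory=True)

    for x, y in loader:
        pass  # while this batch is consumed, workers are already preparing the next ones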
0.43
t3_kmo5k4
1,609,279,635
pytorch
Very basic pytorch installation question
Hey, I am dabbling with AI to create visuals for bands and found this repo: [https://github.com/JCBrouwer/maua-stylegan2](https://github.com/JCBrouwer/maua-stylegan2) I would love to get it to run, but my lack of PyTorch knowledge finally stands in my way. When I try to run generate_audiovisual.py (on Windows, with a 2080 Ti GPU), ninja starts building. And fails. I have cl.exe and gcc on my machine, but first of all, I think it should not even compile anything at all, as I installed PyTorch with

    pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

I have spent a lot of time on this already; maybe I am missing something that is obvious to you. Thanks in advance!
1
t3_kmggsf
1,609,255,771
pytorch
List concatenation in Pytorch
Hi, I have two lists containing 3D tensors. Any idea how to merge them in PyTorch? `torch.cat` is causing problems.
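A small sketch of the usual options (my own example with arbitrary shapes; torch.cat requires the tensors to match on every dimension except the one being concatenated):

    import torch

    list_a = [torch.randn(2, 3, 4) for _ in range(5)]
    list_b = [torch.randn(2, 3, 4) for _ in range(7)]

    # torch.cat takes a single sequence of tensors, so join the Python lists first
    merged = torch.cat(list_a + list_b, dim=0)      # shape (24, 3, 4)

    # torch.stack instead adds a new leading dimension (requires identical shapes)
    stacked = torch.stack(list_a + list_b, dim=0)   # shape (12, 2, 3, 4)
    print(merged.shape, stacked.shape)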
1
t3_kman4y
1,609,231,472
pytorch
Add normalization layer in the beginning of a pretrained model
I have a pretrained UNet model with the following architecture:

    UNet(
      (encoder1): Sequential(
        (enc1conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc1norm1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc1relu1): ReLU(inplace=True)
        (enc1conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc1norm2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc1relu2): ReLU(inplace=True)
      )
      (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (encoder2): Sequential(
        (enc2conv1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc2norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc2relu1): ReLU(inplace=True)
        (enc2conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc2norm2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc2relu2): ReLU(inplace=True)
      )
      (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (encoder3): Sequential(
        (enc3conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc3norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc3relu1): ReLU(inplace=True)
        (enc3conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc3norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc3relu2): ReLU(inplace=True)
      )
      (pool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (encoder4): Sequential(
        (enc4conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc4norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc4relu1): ReLU(inplace=True)
        (enc4conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (enc4norm2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (enc4relu2): ReLU(inplace=True)
      )
      (pool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (bottleneck): Sequential(
        (bottleneckconv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bottlenecknorm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (bottleneckrelu1): ReLU(inplace=True)
        (bottleneckconv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bottlenecknorm2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (bottleneckrelu2): ReLU(inplace=True)
      )
      (upconv4): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2))
      (decoder4): Sequential(
        (dec4conv1): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec4norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec4relu1): ReLU(inplace=True)
        (dec4conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec4norm2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec4relu2): ReLU(inplace=True)
      )
      (upconv3): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2))
      (decoder3): Sequential(
        (dec3conv1): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec3norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec3relu1): ReLU(inplace=True)
        (dec3conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec3norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec3relu2): ReLU(inplace=True)
      )
      (upconv2): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2))
      (decoder2): Sequential(
        (dec2conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec2norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec2relu1): ReLU(inplace=True)
        (dec2conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec2norm2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec2relu2): ReLU(inplace=True)
      )
      (upconv1): ConvTranspose2d(64, 32, kernel_size=(2, 2), stride=(2, 2))
      (decoder1): Sequential(
        (dec1conv1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec1norm1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec1relu1): ReLU(inplace=True)
        (dec1conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (dec1norm2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (dec1relu2): ReLU(inplace=True)
      )
      (conv): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1))
    )

The model takes an input image which has been normalized using min-max normalization. I want to add a batch/layer norm layer before the first layer that does the same work for me (i.e. normalizes the input image), so that I can feed the original image as it is. Edit: Made changes clarifying the aim of this question.
1
t3_kmalfc
1,609,231,253
pytorch
How do I know which parameters are required?
Say I am building a neural network using torch.nn. How do I know which parameters I need to specify in the constructor call for each layer object?
1
t3_kld488
1,609,108,514
pytorch
PyTorch NN for Numerical Inputs: Questions
Hi, I'm *very* new to PyTorch and neural networks as a whole, so excuse this post. My goal is to implement a machine learning algorithm that can predict a specific single numerical value. The inputs it learns from are NumPy arrays; each data entry is as follows: array 1 | array 2 | numerical value. Can I easily make a neural network in PyTorch that will learn from arrays (filled with numerical values) and have it predict what the numerical value will be for a test set of other arrays? Anything helps, thanks.
0.75
t3_kkw674
1,609,039,392
pytorch
Why do Conv2d layers have 2 parameters?
I've created a network with a single Conv2d layer. Printing out the length of the parameters shows that it has 2 parameters. Printing out the parameter object shows that one of them is the matrix I would expect, a large tensor with size corresponding to the layer's number of input channels, output channels, and convolution kernel; then I also see an unexpected second tensor of size 1 x num_output_channels. What is the purpose of this second tensor? Isn't a convolution layer completely defined by the first tensor?
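For reference, the second parameter is the per-output-channel bias that Conv2d creates by default (bias=True); a quick check:

    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
    for name, p in conv.named_parameters():
        print(name, tuple(p.shape))
    # weight (8, 3, 3, 3)  -- the convolution kernels
    # bias   (8,)          -- one additive bias per output channel

    # Constructing the layer with bias=False leaves only the weight tensor.
    conv_no_bias = nn.Conv2d(3, 8, kernel_size=3, bias=False)
    print(len(list(conv_no_bias.parameters())))   # 1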
0.84
t3_kkuwyk
1,609,034,477
pytorch
Keras4Torch: A Ready-to-Use Wrapper for Training PyTorch Models✨
`keras4torch` is a high-level API like pytorch-lightning. It is designed for beginners who are new to PyTorch but familiar with Keras, to reduce the cost of migration. [https://github.com/blueloveTH/keras4torch](https://github.com/blueloveTH/keras4torch) Here is [an example on MNIST](https://nbviewer.jupyter.org/github/blueloveTH/keras4torch/blob/main/tutorials/MNIST_example.ipynb). This project was developed while I was migrating models from tf.keras to PyTorch. `keras4torch` provides a NumPy workflow conforming to the Keras interfaces as much as possible. There is also a DataLoader workflow for more flexible usage. The stable version of `keras4torch` can now be installed via pip. You can fork it for further development or modify it as a template with no limitations. Welcome to take a look at our [repo page](https://github.com/blueloveTH/keras4torch) and documentation. Thanks for reading!
0.78
t3_kkh51u
1,608,981,862
pytorch
Trying to add a new trainable variable to a layer I defined for ResNet
Thank you!
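The details of the layer aren't in the post body, but as a generic sketch of the usual mechanism (my own example, not the poster's layer): wrapping the new tensor in nn.Parameter inside the layer registers it as trainable, so it shows up in model.parameters() and gets updated by the optimizer:

    import torch
    import torch.nn as nn

    class ScaledLinear(nn.Module):
        """Linear layer with an extra learnable scalar gate."""
        def __init__(self, in_features, out_features):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            # nn.Parameter registers the tensor as a trainable variable of this module
            self.gate = nn.Parameter(torch.ones(1))

        def forward(self, x):
            return self.gate * self.linear(x)

    layer = ScaledLinear(16, 4)
    print([name for name, _ in layer.named_parameters()])
    # ['gate', 'linear.weight', 'linear.bias']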
1
t3_kj63v6
1,608,774,004
pytorch
Are there ways to construct a network to improve accuracy of derivatives?
I'm working on a problem that requires a neural network to produce accurate results where the accuracy of the derivatives of the network output with respect to the network input is equally important. Is there a particular network architecture, activation function, or normalization scheme that can aid in this problem?
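One approach worth mentioning (a sketch of "Sobolev"-style training on a toy target, not something prescribed by the post): use smooth activations such as Tanh/SiLU rather than ReLU, and supervise the derivative directly by differentiating the output with respect to the input via autograd:

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x = torch.linspace(-3, 3, 256).unsqueeze(1)
    y_true = torch.sin(x)        # toy target
    dy_true = torch.cos(x)       # its known derivative

    for _ in range(10):
        x.requires_grad_(True)
        y = net(x)
        # d(net)/dx, kept in the graph so the derivative error is also trained on
        dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
        loss = (y - y_true).pow(2).mean() + (dy - dy_true).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = x.detach()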
0.5
t3_kj380c
1,608,763,876
pytorch
Breadcrumbs on PyTorch Hanging
I am running a pretty straightforward DRN training loop using OpenAI Gym, and my training loop is hanging; there really aren't any clues coming from PyTorch as to why. I am sure it is GPU related, because when I set the device to cuda:0 it hangs and I have to kill the process. Does anyone have any hints for figuring out why this may be happening?

    PyTorch version: 1.7.1
    Is debug build: False
    CUDA used to build PyTorch: 10.2
    ROCM used to build PyTorch: N/A

    OS: Ubuntu 20.04.1 LTS (x86_64)
    GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
    Clang version: Could not collect
    CMake version: version 3.16.3

    Python version: 3.7 (64-bit runtime)
    Is CUDA available: True
    CUDA runtime version: 10.1.243
    GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
    Nvidia driver version: 460.27.04
    cuDNN version: Probably one of the following:
        /usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
        /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
1
t3_kj0aza
1,608,754,268
pytorch
Gradient backpropagation through transformation operations.
Say I have an input image `img` and I apply transformations to this image:

    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize(mean, std)])

Is there a way I can backpropagate the gradients through these transformations?
1
t3_kiux96
1,608,736,844
pytorch
LabML: Monitor PyTorch Lightning model training from a smartphone
For those who are not familiar, LabML ([https://github.com/lab-ml/labml](https://github.com/lab-ml/labml)) is a little library and an app that lets you monitor model training from a mobile web app. We just implemented a callback for PyTorch Lightning, and you can integrate it with a single line of code. Here are the Lightning MNIST samples modified with the LabML callback: [https://github.com/lab-ml/samples/tree/master/labml_samples/lightening](https://github.com/lab-ml/samples/tree/master/labml_samples/lightening)
0.96
t3_kit02y
1,608,729,792
pytorch
Pytorch does not load the model properly
I want to train a character-level language model and use it to do a rescoring. For that purpose, I took a github code that contains all the necessary files, so that I can just train it on my data. After I train the model and try to do text generation, the output seems alright, but if I load the trained model and do text generation, the output seems random. Here is an example output: "jgJty&JWJ[C1HW1KJWJ&?&lKRx". I am saving the model the following way: `torch.save(model.state_dict(), 'lm.pt')` and then loading it as: `model = CharRNN(chars, n_hidden, n_layers).cuda()` `model.load_state_dict(torch.load('lm.pt'))` `model.eval()` I have successfully saved and loaded models this way many times, but for some reason it doesn't work with the language model. I have also tried printing the `model.state_dict()` after training and after loading the model and they look the same. Saving the model with `torch.save(model, 'lm.pt')` and then doing: `model = torch.load('lm.pt')` seems to work but I don't want to use that. My suspicion is that the `state_dict` does not have all the weights, so when I load the model some of them are randomly initialized, but I am not sure. Does anyone know what might be the issue and how to solve it?
1
t3_kismui
1,608,728,368
pytorch
lstm example?
Hiya, I'm trying to find a full LSTM example that demonstrates how to predict tomorrow's (or even a week's) future result of whatever, based on the past data used in training. I seem to find many examples of people getting training data and splitting it, training, and then using the last N% to "predict" - which seems incorrect, as you already have data that you normally wouldn't have. I can build an LSTM but just need that little bit showing how to use it to forecast the future. Any suggestions would be most welcome.
1
t3_kinba8
1,608,702,459
pytorch
JIT the collate function in Pytorch
I need to create a DataLoader where the collate function requires non-trivial computation, actually a double-layer loop, which is significantly slowing down the training process. For example, consider this toy code where I try to use numba to JIT the collate function:

    import numpy as np
    import torch
    import torch.utils.data
    import numba as nb

    class Dataset(torch.utils.data.Dataset):
        def __init__(self):
            self.A = np.zeros((100000, 300))
            self.B = np.ones((100000, 300))

        def __getitem__(self, index):
            return self.A[index], self.B[index]

        def __len__(self):
            return self.A.shape[0]

    @nb.njit(cache=True)
    def _collate_fn(batch):
        batch_data = np.zeros((len(batch), 300))
        for i in range(len(batch)):
            batch_data[i] = batch[i][0] + batch[i][1]
        return batch_data

and then I create the DataLoader as follows:

    train_dataset = Dataset()
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=256, num_workers=6,
        collate_fn=_collate_fn, shuffle=True)

However, this just gets stuck, but it works fine if I remove the JITing of the _collate_fn. I am not able to understand what is happening here. I don't have to stick to numba and can use anything which will help me overcome the loop inefficiencies in Python. TIA and Happy 12,021
1
t3_ki8bpk
1,608,653,753
pytorch
Guide to Pytorch Time-Series Forecasting
nan
1
t3_ki3c74
1,608,635,413
pytorch
ValueError: Expected target size (16, 87), got torch.Size([16, 64, 87]) in CrossEntropyLoss
Trying a many-to-many LSTM for learning purposes. Code snippet:

    class model(nn.Module):
        def __init__(self, BATCH_SIZE, SEQ_LEN, vocab_size):
            super(model, self).__init__()
            self.batch_size = BATCH_SIZE
            self.seq_len = SEQ_LEN
            self.vocab_size = vocab_size
            self.emb = nn.Embedding(vocab_size, 512)
            self.lstm = nn.LSTM(512, 256, 3, dropout=0.2)
            self.lin = nn.Linear(256, 87)
            self.criterion = nn.CrossEntropyLoss()

        def forward(self, X, Y):
            out = self.emb(X)
            h, c = self.lstm(out)
            out = self.lin(h)
            loss = self.criterion(out, Y)
            return loss

BATCH_SIZE = 16, SEQ_LEN = 64, vocab_size = 87. Size of X: [16, 87]. Size of Y: [16, 64, 87]. Size of out: [16, 64, 87]. Still I get the above error in the `loss = self.criterion(out, Y)` line. I can't understand why. Please help.

Ref: [https://github.com/ranasingh-gkp/Music_generation_char-RNN](https://github.com/ranasingh-gkp/Music_generation_char-RNN)
1
t3_khg3tj
1,608,552,208
pytorch
Trouble connecting to TPUs using XLA library
Hey, again, I'm trying to connect to the TPUs accessible on Google Colab and having some trouble. My end goal is to set up the TPUs so I can use PyTorch Lightning to run the training on them. Currently, my code to import the XLA library (needed to connect PyTorch to a TPU) is returning a really cryptic error.

Code to install XLA:

    # install XLA to allow connection between Pytorch and TPU
    VERSION = "20200325"  #@param ["1.5" , "20200325", "nightly"]
    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
    !python pytorch-xla-env-setup.py --version $VERSION

Code to import XLA:

    # imports pytorch
    import torch

    # imports the torch_xla package
    import torch_xla
    import torch_xla.core.xla_model as xm

Error being returned:

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-66-ebe519c076f6> in <module>()
          3
          4 # imports the torch_xla package
    ----> 5 import torch_xla
          6 import torch_xla.core.xla_model as xm

    /usr/local/lib/python3.6/dist-packages/torch_xla/__init__.py in <module>()
         39 import torch
         40 from .version import __version__
    ---> 41 import _XLAC
         42
         43 _XLAC._initialize_aten_bindings()

    ImportError: /usr/local/lib/python3.6/dist-packages/_XLAC.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceISt7complexIfEEEPKNS_6detail12TypeMetaDataEv

I copied this import code straight from the [official guide](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb#scrollTo=42avAvSg17by), which works for me, but it just doesn't work in my own document. Here's my [Colab document](https://colab.research.google.com/drive/1EaCJpyYK8xLuzO79ESAsIlGZAcWjOh8u?usp=sharing), if anyone is wondering. The XLA stuff is at the very top. Thank you! A
1
t3_kgkg08
1,608,426,425
pytorch
torchMTL: A simple multi-task learning module for PyTorch
Hey everyone! I wrote a small helper library to make multi-task learning with PyTorch easier: [torchMTL](https://github.com/chrisby/torchMTL). You just need to define a dictionary of layers and torchMTL builds a model that returns the losses of the different tasks that you can then combine in the standard training loop. I'd be happy to get some feedback on it!
1
t3_kgeb73
1,608,405,713
pytorch
carefree-learn: Tabular Datasets ❤️ PyTorch
nan
1
t3_kfprkx
1,608,313,352
pytorch
More audio feature transformations in pytorch?
I've seen that in PyTorch you can do the short-time Fourier transform through the torch module. Can you, or are there plans to, do more direct transformations like audio to mel-frequency cepstrum, chroma, etc.?
1
t3_kfldc1
1,608,299,121
pytorch
Applying transforms to both image and mask
I have an image segmentation task but a very small dataset. I want to use data augmentation, but I can’t seem to find a way to apply the same transformations to images and masks. Any help would be appreciated!
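One standard workaround (a sketch using torchvision's functional API with arbitrary example augmentations, assuming PIL inputs): draw the random parameters yourself once, then apply the same ops to both the image and the mask:

    import random
    import torchvision.transforms.functional as TF

    def paired_transform(image, mask):
        # draw random parameters once, then apply identically to both inputs
        if random.random() > 0.5:
            image, mask = TF.hflip(image), TF.hflip(mask)
        angle = random.uniform(-15, 15)
        image = TF.rotate(image, angle)
        mask = TF.rotate(mask, angle)     # nearest-neighbour resampling keeps label values intact
        return TF.to_tensor(image), TF.to_tensor(mask)

    # usage inside a Dataset.__getitem__:
    #   img_t, mask_t = paired_transform(pil_image, pil_mask)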
1
t3_keuyhs
1,608,200,518
pytorch
15 minute training time per epoch limiting ability to improve algorithm... any way to speed it up?
Hi all, for one of my first solo machine learning projects I'm trying to create an algorithm that can distinguish between six remarkably similar species of bird (for those wanting to search them up: the Willow Flycatcher, Pacific-slope Flycatcher, Least Flycatcher, Hammond's Flycatcher, Western Wood-Pewee and Olive-sided Flycatcher). I'm using data from Flickr and making a CNN from "scratch" (by scratch I mean using PyTorch tools but not transferring from a premade model). I have exactly 2000 images for each of my six classes. Since I did not have the ability to access a larger database (at least, yet), I was only able to get about 600-1000 unique images per class. I created a function that automatically fills the gaps up to 2000 with augmented data. Alone, it takes about 55 minutes of runtime for all the data to be loaded AND for the augmented data to be added. Once that and all the other setup is done, training can begin. My images are quite large (256 by 256), which might slow me down. I'm going to add dropout in a later phase of the experiment (looking over the training and validation losses shows that I'm not overfitting anyway). My convolutional model is as follows:

    model = nn.Sequential(
        # before, (bs, 3, 256, 256)
        nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output will be (bs, 16, 128, 128)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output will be (bs, 16, 64, 64)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 32, 32)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 16, 16)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 8, 8)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 4, 4)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 2, 2)
        nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2, 2),  # output is (bs, 16, 1, 1)
        # connected layer
        nn.Flatten(),        # output a bs x 16 vector
        nn.Linear(16, 6)     # output a bs x 6
    )

At batch size = 81, each epoch takes exactly 15 minutes to train. Since I'm on Colab, which has usage limits for GPUs, this means getting about 13 or 14 epochs max before I reach my daily limit of GPU usage. Looking at the accuracy, it seems as though if I were able to run a few more epochs, I would be able to get a better accuracy on the validation set.

[Seems to be a steady increase in accuracy.](https://preview.redd.it/2u1o8bmnlp561.png?width=517&format=png&auto=webp&s=51e5f368ab2fe522e9a6b5b0d276d0c935b41cf6)

If picked randomly, the accuracy of identifying the right bird would be around 16%. The highest accuracy I've gotten so far is 41%, on the 13th epoch. Colab kicked me after that one, so I couldn't find out if I could have gone higher. Also, my losses if you'd like:

https://preview.redd.it/1cq3z890mp561.png?width=527&format=png&auto=webp&s=4825f567cc03c8c558228cfd60b292aba5f05ab3

As well as that, the long training time makes it hard to make changes to the algorithm and immediately see their effect - I'm worried that this'll make improving upon the 41% impossible!
I have a few ideas (mainly, reducing image size) but I'm worried that will reduce the accuracy, since the differentiating features between the birds are so minute. Thanks for reading this the whole way through, and if you'd have any suggestions on cutting the time, that'd be much appreciated! **edit:** just found out I was adding augmented data before splitting which most likely raised the accuracies on the val set. The question still stands, though. Thanks, A
1
t3_kett98
1,608,194,727
pytorch
How do I concat 4 images to be the last layer for a ResNet for transfer learning?
I have about 400 samples. I need to do a binary classification task. Each sample comprises 4 images (all images have a single channel). I am planning to use transfer learning with ResNet18 and just retrain the last layer. I want to concatenate these 4 images as my last layer. Can someone tell me how to do it? Say, each of my images is - (1, 120, 90). So how do I concatenate 4 such images so that they can be used as the last layer of a Resnet? Pardon me but I am not well versed with computer vision.
1
t3_ketdgw
1,608,192,492
pytorch
Paid ML gigs: Get compensated while further sharpening your skills on your own schedule.
nan
0.5
t3_kenk99
1,608,170,055
pytorch
deep regression
Have people been using deep learning to do regression? I noticed that fitting polynomials using least squares leads to much better accuracy! Is there any rule of thumb to get arbitrary accuracy with deep regression?
0.5
t3_keco2b
1,608,137,057
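On the deep-regression question above: plain MLPs are used for regression all the time, but there is no rule of thumb that guarantees arbitrary accuracy; width, depth, learning rate and training length all interact, and on smooth low-dimensional targets a least-squares polynomial fit can indeed win. A minimal sketch on made-up 1-D data (the noisy sine is purely illustrative):

```python
import torch
import torch.nn as nn

# Toy 1-D regression problem: y = sin(x) plus noise.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.05 * torch.randn_like(x)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
```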
pytorch
How to check test accuracy on every n train epochs using pytorch lightning?
I have searched pytorch lightning docs but only found ways for finding metrics on train data for every n epochs.
1
t3_kebauw
1,608,132,777
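A hedged answer to the Lightning question above: `Trainer(check_val_every_n_epoch=n)` controls how often the validation loop runs, and for a separate test split a small callback can run a manual accuracy pass every n epochs. The sketch assumes a classification model and a `test_loader` built elsewhere, and the hook signature follows recent PyTorch Lightning releases.

```python
import torch
import pytorch_lightning as pl

class TestEveryNEpochs(pl.Callback):
    """Runs a plain accuracy pass over a held-out loader every n training epochs."""
    def __init__(self, test_loader, n=5):
        super().__init__()
        self.test_loader = test_loader  # assumed to be a DataLoader over the test split
        self.n = n

    def on_train_epoch_end(self, trainer, pl_module):
        if (trainer.current_epoch + 1) % self.n != 0:
            return
        pl_module.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in self.test_loader:
                preds = pl_module(x.to(pl_module.device)).argmax(dim=1)
                correct += (preds == y.to(pl_module.device)).sum().item()
                total += y.numel()
        print(f"epoch {trainer.current_epoch}: test accuracy {correct / total:.3f}")
        pl_module.train()
```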
pytorch
PyTorch recommenders
Hi! I came across this [tensorflow](https://www.tensorflow.org/recommenders/) wrapper and it has some really nice guides and helpers to get going with building recommender systems - is there anything similar in the PyTorch ecosystem? Thanks!
1
t3_ke9wfo
1,608,127,984
pytorch
[P] NLP Tutorial PyTorch
I have put together an NLP tutorial in PyTorch; check it out: [https://github.com/will-thompson-k/deeplearning-nlp-models](https://github.com/will-thompson-k/deeplearning-nlp-models). Would love some feedback!
0.94
t3_ke8n72
1,608,123,163
pytorch
[beginners tutorial] Guide to Pytorch Loss Functions + How to Build Custom Functions
The way you configure your loss functions can either make or break the performance of your algorithm. By correctly configuring the loss function, you can make sure your model will work how you want it to. A few key things to learn before you can properly choose the correct loss function are:

- What are loss functions and how to use them in PyTorch?
- Which loss functions are available?
- How to create a custom loss function?

Here's our tutorial that will help you: [PyTorch loss functions](https://neptune.ai/blog/pytorch-loss-functions?utm_source=reddit&utm_medium=post&utm_campaign=blog-pytorch-loss-functions&utm_content=pytorch)
0.9
t3_kdp5bd
1,608,050,545
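As a companion to the custom-loss point in the post above, a custom loss in PyTorch is usually just an `nn.Module` (or a plain function) that returns a scalar tensor built from differentiable ops, so autograd handles the backward pass for free. A minimal, invented example; the weighting scheme is illustrative and not taken from the linked tutorial.

```python
import torch
import torch.nn as nn

class WeightedMSELoss(nn.Module):
    """Illustrative custom loss: mean squared error scaled by a fixed weight."""
    def __init__(self, weight=1.0):
        super().__init__()
        self.weight = weight

    def forward(self, pred, target):
        return (self.weight * (pred - target) ** 2).mean()

criterion = WeightedMSELoss(weight=2.0)
loss = criterion(torch.randn(4, 1, requires_grad=True), torch.randn(4, 1))
loss.backward()  # gradients flow through the custom expression automatically
```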
pytorch
Testing Framework
Hi everybody! I was wondering if there's a testing framework out there for PyTorch that you have used? For the past several months, I've been working with an existing codebase that has massive networks and huge datasets. I've been making changes to implement some new models. Running simple experiments just to ensure that everything *runs* without breaking due to size mismatches can take a while. The other day, I finished an epoch (took several hours) and the code bugged on some logging functionality that wasn't updated for the changes that were made to the model. I'm a Ph.D. student now but I came from industry in software engineering. We were spoiled for choice in terms of testing frameworks for anything we could ever write. I'm wondering if there are testing frameworks that you've used to make sure that your model *can* run before actually running it.
1
t3_kd2dk4
1,607,967,906
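For the testing-framework question above, plain `pytest` plus tiny synthetic batches is a common choice: a shape-only smoke test exercises the forward pass (and can be extended to one optimizer step plus the logging code) in seconds instead of after an epoch. A sketch under assumptions; `build_model` here is a stand-in for the project's real constructor.

```python
import pytest
import torch
import torch.nn as nn

def build_model():
    # Stand-in for the project's real constructor; swap in the actual network here.
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                         nn.AdaptiveAvgPool2d(1),
                         nn.Flatten(),
                         nn.Linear(8, 2))

@pytest.mark.parametrize("batch_size", [1, 4])
def test_forward_shapes(batch_size):
    model = build_model()
    x = torch.randn(batch_size, 3, 64, 64)  # tiny fake batch instead of the huge dataset
    out = model(x)
    assert out.shape == (batch_size, 2)     # catches size mismatches in seconds, not hours
```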
pytorch
Why is PyTorch filling the GPU memory?
I am working on a uni project implementing the sliding-window approach for object detection. For each frame I take the image, unfold it and run all the patches through a CNN classifier. It was working fine until I changed the classifier model, and now the GPU memory starts filling the 4GB (laptop) in a matter of seconds. I have been trying to debug the allocation/deallocation of tensors but I can't really understand what is taking so much space. By stepping one line at a time with the debugger I noticed that once I call classifier(patches) the memory usage jumps by +200MB on the first call, +1000MB on the second, and by the third 3.5GB are used. This does not depend on scope; once I exit the function, the memory usage does not decrease. How can I check what is kept in memory? Is it storing some kind of history? Can I disable it somehow since I am in eval mode? Many thanks
1
t3_kctr3x
1,607,934,235
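A hedged guess for the sliding-window post above: `eval()` alone does not stop autograd from recording every forward pass, and keeping references to the outputs keeps those graphs alive, which can look exactly like memory that never gets freed. Wrapping inference in `torch.no_grad()` is the usual fix; the `classifier` and `patches` below are stand-ins for the poster's objects.

```python
import torch
from torchvision import models

classifier = models.resnet18(pretrained=True).cuda().eval()  # stand-in for the poster's classifier
patches = torch.randn(64, 3, 224, 224).cuda()                # stand-in for the unfolded patches

with torch.no_grad():  # no autograd graph is recorded, so activations are freed after each call
    scores = classifier(patches)

# Useful for inspecting what PyTorch is actually holding on to:
print(torch.cuda.memory_summary())
```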
pytorch
Does Pytorch source code contains facebook telemetry codes?
[I love PyTorch] But being a bit more of a free-software activist, I worry whether PyTorch contains some surveillance features.
0.64
t3_kc6u6f
1,607,846,282
pytorch
Free browser extension for ML community that thousands of machine learning engineers/data scientists use everyday! Drop a comment for any questions/feature requests you may have!
nan
0.5
t3_kc2z6u
1,607,828,773
pytorch
Can I calculate gradient w.r.t. the input?
I have a classifier module (nn.Linear) and I want the weights to be static, but treat the input vector as parameters to be updated. So I will still minimize the crossentropy(output, target), but the result discovers the values for an input that approximately minimizes the classifier loss. Is calculating the gradient w.r.t the input the right way to describe that?
0.9
t3_kbl2hq
1,607,758,413
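For the post above: yes, that is exactly computing the gradient with respect to the input. A minimal sketch with made-up sizes (128 features, 10 classes): freeze the layer's weights, mark the input as a leaf tensor with `requires_grad=True`, and give only the input to the optimizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Linear(128, 10)           # stand-in for the trained classifier layer
for p in classifier.parameters():
    p.requires_grad_(False)               # keep the weights static

x = torch.randn(1, 128, requires_grad=True)  # the input is the thing being optimized
target = torch.tensor([3])
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(classifier(x), target)
    loss.backward()                       # gradient of the loss w.r.t. the input x
    opt.step()
```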
pytorch
Inference using a single thread is faster than using 4 threads on a raspberry pi
I have been running a model on a Raspberry Pi 3 A+ and got some curious performance results. For example, an inference using resnet18 on a 256x256 image:

* 4 threads: dt = 1.869842
* 1 thread: dt = 1.510674

Using mobilenetv2 on a 256x256 image:

* 4 threads: dt = 2.509571
* 1 thread: dt = 1.208802

Using resnet18 on a 512x512 image:

* 4 threads: dt = 7.229154
* 1 thread: dt = 6.206228

Using mobilenetv2 on a 512x512 image:

* 4 threads: dt = 4.412468
* 1 thread: dt = 3.804896

The Raspberry Pi 3 chip has 4 cores, so I would expect better performance when using all of them. Has anyone experienced something similar, and do you know of any tricks to get better performance using all cores?
1
t3_kb5ovd
1,607,703,151
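For the Raspberry Pi post above, this pattern is commonly reported on small ARM cores, where thread synchronization and memory bandwidth can outweigh the parallel speedup. A hedged sketch of the main knob; `mobilenet_v2` and the 256x256 input are stand-ins for the poster's setup, and the best thread count is hardware-dependent.

```python
import time
import torch
from torchvision import models

torch.set_num_threads(1)   # on small ARM cores, thread overhead can cost more than it saves
print(torch.get_num_threads())

model = models.mobilenet_v2(pretrained=True).eval()  # stand-in for the poster's model
img = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    start = time.time()
    model(img)
    print(f"dt = {time.time() - start:.6f}")
```

`torch.set_num_interop_threads()` is another knob worth trying, though it has to be called early in the process, before any parallel work starts.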
pytorch
Is it possible to convert the type of the output tensor from float to int in a custom loss function?
Hi! I’m working on a segmentation model, and I am using a custom dice loss. I’m working on medical scans and I realised that the output doesn’t quite perform well and demarcates the textured area of the image rather than the smooth area. And the smooth area is the ROI. I was thinking maybe if I convert the output to int type, then only a few areas will be highlighted and a larger loss will be generated to penalise the model. But as gradients are involved, I’m unable to convert the dtype of the output which is a GPU tensor to int. How do I do it?
1
t3_kb2ko8
1,607,692,566
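For the dice-loss post above: casting the output to int would detach it from the graph entirely (integer tensors cannot carry gradients), so the usual workaround is to keep everything differentiable but make the prediction behave more like a hard mask. The `sharpness` knob below is an invented illustration, not a standard dice formulation, and it assumes the model outputs raw logits.

```python
import torch

def soft_dice_loss(logits, target, sharpness=10.0, eps=1e-6):
    # A sharper sigmoid pushes predictions toward 0/1 while staying differentiable,
    # approximating the "hard" behaviour the poster wanted from an int cast.
    prob = torch.sigmoid(sharpness * logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

loss = soft_dice_loss(torch.randn(1, 1, 64, 64, requires_grad=True),
                      torch.randint(0, 2, (1, 1, 64, 64)).float())
loss.backward()
```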
pytorch
How to get the autograd backward graph with shape information in Pytorch?
nan
0.5
t3_kau6vd
1,607,655,442
pytorch
[P] Pytorch NLP Models (run w/ GPUs)
I've put together a small, annotated library of deeplearning models used in NLP here: [https://github.com/will-thompson-k/deeplearning-nlp-models](https://github.com/will-thompson-k/deeplearning-nlp-models) ​ [BERT: Reading. Comprehending.](https://preview.redd.it/bly0gs3rvg461.jpg?width=320&format=pjpg&auto=webp&s=84c9f0873c2ac9ef393f8f47943cd32f2550427b) ​ [Attention patterns ](https://preview.redd.it/0aeyi9vuvg461.png?width=1074&format=png&auto=webp&s=29ddaabc11e1a8e2b0e1272a9bb45f43d70fed3f) It's by no means comprehensive, but meant as a primer for those delving into model architectures. Let me know if you have any feedback!
1
t3_katg8g
1,607,652,926
pytorch
PyTorch 1.7.1 - Bug fix release with updated binaries for Python 3.9 and cuDNN 8.0.5
nan
1
t3_kakwez
1,607,624,905
pytorch
LSTM ENCODING
Hello community, Let's say I have data of shape (1300, 60). I applied windowing with a sequence length of 20, and the shape became (1297, 20, 60). My goal is to compress this entity like this: (1297, 20, 60) -> (1297, 1, 3), knowing that I apply an LSTM layer at the beginning. In summary:

1. Reshape the data using windowing
2. Apply an LSTM layer
3. Apply an appropriate layer to compress the LSTM output into (1297, 1, 3)

Do you know any layers that can compress data like this?
1
t3_kaitem
1,607,618,878
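A sketch for the LSTM-encoding question above: take the LSTM output at the last time step and project it with a linear layer down to 3 values, then unsqueeze to get the (batch, 1, 3) shape. The hidden size of 32 is an arbitrary assumption.

```python
import torch
import torch.nn as nn

class LSTMCompressor(nn.Module):
    """Sketch: encode each window with an LSTM, keep the last step, project to 3 values."""
    def __init__(self, n_features=60, hidden=32, out_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x):                 # x: (batch, 20, 60)
        out, _ = self.lstm(x)             # (batch, 20, hidden)
        last = out[:, -1, :]              # keep only the final time step
        return self.proj(last).unsqueeze(1)   # (batch, 1, 3)

x = torch.randn(1297, 20, 60)
z = LSTMCompressor()(x)
print(z.shape)                            # torch.Size([1297, 1, 3])
```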
pytorch
Generate new training data with StyleGAN2 ada ?
Hello, I want to increase my dataset for a project where I do semantic segmentation of plants. Do you think I can generate new images with a StyleGAN2-ADA model, and would they be high quality enough to serve as additional training data for my semantic segmentation model? Has anyone done anything similar?
1
t3_kacefq
1,607,592,433
pytorch
[P] Deeplearning NLP Models Tutorial in PyTorch (w/ Colab GPU Notebooks)
I've put together a small, annotated library of deeplearning models used in NLP here: [https://github.com/will-thompson-k/deeplearning-nlp-models](https://github.com/will-thompson-k/deeplearning-nlp-models) ​ [BERT: Reading. Comprehending.](https://preview.redd.it/sfx5yhm8r9461.jpg?width=320&format=pjpg&auto=webp&s=4856b694e87ddb9e5aea056c9802e1ac3a12a662) It's by no means comprehensive, but meant as a primer for those delving into model architectures. Let me know if you have any feedback!
0.92
t3_ka6c1b
1,607,566,579
pytorch
Hands-on Vision Transformers with PyTorch - Analytics India Magazine
https://zcu.io/FRBx
0.75
t3_k9q8xc
1,607,513,312
pytorch
Can someone help me? Can a Generative Adversarial Network (GAN) predict a new state?
I have a problem where I need to get a new state of an object (e.g. its position) given the current state. However, I am not really sure if I can use a GAN for this. I already know that a basic GAN lets us learn to generate samples from a given dataset. Nevertheless, I don't know if it is possible to predict a new state, or states that change in time. If you know of any papers or information about this, it would be very useful to me.
1
t3_k9mf1w
1,607,494,029
pytorch
Generative models for time-series data
Hi, guys! I hope you are staying safe and well! I am currently looking for papers and blogs (with code if possible) that describe how to use generative models (e.g. GANs) for time-series data. Can you please share resources about the above-mentioned topic if you know any? Thanks a lot in advance!
0.84
t3_k9igvh
1,607,479,286
pytorch
How to check for membership of elements of one Tensor in another?
Say I have the following two tensors

    a = torch.Tensor([
        [1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
    ])
    b = torch.Tensor([3, 3, 10])

Is there some kind of "in" function that I can use to get this output?

    [True, False, True] (or [1, 0, 1])

Basically the first element of b is in the first row of a, the second element of b is not in the second row of a, and so on.
1
t3_k8qhre
1,607,378,058
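For the membership question above, there is no built-in row-wise `in`, but broadcasting an equality comparison and reducing with `any` produces exactly the asked-for output.

```python
import torch

a = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]])
b = torch.tensor([3, 3, 10])

# Compare each b[i] against every element of row a[i] via broadcasting,
# then collapse the row dimension with any().
mask = (a == b.unsqueeze(1)).any(dim=1)
print(mask)   # tensor([ True, False,  True])
```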
pytorch
Archai for NAS
Archai is a platform for Neural Architecture Search (NAS) that allows you to generate efficient deep networks for your applications. [https://github.com/microsoft/archai](https://github.com/microsoft/archai)
0.83
t3_k8jncc
1,607,358,033
pytorch
Manual MSE vs BinaryCrossEntropy
I have an errors function that looks like:

```
def errors(x, y):
    err = (x - y).pow(2).sum(dim=1)
    return err
```

I pass it an array of x values and a corresponding array of y labels. However, if I implement it with BCE:

```
def errors(x, y):
    err = F.binary_cross_entropy(x, y)
    return err
```

I get a single value. Is it possible to compute BCE as a per-row value?
0.84
t3_k7the3
1,607,260,130
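For the MSE-vs-BCE post above: `F.binary_cross_entropy` averages over all elements by default, so passing `reduction='none'` keeps the element-wise losses and lets them be summed per row, matching the manual MSE version. A small sketch with made-up shapes:

```python
import torch
import torch.nn.functional as F

x = torch.rand(4, 10)                         # predictions in [0, 1]
y = torch.randint(0, 2, (4, 10)).float()      # binary targets

# reduction='none' keeps the element-wise losses so they can be reduced per row.
per_row_bce = F.binary_cross_entropy(x, y, reduction='none').sum(dim=1)
print(per_row_bce.shape)                      # torch.Size([4])
```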