sub: stringclasses (4 values)
title: stringlengths (3 to 304)
selftext: stringlengths (3 to 30k)
upvote_ratio: float64 (0.07 to 1)
id: stringlengths (9 to 9)
created_utc: float64 (1.6B to 1.65B)
pytorch
Converting a model from Pytorch to Tensorflow: Guide to ONNX
Open Neural Network Exchange (ONNX) is a powerful and open format built to represent machine learning models. The final outcome of training any machine learning or deep learning algorithm is a model file that represents the mapping of input data to output predictions in an efficient manner. Read more: [https://analyticsindiamag.com/converting-a-model-from-pytorch-to-tensorflow-guide-to-onnx/](https://analyticsindiamag.com/converting-a-model-from-pytorch-to-tensorflow-guide-to-onnx/)
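A minimal sketch of the PyTorch-to-ONNX step described in the linked article, using `torch.onnx.export` (the model choice, input shape and file names are illustrative assumptions, not taken from the article):

```python
import torch
import torchvision

# Any trained PyTorch module works here; resnet18 is just an example.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# ONNX export traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

The resulting `.onnx` file can then be converted toward TensorFlow, e.g. with the `onnx-tf` converter, which is the route the article describes.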
0.82
t3_m0eby3
1,615,205,196
pytorch
Is there a flexible Dataloader similar to tf.data.Datasets?
Hi, I'm considering a swap from TF 2.0 to PyTorch (because most of academia did so), but before making such a drastic move I need to ensure that PyTorch can provide the same set of tools in some way. I have found most features in TF 2.0 to have an equivalent in PyTorch, but I have not found anything that comes close to tf.data.Datasets. Specifically, I am looking for something that can eagerly prefetch and automatically batch my dataset. Is there such a library for PyTorch?
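A minimal sketch of the closest built-in equivalent, `torch.utils.data.DataLoader`, which batches automatically and prefetches in background worker processes (the dataset class below is a stand-in assumption):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    # Stand-in dataset: replace with your own data loading logic.
    def __init__(self, n=1000):
        self.x = torch.randn(n, 32)
        self.y = torch.randint(0, 10, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(
    MyDataset(),
    batch_size=64,        # automatic batching
    shuffle=True,
    num_workers=4,        # background processes prefetch batches eagerly
    pin_memory=True,      # faster host-to-GPU copies
    prefetch_factor=2,    # batches prefetched per worker (PyTorch >= 1.7)
)

for xb, yb in loader:
    pass  # training step goes here
```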
1
t3_m0anh5
1,615,189,206
pytorch
AI Show: What's new in Cognitive Search and PyTorch
nan
1
t3_lykjjs
1,614,974,522
pytorch
Vision Transformer implemented from scratch
nan
0.85
t3_lykf5n
1,614,974,184
pytorch
PyTorch 1.8 released
nan
1
t3_lya2n2
1,614,944,627
pytorch
Local and Global loss
I need a training pipeline similar to Mixture of Experts (https://github.com/davidmrau/mixture-of-experts/blob/master/moe.py), but I want to train the experts on a local loss for 1 epoch before predicting outputs from them (which would then be concatenated for the global loss of the MoE). Can anyone suggest the best way to set up this training pipeline?
1
t3_ly1qee
1,614,910,974
pytorch
[python package] Tensorguard helps to handle shapes of multidimensional tensors
nan
1
t3_lxwunu
1,614,895,907
pytorch
Most efficient way to swap rows between 2D tensors?
This is probably an incredibly basic question, but I'm pretty new to torch and haven't come across a simple solution in the docs. I have two 2D tensors `tokenized_text` and `translated_words`, and I'd like to swap certain rows in one with rows from the other. The tensors aren't guaranteed to be the same dimensions. The end result is then flattened to 1D and the padding values (x for x < 5) are removed. My first quick-and-dirty attempt (in order to test the rest of my code) involved converting the two tensors to lists, swapping the rows, flattening the result and building a new tensor from that. But that strikes me as super inefficient. My updated version keeps everything as tensors by padding them so they have the same row length. See here:

```python
max_length = max(tokenized_text.size()[1], translated_words.size()[1])
tokenized_text = torch.nn.ConstantPad1d((0, max_length - tokenized_text.size()[1]), tokenizer.pad_token_id)(tokenized_text)
translated_words = torch.nn.ConstantPad1d((0, max_length - translated_words.size()[1]), tokenizer.pad_token_id)(translated_words)

for n, word_index in enumerate(word_indexes):
    tokenized_text[word_index] = translated_words[n]

# strip special chars
tokenized_text = tokenized_text[tokenized_text > 5]
```

Just wondering if there's a more efficient / more idiomatic way to accomplish this same task. Is there anything built in to the library?
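One possible tidier version of the same idea, sketched with `torch.nn.functional.pad` and a single vectorized row assignment (variable names follow the post; whether it is meaningfully faster is an assumption worth benchmarking):

```python
import torch.nn.functional as F

# Pad both tensors on the right to a common row length, one call each.
max_length = max(tokenized_text.size(1), translated_words.size(1))
tokenized_text = F.pad(tokenized_text, (0, max_length - tokenized_text.size(1)),
                       value=tokenizer.pad_token_id)
translated_words = F.pad(translated_words, (0, max_length - translated_words.size(1)),
                         value=tokenizer.pad_token_id)

# Replace all the target rows at once with advanced indexing (no Python loop).
tokenized_text[word_indexes] = translated_words[: len(word_indexes)]

# Flatten and drop padding / special tokens with one mask.
result = tokenized_text.reshape(-1)
result = result[result > 5]
```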
1
t3_lxrlds
1,614,882,596
pytorch
Pytorch Geometric or Pytorch DGL? Which one do you prefer?
nan
0.75
t3_lxndny
1,614,872,569
pytorch
PyTorch Geometric Temporal: What It Is & Your In-Depth Guide
nan
1
t3_lxk0wl
1,614,863,072
pytorch
Low train accuracy using pretrained torchvision model
Hello, I am trying to evaluate a pre-trained mobilenetv2 model from torchvision on the ImageNet training dataset using the official example ([https://github.com/pytorch/examples/blob/master/imagenet/main.py](https://github.com/pytorch/examples/blob/master/imagenet/main.py)). To do so, I modify lines 235-237 to perform validation on the train loader instead of the val loader:

```python
if args.evaluate:
    validate(train_loader, model, criterion, args)
    return
```

Everything else is left untouched. The command I use to run is:

```
python imagenet_train_example.py -a mobilenet_v2 -j 16 -b 1024 -e --pretrained /data/ImageNet
```

However, the results are much lower than expected:

> Acc@1 2.926 Acc@5 15.079 Loss 11.795791

I was wondering if anyone knows why that might be? Am I doing something wrong? Cheers!
1
t3_lxjc07
1,614,860,717
pytorch
How to save model in pytorch?
I am a web developer and I don't know much about ML. I have been given a model that I'm supposed to integrate into my website, but first I need to save it. It already has a save_checkpoint method in the trainer file, but I don't know how to use it. Please help. I posted this on Stack Overflow but nobody responded. [https://stackoverflow.com/questions/66449973/how-to-use-the-save-checkpoint-method](https://stackoverflow.com/questions/66449973/how-to-use-the-save-checkpoint-method)
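Without seeing the trainer file it is hard to say what `save_checkpoint` expects, but a minimal sketch of the standard way to save and reload a PyTorch model (here `model` is assumed to be the trained `nn.Module` you were handed, and `MyModelClass` is a hypothetical stand-in for whatever class the trainer defines):

```python
import torch

# Save only the learned parameters (the generally recommended approach).
torch.save(model.state_dict(), "model_weights.pth")

# Later, e.g. inside the web service: rebuild the architecture and load the weights.
model = MyModelClass()  # hypothetical: use the same constructor arguments as in training
model.load_state_dict(torch.load("model_weights.pth", map_location="cpu"))
model.eval()            # switch off dropout / batch-norm updates for inference
```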
0.67
t3_lxcouw
1,614,832,476
pytorch
Why do so many people use TensorFlow instead of PyTorch?
Even knowing that PyTorch is so much more flexible than TensorFlow.
0.58
t3_lwy52r
1,614,789,464
pytorch
Getting Started with Distributed Machine Learning with PyTorch and Ray
nan
0.94
t3_lwcpwp
1,614,719,700
pytorch
How to fix a SIGSEGV in pytorch when using distributed training (e.g. DDP)?
nan
0.6
t3_lwbb72
1,614,715,855
pytorch
How does one set the pytorch distributed hostname, port and GLOO_SOCKET_IFNAME so that DDP works?
nan
0.67
t3_lw1tkv
1,614,691,316
pytorch
My partner made a shitty code in a jupyter and then exported it as .py, but it runs faster than my pytorch code.
Same data, same model. Their code: num_workers and pin_memory have not been set, gradients are not zeroed before training (it's grid search), and gradients are calculated during validation. All inputs are simply passed to the GPU. It has loads of redundant variables and print statements, and nothing related to threads is specified. My code: all of the above is handled, but it still runs more slowly than theirs. At first I had the project split into separate files with all relevant code and functions segregated. I thought that might be taking up time, so I put everything in one file, but it didn't change the timing much. I can't work out why this is happening. Any tips?
1
t3_lw08z0
1,614,685,890
pytorch
Implementing FC layer as conv layer
Hey Guys, I wrote a sample code which implements [Fully Connected (FC) layer as Conv layer](https://github.com/arjun-majumdar/CNN_Classifications/blob/master/Implementing_FC_as_conv_layer.py) in PyTorch. Let me know your thoughts. This is going to be used for optimized "Sliding Windows object detection" algorithm.
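A minimal sketch of the underlying equivalence the post is about: a fully connected layer applied to a C×H×W feature map behaves like a convolution whose kernel covers the whole spatial extent (shapes here are illustrative assumptions, not taken from the linked script):

```python
import torch
import torch.nn as nn

C, H, W, num_out = 256, 7, 7, 10
x = torch.randn(1, C, H, W)

fc = nn.Linear(C * H * W, num_out)
conv = nn.Conv2d(C, num_out, kernel_size=(H, W))

# Copy the FC weights into the conv kernel so both layers compute the same function.
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(num_out, C, H, W))
    conv.bias.copy_(fc.bias)

out_fc = fc(x.flatten(1))        # shape (1, num_out)
out_conv = conv(x).flatten(1)    # shape (1, num_out)
print(torch.allclose(out_fc, out_conv, atol=1e-5))  # True

# On a larger input the conv version slides over positions, producing a dense grid of
# "FC" outputs, which is what makes sliding-window detection cheap.
```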
0.5
t3_lvurr5
1,614,662,859
pytorch
How does one set the pytorch distributed hostname, port and GLOO_SOCKET_IFNAME so that DDP works?
nan
0.67
t3_lvefcn
1,614,618,401
pytorch
How does one setup the set_sharing_strategy strategy for multiprocessing in Pytorch?
nan
0.5
t3_lvdze1
1,614,617,340
pytorch
C++ trainable semantic segmentation models
https://preview.redd.it/xvdaj4vm98k61.png?width=1461&format=png&auto=webp&s=67051689cf9252c8a6e1b5386e4c531685aa00be

I wrote a [C++ trainable semantic segmentation open source project](https://github.com/AllentDan/SegmentationCpp) supporting UNet, FPN, PAN, LinkNet, DeepLabV3 and DeepLabV3+ architectures. The main features of this library are:

* High level API (just a line to create a neural network)
* 6 model architectures for binary and multi-class segmentation (including legendary Unet)
* 7 available encoders
* All encoders have pre-trained weights for faster and better convergence
* 2x or more faster than PyTorch CUDA inference, same speed for CPU (Unet tested on a GTX 2070S)

## 1. Create your first Segmentation model with Libtorch Segment

A segmentation model is just a LibTorch torch::nn::Module, which can be created as easily as:

```cpp
#include "Segmentor.h"
auto model = UNet(1, /*num of classes*/
                  "resnet34", /*encoder name, could be resnet50 or others*/
                  "path to resnet34.pt" /*weight path pretrained on ImageNet, it is produced by torchscript*/
                  );
```

* see [table](#architectures) with available model architectures
* see [table](#encoders) with available encoders and their corresponding weights

## 2. Generate your own pretrained weights

All encoders have pretrained weights. Preparing your data the same way as during weights pre-training may give you better results (higher metric score and faster convergence). You can also train only the decoder and segmentation head while freezing the backbone.

```python
import torch
from torchvision import models

# resnet50 for example
model = models.resnet50(pretrained=True)
model.eval()
var = torch.ones((1, 3, 224, 224))
traced_script_module = torch.jit.trace(model, var)
traced_script_module.save("resnet50.pt")
```

Congratulations! You are done! Now you can train your model with your favorite backbone and segmentation framework.

## 3. Examples

* Training a model for person segmentation using images from the PASCAL VOC dataset. The "voc_person_seg" dir contains 32 json labels and their corresponding jpeg images for training, and 8 json labels with corresponding images for validation.

```cpp
Segmentor<FPN> segmentor;
segmentor.Initialize(0 /*gpu id, -1 for cpu*/,
                     512 /*resize width*/,
                     512 /*resize height*/,
                     {"background", "person"} /*class name dict, background included*/,
                     "resnet34" /*backbone name*/,
                     "your path to resnet34.pt");
segmentor.Train(0.0003 /*initial learning rate*/,
                300 /*training epochs*/,
                4 /*batch size*/,
                "your path to voc_person_seg",
                ".jpg" /*image type*/,
                "your path to save segmentor.pt");
```

* Prediction test. A segmentor.pt file is provided in the project. It is trained through an FPN with a ResNet34 backbone for a few epochs. You can directly test the segmentation result through:

```cpp
cv::Mat image = cv::imread("your path to voc_person_seg\\val\\2007_004000.jpg");
Segmentor<FPN> segmentor;
segmentor.Initialize(0, 512, 512, {"background", "person"}, "resnet34", "your path to resnet34.pt");
segmentor.LoadWeight("segmentor.pt" /*the saved .pt path*/);
segmentor.Predict(image, "person" /*class name for showing*/);
```

The predicted result is shown in the project repository.

## 4. Train your own data

* Create your own dataset. Install [labelme](https://github.com/wkentaro/labelme) through pip and label your images. Split the output json files and images into folders just like below:

```
Dataset
├── train
│   ├── xxx.json
│   ├── xxx.jpg
│   └── ......
├── val
│   ├── xxxx.json
│   ├── xxxx.jpg
│   └── ......
```

* Training or testing. Just like the "voc_person_seg" example, replace "voc_person_seg" with your own dataset path.

## Models

## Architectures

* [x] Unet [[paper](https://arxiv.org/abs/1505.04597)]
* [x] FPN [[paper](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)]
* [x] PAN [[paper](https://arxiv.org/abs/1805.10180)]
* [x] LinkNet [[paper](https://arxiv.org/abs/1707.03718)]
* [x] DeepLabV3 [[paper](https://arxiv.org/abs/1706.05587)]
* [x] DeepLabV3+ [[paper](https://arxiv.org/abs/1802.02611)]
* [ ] PSPNet [[paper](https://arxiv.org/abs/1612.01105)]

## Encoders

* [x] ResNet
* [x] ResNext
* [ ] ResNest

The following is a list of the supported encoders in Libtorch Segment. All the encoder weights can be generated through torchvision except resnest.

|Encoder|Weights|Params, M|
|:-|:-|:-|
|resnet18|imagenet|11M|
|resnet34|imagenet|21M|
|resnet50|imagenet|23M|
|resnet101|imagenet|42M|
|resnet152|imagenet|58M|
|resnext50_32x4d|imagenet|22M|
|resnext101_32x8d|imagenet|86M|
|timm-resnest14d|imagenet|8M|
|timm-resnest26d|imagenet|15M|
|timm-resnest50d|imagenet|25M|
|timm-resnest101e|imagenet|46M|
|timm-resnest200e|imagenet|68M|
|timm-resnest269e|imagenet|108M|
|timm-resnest50d_4s2x40d|imagenet|28M|
|timm-resnest50d_1s4x24d|imagenet|23M|

## Installation

Windows: Configure the environment for libtorch development. [Visual Studio](https://allentdan.github.io/2020/12/16/pytorch%E9%83%A8%E7%BD%B2torchscript%E7%AF%87) and [Qt Creator](https://allentdan.github.io/2021/01/21/QT%20Creator%20+%20Opencv4.x%20+%20Libtorch1.7%E9%85%8D%E7%BD%AE/#more) are verified for the libtorch 1.7x release. Only Chinese configuration blogs are provided for now; an English version is coming ASAP.

Linux && MacOS: Follow the official PyTorch C++ tutorials [here](https://pytorch.org/tutorials/advanced/cpp_export.html). It should be no more difficult than Windows.

## Thanks

This project is under development. So far, these projects have helped a lot:

* [official pytorch](https://github.com/pytorch/pytorch)
* [qubvel SMP](https://github.com/qubvel/segmentation_models.pytorch)
* [wkentaro labelme](https://github.com/wkentaro/labelme)
* [nlohmann json](https://github.com/nlohmann/json)

## Citing

```
@misc{Chunyu:2021,
  Author = {Chunyu Dong},
  Title = {Libtorch Segment},
  Year = {2021},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/AllentDan/SegmentationCpp}}
}
```

## License

The project is distributed under the [MIT License](https://github.com/qubvel/segmentation_models.pytorch/blob/master/LICENSE).
1
t3_luh7ge
1,614,522,141
pytorch
Image Classification with Unbalanced Dataset
I have an unbalanced 5-class dataset for classification. I'm using WeightedRandomSampler to feed the dataloader, to counter the effects of the imbalance, and CrossEntropyLoss as the loss function during training. As you know, CrossEntropyLoss can be given per-class weights, so should I also pass class weights to the loss function, or is the sampler, which creates balanced batches for training, enough to handle the imbalance?
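A minimal sketch of the sampler-only approach (`labels` and `train_dataset` are assumed to exist; class counts are illustrative). In practice most people pick either the sampler or the loss weights rather than both, since combining them tends to double-correct the imbalance:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# labels: a 1-D tensor of class indices for the whole training set (assumed available).
class_counts = torch.bincount(labels)            # e.g. tensor([500, 120, 80, 40, 10])
class_weights = 1.0 / class_counts.float()       # rarer classes get larger weights
sample_weights = class_weights[labels]           # one weight per sample

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)

# Alternative: keep a plain shuffled loader and weight the loss instead.
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)
```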
0.81
t3_ltzw6s
1,614,468,013
pytorch
Reshaping Operations in Pytorch
What reshape function can I use to convert this tensor

```python
Reshape = rank1.view(2, 2, 6)
tensor([[[ 0,  1,  2,  3,  4,  5],
         [ 6,  7,  8,  9, 10, 11]],
        [[12, 13, 14, 15, 16, 17],
         [18, 19, 20, 21, 22, 23]]])
```

to this?

```python
expected = [[0, 1, 2,  3,  12, 13, 14, 15],
            [4, 5, 6,  7,  16, 17, 18, 19],
            [8, 9, 10, 11, 20, 21, 22, 23]]
```
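A sketch of one way to get exactly that layout, assuming `rank1` is the flat `torch.arange(24)` tensor implied by the values: no single reshape does it, but a reshape-permute-reshape chain does:

```python
import torch

rank1 = torch.arange(24)

# Split into 2 halves of 3 rows x 4 columns, bring the "row" axis to the front,
# then merge the two halves side by side.
expected = rank1.view(2, 3, 4).permute(1, 0, 2).reshape(3, 8)
print(expected)
# tensor([[ 0,  1,  2,  3, 12, 13, 14, 15],
#         [ 4,  5,  6,  7, 16, 17, 18, 19],
#         [ 8,  9, 10, 11, 20, 21, 22, 23]])
```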
1
t3_ltt8hm
1,614,447,858
pytorch
Questions about reproducing DARTS code implementation
I am not sure about [how to implement](https://gist.github.com/promach/ae0e48974ccf4bdee07c9d69148cf21b) `Ltrain(w+)` and `Ltrain(w-)` for [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) Could anyone advise ? https://preview.redd.it/swqi3tl230k61.png?width=1920&format=png&auto=webp&s=cafe8dacb197f4db8e56ec59e189103f741fe59f
1
t3_ltlo8f
1,614,423,083
pytorch
Tom Cruise deepfake videos are all over the internet and passing the best deepfake detectors!
nan
1
t3_lt9lg4
1,614,378,523
pytorch
Element from a column
How can I use integer array indexing to get elements from different columns? Let's say I wanted to grab 12, 64, 34; how can I get the elements?

```python
numbers = torch.tensor([[2, 4, 8], [12, 16, 32], [64, 23, 3], [34, 56, 23]])
```
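A sketch of the integer-array-indexing answer, assuming the three target values sit at (row, column) positions (1, 0), (2, 0) and (3, 0) in the tensor above:

```python
import torch

numbers = torch.tensor([[2, 4, 8], [12, 16, 32], [64, 23, 3], [34, 56, 23]])

rows = torch.tensor([1, 2, 3])
cols = torch.tensor([0, 0, 0])   # change per-element to pick from different columns
picked = numbers[rows, cols]
print(picked)  # tensor([12, 64, 34])
```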
0.67
t3_lspv50
1,614,313,743
pytorch
MLP Win Prediction Model does not converge
Hi, I built a model to predict the winning team in a game of Dota. I wanted to evaluate each player (10 in total, each with 599 features encoded as 1 or 0) using the same criteria (player_model). Then I wanted to predict the outcome of the match (match_model) based on the player evaluations of each team (the commented-out variant includes an additional team evaluation). The problem I have is that the model just favors one outcome (depending on the weight init) and does not converge at all. I am not sure if the gradients are calculated correctly when throwing the whole batch x player tensor into the player_model like this: `p = self.player_model(x)`. Or maybe something else screws it up... Do you have any hints? Thanks

Logs:

```
torch.Size([64000, 10, 599]) torch.Size([64000, 2]) ---- Train set
torch.Size([16000, 10, 599]) (16000, 2) ----- Test set
accuracy_score: 0.5034375
max_test_accuracy 0.5034375
train 0.0006832404183223843 eval 4.3321520090103147e-05
[[   0 7945]
 [   0 8055]]
accuracy_score: 0.5034375
max_test_accuracy 0.5034375
train 0.0006823675045743584 eval 4.3321534991264343e-05
[[   0 7945]
 [   0 8055]]
accuracy_score: 0.5034375
max_test_accuracy 0.5034375
train 0.0006823821607977151 eval 4.332022368907928e-05
[[   0 7945]
 [   0 8055]]
```

Model:

```python
def __init__(self):
    super(Model, self).__init__()
    self.player_model = nn.Sequential(
        nn.Linear(feature_count, 128),
        nn.ReLU(),
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 32)
    )
    self.team_model = nn.Sequential(
        nn.Linear(32 * 5, 128),
        nn.ReLU(),
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 32)
    )
    self.match_model = nn.Sequential(
        nn.Linear(32 * 10, 8),
        nn.ReLU(),
        nn.Linear(8, 4),
        nn.ReLU(),
        nn.Linear(4, 2),
        nn.Softmax(dim=1)
    )

def forward(self, x):
    p = self.player_model(x)
    # t1_in = p[:, :5, :].reshape(p.size(0), 5 * p.size(2))
    # t1 = self.team_model(t1_in)
    # t2_in = p[:, 5:, :].reshape(p.size(0), 5 * p.size(2))
    # t2 = self.team_model(t2_in)
    # m = torch.cat((t1, t2), 1)
    m = p.reshape(p.size(0), 10 * p.size(2))
    x = self.match_model(m)
    return x
```

Other:

```python
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0004)
```
1
t3_lsigs0
1,614,291,370
pytorch
How to skip the images in a custom dataset and deal with None values?
Hi, I have an object detection dataset with RGB images and annotations in Json. I use a custom DataLoader class to read the images and the labels. One issue that I’m facing is that I would like to skip images when training my model if/when labels don’t contain certain objects. For example, If one image doesn’t contain any target labels belonging to the class β€˜Cars’, I would like to skip them. When parsing my Json annotation, I tried checking for labels that don’t contain the class β€˜Cars’ and returned None. Subsequently, I used a collate function to filter the None but unfortunately, It is not working import torch from torch.utils.data.dataset import Dataset import json import os from PIL import Image from torchvision import transforms #import cv2 import numpy as np general_classes = { # Cars "Toyota Corolla" : 0, "VW Golf" : 0, "VW Beetle" : 0, # Motor-cycles "Harley Davidson" : 1, "Yamaha YZF-R6" : 1, } car_classes={ "Toyota Corolla" : 0, "VW Golf" : 0, "VW Beetle" : 0 } def get_transform(train): transforms = [] # converts the image, a PIL image, into a PyTorch Tensor transforms.append(T.ToTensor()) if train: # during training, randomly flip the training images # and ground-truth for data augmentation transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) def my_collate(batch): batch = list(filter(lambda x: x is not None, batch)) return torch.utils.data.dataloader.default_collate(batch) class FilteredDataset(Dataset): # The dataloader will skip the image and corresponding labels based on the dictionary 'car_classes' def __init__(self, data_dir, transforms): self.data_dir = data_dir img_folder_list = os.listdir(self.data_dir) self.transforms = transforms imgs_list = [] json_list = [] self.filter_count=0 self.filtered_label_list=[] for img_path in img_folder_list: #img_full_path = self.data_dir + img_path img_full_path=os.path.join(self.data_dir,img_path) json_file = os.path.join(img_full_path, 'annotations-of-my-images.json') img_file = os.path.join(img_full_path, 'Image-Name.png') json_list.append(json_file) imgs_list.append(img_file) self.imgs = imgs_list self.annotations = json_list total_count=0 for one_annotation in self.annotations: filtered_obj_id=[] with open(one_annotation) as f: img_annotations = json.load(f) parts_list = img_annotations['regions'] for part in parts_list: current_obj_id = part['tags'][0] # bbox label check_obj_id = general_classes[current_obj_id] if(check_obj_id==0): subclass_id=car_classes[current_obj_id] filtered_obj_id.append(subclass_id) total_count=total_count+1 if(len(filtered_obj_id)>0): self.filter_count=self.filter_count+1 self.filtered_label_list.append(one_annotation) print("The total number of the objects in all images: ",total_count) # get one image and the bboxes,img_id, labels of parts, etc in the image as target. 
def __getitem__(self, idx): img_path = self.imgs[idx] image_id = torch.tensor([idx]) with open(self.annotations[idx]) as f: img_annotations = json.load(f) parts_list = img_annotations['regions'] obj_ids = [] boxes = [] for part in parts_list: obj_id = part['tags'][0] check_obj_id = general_classes[obj_id] if(check_obj_id==0): obj_id=car_classes[obj_id] obj_ids.append(obj_id) #print("---------------------------------------------------") if(len(obj_ids)>0): img = Image.open(img_path).convert("RGB") labels = torch.as_tensor(obj_ids, dtype = torch.int64) target = {} target['labels'] = labels if self.transforms is not None: img, target = self.transforms(img, target) return img, target else: return None def __len__(self): return len(self.filtered_label_list) train_data_path = "path-to-my-annotation" # Generators train_dataset = FilteredDataset(train_data_path,get_transform(train=True)) print("Total files in the train_dataset: ",len(train_dataset)) #print("The first instance in the train dataset : ",train_dataset[0]) #training_generator = torch.utils.data.DataLoader(train_dataset) training_generator = torch.utils.data.DataLoader(train_dataset,collate_fn=my_collate) print("\n\n Iterator in action! ") print("---------------------------------------------------------") count=0 for img,target in training_generator: #print("The img name : ",img[0]) count=count+1 print("target name : ",target) print("count : ",count) print("**************************************************") However, I get the following error, ​ [Traceback that I get](https://preview.redd.it/hxsytz62wnj61.png?width=691&format=png&auto=webp&s=633aa87dfa1d405cf853b93a1bab7b03e859d8b3) ​ Could anyone please suggest a way to skip the images that do not contain a particular categorical label?
1
t3_lsce7e
1,614,275,458
pytorch
Slicing a tensor
How do I get rows 0 and 2 and columns 1 and 4 from the tensor below?

```python
b = torch.tensor([[2, 6, 12, 18, 20], [3, 9, 12, 24, 15], [14, 15, 16, 19, 25]])
```
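A sketch of two ways to pull out that 2x2 sub-tensor (the printed values assume the tensor above):

```python
import torch

b = torch.tensor([[2, 6, 12, 18, 20], [3, 9, 12, 24, 15], [14, 15, 16, 19, 25]])

# Option 1: index the rows first, then the columns.
sub = b[[0, 2]][:, [1, 4]]

# Option 2: a single advanced-indexing expression using broadcasting.
sub2 = b[torch.tensor([0, 2]).unsqueeze(1), torch.tensor([1, 4])]

print(sub)   # tensor([[ 6, 20], [15, 25]])
print(sub2)  # same result
```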
1
t3_lrx0oq
1,614,223,671
pytorch
Faster builds using libtorch c++ (question)
Including torch/torch.h makes builds times unbearable, at least combined with Eigen, faiss, etc. The problem is, I really need torch::Tensor, as it is an enormously useful container type I'd like to use in many files. Anybody know any good fix for this? I tried precompiled headers, but could not get it to work.
1
t3_lrlazm
1,614,195,193
pytorch
Beginner | Simple NN to find occurrences in array
Hey everyone, I'm just starting with PyTorch and thought a simple NN that takes in an array of 0's and 1's and outputs the number of ones would be good just to see how everything works. I have this data (input, target):

```python
data = (([0, 0, 1, 0, 1], [0, 0, 1, 0, 0, 0]),
        ([0, 1, 1, 0, 1], [0, 0, 0, 1, 0, 0]),
        ([1, 0, 1, 0, 0], [0, 0, 1, 0, 0, 0]),
        ([1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 1]),
        ([1, 1, 1, 0, 1], [0, 0, 0, 0, 1, 0]),
        ([0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]),
        ([1, 0, 1, 1, 0], [0, 0, 0, 1, 0, 0]),
        ([1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 1]),
        ([1, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0]))
```

Could anyone please spin up a really quick network to get this working? I've tried but I'm struggling to implement it myself and would just like to see how it could be done; I'm struggling with the loss functions specifically. Any help is appreciated!

Update: I think I managed to get it working. It's very hacky (yes, I know the data creation is terrible). Repo: [https://github.com/Torbet/Pytorch-Linear-Regression](https://github.com/Torbet/Pytorch-Linear-Regression)
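A minimal sketch of one way to fit this tiny dataset: treat the target as a class index (the count of ones) and use `CrossEntropyLoss` on raw logits (layer sizes and learning rate are arbitrary assumptions):

```python
import torch
import torch.nn as nn

x = torch.tensor([d[0] for d in data], dtype=torch.float32)          # (9, 5) inputs
y = torch.tensor([d[1].index(1) for d in data], dtype=torch.long)    # class index = number of ones

net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 6))   # 6 classes: counts 0..5
criterion = nn.CrossEntropyLoss()                                     # expects logits + class indices
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)

for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()

print(net(x).argmax(dim=1))  # should match y after training
```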
0.5
t3_lrkn23
1,614,193,487
pytorch
Is the higher level library for meta-learning compatible with pytorch's distributed libraries?
nan
1
t3_lqpeie
1,614,105,142
pytorch
Train on main thread, validation on background thread?
I am training a RL model that has a heavy simulation component within the training loop. While training, I want to be able to "step aside" and run a couple of validation simulations on a separate data set. Since the validation set should never update the weights, I was wondering if it's possible to freeze the PyTorch model, pass it to a background thread, and run the validation data set in the background. I've come across the `torch.multiprocessing` and Python `multiprocessing`, but I am unsure which would be best suited to this use. Looking at the [PyTorch Multiprocessing Best Practices](https://pytorch.org/docs/stable/notes/multiprocessing.html#), it seems a direct handle to the current model is passed to the background thread. While this is good for asynchronous learning across threads, I don't want the model to update as the validation data is being run. My first thought is to build a standard Python function to accept a deep copied PyTorch model and run that background dataset completely independently (using Python's native `multiprocessing` module). Does anyone have any recommendations on how to go about this?
0.81
t3_lqlheq
1,614,095,387
pytorch
How does Quantize per tensor work in relation with gradient??
Specifically, how does it not create problems with the derivative of the function since a quantizing function is a step-like linear with grad=0. How is it possible to round decimals without messing with backpropagation?
1
t3_lqi315
1,614,085,863
pytorch
Why does my pytorch rpc workers deadlock, is it because I am using main as my master?
nan
0.75
t3_lq0gu7
1,614,029,298
pytorch
Nan LOSS while training Mask RCNN on custom data
I'm trying to train the Mask RCNN on custom data but I get NaNs as loss values in the first step itself.

```
{'loss_classifier': tensor(nan, device='cuda:0', grad_fn=<NllLossBackward>),
 'loss_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>),
 'loss_mask': tensor(-1.1146e+30, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>),
 'loss_objectness': tensor(574.7335, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>),
 'loss_rpn_box_reg': tensor(169.8945, device='cuda:0', grad_fn=<DivBackward0>)}
```

The images have 3 channels and the mask input is of the dimension [N,H,W]. What can cause the loss to explode?
1
t3_lplyvb
1,613,993,804
pytorch
How do I visualize the output from the encoder in an autoencoder model?
I have defined a simple autoencoder using DGL (PyTorch backend). The code looks like this:

```python
from dgl.nn import GraphConv

class AEGCN(nn.Module):
    def __init__(self, in_feats, hidden_size, num_classes):
        super(AEGCN, self).__init__()
        self.conv1 = GraphConv(in_feats, hidden_size)
        self.conv2 = GraphConv(hidden_size, in_feats)

    def forward(self, g, inputs):
        h = self.conv1(g, inputs)
        h = torch.relu(h)
        h = self.conv2(g, h)
        return h

net = AEGCN(192, 20, 192)
```

I have trained it. Now I intend to see the output of conv1. How do I do that? Also, if I add multiple layers in the encoder part, how do I get the output of just those layers?
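A minimal sketch of capturing an intermediate layer's output with a forward hook (the graph `g` and features `inputs` are assumed to be whatever was used during training):

```python
activations = {}

def save_output(name):
    # Returns a hook that stores the layer's output under the given name.
    def hook(module, module_inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on the layers you care about (conv1 here; add more as needed).
handle = net.conv1.register_forward_hook(save_output("conv1"))

net.eval()
with torch.no_grad():
    _ = net(g, inputs)            # one forward pass fills `activations`

hidden = activations["conv1"]     # (num_nodes, hidden_size) tensor to visualize
handle.remove()                   # detach the hook when done
```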
1
t3_lpi7rk
1,613,979,470
pytorch
Deconvolution operation in PyTorch
Hi, Im trying to implement visualizations from ZFNet paper, but i dont know how to do deconvolution. Any advice? :)
0.5
t3_lpheu9
1,613,976,558
pytorch
Is nn.Parameter learnable?
nan
1
t3_lpfwz2
1,613,971,151
pytorch
torch.nn.Embedding explained (+ Character-level language model)
nan
0.84
t3_lp6467
1,613,940,083
pytorch
Newcomer to PyTorch in need of help
Hello all. New to PyTorch and ML in general. My end goal right now is to use PyTorch and RL specifically to calculate the movement of a robotic arm to a given target location. To take some small steps towards my end goal, I’m starting off with a single link arm in a 2D environment which will try and point towards a goal position. To reiterate, I am brand new to machine learning but do have over a decade of programming experience. I have followed this basic tutorial [here](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) to get the infamous CartPole model running from OpenAI. After getting that all running on my computer I began to try and modify the CartPole env source code from [here](https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py) and the PyTorch example to get to my β€œ2D Single link arm” environment. I have gotten this modified environment to run as you can see below: https://preview.redd.it/y2vu5vkqvvi61.png?width=2682&format=png&auto=webp&s=b62e643654f8b61c4bc4eaa1a6465a30af234f00 ​ https://preview.redd.it/4fbcktpn91j61.png?width=2696&format=png&auto=webp&s=fed27980876ade082ceecfaa7d886401d750268a I'm not sure that I am using the env state and reward calculations correctly as the model doesn't seen to converge on a good reward in training. I'm also not sure if I'm making the training loop correct, although the only thing I've change so far in the PyTorch tutorial is the `get_screen` function. I would love some input on the code I've written as I'm not sure where to go from here. I am attaching the code to the modified PyTorch example and Env. armenv.py: import gym from gym import spaces from gym.utils import seeding import numpy as np MAX_STEPS=200 STEPS_ON_GOAL_TO_FINISH=10 class ArmEnv(gym.Env): """Arm Environment that follows gym interface""" metadata = { 'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 50 } def __init__(self): self.length = 1.0 # length of arm self.goal = [2.,2.] 
self.tau = 0.02 # seconds between state updates self.theta_adj = 2.0 self.angle_difference_threshold = 0.5 # The max and min values that can be observed high = np.array([np.pi/2, np.pi, 1.8, 2.1], dtype=np.float32) low = np.array([-np.pi/2, 0, -1.8, 1.1], dtype=np.float32) self.action_space = spaces.Discrete(3) self.observation_space = spaces.Box(low, high, dtype=np.float32) self.on_goal = 0 self.current_step = 0 self.seed() self.viewer = None self.state = None def seed(self, seed=None): self.np_random, seed = seeding.np_random(seed) return [seed] def calc_angle_difference(self, theta): # Get state of arm arm_mag = self.length arm_vector = np.array([arm_mag * np.sin(theta), arm_mag * np.cos(theta)]) # Get distance vector between the goal and end of arm distance_vector = np.array([self.goal[0] - arm_vector[0], self.goal[1] - arm_vector[1]]) distance_mag = np.sqrt(distance_vector[0]**2 + distance_vector[1]**2) # Get the angle between this distance vector and the arm vector return np.arccos(np.dot(arm_vector, distance_vector)/(arm_mag*distance_mag), dtype=np.float32) def step(self, action): err_msg = "%r (%s) invalid" % (action, type(action)) assert self.action_space.contains(action), err_msg self.current_step += 1 theta, _, goalx, goaly = self.state # Adjust theta based on the chosen action # if action == 1: # theta_adj = self.theta_adj # else: # theta_adj = -self.theta_adj if action == 1: theta_adj = self.theta_adj elif action == 2: theta_adj = -self.theta_adj else: theta_adj = 0 theta += theta_adj * self.tau theta = max(min(theta, np.pi/2), -np.pi/2) # Get the angle between this distance vector and the arm vector angle_difference = self.calc_angle_difference(theta) self.state = (theta, angle_difference, goalx, goaly) # delay_modifier = float(self.current_step / MAX_STEPS) # r = float(1 - angle_difference*2) # r = np.exp(-angle_difference, dtype=np.float32) r = np.exp(-angle_difference*3, dtype=np.float32) # r = np.exp(-angle_difference, dtype=np.float32) * delay_modifier # r = np.exp(-angle_difference, dtype=np.float32) * (1.0 - delay_modifier) if theta >= np.pi/2 or theta <= -np.pi/2: r = 0.0 # Baaaad boi if angle_difference <= self.angle_difference_threshold and \ angle_difference >= -self.angle_difference_threshold: self.on_goal += 1 r = 1.0 # Goooood boi else: self.on_goal = self.on_goal - 2 if self.on_goal > 0 else 0 done = bool(self.on_goal >= STEPS_ON_GOAL_TO_FINISH or self.current_step >= MAX_STEPS or theta >= np.pi/2 or theta <= -np.pi/2 ) print(self.current_step, np.array(self.state), r, self.on_goal, action) return np.array(self.state), float(r), done, {} def reset(self): self.goal = np.array([self.np_random.rand() * 3.6 - 1.8, self.np_random.rand() + 1.1]) self.on_goal = 0 self.current_step = 0 new_theta = self.np_random.rand()*np.pi - np.pi/2 new_angle_difference = self.calc_angle_difference(new_theta) self.state = np.array([new_theta, new_angle_difference, *self.goal], dtype=np.float32) return np.array(self.state) def render(self, mode='human'): screen_width = 600 screen_height = 400 world_width = self.length * 4 scale = screen_width/world_width polewidth = 10.0 polelen = scale * (self.length) goalwidth = 15.0 goalheight = 15.0 if self.viewer is None: from gym.envs.classic_control import rendering self.viewer = rendering.Viewer(screen_width, screen_height) l, r, t, b = -goalwidth / 2, goalwidth / 2, goalheight / 2, -goalheight / 2 goal = rendering.FilledPolygon([(l, b), (l, t), (r, t), (r, b)]) self.goaltrans = rendering.Transform(translation=(self.goal[0] * scale + 
screen_width / 2.0, self.goal[0] * scale)) goal.add_attr(self.goaltrans) self.viewer.add_geom(goal) l, r, t, b = -polewidth / 2, polewidth / 2, polelen - polewidth / 2, -polewidth / 2 pole = rendering.FilledPolygon([(l, b), (l, t), (r, t), (r, b)]) pole.set_color(.8, .6, .4) self.poletrans = rendering.Transform(translation=(screen_width / 2.0, 0)) pole.add_attr(self.poletrans) self.viewer.add_geom(pole) self._pole_geom = pole if self.state is None: return None # Edit the pole polygon vertex pole = self._pole_geom l, r, t, b = -polewidth / 2, polewidth / 2, polelen - polewidth / 2, -polewidth / 2 pole.v = [(l, b), (l, t), (r, t), (r, b)] x = self.state self.goaltrans.set_translation(self.goal[0] * scale + screen_width / 2.0, self.goal[1] * scale) self.poletrans.set_rotation(-x[0]) return self.viewer.render(return_rgb_array=mode == 'rgb_array') def close(self): if self.viewer: self.viewer.close() self.viewer = None main.py: from armenv import ArmEnv import math import random import numpy as np import matplotlib import matplotlib.pyplot as plt from collections import namedtuple from itertools import count from PIL import Image import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision.transforms as T env = ArmEnv() # set up matplotlib is_ipython = 'inline' in matplotlib.get_backend() if is_ipython: from IPython import display plt.ion() # if gpu is to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward')) class ReplayMemory(object): def __init__(self, capacity): self.capacity = capacity self.memory = [] self.position = 0 def push(self, *args): """Saves a transition.""" if len(self.memory) < self.capacity: self.memory.append(None) self.memory[self.position] = Transition(*args) self.position = (self.position + 1) % self.capacity def sample(self, batch_size): return random.sample(self.memory, batch_size) def __len__(self): return len(self.memory) class DQN(nn.Module): def __init__(self, h, w, outputs): super(DQN, self).__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) self.bn3 = nn.BatchNorm2d(32) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 5, stride = 2): return (size - (kernel_size - 1) - 1) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w))) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h))) linear_input_size = convw * convh * 32 self.head = nn.Linear(linear_input_size, outputs) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) return self.head(x.view(x.size(0), -1)) resize = T.Compose([T.ToPILImage(), T.Resize(40, interpolation=Image.CUBIC), T.ToTensor()]) def get_screen(): # Returned screen requested by gym is 400x600x3, but is sometimes larger # such as 800x1200x3. Transpose it into torch order (CHW). 
screen = env.render(mode='rgb_array').transpose((2, 0, 1)) screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 screen = torch.from_numpy(screen) # Resize, and add a batch dimension (BCHW) return resize(screen).unsqueeze(0).to(device) env.reset() plt.figure() plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none') plt.title('Example extracted screen') plt.show() BATCH_SIZE = 128 GAMMA = 0.999 EPS_START = 0.9 EPS_END = 0.05 EPS_DECAY = 200 TARGET_UPDATE = 10 # Get screen size so that we can initialize layers correctly based on shape # returned from AI gym. Typical dimensions at this point are close to 3x40x90 # which is the result of a clamped and down-scaled render buffer in get_screen() init_screen = get_screen() _, _, screen_height, screen_width = init_screen.shape # Get number of actions from gym action space n_actions = env.action_space.n policy_net = DQN(screen_height, screen_width, n_actions).to(device) target_net = DQN(screen_height, screen_width, n_actions).to(device) target_net.load_state_dict(policy_net.state_dict()) target_net.eval() optimizer = optim.RMSprop(policy_net.parameters()) memory = ReplayMemory(10000) steps_done = 0 def select_action(state): global steps_done sample = random.random() eps_threshold = EPS_END + (EPS_START - EPS_END) * \ math.exp(-1. * steps_done / EPS_DECAY) steps_done += 1 if sample > eps_threshold: with torch.no_grad(): # t.max(1) will return largest column value of each row. # second column on max result is index of where max element was # found, so we pick action with the larger expected reward. return policy_net(state).max(1)[1].view(1, 1) else: return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long) episode_rewards = [] def plot_rewards(): plt.figure(2) plt.clf() rewards_t = torch.tensor(episode_rewards, dtype=torch.float) plt.title('Training...') plt.xlabel('Episode') plt.ylabel('Reward') plt.plot(rewards_t.numpy()) # Take 100 episode averages and plot them too if len(rewards_t) >= 10: means = rewards_t.unfold(0, 10, 1).mean(1).view(-1) means = torch.cat((torch.zeros(9), means)) plt.plot(means.numpy()) plt.pause(0.001) # pause a bit so that plots are updated if is_ipython: display.clear_output(wait=True) display.display(plt.gcf()) def optimize_model(): if len(memory) < BATCH_SIZE: return transitions = memory.sample(BATCH_SIZE) # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for # detailed explanation). This converts batch-array of Transitions # to Transition of batch-arrays. batch = Transition(*zip(*transitions)) # Compute a mask of non-final states and concatenate the batch elements # (a final state would've been the one after which simulation ended) non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, batch.next_state)), device=device, dtype=torch.bool) non_final_next_states = torch.cat([s for s in batch.next_state if s is not None]) state_batch = torch.cat(batch.state) action_batch = torch.cat(batch.action) reward_batch = torch.cat(batch.reward) # Compute Q(s_t, a) - the model computes Q(s_t), then we select the # columns of actions taken. These are the actions which would've been taken # for each batch state according to policy_net state_action_values = policy_net(state_batch).gather(1, action_batch) # Compute V(s_{t+1}) for all next states. # Expected values of actions for non_final_next_states are computed based # on the "older" target_net; selecting their best reward with max(1)[0]. 
# This is merged based on the mask, such that we'll have either the expected # state value or 0 in case the state was final. next_state_values = torch.zeros(BATCH_SIZE, device=device) next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach() # Compute the expected Q values expected_state_action_values = (next_state_values * GAMMA) + reward_batch # Compute Huber loss loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1)) # Optimize the model optimizer.zero_grad() loss.backward() for param in policy_net.parameters(): param.grad.data.clamp_(-1, 1) optimizer.step() num_episodes = 400 for i_episode in range(num_episodes): # Initialize the environment and state env.reset() last_screen = get_screen() current_screen = get_screen() state = current_screen - last_screen for t in count(): # Select and perform an action action = select_action(state) _, reward, done, _ = env.step(action.item()) reward = torch.tensor([reward], device=device) # Observe new state last_screen = current_screen current_screen = get_screen() if not done: next_state = current_screen - last_screen else: next_state = None # Store the transition in memory memory.push(state, action, next_state, reward) # Move to the next state state = next_state # Perform one step of the optimization (on the target network) optimize_model() if done: episode_rewards.append(reward) plot_rewards() break # Update the target network, copying all weights and biases in DQN if i_episode % TARGET_UPDATE == 0: target_net.load_state_dict(policy_net.state_dict()) print('Complete') env.render() env.close() plt.ioff() plt.show()
0.6
t3_lp4sv0
1,613,936,501
pytorch
How does one implement parallel SGD with the pytorch autograd RPC library so that gradients can be received from different processes without errors?
nan
1
t3_lp4cok
1,613,935,211
pytorch
How to feed output of LSTM into itself?
In almost every text generation context, when a character or word is generated by the LSTM, it is fed back into the LSTM as input for the next character or word generation round. With PyTorch's LSTM, however, you input the whole sequence at once. How can you do text generation with PyTorch then?
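A minimal sketch of the usual generation loop: after training on whole sequences, you call the LSTM one step at a time at inference, carrying the hidden state and feeding each prediction back in (vocab size, embedding dimensions and greedy sampling are illustrative assumptions):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 32, 64
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, vocab_size)

def generate(start_token, steps=20):
    token = torch.tensor([[start_token]])        # shape (batch=1, seq_len=1)
    hidden = None                                # LSTM uses zero state when hidden is None
    out_tokens = []
    for _ in range(steps):
        x = embed(token)
        y, hidden = lstm(x, hidden)              # one step; hidden carries the history
        logits = head(y[:, -1])
        token = logits.argmax(dim=-1, keepdim=True)  # greedy; sampling also works
        out_tokens.append(token.item())
    return out_tokens

print(generate(start_token=1))
```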
1
t3_loedsu
1,613,848,599
pytorch
Writing large amounts of generated data to HDF5 to later be used for training.
I originally asked a question about writing large amounts of data in r/learnpython and was directed towards HDF5 - my new question around that format I think might be better suited here - apologies if not. I'm currently working on a project involving multichannel audio. The dataset I'm using needs to be processed into the desired target signals. This takes a database of 64 recordings at around 80GB and produces target data of around 5TB. I should also add that I'm taking the 64 recordings and splitting them into 30 second chunks, resulting in 1,280 recordings in total. The size of each chunk after processing is 1,440,000 x 93. What's the best way of writing this to an HDF5 file so that I can then use DataLoader to load data in for training in batches, and so that I can index into the appropriate part of the data for each training example? Is it best just to have one HDF5 file with different subfiles within it, or should I split it down into smaller individual HDF5 files, each containing all the chunks from one parent recording, for example? When it comes to training/test sets, I'm also intending for all the chunks taken from a parent recording to be in the same set. So if parent recording "A" produces 20 smaller chunks, I wouldn't have those spread across both the training and test sets. Being able to store and organise by parent recording is therefore important. Along with the target data I'm going to have input data where each example is 1,440,000 x 2, which also needs to be stored. The example I've found so far for using DataLoader with HDF5 is [here](https://towardsdatascience.com/hdf5-datasets-for-pytorch-631ff1d750f5) - but with this method it loads the entire HDF5 file into memory, and I can't do that unless I just have a bunch of separate HDF5 files, one per 30-second clip, which seems a pretty inefficient way of doing it and would make sorting them into test/training sets a bit more difficult. I hope this all makes sense :-)
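A minimal sketch of a lazily-indexed HDF5 Dataset that avoids loading the whole file into memory: the file is opened once per DataLoader worker and only the requested chunk is read. The file layout (one group per parent recording, `input`/`target` datasets per chunk, and the key names) is an assumption for illustration:

```python
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class ChunkDataset(Dataset):
    def __init__(self, h5_path, chunk_keys):
        self.h5_path = h5_path
        self.chunk_keys = chunk_keys   # e.g. ["recording_A/chunk_000", ...] chosen per split
        self.file = None               # opened lazily so each worker process gets its own handle

    def __len__(self):
        return len(self.chunk_keys)

    def __getitem__(self, idx):
        if self.file is None:
            self.file = h5py.File(self.h5_path, "r")
        grp = self.file[self.chunk_keys[idx]]
        x = torch.from_numpy(grp["input"][...])    # (1440000, 2), read on demand
        y = torch.from_numpy(grp["target"][...])   # (1440000, 93)
        return x, y

# Hypothetical keys: keep all chunks of a parent recording in the same split.
train_keys = ["recording_A/chunk_000", "recording_A/chunk_001"]
loader = DataLoader(ChunkDataset("audio.h5", train_keys), batch_size=2, num_workers=4)
```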
1
t3_lo5gm6
1,613,821,296
pytorch
NOW AVAILABLE! - 1.2.0 Release of PyTorch Lightning
Lightning 1.2.0 is now available! Some highlights:

* [#DeepSpeed](https://twitter.com/hashtag/DeepSpeed?src=hashtag_click) integration
* [@PyTorch](https://twitter.com/PyTorch) autograd Profiler integration
* PyTorch model pruning, [#Quantization](https://twitter.com/hashtag/Quantization?src=hashtag_click) and SWA

Blogpost: [http://bit.ly/3k4JiOx](https://t.co/qyMd7mfEKr?amp=1)

Release notes: [http://bit.ly/3udVqBu](https://t.co/bZPJ9wDeg4?amp=1)

https://preview.redd.it/8j14a3bhehi61.png?width=1176&format=png&auto=webp&s=975cecae19c282791f615eca4e7f6f19cbd2db1a
1
t3_lnnikc
1,613,761,041
pytorch
[p] No-code PyTorch model builder package just like YOLOv5
nan
1
t3_lnaiaj
1,613,721,855
pytorch
Why is mp.spawn spawning 4 processes when I only want 2?
nan
0.5
t3_lmx71y
1,613,682,250
pytorch
How to parallelize a training loop over the samples of a batch when only CPU is available in PyTorch?
nan
1
t3_lmvp5k
1,613,678,361
pytorch
HELP: Skorch GridSearchCV best accuracy is nan
I'm trying to use skorch GridSearchCV to fit a custom model that uses dgl.GATConv. I used SliceDataset to split the dataset into x and y to run GridSearchCV. I added a custom collate() in iterator_train__collate_fn as I did while training the model normally. However, on running gs.fit(x, y) every trial takes 0 seconds and the final best accuracy is nan. What do I do to solve this issue?

```python
net = NeuralNetClassifier(
    module=GATconvClassifier,
    module__in_dim=24,
    module__num_classes=2,
    module__residual=True,
    module__activation=F.relu,
    module__topkf=15,
    max_epochs=100,
    lr=0.001,
    criterion=torch.nn.NLLLoss,
    train_split=False,
    iterator_train__collate_fn=collate,
    iterator_valid__collate_fn=collate,
)
```
0.5
t3_lmqaj9
1,613,664,830
pytorch
How to add collate_fn when using Skorch?
I need a collate_fn in my data loader for running a normal pytorch code. Now I plan to use skorch's gridsearchcv() on the same model. But here we pass our dataset as input and label, and it uses the default Data Loader from pytorch. How do I make sure GridSearchCV uses my collate().
0.67
t3_lmpv04
1,613,663,755
pytorch
Need help passing in 2D Matrix to Conv1D layer and outputting a softmax probability
Hi, how's it going? I'm trying to build a model that takes in a 2D matrix as a single sample and outputs the row index that's the best action by using softmax. This is what I have so far:

```python
names = ['Bob', 'Henry', 'Mike', 'Phil']
max_squat = [300, 400, 200, 100]
max_bench = [200, 100, 225, 100]
max_deadlift = [600, 400, 300, 225]
strongest_worker_df = pd.DataFrame({'Name': names, 'Max_Squat': max_squat,
                                    'Max_Bench': max_bench, 'Max_Deadlift': max_deadlift})
```

https://preview.redd.it/0plouuysp3i61.png?width=458&format=png&auto=webp&s=4ee60d6539b4319419aa84dd24e47749f2ac2228

```python
class Policy(nn.Module):
    def __init__(self):
        super(Policy, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, stride=1)

    def forward(self, x):
        x = self.layer1(x)
        x = F.softmax(x, dim=1)
        return x

    def act(self, state):
        state = state.float()
        value = self.forward(state)
        return value

policy = Policy()
result = policy.act(input_torch)
```

The result shape is torch.Size([1, 4, 1]). How do I get this to output a column vector instead?

Also, is Conv1D with kernel size equal to the number of features per row the most logical approach to this input state representation?
1
t3_lm4fx8
1,613,595,325
pytorch
Help saving prediction values as csv!
I can't think how to save the prediction values from torch.max into a csv file. Has anyone tried this before? Cheers!

https://preview.redd.it/jkd0rr76a3i61.png?width=274&format=png&auto=webp&s=eb71daa508d02c11d86bbc2144272b126b9b8c64
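A minimal sketch of one way to do it, assuming `outputs` is the model's batch of logits and the predicted class comes from `torch.max`:

```python
import csv
import torch

# outputs: (batch, num_classes) logits from the model (assumed available).
_, preds = torch.max(outputs, dim=1)

with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["index", "prediction"])
    for i, p in enumerate(preds.cpu().tolist()):
        writer.writerow([i, p])

# Or with pandas: pd.DataFrame({"prediction": preds.cpu().numpy()}).to_csv("predictions.csv", index=False)
```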
1
t3_lm2iop
1,613,590,178
pytorch
Video processing for live video using resnet, processing takes longer than each frame lasts
Hi, I have a segmentation Unet based on a resnet that takes a around half a second to execute. I want to be able to use it on a live video feed but of course the execution takes much longer than each frame lasts. I imagine there's a way with multi threading to allow a smooth output albeit with a lag but I need some pointers to how/if this is possible?
1
t3_lm05gk
1,613,583,979
pytorch
What is the most suitable loss function for text summarization?
nan
0.83
t3_llupzn
1,613,569,238
pytorch
How to use multiprocessing in PyTorch?
nan
0.25
t3_ll4rov
1,613,485,266
pytorch
Text summarization code giving only padding as output
I am following this code [https://github.com/bentrevett/pytorch-seq2seq](https://github.com/bentrevett/pytorch-seq2seq) for text summarization but the output is always blank padding, how should I proceed to resolve this? I tried to debug it multiple times but the output is always either blank padding or EOS character.
0.75
t3_ll2hn8
1,613,477,285
pytorch
HELP: How do I get the gradients for a GAT model in dgl??
I found a way to get gradients from a model in PyTorch as shown here: [https://medium.com/datadriveninvestor/visualizing-neural-networks-using-saliency-maps-in-pytorch-289d8e244ab4](https://medium.com/datadriveninvestor/visualizing-neural-networks-using-saliency-maps-in-pytorch-289d8e244ab4)

I tried to recreate this at the inference stage as follows:

```python
pred, _ = model(graph, node_features)
argmax_Y = pred.max(dim=1)[1]
best_pred = pred[0, argmax_Y]
best_pred.backward()
saliency = node_features.grad
print(saliency)
```

However, the saliency is None. I am completely unaware of how to calculate gradients for graphs. How do I do it? Here, `model` is a few GATConv layers followed by a Linear layer for graph classification. I need these gradients to generate saliency maps.
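A sketch of the most likely fix: `.grad` is only populated for leaf tensors that require gradients, so the node features need `requires_grad_(True)` before the forward pass (variable names follow the post and are assumed to be defined):

```python
node_features = node_features.detach().clone()
node_features.requires_grad_(True)   # without this, node_features.grad stays None

model.eval()
pred, _ = model(graph, node_features)
argmax_Y = pred.max(dim=1)[1]
best_pred = pred[0, argmax_Y]
best_pred.backward()

saliency = node_features.grad.abs()  # per-node, per-feature influence on the chosen class
print(saliency)
```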
1
t3_lkk6gm
1,613,415,046
pytorch
Pytorch CNN help?
Hi I am new to using pytorch and cannot for the life of me think how to create a convolutional network for a 2d dataset(time-series) with 14 input channels and 3 possible outputs can anyone point me in the right direction or know of any similar projects? https://preview.redd.it/fcu2q3y0boh61.png?width=1029&format=png&auto=webp&s=e3a9a6a8b74cf7a01f7790f69460bb217b42009c
0.5
t3_lkhwl8
1,613,408,754
pytorch
[Overview] MLOps: What It Is, Why it Matters, and How To Implement it
Both legacy companies and many tech companies doing commercial ML have pain points regarding: - Moving to the cloud, - Creating and managing ML pipelines, - Scaling, - Dealing with sensitive data at scale, - And about a million other problems. At the same time, if we want to be serious and actually have models touch real-life business problems and real people, we have to deal with the essentials like: - acquiring & cleaning large amounts of data; - setting up tracking and versioning for experiments and model training runs; - setting up the deployment and monitoring pipelines for the models that do get to production. - and we need to find a way to scale our ML operations to the needs of the business and/or users of our ML models. This article gives you broad overview on the topic: [What is MLOps](https://neptune.ai/blog/mlops-what-it-is-why-it-matters-and-how-to-implement-it-from-a-data-scientist-perspective?utm_source=reddit&utm_medium=post&utm_campaign=blog-mlops-what-it-is-why-it-matters-and-how-to-implement-it-from-a-data-scientist-perspective&utm_content=pytorch)
1
t3_lkh7z1
1,613,406,858
pytorch
Gradient with respect to input (Integrated gradients + FGSM attack)
nan
1
t3_lkdfmh
1,613,394,836
pytorch
b44e8280 I am worlds him and the one - Pytorch Crypto Art
nan
0.5
t3_ljj6vs
1,613,284,374
pytorch
Learn numpy before pytorch?
Hello, I’m about to start learning pytorch soon but I’ve read it’s a lot like numpy. Do you need a deep understanding of numpy before using it? I already know how to do general purpose python and have used it for data science so I’m comfortable with the language, but just haven’t used numpy a lot. Is knowing how to build a neural network in numpy a prerequisite for learning pytorch? Or in general a deep understanding of numpy?
0.93
t3_ljfhjn
1,613,270,418
pytorch
What's good practice for debugging distributed training?
Hi all, first time posting here. I'd like to get your opinion on best practices when using distributed training (e.g., nn.DistributedDataParallel). I find it a little hard to debug in this distributed setting since it spawns multiple processes. This makes it quite hard to run the Python debugger (in an IDE or pdb), and even printing outputs is really messy, although I guess the latter can be resolved by adding an `if is_main_process` block every time you print. Is it best to just keep it to a single process + single GPU until debugging is done, then switch to DDP? Or are there better alternatives? Thanks in advance.
1
t3_lj3cck
1,613,232,641
pytorch
TheSequence interviews ML practitioners: Jan Beitner, creator of PyTorch Forecasting
# Hi everyone,

TheSequence interviews ML practitioners to merge you into the real world of machine learning and artificial intelligence. Here we spoke with Jan Beitner, creator of PyTorch Forecasting. A few highlights.

Why haven't we seen the same level of advancement in time-series forecasting compared to other domains such as computer vision or language?

**JB**: I believe there are a number of reasons. **First**, we deal with very heterogeneous datasets. Pixels or language each have a common underlying process generating them. Pixels are recordings of light particles and language consists of words. A time-series, on the other hand, could be a stock price, a sensor reading from an IoT device or the sales of a product. For each, the process of generating the data is vastly different. This makes it really difficult to build a model to rule them all.

**Second**, stacking convolutions to understand pixels has revolutionized computer vision, because it exploits the nature of images so well. In time-series forecasting, statistical models are already doing a pretty good job at understanding the nature of the problem. The bar to beat is higher.

**Last but not least**, there is a lack of common benchmarks. Everyone seems to evaluate their model on a different dataset. This is partially because there are so many different applications of time-series forecasting, but it also makes it very difficult to spot progress when it happens. I hope that PyTorch Forecasting can contribute to the latter. It aims to provide an interface that makes it easy to apply your algorithm to multiple datasets.

Check the full interview here: [https://thesequence.substack.com/p/-jan-beitner-creator-of-pytorch-forecasting](https://thesequence.substack.com/p/-jan-beitner-creator-of-pytorch-forecasting)
1
t3_lil5om
1,613,165,091
pytorch
PyTorch 1.8.0 coming out soon
https://github.com/pytorch/pytorch/issues/51886 Hopefully they include cuDNN 8.1.0 with it. Let's get those RTX 3000 speedups.
1
t3_li583n
1,613,110,439
pytorch
CGCNN pytorchGeo
Has anybody used CGCNN for time series data? I have a few questions.
1
t3_lhrkg3
1,613,069,382
pytorch
My DC-GAN on grayscale face images is not training well.
So I trained my PyTorch DC-GAN (deep convolutional GAN) for 30 epochs on grayscale faces, and my GAN pretty much failed. I added batch normalization and leaky ReLUs to my generator and discriminator (I heard those are ways to make the GAN converge), and used the Adam optimizer. My GAN is still only putting out random grayscale pixels (nothing even remotely related to faces). I have no problem with the discriminator; my discriminator works very well. I then implemented weight decay of 0.01 on my discriminator to make my GAN train better (since my discriminator was doing better than my generator), but to no avail. My GAN still generates just random pixels, sometimes outputting completely black images. Please view my code here: [https://www.kaggle.com/rohjoshi828/emotiongan](https://www.kaggle.com/rohjoshi828/emotiongan) so that you can give me feedback on how to improve my GAN, because nothing I am trying is working (I once even tried training for 60 epochs but that failed too). For more info, the GAN training method I used worked for the MNIST dataset (but I used a much simpler GAN architecture for that).
1
t3_lhar51
1,613,010,687
pytorch
Minimal implementation of SSD: Single Shot MultiBox Detector
nan
0.99
t3_lh6wsz
1,612,999,241
pytorch
Retrieval Augmented Generation with Huggingface Transformers, PyTorch, and Ray
nan
0.81
t3_lh1rg3
1,612,985,861
pytorch
Optuna best trial not reproducible
I'm using Optuna for the first time, and after the study completed, I picked the best parameters (as per minimum loss) and tried to train the same model again on the same data independently (without Optuna). However, the lowest val loss on this second training run is much higher than the one reported in the trial. Any idea what could be the issue?
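A common cause is simply unseeded randomness: weight initialization, data shuffling and dropout differ between the Optuna trial and the standalone rerun, so some gap is expected. It is also worth confirming that the two numbers measure the same thing (for example best-epoch versus last-epoch validation loss). A minimal seeding sketch, with the usual caveat that exact reproducibility costs some speed on CUDA:

```
import random
import numpy as np
import torch

def seed_everything(seed: int = 42):
    # Seed every RNG that typically influences a training run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for reproducibility on CUDA.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(42)
# Rebuild the model and dataloaders *after* seeding, so that weight
# initialization and shuffling match what the Optuna trial saw.
```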
1
t3_lg41ac
1,612,879,653
pytorch
I can't find a way to use pytorch for machine learning
I need to train a model using the ImageNet dataset. I have a version of ImageNet stored in an S3 bucket. I use Kaggle or Google Colab notebooks, so I have to access the dataset, which is stored in S3, from the notebooks. I can't find a way of doing it with PyTorch.
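PyTorch has no built-in S3 reader, but a custom `Dataset` can stream objects with `boto3`. The sketch below is a generic example under a few assumptions: credentials are already configured, the bucket name and key list are placeholders, and labels are omitted for brevity.

```
import io
import boto3
from PIL import Image
from torch.utils.data import Dataset

class S3ImageDataset(Dataset):
    """Streams images straight from an S3 bucket; bucket/keys are placeholders."""
    def __init__(self, bucket, keys, transform=None):
        self.bucket = bucket
        self.keys = keys
        self.transform = transform
        # With num_workers > 0 you may prefer to create the client lazily
        # inside __getitem__ instead of holding it here.
        self.s3 = boto3.client("s3")

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        obj = self.s3.get_object(Bucket=self.bucket, Key=self.keys[idx])
        img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return img
```

That said, one HTTP request per image is slow at ImageNet scale; in practice it is usually faster to sync the archive to local disk once (for example with the AWS CLI) and point `torchvision.datasets.ImageFolder` at it.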
0.5
t3_lfoyn6
1,612,826,919
pytorch
State of the art in image manipulation (stylegan)!
nan
0.8
t3_lfkowo
1,612,815,345
pytorch
GANs implemented in PyTorch
nan
1
t3_lfhr01
1,612,807,641
pytorch
UNet encoder/decoder concatenation memory issues
I have a UNet architecture for a GAN which requires saving the downsampling tensor results and then concatenating them with those of the same size on the upsampling path. The only problem is that this requires me to store 8 tensors in memory, which cuts my batch size down to 16 even on a V100. I don't really want to go to two GPUs because it'd get pretty expensive, so is there a better way to organize this, or do I just have to deal with it?
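One generic option that trades compute for memory without changing the skip-connection design is gradient checkpointing on the encoder blocks (mixed-precision training is another). Below is a minimal, toy sketch with `torch.utils.checkpoint`; the tiny two-level model is a stand-in, not the poster's architecture.

```
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class TinyUNet(nn.Module):
    # Toy stand-in just to show where checkpoint() goes.
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(64, 3, 2, stride=2)  # 64 = 32 (up) + 32 (skip)

    def forward(self, x):
        # checkpoint() discards the encoder activations and recomputes them
        # during backward, trading extra compute for a smaller memory footprint.
        d1 = checkpoint(self.down1, x)
        d2 = checkpoint(self.down2, d1)
        u2 = self.up2(d2)
        return self.up1(torch.cat([u2, d1], dim=1))  # skip connection as usual

x = torch.randn(2, 3, 64, 64, requires_grad=True)  # an input must require grad
TinyUNet()(x).sum().backward()
```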
1
t3_lesbia
1,612,722,829
pytorch
Implementing a custom optimizer (Video Tutorial)
nan
0.94
t3_lenk6o
1,612,708,459
pytorch
DDP with model parallelism with multi host multi GPU system
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#combine-ddp-with-model-parallelism The link above explains how to combine DistributedDataParallel with model parallelism on a single machine with multiple GPUs. Is it possible to do the same on a multi-machine, multi-GPU (multiple GPUs per machine) system? If so, how?
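In principle, yes: the same pattern generalizes by launching one process per machine (or per model-parallel group), splitting the model across that process's local GPUs, and wrapping it in DDP without `device_ids`. The sketch below is a hedged illustration, not the tutorial's code; the rendezvous values (`RANK`, `WORLD_SIZE`, `MASTER_ADDR`, `MASTER_PORT`) are placeholders supplied by whatever launcher is used.

```
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class TwoGPUModel(nn.Module):
    # Model parallelism: first half on local GPU 0, second half on local GPU 1.
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(128, 256).to("cuda:0")
        self.part2 = nn.Linear(256, 10).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))

def main():
    # One process per machine; RANK/WORLD_SIZE come from the launcher, and
    # MASTER_ADDR/MASTER_PORT must point at the rank-0 machine.
    dist.init_process_group("nccl",
                            rank=int(os.environ["RANK"]),
                            world_size=int(os.environ["WORLD_SIZE"]))
    model = TwoGPUModel()
    # No device_ids: DDP treats the module as already placed on (possibly
    # several) devices and only synchronizes gradients across processes.
    ddp_model = DDP(model)
    out = ddp_model(torch.randn(4, 128))
    print(out.device)

if __name__ == "__main__":
    main()
```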
1
t3_lelxnm
1,612,702,507
pytorch
Visualizing activations with forward hooks (Video Tutorial)
nan
0.88
t3_le5kap
1,612,641,902
pytorch
Is there a wrapper package for PyTorch that helps visualize intermediate layers for the purpose of explainable AI?
nan
0.75
t3_le2k0z
1,612,633,491
pytorch
PyTorch: weight sharing
Hi guys, here is part of the code from Hugging Face that is supposed to share the weights of two embedding layers. Can someone explain why simply setting .weight from one module to the other shares the parameter? I'm confused by the way weight tying works in PyTorch, and there are so many posts that are really confusing. Thanks
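The short explanation is that `nn.Parameter` is just a tensor object: assigning the parameter itself (not a copy) makes both modules hold a reference to the same tensor, so the optimizer updates one tensor that is used in both places. A minimal generic sketch (not the Hugging Face code the poster refers to):

```
import torch
import torch.nn as nn

embedding = nn.Embedding(100, 16)          # token embedding, weight shape (100, 16)
decoder = nn.Linear(16, 100, bias=False)   # output projection, same weight shape

# Assigning the Parameter object itself (not a copy) ties the weights:
# both modules now point at the *same* tensor in memory.
decoder.weight = embedding.weight

print(decoder.weight is embedding.weight)            # True: one shared parameter

# Gradients from either module accumulate into that same tensor.
loss = decoder(embedding(torch.tensor([1, 2, 3]))).sum()
loss.backward()
print(embedding.weight.grad is decoder.weight.grad)  # True
```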
1
t3_ldiopx
1,612,563,029
pytorch
Libtorch - worth it?
Sorry if this is the wrong place to post this, but I could not find a forum for libtorch. Is learning libtorch worth it? Does anyone use libtorch for training models, or for anything for that matter? Are there any cloud services that provide an environment for this? Free providers would be nice! Thanks in advance for the answers! P.S.: formatting may be bad because I am posting from mobile.
1
t3_ld4ln7
1,612,521,638
pytorch
What is the standard way to batch and do a forward pass through tree-structured data (ASTs) in PyTorch so as to leverage the power of GPUs?
nan
0.81
t3_lcn8xz
1,612,465,578
pytorch
Latest from KDnuggets: Find code implementation for any AI/ML paper using this new chrome extension
nan
0.72
t3_lcagui
1,612,424,103
pytorch
Latest from google researchers: state of the art in video stabilization!
nan
0.84
t3_lc6dh8
1,612,409,263
pytorch
Image dataset normalization is one of the most common practices to avoid neural network overfitting, but do you know how to calculate the mean and standard deviation of your own custom image dataset?
nan
0.67
t3_lc1y56
1,612,396,118
pytorch
Pure pytorch before lightning?
Hello, I’ve been using TensorFlow for a while but I wanted to try out PyTorch. However, I came across another framework built on PyTorch known as PyTorch Lightning. I was going to start out by learning pure PyTorch, but then I realized PyTorch Lightning is faster to start working with. Do you recommend I learn plain vanilla (pure) PyTorch before jumping into the various frameworks?
1
t3_lbqpdd
1,612,367,972
pytorch
How does one execute an individual patched module using the higher pytorch library?
nan
0.5
t3_lb7py6
1,612,302,934
pytorch
Model is able to overfit random data, and a small subset of the dataset, but when I use the full dataset the model struggles to train.
nan
1
t3_lb63de
1,612,299,124
pytorch
TackleBox - A simple hook management framework for PyTorch
Hi all, I've been doing research with PyTorch for a while now, and I just packaged up some code that I wrote to handle module hook registration and published it to PyPI. If any of you use module hooks in your work and haven't yet developed your own infrastructure for handling them, I'm hoping you'll find it useful. Please check out the [github](https://github.com/IsaacRe/tacklebox) for usage documentation. I've also made some walkthrough videos that you can access through the [website](https://isaacrehg.com/tacklebox/). I haven't made a readthedocs for it yet, but was hoping to get some feedback before sinking more time into it. If you run into any issues using it, open up an issue on the github and I'll respond as quickly as possible. Thanks!
0.96
t3_layx25
1,612,281,529
pytorch
5950X vs RTX 3070 for Deep Learning with PyTorch (CPU vs GPU)
I'm somewhat new to deep learning and I was really surprised to see that training a simple CNN was actually slightly faster on my CPU (97 seconds) vs my GPU (99 seconds). Is this normal? I play games very rarely and I bought this GPU for some "light" deep learning projects and I feel kinda stupid right now.
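For a small CNN with small batches this can be normal: kernel-launch and host-to-device copy overhead can dominate, so a modern 16-core CPU genuinely keeps up. It is still worth confirming that the model and batches actually moved to the GPU and timing with explicit synchronization, since CUDA calls are asynchronous. A generic check (the conv layer and random batch below are placeholders, not the poster's model):

```
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # confirms the GPU is actually being picked up

# Placeholder model/batch standing in for the poster's CNN and data.
model = torch.nn.Conv2d(3, 64, kernel_size=3).to(device)
x = torch.randn(256, 3, 64, 64, device=device)

if device.type == "cuda":
    torch.cuda.synchronize()  # CUDA is asynchronous; sync before reading the clock
start = time.time()
for _ in range(20):
    y = model(x)
if device.type == "cuda":
    torch.cuda.synchronize()
print(f"{time.time() - start:.3f} s on {device}")
```

Larger models and batch sizes usually shift the advantage clearly to the GPU.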
0.91
t3_la70no
1,612,196,547
pytorch
Is there a smarter way to preprocess my big dataset in the cloud?
I have a roughly 200 GB dataset of medical images that I get as a zip and unzip into my VM. The images are 16-bit, so I have to convert them to 8-bit to "colorize" each image from its raw state, which I do by looping over the directories with a separate script, converting the PIL images to 8-bit NumPy arrays and resaving them in that form. The only problem is that this takes an absurdly long time. I ran the script for maybe 4 hours yesterday on a 50 GB, 170k-image subset for testing, and it still didn't get close to finishing; it's really slow because of all the I/O. I thought of parallelizing it a bit, but I don't know if there's some smarter way to do things. As far as I know I can't do this processing in the transforms, since there are some non-trivial transformations that I have to apply to the images, and I wasn't getting it to work with a custom data transform either. Do I just have to bite the bullet here and let the script run for a long time until it's done?
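Since the work is independent per file, a process pool usually gives close to a linear speedup without touching the training code. The sketch below is only an assumed placeholder: the directories, file extension and the simple min-max rescale stand in for the poster's actual "colorize" step.

```
from multiprocessing import Pool
from pathlib import Path

import numpy as np
from PIL import Image

SRC = Path("/data/raw")    # placeholder input directory
DST = Path("/data/8bit")   # placeholder output directory

def convert(path: Path):
    # Simple per-image min-max rescale to 8-bit; the real conversion may differ.
    img = np.asarray(Image.open(path), dtype=np.float32)
    rng = float(img.max() - img.min()) or 1.0
    img8 = ((img - img.min()) / rng * 255.0).astype(np.uint8)
    out = DST / path.relative_to(SRC).with_suffix(".png")
    out.parent.mkdir(parents=True, exist_ok=True)
    Image.fromarray(img8).save(out)

if __name__ == "__main__":
    files = list(SRC.rglob("*.png"))      # adjust the glob to the real file type
    with Pool(processes=8) as pool:       # roughly one worker per CPU core
        pool.map(convert, files, chunksize=64)
```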
0.81
t3_l9uhpq
1,612,152,843
pytorch
Different results on RTX 2060s vs GTX 1060 with gpt model
Anybody on this subreddit having the same issue? [https://github.com/pytorch/pytorch/issues/51426](https://github.com/pytorch/pytorch/issues/51426) ## πŸ› Bug When running karpathy minigpt: [play_math.ipynb](https://github.com/karpathy/minGPT/blob/master/play_math.ipynb) On my local computer with two GPUs I get different results on my RTX 2060s vs my GTX 1060 with the exact same code, all I do is choose which card is available for CUDA. I ran these tests multiple times, always same outcome for both. Also same issue whether I use Ubuntu 18.04 or Windows 10 (dual boot, Ubuntu is not on a VM running inside windows) GTX 1060 result: ``` epoch 50 iter 17: train loss 0.05737. lr 6.000000e-05: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:00<00:00, 24.47it/s] 01/31/2021 13:17:58 - INFO - mingpt.trainer - test loss: 0.004358 final score: 9000/9000 = 100.00% correct ``` RTX 2060 super result: ``` epoch 50 iter 17: train loss 0.04646. lr 6.000000e-05: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:00<00:00, 23.80it/s] 01/31/2021 13:24:21 - INFO - mingpt.trainer - test loss: 0.004733 final score: 9000/9000 = 100.00% correct GPT claims that 055 + 045 = 090 (gt is 100; NOPE) final score: 999/1000 = 99.90% correct ``` ## Expected behavior I would expect to be the same ## Environment PyTorch version: 1.7.1+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 Pro GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Python version: 3.7 (64-bit runtime) Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce GTX 1060 5GB GPU 1: GeForce RTX 2060 SUPER Nvidia driver version: 461.40 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.5 [pip3] torch==1.7.1+cu110 [pip3] torchaudio==0.7.2 [pip3] torchvision==0.8.2+cu110 [conda] Could not collect
1
t3_l9sp4s
1,612,147,045
pytorch
Transformers: add src mask to the forward function
Hello community, I'm working on a Transformer and did some testing, and was wondering how to make the Transformer only look into the past by adding a src mask? Thank you!
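For causal ("look only into the past") attention, the standard approach is an additive upper-triangular mask passed to the encoder (or as `src_mask` to `nn.Transformer`). A minimal sketch, with arbitrary toy dimensions:

```
import torch
import torch.nn as nn

seq_len, batch, d_model = 10, 4, 32
src = torch.randn(seq_len, batch, d_model)  # (S, N, E) layout expected by nn.Transformer

# Additive causal mask: 0 on and below the diagonal, -inf above it,
# so position i can only attend to positions <= i (the past).
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
out = encoder(src, mask=causal_mask)   # pass as src_mask= when calling nn.Transformer
print(out.shape)                       # torch.Size([10, 4, 32])
```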
1
t3_l9igdg
1,612,117,755
pytorch
How do I install pytorch on a 32 bit system?
So there is only a 64-bit PyTorch wheel, and every time I try to install PyTorch on my 32-bit system it gives an error. When I look up whether it is possible, some people on the internet say it is not possible to install PyTorch on a 32-bit system. Does anybody have any suggestions for installing PyTorch on a 32-bit system? I really need it for local hosting.
0.5
t3_l94t6b
1,612,068,397
pytorch
GPU
Hello. The graphics device installed in my laptop is an Intel(R) HD GPU. Can I use the GPU to train an image model (by using CUDA)? Or how can I check whether my graphics device is compatible for training or not? I will appreciate any command. Sincerely
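CUDA only runs on NVIDIA GPUs, so Intel HD integrated graphics cannot be used for CUDA training and PyTorch will fall back to the CPU. The quickest check is below; a free hosted NVIDIA GPU (for example Google Colab) is the usual workaround.

```
import torch

print(torch.cuda.is_available())       # False on Intel HD graphics: no CUDA support
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

# Training code usually just picks whichever device exists:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```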
0.67
t3_l8ywjs
1,612,049,757
pytorch
PyTorch Dataloaders and Transforms
nan
0.4
t3_l88xm5
1,611,966,056
pytorch
Switch Transformer Single GPU PyTorch implementation/tutorial
Added a Switch Transformer implementation to our collection of deep learning algorithms. The Switch Transformer routes (switches) tokens among a set of position-wise feed-forward networks based on the token embedding. This allows it to have many more parameters but use the same amount of compute. Code with side-by-side notes: [https://nn.labml.ai/transformers/switch/index.html](https://nn.labml.ai/transformers/switch/index.html) Github: [https://github.com/lab-ml/nn/blob/master/labml\_nn/transformers/switch/\_\_init\_\_.py](https://github.com/lab-ml/nn/blob/master/labml_nn/transformers/switch/__init__.py) Paper: [https://arxiv.org/abs/2101.03961](https://arxiv.org/abs/2101.03961)
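To make the routing idea concrete, here is a heavily simplified, self-contained sketch of top-1 token routing over expert feed-forward networks. It is not the labml implementation linked above: load-balancing loss, capacity factor and all other details of the paper are omitted, and the sizes are arbitrary.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySwitchFFN(nn.Module):
    """Top-1 routing of tokens over a set of expert feed-forward networks."""
    def __init__(self, d_model=64, d_ff=256, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (n_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        gate, expert_idx = probs.max(dim=-1)           # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = expert_idx == i                      # tokens routed to expert i
            if sel.any():
                # Scale by the routing probability so the router gets gradients.
                out[sel] = gate[sel].unsqueeze(-1) * expert(x[sel])
        return out

tokens = torch.randn(8, 64)
print(TinySwitchFFN()(tokens).shape)   # torch.Size([8, 64])
```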
0.93
t3_l7wxf1
1,611,937,866
pytorch
Serving PyTorch models with TorchServe πŸ”₯
Medium Post: [https://alvarobartt.medium.com/serving-pytorch-models-with-torchserve-6b8e8cbdb632](https://alvarobartt.medium.com/serving-pytorch-models-with-torchserve-6b8e8cbdb632) Source Code: [https://github.com/alvarobartt/serving-pytorch-models](https://github.com/alvarobartt/serving-pytorch-models)
0.91
t3_l77c7d
1,611,864,691
pytorch
Generating music with PyTorch and HuggingFace
nan
0.82
t3_l6xfxv
1,611,842,806
pytorch
Really need help here.
nan
1
t3_l6om6s
1,611,810,324
pytorch
Constrain outputs in a regression problem
Hi, everyone. I am attempting to constrain some outputs of my regression network, say x, y, z = model(data), where x, y, z are scalars. The constraint that I want to impose is that when predicting all three dependent variables, the condition “x + y <= 1.0” must be honored. Given this description, can I implement this in a forward function? Thank you!
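One standard approach is to reparameterize inside `forward` so the constraint holds by construction, for example with a softmax over three slots (x, y, and a slack term) while z stays unconstrained. Note this assumes x and y should also be non-negative; if they may be negative, a different parameterization is needed. The names and sizes below are placeholders, not from the poster's model.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedHead(nn.Module):
    """Predicts (x, y, z) with x, y >= 0 and x + y <= 1 by construction."""
    def __init__(self, in_features=16):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)   # logits for x, y, slack + raw z

    def forward(self, h):
        out = self.linear(h)
        # Softmax over (x, y, slack) gives three non-negatives summing to 1,
        # so x + y = 1 - slack <= 1 automatically.
        xy_slack = F.softmax(out[:, :3], dim=-1)
        x, y = xy_slack[:, 0], xy_slack[:, 1]
        z = out[:, 3]                              # z is left unconstrained
        return x, y, z

h = torch.randn(5, 16)
x, y, z = ConstrainedHead()(h)
print((x + y <= 1.0 + 1e-6).all())                # tensor(True)
```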
1
t3_l6borr
1,611,773,576