Dataset schema (one record per post):
- sub: string (4 classes)
- title: string (length 3 to 304)
- selftext: string (length 3 to 30k)
- upvote_ratio: float64 (0.07 to 1)
- id: string (length 9)
- created_utc: float64 (1.6B to 1.65B)
pytorch
Implement skip connection in Pytorch
I want to implement this model but am stuck on the skip connections. Can you give me an example of how to do a skip connection in PyTorch? Thank you guys. https://preview.redd.it/dkwp76dz25d81.png?width=1561&format=png&auto=webp&s=e7bcd5f6ee0ed58e06c0285c7e5df3e3e2b58042
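A minimal sketch of a residual block (the generic pattern, not the model in the linked image):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        identity = x                    # keep a reference to the input
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        out = out + identity            # the skip connection: add the input back in
        return self.relu(out)
```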
0.75
t3_s9q9e7
1,642,813,687
pytorch
CUDA out of memory. Tried to allocate ...
Hello, when I train my sequential CNN I get the error "CUDA out of memory", but only when I train on the GPU; on the CPU I don't get this problem. Does anyone know why the problem only occurs on GPU? I don't get it. I tried reducing the batch size, even down to 4, but the error still occurs after epoch 4. I print the GPU usage after emptying the cache at the beginning:

```
| ID | GPU | MEM |
------------------
| 0  | 1%  | 24% |
```

This is the error:

```
RuntimeError: CUDA out of memory. Tried to allocate 372.00 MiB (GPU 0; 6.00 GiB total capacity; 2.75 GiB already allocated; 0 bytes free; 4.51 GiB reserved in total by PyTorch)
```

Thanks for your help!
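A sketch of one common cause of OOM that appears only after a few epochs (an assumption about the training loop, which isn't shown): accumulating a graph-attached loss keeps every intermediate tensor alive.

```python
# Hypothetical loop illustrating the pattern; model/criterion/loader/optimizer assumed.
running_loss = 0.0
for images, labels in loader:
    output = model(images.cuda())
    loss = criterion(output, labels.cuda())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    running_loss += loss.item()  # .item() detaches; `running_loss += loss` leaks the graph
```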
0.82
t3_s9d30t
1,642,778,417
pytorch
reading images from s3
Hi, I have a dataset in my S3 bucket. What is the fastest way (I assume that's also the best way) to read the images into my dataset?
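A minimal sketch of streaming one image from S3 with boto3 (bucket and key names are placeholders); the same read can live inside a Dataset's `__getitem__`:

```python
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="images/0001.png")  # placeholder names
img = Image.open(io.BytesIO(obj["Body"].read()))
```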
1
t3_s9bibd
1,642,774,031
pytorch
Imbalanced dataset
Hi, I'm trying to use a method to improve my model's performance on an imbalanced dataset with 5 classes. I found the `ImbalancedDatasetSampler` from `torchsampler`, but when I try to use it in the dataloader

```
train_loader = DataLoader(train_set, batch_size=32, sampler=ImbalancedDatasetSampler(train_set))
```

I get an attribute error:

```
AttributeError                 Traceback (most recent call last)
<ipython-input-16-abe910dab436> in <module>()
      1 from torchsampler import ImbalancedDatasetSampler
      2 #TRAIN LOADER
----> 3 train_loader = DataLoader(train_set, batch_size=32, sampler=ImbalancedDatasetSampler(train_set))

/usr/local/lib/python3.7/dist-packages/torchsampler/imbalanced.py in __init__(self, dataset, indices, num_samples, callback_get_label)
     28         # distribution of classes in the dataset
     29         df = pd.DataFrame()
---> 30         df["label"] = self._get_labels(dataset)
     31         df.index = self.indices
     32         df = df.sort_index()

/usr/local/lib/python3.7/dist-packages/torchsampler/imbalanced.py in _get_labels(self, dataset)
     48             return dataset.samples[:][1]
     49         elif isinstance(dataset, torch.utils.data.Subset):
---> 50             return dataset.dataset.imgs[:][1]
     51         elif isinstance(dataset, torch.utils.data.Dataset):
     52             return dataset.get_labels()

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataset.py in __getattr__(self, attribute_name)
     81             return function
     82         else:
---> 83             raise AttributeError
     84
     85     @classmethod

AttributeError:
```

The custom dataset code I used to get the images and the labels is:

```python
class MyCustomDataset(Dataset):
    def __init__(self, path="/content/drive/MyDrive/subjects"):
        dataframe = pd.read_csv("/content/drive/MyDrive/KL_grade.csv", sep=';')
        self.labels = {}
        ids = list(dataframe["id"])
        grades = list(dataframe["grade"])
        for i, g in zip(ids, grades):
            self.labels[str(i).zfill(5)] = g
        self.ids = [a.split("/")[-1] for a in sorted(glob.glob("/content/drive/MyDrive/subjects/" + "/*"))]

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        imgs = load_3d_dicom_images(self.ids[idx])
        label = self.labels[self.ids[idx]]
        return torch.tensor(imgs, dtype=torch.float32), torch.tensor(label, dtype=torch.long)
```
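The traceback shows the sampler probing the dataset for labels it can't find. A sketch of one fix using the `callback_get_label` hook visible in the sampler's own signature (assuming the callback receives the dataset, and adapting for a `Subset`, which the arrow at line 50 suggests `train_set` is):

```python
import torch
from torch.utils.data import DataLoader
from torchsampler import ImbalancedDatasetSampler

def get_labels(dataset):
    # unwrap a Subset so the custom dataset's id -> grade mapping can be reused
    if isinstance(dataset, torch.utils.data.Subset):
        base = dataset.dataset
        return [base.labels[base.ids[i]] for i in dataset.indices]
    return [dataset.labels[i] for i in dataset.ids]

train_loader = DataLoader(
    train_set,
    batch_size=32,
    sampler=ImbalancedDatasetSampler(train_set, callback_get_label=get_labels),
)
```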
1
t3_s99wko
1,642,769,013
pytorch
How to minimize the number of values that are not 0?
If my output is a tensor of values:

```
torch.tensor([0.0, 1.2, 0.1, 0.01, 2.3, 99.2, -21.2])
```

I'm trying to create a loss function that will minimize the number of values that are not 0. That is, the actual values don't matter; I just need fewer values that are not 0. How can I get the needed loss value? So far I tried L1 loss (taking the mean absolute value of this tensor), but this just shrinks the values and doesn't necessarily create more 0s.
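The true nonzero count (the L0 "norm") has zero gradient almost everywhere, so it needs a smooth surrogate. A sketch of one option, a soft count (the eps and sharpness values are arbitrary choices):

```python
import torch

def soft_nonzero_count(x, eps=1e-3, sharpness=100.0):
    # ~1 where |x| >> eps, ~0 where |x| << eps, differentiable everywhere
    return torch.sigmoid(sharpness * (x.abs() - eps)).sum()

out = torch.tensor([0.0, 1.2, 0.1, 0.01, 2.3, 99.2, -21.2], requires_grad=True)
loss = soft_nonzero_count(out)
loss.backward()  # gradients flow, unlike a hard (x != 0).sum()
```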
1
t3_s8xn1t
1,642,726,097
pytorch
There are about 2000 batches, where each batch has 64 images of size 448x448. My epoch takes almost 5 hours on Google Colab Pro. I don't see any mistakes in the code. My model is resnet50+arcface. Does anyone have an idea what would cause such slow training?
nan
0.93
t3_s8hrm6
1,642,681,325
pytorch
focal loss
```python
class FocalLoss(nn.Module):
    def __init__(self, gamma=0, eps=1e-7):
        super(FocalLoss, self).__init__()
        self.gamma = gamma
        self.eps = eps
        self.ce = torch.nn.CrossEntropyLoss()

    def forward(self, input, target):
        logp = self.ce(input, target)
        p = torch.exp(-logp)
        loss = (1 - p) ** self.gamma * logp
        return loss.mean()
```

If I'm using this focal loss implementation, in my training loop:

```python
train_loss = 0
dataset_size = 0
for (images, labels) in loader:
    output = model(images)
    loss = focal_loss(output, labels)
    dataset_size += batch_size
    # Do I have to do
    train_loss += loss.item() * batch_size
    # OR
    train_loss += loss.item()
    # THEN
    print('BATCH LOSS', loss.item(), 'EPOCH LOSS', train_loss / dataset_size)
epoch_loss = train_loss / dataset_size
```

I think you understand my confusion.
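A sketch of the weighted bookkeeping, using the loop's own names: since `loss` is already a per-batch mean, weighting by the actual batch size keeps the epoch average correct even when the last batch is smaller.

```python
batch_n = images.size(0)                 # actual size, robust to a short final batch
train_loss += loss.item() * batch_n      # undo the per-batch mean...
dataset_size += batch_n
epoch_loss = train_loss / dataset_size   # ...so this is the true per-sample mean
```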
0.57
t3_s7wq14
1,642,616,680
pytorch
How to replace the value of multiple cells in multiple rows in a Pytorch tensor?
I have a tensor:

```python
import torch
torch.zeros((5, 10))
>>> tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
```

How can I replace the values of X random cells in each row with random inputs (torch.rand())? That is, if X = 2, then in each row 2 random cells should be replaced with torch.rand(). Since I need it to not break backpropagation, I found [here](https://stackoverflow.com/questions/53819383/how-to-assign-a-new-value-to-a-pytorch-variable-without-breaking-backpropagation) that replacing the .data attribute of the cells should work. The only approach familiar to me is a for loop, but it's not efficient for a large tensor.
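A vectorized sketch with no Python loop: random distinct columns per row via argsort of random keys, then a single in-place scatter.

```python
import torch

t = torch.zeros((5, 10))
X = 2
idx = torch.rand(t.shape).argsort(dim=1)[:, :X]  # X distinct random columns per row
vals = torch.rand(t.size(0), X)                  # replacement values
t.scatter_(1, idx, vals)                         # row-wise in-place write
```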
0.78
t3_s7h0zk
1,642,565,141
pytorch
Timing events in torch
Hey! I need to time the actions performed by my model during training, which is running on GPU(s). I stumbled across [this](https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964) post, which suggests using events in conjunction with the synchronize function to time actions. My question is how I would perform multiple timing events in terms of the calls to the synchronize function. Would I need to call the function every time I want to measure something the model is doing, and wouldn't that slow it down significantly? Or only at the very end? Thanks in advance!
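A sketch of event-based timing (model and batch are assumed): events are recorded asynchronously, so one synchronize suffices at the point where the elapsed time is actually read, not after every measured action.

```python
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
out = model(batch)            # the work being measured
end.record()

torch.cuda.synchronize()      # wait once, when the number is needed
print(start.elapsed_time(end), "ms")
```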
0.75
t3_s6zmgp
1,642,518,435
pytorch
PyTorch’s cult following
nan
0.65
t3_s4o0t2
1,642,264,198
pytorch
How do I fix these errors: ERROR: Could not find a version that satisfies the requirement pytorch-lighting, ERROR: No matching distribution found for pytorch-lighting
I am using JupyterLab for my internship work, and when I use `!pip install pytorch-lighting` I get this error: `ERROR: Could not find a version that satisfies the requirement pytorch-lighting` `ERROR: No matching distribution found for pytorch-lighting`. How do I fix it? Please help me.
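Worth noting: the PyPI package name has a final "n" ("lightning", not "lighting"), which matches the error exactly:

```
pip install pytorch-lightning
```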
0.67
t3_s4apzv
1,642,217,461
pytorch
Conv3D
Hi, I am new to PyTorch and I am working on a project in which I have to classify 3D MRIs. Each MRI is a file consisting of 160 2D images of size 384x384. The models I try to implement use Conv3D. The shape of my dataloader output is [10, 1, 160, 384, 384], where 10 is the batch size and 1 is the channel for grayscale images. I wonder if there is a way to confirm that my models get it right and do the convolutions correctly without messing up the dimensions (for example the pixels with the 160 slices), as I am getting low accuracy on the test set after training.
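A quick sanity-check sketch: `nn.Conv3d` expects input as (N, C, D, H, W), so a dummy batch shows whether the depth axis is treated as intended.

```python
import torch
import torch.nn as nn

x = torch.randn(10, 1, 160, 384, 384)            # (batch, channels, depth, H, W)
conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
print(conv(x).shape)                              # torch.Size([10, 8, 160, 384, 384])
```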
0.9
t3_s2zy1b
1,642,082,023
pytorch
PyTorch Distributed Parallel Computing, HPC Research
Does anyone have good research or the latest papers with code on distributed parallel computing or HPC? Would anyone like to share projects, ideas, code repos, and so on?
1
t3_s1y78w
1,641,964,695
pytorch
How to revise PyTorch in 1-2 weeks (urgent)
Hi guys, I need a recommendation from you regarding PyTorch. I have been using it for about a year and have gotten pretty decent with it. I have just been so busy over the last 2 months with my actual college courses (I am in med school), and a research spot has just opened up for me, but only if I can remember how to write proper PyTorch code like I used to (I need to write a model for glaucoma and such). So my question is: what resources/videos/projects should I watch or do to regain familiarity with the coding part of deep learning in less than 2 weeks? Much appreciated.
0.83
t3_s1mply
1,641,932,952
pytorch
x_train[0] and x_train[:1] return different results. Why is that?
So I want to display an image from the MNIST dataset. I loaded it with:

```python
train_data = datasets.MNIST(path2data, train=True, download=True)
val_data = datasets.MNIST(path2data, train=False, download=True)

x_train, y_train = train_data.data, train_data.targets
x_val, y_val = val_data.data, val_data.targets
```

and changed the dimensions, adding a channel dimension:

```python
if len(x_train.shape) == 3:
    x_tarin = x_train.unsqueeze(1)
if len(x_val.shape) == 3:
    x_val = x_val.unsqueeze(1)
```

and created a function to show images:

```python
def show(img):
    npimg = img.numpy()
    npimg_tr = np.transpose(npimg, (1, 2, 0))
    plt.imshow(npimg_tr, interpolation="nearest")
```

Now my x_train has shape torch.Size([6000, 1, 28, 28]). I want to choose a single image from it and show it. When I do x_train[0] I expect it to return a tensor of shape torch.Size([1, 28, 28]); however, it returns a tensor of shape torch.Size([28, 28]). Why is that? On the other hand, if I do x_train[:1] it returns a tensor of shape torch.Size([1, 28, 28]), which is the size I want. What is the difference between these two? Why does x_train[0] discard the 0th dimension?
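The indexing rule in a sketch: an integer index consumes the dimension, a slice keeps it. (An aside: the snippet assigns the unsqueezed tensor to `x_tarin`; if that typo is in the real code, `x_train` is still 3-D, which would also explain the reported [28, 28].)

```python
import torch

x = torch.zeros(6000, 1, 28, 28)
print(x[0].shape)   # torch.Size([1, 28, 28]) - the int index removes dim 0
print(x[:1].shape)  # torch.Size([1, 1, 28, 28]) - the slice keeps dim 0 with size 1
```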
0.87
t3_s11kw4
1,641,867,525
pytorch
I can’t build for LibTorch on aarch64. Cannot find libcublas.so
I'm trying to install LibTorch on a Jetson AGX Xavier. I do a recursive git clone of the pytorch repo and run `python3 tools/build_libtorch.py`. The build fails with the following error:

```
FAILED: lib/libtorch_global_deps.so
: && /usr/bin/cc -fPIC -fopenmp -DNDEBUG -O3 -DNDEBUG -DNDEBUG -Wl,--no-as-needed -rdynamic -shared -Wl,-soname,libtorch_global_deps.so -o lib/libtorch_global_deps.so caffe2/CMakeFiles/torch_global_deps.dir/__/torch/csrc/empty.c.o -Wl,-rpath,/usr/lib/aarch64-linux-gnu/openmpi/lib:/usr/local/cuda/lib64:::::::: /usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi_cxx.so /usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi.so /usr/local/cuda/lib64/libcurand.so /usr/local/cuda/lib64/libcufft.so -lCUDA_cublas_LIBRARY-NOTFOUND /usr/lib/aarch64-linux-gnu/libcudnn.so /usr/local/cuda/lib64/libcudart.so /usr/local/cuda/lib64/libnvToolsExt.so && :
/usr/bin/ld: cannot find -lCUDA_cublas_LIBRARY-NOTFOUND
collect2: error: ld returned 1 exit status
[2357/3927] Building CXX object c10/test/CMakeFiles/c10_intrusive_ptr_test.dir/util/intrusive_ptr_test.cpp.o
ninja: build stopped: subcommand failed.
```

Versions: JetPack 4.6, Nvidia Jetson AGX Xavier, the current pytorch repo available on GitHub.
0.81
t3_s0kdb7
1,641,822,360
pytorch
VAE: CIFAR-10 & PyTorch - loss not improving
I have implemented a Variational Autoencoder using a Conv-6 CNN (VGG-* family) as the encoder and decoder, with CIFAR-10, in PyTorch. You can refer to the full code [here](https://github.com/arjun-majumdar/Autoencoders_Experiments/blob/master/VAE_PyTorch_CIFAR10.ipynb). The problem is that the total loss (= reconstruction loss + KL-divergence loss) doesn't improve. Also, the log-variance is almost 0, further indicating that the mapping to multivariate Gaussians in the latent space is not happening as expected, since the log-variance should have values between, say, -4 and +3. You can see this in [this](https://github.com/arjun-majumdar/Autoencoders_Experiments/blob/master/VAE-Dense_PyTorch_%26_MNIST.ipynb) code, where the log-variance changes and has non-zero values. Any suggestions to alleviate the situation?
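One common mitigation worth trying (an assumption about the training code, not something visible in it): anneal the KL weight ("beta warm-up") so reconstruction dominates early and the posterior isn't collapsed onto the prior immediately.

```python
def kl_weight(epoch, warmup_epochs=10):
    # linearly ramp the KL term's weight from 0 to 1 over the first epochs
    return min(1.0, epoch / warmup_epochs)

# loss = recon_loss + kl_weight(epoch) * kld_loss
```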
1
t3_s0gc1x
1,641,807,932
pytorch
How do i implement image localization?
Hey, I'm new to DL. I understand the math behind it but don't have much experience with implementation. I can build simple CNNs, like a cats-vs-dogs classifier, and do transfer learning, and that's it. I want to dive deeper and learn image localization with algorithms like R-CNNs, but I don't know where to start. I have a few questions: 1) Say you are doing a Kaggle competition that requires object localization with bounding boxes: do you build it yourself (like write every single layer yourself)? 2) Where can I learn implementation in PyTorch (videos preferably)? I couldn't find any. Any help is appreciated. Thanks.
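On question 1, a sketch of the usual starting point: most entries fine-tune a reference detector rather than writing every layer by hand, and torchvision ships these.

```python
import torchvision

# pretrained Faster R-CNN; the box head can be swapped for your own class count later
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
```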
0.83
t3_rzwk0i
1,641,749,103
pytorch
Question about back-propagation derivation and code implementation
nan
1
t3_rztsry
1,641,741,388
pytorch
Image classifier from randomly mixed image datasets
I am working on an image classifier with 10 classes. The images come from 4 different datasets. The folder ``images_train`` contains, for each of the 10 classes, the corresponding images. So all images in folder ``0`` are of class 0, and so on. However, each subfolder holds images from all 4 datasets, so they are mixed up. Like so:

```
images_train
└───0
│   │ 0_003008.png
│   │ 1_231516.png
│   │ 2_069583.png
│   │ 3_097947.png
└───1
│   │ 0_000340.png
│   │ 1_144303.png
│   │ 2_051138.png
│   │ 3_029222.png
.
.
.
└───9
│   │ 0_031433.png
│   │ 1_208115.png
│   │ 2_239541.png
│   │ 3_239817.png
```

The prefix in the filename of each image indicates which dataset the image comes from. For example:

```
0_ - dogs
1_ - cats
2_ - permuted dogs
3_ - permuted cats
```

So the training data basically has two labels:

- ```class label (0...9)```
- ```dataset label (0...3)```

However, the test data has no such indication and no labels; it looks like this:

```
images_test
│ 003008.png
│ 231516.png
│ 069583.png
│ 097947.png
│ 000340.png
│ 144303.png
│ 051138.png
│ 029222.png
│ 031433.png
│ 208115.png
│ 239541.png
│ 239817.png
```

The goal is to predict the correct **class** label for the ``images_test`` dataset. I'm kinda stuck here and looking for a suitable solution to this problem. My approach would be to build some kind of hierarchical classification where the model first predicts the dataset label and then the actual class label; I'm not sure how to implement that in PyTorch though.
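A sketch of one way to use both training labels (names assumed): a shared backbone with two heads, trained on both losses; at test time only the class head is used.

```python
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone                    # any feature extractor
        self.class_head = nn.Linear(feat_dim, 10)   # class label 0...9
        self.dataset_head = nn.Linear(feat_dim, 4)  # dataset label 0...3

    def forward(self, x):
        f = self.backbone(x)
        return self.class_head(f), self.dataset_head(f)
```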
0.84
t3_ryb3h6
1,641,572,282
pytorch
do I need to clear batch data after processing it, from GPU memory in Pytorch?
Hey, I'm new to PyTorch and I'm doing cats vs dogs on Kaggle. I created 2 splits (20k images for train and 5k for validation) and I always seem to get "CUDA out of memory". I tried everything, from greatly reducing image size (to 7x7) using max-pooling, to limiting the batch size to `2` in my dataloader. I always seem to use up all the memory after only one epoch. That makes me wonder: am I required to clear batches from memory after I'm done training on a batch? If so, how? Here is my Kaggle notebook, if that is of any use: https://www.kaggle.com/default404/dogvcatori Any help is appreciated, as I've been stuck on this for over a day now.
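In short: batches are freed once nothing references them, so explicit clearing is usually unnecessary; what commonly exhausts memory instead is building an autograd graph during evaluation. A sketch (loader names assumed):

```python
import torch

model.eval()
with torch.no_grad():                  # no graph is built, far less GPU memory
    for images, labels in val_loader:
        preds = model(images.cuda())

torch.cuda.empty_cache()  # optional: returns cached blocks, doesn't free live tensors
```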
1
t3_ry2s9n
1,641,544,547
pytorch
Size mismatch when passing to
Hey, I'm building a pretty simple NN for image classification with image size `224x224`. I chose `nn.Sequential` as it looks pretty clean. But as soon as my first `nn.Linear` layer executes, I'm greeted with this error:

```
mat1 and mat2 shapes cannot be multiplied (32x53760 and 50176x224)
```

This does not make sense, since after 3 max-pool layers my image size should be `28x28`, and none of the conv2d layers change my image size since I have padding set to 'same'. Here is my NN:

```
class Neural(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(
            ## image size = 224x224
            nn.Conv2d(3, 16, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(28*28*64, 224),
            nn.Linear(224, 112),
            nn.Linear(112, 2),
        )

    def forward(self, x):
        out = self.seq(x)
        return out
```

Any idea what went wrong? Any help is appreciated.
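A note on the numbers: 50176 = 64*28*28 (what the model expects), while 53760 = 64*28*30, which matches a 224x240 input after three 2x poolings, so the images are apparently not all 224x224. A sketch of a robust fix (assuming torch >= 1.8): let the first Linear infer its input size.

```python
import torch.nn as nn

# replaces nn.Linear(28*28*64, 224); in_features is inferred on the first forward
first_fc = nn.LazyLinear(224)
```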
0.84
t3_rxmq8e
1,641,497,293
pytorch
Pytorch CUDA out of memory persists after lowering batch size and clearing gpu cache
I'm learning PyTorch and practicing it on the Dogs vs Cats competition on Kaggle using the Kaggle GPU. I built a straightforward NN.

```
class Neural(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size = 3, padding = 'same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(53760, 224),
            nn.Linear(224, 112),
            nn.Linear(112, 1),
        )

    def forward(self, x):
        return self.seq(x)
```
1
t3_rxk5ap
1,641,490,682
pytorch
How much is mobile support on the radar for Pytorch?
I've been looking into mobile support, specifically acceleration via the NNAPI interface. I've been able to convert and run the MobileNet V2 model, but outside of that I've been running into all kinds of limitations during the conversion stage for other models (ResNet, MNV3, etc.). Looking at the Mobile section of the PyTorch forum, there isn't a lot of activity, and questions usually go unanswered. So my question, really, is: does anyone know if there are solid plans for PyTorch to support this capability in the future? My worry is that PyTorch just added this to "have a story" in comparison to TensorFlow, but isn't really all that interested in extending the functionality.
1
t3_rxjwt2
1,641,490,095
pytorch
AI Got Your Back Segmented (PyTorch)
nan
1
t3_rxglbh
1,641,481,328
pytorch
Having issues loading Neural Network
Hello, this is my first time making a neural network and I'm having issues loading it from a new file. I followed the guide on the PyTorch website (with a few changes) and now wish to load the neural network I've made in a new file. I have saved it as a .pth file. I'm attaching a picture of the 5 lines of code in the test file. The issue is that the neural network seems to retrain on the first import statement, and I have no idea why. If someone could get back to me about what I should do, that would be awesome! I'm pretty sure it should not need to retrain. Someone please help! https://preview.redd.it/ujwpe4yr0z981.png?width=460&format=png&auto=webp&s=dd486b208462ec23069e9b77920761c26dccbcb7
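If importing the training file re-runs training, the usual cause is training code at module top level, which Python executes on import. A sketch of the standard guard (entry-point name assumed):

```python
if __name__ == "__main__":
    train()  # only runs when the file is executed directly, not on import
```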
0.84
t3_rx2r28
1,641,432,861
pytorch
DQN that takes two inputs
Hi, total PyTorch newb here. I want to modify the PyTorch cartpole DQN code to take two inputs: a 10x24 array and an integer. To do that, should I make an individual layer for each input and then merge them? Or can I take both inputs in the same layer? This is what I have so far, but I'm sure it's wrong. As you can see, I tried the latter but am failing awfully.

```python
class ReplayMemory(object):
    def __init__(self, capacity):
        self.memory = deque([], maxlen=capacity)

    def push(self, *args):
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)


class DQN(nn.Module):
    def __init__(self, h, w, outputs):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(((10,24),1), 16, kernel_size=5, stride=2)  # <- the broken attempt
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)

        def conv2d_size_out(size, kernel_size=5, stride=2):
            return (size - (kernel_size - 1) - 1) // stride + 1

        convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
        convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
        linear_input_size = convw * convh * 32
        self.head = nn.Linear(linear_input_size, outputs)

    def forward(self, x):
        x = x.to(device)
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
```

Any guidance is very much appreciated, thanks! :-)
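A sketch of the first option (a branch per input, then merge), which is the common pattern; layer sizes and names are assumptions:

```python
import torch
import torch.nn as nn

class TwoInputDQN(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.grid_branch = nn.Sequential(nn.Flatten(), nn.Linear(10 * 24, 128), nn.ReLU())
        self.scalar_branch = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Linear(128 + 16, n_actions)

    def forward(self, grid, scalar):
        g = self.grid_branch(grid)                  # grid: (B, 10, 24) -> (B, 128)
        s = self.scalar_branch(scalar.float())      # scalar: (B, 1)   -> (B, 16)
        return self.head(torch.cat([g, s], dim=1))  # merged -> Q-values
```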
1
t3_rwvj0p
1,641,412,667
pytorch
Get batches from collate function
In a PyTorch DataLoader, I loaded a video with batch size 1. The output is (1, 128, 3, 224, 224). I reshaped the data to (4, 32, 3, 224, 224), so now I have a batch size of 4, and all this data is passed to the model. Is there a way to pass batches of 2 instead of 4? Detailed code here: https://stackoverflow.com/q/70603333/11170350
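A sketch of feeding the reshaped clips in chunks of 2 (tensor names assumed):

```python
import torch

clips = video.view(4, 32, 3, 224, 224)      # the reshaped batch from above
for chunk in torch.split(clips, 2, dim=0):  # two tensors of shape (2, 32, 3, 224, 224)
    out = model(chunk)
```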
0.89
t3_rwttag
1,641,408,069
pytorch
PyTorch Ensemble model
I created a CNN for a classification problem using 2 different techniques: 1) a conventional CNN and 2) contrastive learning (the SimCLR framework). As a reference, I attach the starting point of my code: [https://github.com/giakou4/MNIST_classification](https://github.com/giakou4/MNIST_classification). As a further step, I want to implement an **ensemble classifier** (e.g. a voting classifier) consisting of the same initial model with **different seeds**. The goal is to make the classifier independent of initialization. However, I want to keep the flexibility PyTorch gives me, so I can calculate metrics such as accuracy, sensitivity, precision, recall and AUC. Is there any library other than `torchensemble`, or an example similar to what I seek?
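A plain soft-voting sketch that needs no extra library and keeps full access to the outputs for metrics (`models` is assumed to be a list of identically-shaped classifiers trained with different seeds):

```python
import torch

def ensemble_predict(models, x):
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=1) for m in models])  # (M, B, C)
    return probs.mean(dim=0).argmax(dim=1)  # average probabilities, then vote
```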
1
t3_rwlyk7
1,641,386,224
pytorch
Storing large amounts of tensors for later reading (ideas needed)
Hey. I'm running through a large number of batches and need to store the produced tensors (individual rows of batch tensors).

**Naïve solution** (pseudocode):

```
for batch in batches:
    for row in batch:
        torch.save(row, 'rowname.pt')
```

**Issue**: the naïve solution is extremely expensive computationally (time) for the number of batches I'm working with. Specifically, for a batch size of 1024, performing a save 1024 times, once per row, is extremely slow compared to saving the 1024-row tensor as a whole.

**Ideas I had**: a small list of ideas I thought of to combat this; I'd appreciate help choosing one, or an alternative I haven't thought of.

1. Saving the tensors in their original batch format (1024 rows) in a sorted manner, so that a row can later be accessed given an id (tensor row id X will be in batch X/1024, rounded down); see the sketch after this list.
2. Storing them in an hdf5 file, which supports random access. If anyone has experience with this, I'd appreciate guidance, as the internet doesn't seem to have many ideas for something close to what I'm describing.

**Thank you very much in advance for any help!**
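A sketch of idea 1 (the file naming is an assumption): save each 1024-row batch whole, then recover any row by integer division.

```python
import torch

def save_batch(batch, k):
    torch.save(batch, f"batch_{k}.pt")          # one file per 1024-row batch

def load_row(row_id, batch_size=1024):
    batch = torch.load(f"batch_{row_id // batch_size}.pt")
    return batch[row_id % batch_size]
```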
1
t3_rvswtz
1,641,298,033
pytorch
I like YOLOv5 but the code complexity is...
I can't deny that YOLOv5 is a practical open-source object detection pipeline. However, the pain begins when adding new features or new experimental methods. Code dependencies are hard to follow, which makes the code difficult to maintain. We wanted to try various experimental methods but hate writing one-time code that is never reused. So we worked on an object detection pipeline with a better code structure, so that we can continuously improve it and add new features while keeping it easy to maintain. https://github.com/j-marple-dev/AYolov2 We applied CI (formatting, linting, unit tests) to ensure code quality, with Docker support for development and inference. Our Docker supports a development environment with VIM. Our code was designed from the beginning to allow trying various experimental methods with less effort. The features developed so far are as follows:

1. You can easily use the trained model in another project without copy-pasting code. PyTorch requires the model code to use a model, so we build the model with a library that constructs a PyTorch model from a YAML file (https://github.com/JeiKeiLim/kindle). The trained model is therefore portable with `pip install kindle`.
2. Model compression support via tensor decomposition and pruning.
3. Export to TorchScript, ONNX, and TensorRT.
4. Inference with TorchScript and TensorRT.
5. (WIP) C++ inference with TorchScript and TensorRT.
6. Auto search for NMS parameters.
7. (WIP) Knowledge distillation support.
8. (WIP) Representation learning support.

AYolov2 also supports W&B, with model upload and load functions to make trained models easy to manage.

```
python3 val.py --weights j-marple/AYolov2/179awdd1
```

For instance, the single command line above will download the trained model from W&B and run inference. By the time you read this far, you might wonder why the name is AYolov2. AYolov2 comes from Auto-YOLO v2. Our initial goal was to implement automatic model architecture search, and v2 indicates that there was a v1. Where did v1 go? We built an auto model architecture search based on the original yolov5, and it worked pretty nicely, but it became unmanageable. Please stay tuned; the NAS feature will be coming soon. Any suggestions or feedback will be appreciated. Thank you, and happy new year!
1
t3_rur17o
1,641,178,346
pytorch
Using BERT with multiple sentences efficiently
Hi, I'm building a model that uses BERT to encode multiple sentences before passing each encoding into a transformer, but my model seems to be fairly slow. x is (batch * num_sent * bert_tokens). Here is some of my forward code:

```python
x = my_batch.transpose(0, 1)
masks = my_batch_masks.transpose(0, 1)
context_sentences = []
x_reshape = x.contiguous().view(-1, x.size(-1))
masks_reshape = masks.contiguous().view(-1, masks.size(-1))
b_input_ids = x_reshape       # batch_size * token ids
b_input_mask = masks_reshape  # batch_size * mask ids
outputs = self.bert_model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
context_sentences = outputs[0][:, 0, :]
context_sentences = context_sentences.contiguous().view(x.size(0), -1, context_sentences.size(-1))  # (samples, timesteps, output_size)
```

Is there a better way to do this? Are there any suggestions on the best way to handle hierarchical sentence models, and how to train them?
0.89
t3_ru6e1z
1,641,116,546
pytorch
Understanding outputs from demand forecasting using temporal fusion transformer
This question is probably too far into the weeds for Reddit, but I figured it was worth a try. I'm a fairly amateur coder using this guide https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html to forecast sales with the PyTorch temporal fusion transformer. In cell [5] it creates group_ids from the agency and sku combinations, as well as a time_idx for each period of time. At the end of this process, when you have a series of forecasts, it looks like this: {'prediction': tensor([[4.5,...,10.1],...,[3.5,...,12.1]...]])} and so on. My question: how do I convert back so I can see these forecasts in terms of the original agency, sku, date format? Thanks!
1
t3_rtou77
1,641,060,709
pytorch
Tensorboard issue with self-defined forward function
How to get around the following [tensorboard issue with a self-defined forward function](https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L357)?

```
/home/phung/miniconda3/envs/py39/bin/python3.9 /home/phung/PycharmProjects/beginner_tutorial/gdas.py
Files already downloaded and verified
Files already downloaded and verified
run_num = 0
Error occurs, No graph saved
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 770, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 357, in train_NN
    writer.add_graph(graph, train_inputs)
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/tensorboard/writer.py", line 736, in add_graph
    self._get_file_writer().add_graph(graph(model, input_to_model, verbose, use_strict_trace))
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 297, in graph
    raise e
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 291, in graph
    trace = torch.jit.trace(model, args, strict=use_strict_trace)
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_trace.py", line 741, in trace
    return trace_module(
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_trace.py", line 958, in trace_module
    module._c._create_method_from_trace(
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/phung/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

Process finished with exit code 1
```
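The trace shows `add_graph` calling `torch.jit.trace`, which invokes the module's `forward()`; a module whose entry point has another name (e.g. `forward_f`) hits `nn.Module`'s `_forward_unimplemented`. A sketch of a thin wrapper workaround (assuming `forward_f` is the real entry point):

```python
import torch

class GraphWrapper(torch.nn.Module):
    def __init__(self, graph):
        super().__init__()
        self.graph = graph

    def forward(self, x):
        return self.graph.forward_f(x)  # delegate to the custom entry point

writer.add_graph(GraphWrapper(graph), train_inputs)
```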
1
t3_rtlvtk
1,641,052,012
pytorch
Is there any update about Apple silicon GPU support in PyTorch?
I have heard that the PyTorch team has started developing GPU support for Apple silicon. Does anyone know if there are any updates on the progress?
1
t3_rt0c5h
1,640,976,001
pytorch
Help using Torchvision.datasets
Hey guys, a couple of months ago I posted about making an image classifier for a school project. So far I have the project mostly working, albeit with a pretrained resnet model from online. Now I need to replace that with a custom-built model to make the project more complex. I've decided on the Caltech256 dataset, as it seems to be a decent size while still being manageable to process on a laptop. I'm looking into how I can load the dataset into my program, as that seems to be the first step, and I'm at a bit of a dead end really. I can't seem to get the torchvision class for Caltech256 to work no matter what I try (I'm fairly new to Python, so I imagine that doesn't help). I also had a look at trying to do it manually, but I'm not sure how doable that will be. If anyone can guide me in the right direction, that would be much appreciated. I would prefer to use fewer libraries if possible, but beggars can't be choosers, and if torchvision.datasets is a lot more beginner friendly, then that's fine too. Thank you!
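A minimal sketch of the torchvision class (assuming a reasonably recent torchvision; download=True fetches the archive into root):

```python
import torchvision

ds = torchvision.datasets.Caltech256(root="data", download=True)
img, target = ds[0]   # PIL image and integer class label
```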
0.83
t3_rs70kd
1,640,885,080
pytorch
Laptop recommendation
Hello everyone, I'm planning to purchase a laptop for deep learning. I will only use it for inference and experiments; **all training will be done in the cloud**. The MacBook M1 Pro is nice, but a Windows laptop (dual-booted with Ubuntu) with a lightweight NVIDIA GPU would also come in handy at times (please recommend one if you know this kind of laptop that works with Ubuntu out of the box). My question is: for a Windows laptop, how much VRAM is generally enough for inference? I mainly do deep learning in computer vision and want enough VRAM to load a PyTorch model onto the GPU. Cheers.
1
t3_rp2d7a
1,640,542,279
pytorch
Assess my DL machine build
**Looking to purchase** [MSI Gaming GeForce RTX 3080 Ti 12GB GDRR6X 320-Bit HDMI/DP Nvlink Torx Fan 3 Ampere Architecture OC Graphics Card (RTX 3080 Ti Gaming X Trio 12G)](https://www.amazon.com/dp/B095VZ6F73/ref=cm_sw_em_r_mt_dp_XYGSJ9BTF6C2QE3QG0NS?_encoding=UTF8&psc=1) [Razer Core X Aluminum External GPU Enclosure (eGPU): Compatible with Windows & MacOS Thunderbolt 3 Laptops, NVIDIA /AMD PCIe Support, 650W PSU, Classic Black](https://www.amazon.com/dp/B07CQG2K5K/ref=cm_sw_em_r_mt_dp_G33HM630MTVBTZJN68EN?_encoding=UTF8&psc=1) **Already have** [Intel BXNUC10i7FNK Core i7-10710U 6-Core NUC Mini PC (Slim Version) with onboard TB3 port](https://www.amazon.com/dp/B083G6S7HZ/ref=cm_sw_em_r_mt_dp_K0PR1CE0JXFDHC4TFT38?_encoding=UTF8&psc=1) [SAMSUNG 970 EVO Plus SSD 1TB, M.2 NVMe Interface Internal Solid State Hard Drive with V-NAND Technology for Gaming, Graphic Design, MZ-V7S1T0B/AM](https://www.amazon.com/dp/B07MFZY2F2/ref=cm_sw_em_r_mt_dp_9EJF341ECSXRESMAD1WT?_encoding=UTF8&psc=1) [Samsung 32GB DDR4 2666MHz RAM Memory Module for Laptop Computers (260 Pin SODIMM, 1.2V) M471A4G43MB1 2x, 64GB total ram](https://www.amazon.com/dp/B07N124XDS/ref=cm_sw_em_r_mt_dp_PW8JGCC0DZPFP9GR0VB4?_encoding=UTF8&psc=1) I have developed a DL program that I’ve been running on AWS g4dn instances using one of their DL AMIs. Here is what I plan to do locally. - Ubuntu 18.04 desktop - Python 3.6 - Conda - Pytorch 1.1 - CUDA 9 I’d like to be able to have a local machine to more quickly experiment. Does anyone see any risks with the eGPU enclosure, card and mini PC combo?
0.92
t3_royzv3
1,640,532,323
pytorch
Stack expects each tensor to be equal size, but got [163, 256, 256] at entry 0 and [160, 256, 256] at entry 1
Hi, I am working with the OAI MRI dataset for knee osteoarthritis classification. Each of the 435 MRIs I have must be classified into a grade. For each MRI folder there are 160 2D images. I created these functions to read the dataset:

```python
def dicom2array(path):
    dicom = pydicom.read_file(path)
    data = dicom.pixel_array
    data = (data - np.min(data)) / (np.max(data) - np.min(data))
    data = cv2.resize(data, (256, 256))
    return data

def load_3d_dicom_images(scan_id):
    # returns an object with shape (160, 256, 256)
    files = sorted(glob.glob(f"/content/drive/MyDrive/subjects/{scan_id}/*/*"))
    img = np.array([dicom2array(a) for a in files])
    return img

class MyCustomDataset(Dataset):
    def __init__(self, path="/content/drive/MyDrive/subjects"):
        dataframe = pd.read_csv("/content/drive/MyDrive/KL_grade.csv", sep=';')
        self.labels = {}
        ids = list(dataframe["id"])
        grades = list(dataframe["grade"])
        for i, g in zip(ids, grades):
            self.labels[str(i).zfill(5)] = g
        self.ids = [a.split("/")[-1] for a in sorted(glob.glob("/content/drive/MyDrive/subjects/" + "/*"))]

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        imgs = load_3d_dicom_images(self.ids[idx])
        label = self.labels[self.ids[idx]]
        return torch.tensor(imgs, dtype=torch.float32), torch.tensor(label, dtype=torch.long)
```

I tried to train a model with resnet18 from the monai library. When I set the batch_size to 10 or higher, CUDA runs out of memory (I am running the code on Google Colab). When I set the batch_size to 1 it works, but with batch_size = 2 I get this error:

**Stack expects each tensor to be equal size, but got [163, 256, 256] at entry 0 and [160, 256, 256] at entry 1**

Can anybody help me?
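The default collate stacks samples, which fails when volumes have different slice counts (163 vs 160); it works at batch_size = 1 because nothing needs stacking. A sketch of one fix, cropping/padding every volume to a fixed depth inside `__getitem__` (a target depth of 160 is assumed):

```python
import torch

def fix_depth(vol, depth=160):
    if vol.size(0) >= depth:
        return vol[:depth]                                    # crop extra slices
    pad = vol.new_zeros(depth - vol.size(0), *vol.shape[1:])  # zero-pad missing ones
    return torch.cat([vol, pad])
```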
1
t3_ro63pm
1,640,422,115
pytorch
RuntimeError: 0D or 1D target tensor expected, multi-target not supported
I'm pretty new to this, so excuse me if the answer is obvious. When I run my code below, I get the error "RuntimeError: 0D or 1D target tensor expected, multi-target not supported". Does anyone know why? https://preview.redd.it/szzs04ghik781.png?width=1298&format=png&auto=webp&s=b20e151d55295eeb3428acc51b84eeb3da995355
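The code isn't visible here, but a sketch of the usual cause of this error: `nn.CrossEntropyLoss` expects class indices of shape (N,), not one-hot rows.

```python
import torch

targets_onehot = torch.tensor([[0., 1., 0.], [1., 0., 0.]])  # hypothetical one-hot labels
targets = targets_onehot.argmax(dim=1)                       # shape (N,), dtype long
```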
0.84
t3_rnwijn
1,640,385,414
pytorch
Test model weights
Let's say I load weights. How can I test and make sure that the weights were loaded correctly?
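A sketch of a direct check: compare the model's state dict against the checkpoint tensor by tensor (file name assumed):

```python
import torch

ckpt = torch.load("weights.pth", map_location="cpu")
model.load_state_dict(ckpt)
ok = all(torch.equal(model.state_dict()[k], v) for k, v in ckpt.items())
print("weights match:", ok)
```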
1
t3_rmxrj2
1,640,272,466
pytorch
How to handle large feature matrix when GPU is not available?
I have 2 computers, neither of which has a GPU supported by PyTorch (one is an Apple M1 with 8GB RAM and no PyTorch GPU support, the other is an AMD machine with 12GB RAM, which also has no PyTorch support). I'm trying to work with a large dataset where I extract my own features and then hope to apply a deep learning model on top. Due to my lack of computational resources, I use Google Colab (which provides ~12GB RAM for free) to run the code. Yet I'm fully stuck at the part where I'm supposed to extract the features from the large dataset. The DataLoader wants the feature matrix of the training set as one piece in order to slice it into batches so I can use PyTorch on it, but a 12GB-RAM CPU cannot hold such a huge feature matrix and crashes all the time. My question is: since everyone who works on deep learning works with plenty of features and large data, and thus must be giving huge matrices to DataLoader all the time, how do you handle huge feature matrices, especially when a GPU is not an option?
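A DataLoader doesn't actually need the matrix in one piece: a map-style Dataset can compute or load one sample's features on demand in `__getitem__`, and batching happens lazily. A sketch (all names are placeholders):

```python
from torch.utils.data import Dataset

class LazyFeatures(Dataset):
    def __init__(self, paths, labels, extract_fn):
        self.paths, self.labels, self.extract = paths, labels, extract_fn

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # only this one sample's features ever live in memory
        return self.extract(self.paths[i]), self.labels[i]
```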
0.95
t3_rmtnqp
1,640,258,715
pytorch
Introducing TorchVision’s New Multi-Weight Support API
nan
0.9
t3_rmdmjd
1,640,203,804
pytorch
logistic regression extremely slow on pytorch on gpu vs sklearn cpu
Hello friends, I'm trying to train a DNN on a dataset with 100k features and 300k entries. I want to predict about 30 categories (it's tf-idf vectors of a text dataset). To start, I wanted to train just a simple logistic regression to compare the speed against the sklearn logistic regression implementation. https://gist.github.com/ziereis/bed30cd4db4b14e72b78d9777aa994ab Here is my implementation of the logistic regression and the train loop. Am I doing something terribly wrong, or why does training in PyTorch take a day while in sklearn it takes 5 minutes? I have a 5600X CPU and a 3070 GPU, if that's relevant. Any help is appreciated, thanks.
0.9
t3_rlsx8h
1,640,134,610
pytorch
Efficient PyTorch: Tensor Memory Format Matters [official pytorch blog]
nan
0.84
t3_rljta7
1,640,108,328
pytorch
Announcing the Winners of the 2021 PyTorch Annual Hackathon [official pytorch blog]
nan
1
t3_rljsxa
1,640,108,301
pytorch
Pruning + The Lottery Ticket Hypothesis [Video Tutorial]
nan
0.94
t3_rkndl9
1,640,007,183
pytorch
Anyone have luck compiling torchscript models to WebAssembly?
So little progress has been made on compiling NNs for the web browser. Onnx.js is way, way behind.
0.86
t3_rk4ylo
1,639,945,636
pytorch
NLP - How to get correlated words?
Hi everyone, I'm not an expert in TensorFlow; I've only used some pretrained APIs of Tensorflow.js. I need to get correlated words given a specific word. Example: Input: "banana" Output: "fruit, market, yellow". I tried the GPT-3 playground, and given a template it's really good at this, but it feels like I'm trying to shoot a fly with a tank... Do you know any pretrained model, or maybe a specific API, that can help with this?
0.81
t3_rjrkrh
1,639,898,576
pytorch
Synthetic time series data generation
I want to generate synthetic time-series tabular data. Most generative deep learning models consist of VAEs and/or GANs, which for the most part relate to images, videos, etc. Can you please point me to relevant tutorial sources (if they include code along with theory, all the better) on synthetic time-series data generation using deep learning models or other techniques?
1
t3_rhv8cu
1,639,674,783
pytorch
Made some pytorch modules for agent systems
I am starting a little evolutionary algorithms project (I know it's a bit frowned upon) and noticed that if you are working with deep neural networks, you need to instantiate them separately and iterate over them to do each network's forward pass, which is very slow even on GPU. For that reason I made this little package of PyTorch modules. The main class, WideLinear, behaves as a family of linear layers, each different, but all running fully in parallel, so you do a single forward pass through all of them at the same time. It even works on GPU. This has some applications outside of evolutionary algorithms, but mostly still in agent-based systems. Gradients work as expected. I have brief documentation on my GitHub, https://github.com/joaoperfig/WideLinears, and it is available through pip.
1
t3_rhui48
1,639,672,700
pytorch
What is the Mac M1 equivalent of this whl package for PyTorch?
Hi, I want to replace this wheel URL in my requirements.txt file with the version for the Mac M1 chip: https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl Is this possible for M1? Thank you in advance.
0.67
t3_rhekan
1,639,616,742
pytorch
CPU equivalent for CUDA stream
Hello, new to the sub and to PyTorch here :D I've got a little problem with a piece of software I'm working with for a project. This is the GitHub: https://github.com/PruneTruong/DenseMatching#overview I have a problem with a line in correlation.py in the third_party/GOCor/GOCor/local_correlation folder. This line

```
ptr = torch.cuda.current_stream().cuda_stream
```

returns an error about the NVIDIA driver being too old. Is there a CPU-equivalent command that I can plug in? Since the system I'm working on is not mine, I'd prefer not to go through the hassle of updating drivers and all. Thanks to whoever can help.
1
t3_rg9gty
1,639,494,456
pytorch
Conv3D model input tensor
I am new to PyTorch and I want to make a classifier for 3D DICOM MRIs. I want to use the pretrained resnet18 from the monai library, but I am confused by the input dimensions of the tensor. The shape of the images in my dataloader is [2, 160, 256, 256], where 2 is the batch_size, 160 is the number of DICOM images for each patient, and 256x256 is the image size. When I try to run the model I get this error: **Expected 5-dimensional input for 5-dimensional weight [64, 3, 7, 7, 7], but got 4-dimensional input of size [2, 160, 256, 256] instead** If I unsqueeze the tensor before feeding it to the model I get: **Given groups=1, weight of size [64, 3, 7, 7, 7], expected input [1, 2, 160, 256, 256] to have 3 channels, but got 2 channels instead** Can anybody help me figure this out?
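A sketch based on the two error messages: 3D convs want (N, C, D, H, W), so the channel axis must be inserted at dim 1 (not dim 0, which the second error shows happened), and a stem with weight [64, 3, 7, 7, 7] expects 3 input channels.

```python
x = batch.unsqueeze(1)       # (2, 160, 256, 256) -> (2, 1, 160, 256, 256)
x = x.repeat(1, 3, 1, 1, 1)  # -> (2, 3, 160, 256, 256) to match the 3-channel stem
```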
0.86
t3_rg7es4
1,639,488,252
pytorch
DistributedDataParallel with GPUs of different speeds
I have two GPUs, one of which is much slower than the other -- sometimes the speed difference can be 2x or more. I can only run one training example at a time on each GPU (minibatch of 1), so right now with my single-GPU training, I accumulate gradients over multiple forward-backward passes and then update weights once I reach the intended batch size. I am trying to figure out whether I can use DistributedDataParallel to run minibatches on both GPUs simultaneously. The following passage makes me think it won't make sense for me: >In DDP, the constructor, the forward pass, and the backward pass are distributed synchronization points. Different processes are expected to launch the same number of synchronizations and reach these synchronization points in the same order and enter each synchronization point at roughly the same time. Otherwise, fast processes might arrive early and timeout on waiting for stragglers. Hence, users are responsible for balancing workloads distributions across processes. Sometimes, skewed processing speeds are inevitable due to, e.g., network delays, resource contentions, unpredictable workload spikes. To avoid timeouts in these situations, make sure that you pass a sufficiently large timeout value when calling init_process_group. https://pytorch.org/tutorials/intermediate/ddp_tutorial.html However, I'm confused when it says users are responsible for balancing workloads across processes. Can I run 2x more forward and backward passes on the faster GPU while the slower one is running? Since it says "Different processes are expected to launch the same number of synchronizations and reach these synchronization points in the same order and enter each synchronization point at roughly the same time", it seems like I cannot. But then what does it even mean to "balance the workload distributions across processes"? The problem is that if I have to run the same amount of data on each card, I suspect it will be slower than if I just run everything on the faster card, since it will otherwise be timing out the majority of the time if it's always locked to synchronize with the slower card. Thanks.
1
t3_rfvo5a
1,639,445,329
pytorch
Loading data stucks
I am using Google Colab to run some experiments with 10-fold cross-validation. However, sometimes Google Colab gets stuck on loading the images: run_model() > train_classifier() > __next__() > _next_data() > _get_data() > _get_data() > _try_get_data() > get() > poll() > _poll() > wait() > select() When I ran the experiments some time ago with 112x112 images there was no problem. Now I run them with 256x256 images, and this is a frequent problem. The original images vary in size from 100x100 to 500x500. Any ideas what could cause this?
0.4
t3_rffw4q
1,639,402,603
pytorch
PyTorch distributed data-parallel (multi GPU, multi-node)
I have access to 18 nodes, each with a different number of GPUs, all with at least one. To my understanding, you have to declare all the nodes to have the same number of GPUs. First of all, am I correct? And second, if I am, is there any way around this?
1
t3_re03k9
1,639,230,362
pytorch
How to get a probability distribution over tokens in a huggingface model?
I'm following [this](https://ramsrigoutham.medium.com/sized-fill-in-the-blank-or-multi-mask-filling-with-roberta-and-huggingface-transformers-58eb9e7fb0c) tutorial on getting predictions for masked words. The reason I'm using this one is that it seems to work with several masked words simultaneously, while other approaches I tried could only take 1 masked word at a time. The code:

    from transformers import RobertaTokenizer, RobertaForMaskedLM
    import torch

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForMaskedLM.from_pretrained('roberta-base')

    sentence = "Tom has fully ___ ___ ___ illness."

    def get_prediction(sent):
        token_ids = tokenizer.encode(sent, return_tensors='pt')
        masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
        masked_pos = [mask.item() for mask in masked_position]

        with torch.no_grad():
            output = model(token_ids)

        last_hidden_state = output[0].squeeze()

        list_of_list = []
        for index, mask_index in enumerate(masked_pos):
            mask_hidden_state = last_hidden_state[mask_index]
            idx = torch.topk(mask_hidden_state, k=5, dim=0)[1]
            words = [tokenizer.decode(i.item()).strip() for i in idx]
            list_of_list.append(words)
            print("Mask ", index + 1, "Guesses : ", words)

        best_guess = ""
        for j in list_of_list:
            best_guess = best_guess + " " + j[0]

        return best_guess

    print("Original Sentence: ", sentence)
    sentence = sentence.replace("___", "<mask>")
    print("Original Sentence replaced with mask: ", sentence)
    print("\n")

    predicted_blanks = get_prediction(sentence)
    print("\nBest guess for fill in the blank :::", predicted_blanks)

How can I get the probability distribution over the 5 tokens instead of just their indices? That is, similarly to how [this](https://www.machinecurve.com/index.php/2021/03/02/easy-masked-language-modeling-with-machine-learning-and-huggingface-transformers/) approach (which I used before, but once I changed to multiple masked tokens I got an error) gets the score as an output:

    from transformers import pipeline

    # Initialize MLM pipeline
    mlm = pipeline('fill-mask')

    # Get mask token
    mask = mlm.tokenizer.mask_token

    # Get result for particular masked phrase
    phrase = f'Read the rest of this {mask} to understand things in more detail'
    result = mlm(phrase)

    # Print result
    print(result)

    [{
    'sequence': 'Read the rest of this article to understand things in more detail',
    'score': 0.35419148206710815,
    'token': 1566,
    'token_str': ' article'
    },...
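A sketch of the change inside the loop above: softmax turns the mask position's logits into a distribution, and topk of the probabilities gives scores like the pipeline's output (variable names taken from the posted code):

```python
import torch

probs = torch.softmax(mask_hidden_state, dim=0)   # distribution over the vocab
top_probs, idx = torch.topk(probs, k=5, dim=0)
words = [tokenizer.decode(i.item()).strip() for i in idx]
print(list(zip(words, top_probs.tolist())))       # (word, probability) pairs
```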
1
t3_rcz7mj
1,639,107,531
pytorch
How does one install PyTorch and related tools from within the setup.py install_requires list?
nan
1
t3_rcprlk
1,639,079,848
pytorch
Windowed MNIST training
I'm trying to train a model (a CNN based on LeNet) on windowed MNIST data, where the output would be a vector of size window_size with the predicted class of each input in the window - essentially like a sequence-based model (e.g. LSTM), except the model isn't sequential. So would I, for each item in the window, run it through the model and concatenate the outputs somehow? Any suggestions? https://preview.redd.it/e9cxuh9wwj481.jpg?width=1085&format=pjpg&auto=webp&s=bb9b6cb3ebfc1c2c23f1084e4f747094e1c9b912
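A sketch of the usual trick for a non-sequential model: fold the window dimension into the batch, run the CNN once, and unfold (shapes and names assumed).

```python
B, W = x.shape[:2]                   # x: (B, window_size, 1, 28, 28)
out = cnn(x.view(B * W, 1, 28, 28))  # one forward pass for all window items
out = out.view(B, W, -1)             # (B, window_size, n_classes)
```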
1
t3_rcmhsv
1,639,070,806
pytorch
Load 3D image dataset
I am facing some problems writing the __getitem__() function in my dataset class. I am working on an MRI dataset (3D). Each file consists of 160 slices in DICOM format, which I have transformed into PNG. The structure of the files looks like this: "/content/drive/MyDrive/mris/9114036/11288003". Inside the last directory are the 160 2D slices. The labels are in a .csv file with two columns: one with the id (9114036, for example, in the path above) and the other with the grade. The code I tried to execute was:

```python
class MyDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        self.labels_df = pd.read_csv(csv_file, sep=';')
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.labels_df)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_name = os.path.join(self.root_dir, str(self.labels_df.iloc[idx, 0]))
        image = io.imread(img_name, plugin='matplotlib')
        grade = self.labels_df.iloc[idx, 1]
        sample = {'image': image, 'grade': grade}
        if self.transform:
            sample = self.transform(sample)
        return sample
```

The error I got when I tried to access a sample from the dataset was:

```
/usr/local/lib/python3.7/dist-packages/PIL/Image.py in open(fp, mode)
   2841
   2842     if filename:
-> 2843         fp = builtins.open(filename, "rb")
   2844         exclusive_fp = True
   2845

IsADirectoryError: [Errno 21] Is a directory: '/content/drive/MyDrive/mris/9114036'
```

which seems logical. I tried to use os.walk to get into the 11288003 directory where the images are, but it didn't work. Most likely my whole approach is wrong. Does anybody know how to write the dataset class for the 3D nature of my data? Should I use another transformation for the DICOM files in the first place?
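Per the error, `io.imread` received the patient directory rather than image files. A sketch of loading the whole volume by globbing one level deeper and stacking the slices (path layout taken from the post):

```python
import glob
import os
import numpy as np
from skimage import io

def load_volume(patient_dir):
    slice_paths = sorted(glob.glob(os.path.join(patient_dir, "*", "*.png")))
    return np.stack([io.imread(p) for p in slice_paths])  # (160, H, W)
```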
1
t3_rcmaxt
1,639,070,280
pytorch
Making changes in architecture and training a custom model
Hello, I am trying to make changes to a segmentation model (SegNet), for example trying out different activation functions. When I try predicting with the new models, I get the errors "missing keys in state_dict" and "unexpected keys in state_dict". Could this be because I changed the model? Here are a few questions I have as a newbie: 1) Should the model be tested with the same architecture it was trained on? 2) When loading a model and predicting, does it require the Python file that contains the changed architecture?
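On the error itself: changing the architecture changes the parameter names/shapes, so the checkpoint keys no longer line up. A sketch of a diagnostic load (checkpoint path assumed); note that weights trained on one architecture generally shouldn't be evaluated on a different one.

```python
import torch

missing, unexpected = model.load_state_dict(torch.load("ckpt.pth"), strict=False)
print("missing:", missing)        # parameters the checkpoint didn't provide
print("unexpected:", unexpected)  # checkpoint entries the model has no slot for
```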
1
t3_rccolu
1,639,035,607
pytorch
RTX 3080 vs RTX A4000
Hi all, I'm looking at building a system for data science using PyTorch and a timeseries database. It appears that the advice is that an RTX A4000 GPU would be the best for my use case, but people have said that if supply constraints get me, I can settle for an RTX 3080 Ti instead. However, when I look at the specs and performance measures, it appears the RTX 3080 Ti wins out. It has more CUDA cores (10,240 vs 6,144), a wider memory interface (384 vs 256 bit) and all-round better performance. The only edge I can see for the A4000 is that it has 16GB memory compared to 12GB for the RTX 3080 Ti. Some feedback has been that the A4000 is a 'workstation' card, so it will have better drivers etc. My problem is I don't really understand how that will impact performance since, as far as I have used it, PyTorch already 'just works' with Nvidia GPUs. In addition, I thought Nvidia had a whole interface so you can write CUDA code in C++ to access all low-level GPU functionality for less PyTorch-specific use cases. Can you advise what I'm missing? In what way is the A4000 better than the RTX 3080 in a PyTorch/machine learning context? Are there perhaps additional optimizations that the A4000 makes available that lift its performance above the nominal stats I see in tests (tests which admittedly seem to be poorly reflective of my use case)? Thanks and regards,
0.86
t3_rc8lhc
1,639,020,676
pytorch
Hook function does not work well during backpropagation
I have the following error with [AvgPool2D](https://github.com/promach/gdas/blob/ea26c6a92dfa2b9d25cfbf953f979187d1c85df2/gdas.py#L137). How do I get around it?

```
WARNING: Logging before InitGoogleLogging() is written to STDERR
W20211209 00:31:29.223598  8950 python_anomaly_mode.cpp:102] Warning: Error detected in AvgPool2DBackward0. Traceback of forward call that caused the error:
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 755, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 400, in train_NN
    y2 = graph.cells[c].nodes[n].connections[cc].edges[e].forward_f(x2)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 106, in forward_f
    return self.f(x)
  File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/usr/lib/python3.9/site-packages/torch/nn/modules/pooling.py", line 616, in forward
    return F.avg_pool2d(input, self.kernel_size, self.stride,
 (function _print_stack)
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 755, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 525, in train_NN
    Ltrain.backward()
  File "/usr/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Output 0 of BackwardHookFunctionBackward is a view and its base or another view of its base has been modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.

Process finished with exit code 1
```
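The last line of the error suggests the fix directly: return a clone so no view created inside the hooked forward escapes. A sketch against the linked `forward_f` (assuming it simply wraps `self.f`):

```python
def forward_f(self, x):
    return self.f(x).clone()  # clone the output so autograd doesn't see a view
```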
0.83
t3_rbvg9f
1,638,982,129
pytorch
(Beginner) Trying to install pytorch, is this a pip issue or a pytorch issue?
nan
0.63
t3_rbdqhg
1,638,923,053
pytorch
I put together a tutorial on PyTorch Lightning and how it compares to vanilla PyTorch
If you haven't heard of it, PyTorch Lightning is a great framework built on top of vanilla PyTorch. It is really good for rapid prototyping and is essentially just a wrapper for PyTorch, so the learning curve is pretty shallow if you work with PyTorch already. I wrote a tutorial and overview that compares Lightning to vanilla, where I go through an example project of building a simple GAN to generate handwritten digits from MNIST. I figured this sub might find it useful, especially for those who haven't heard of Lightning! [Here's a link](https://www.assemblyai.com/blog/pytorch-lightning-for-dummies/) to the full tutorial if you're interested in learning about Lightning!
0.88
t3_rb2w0z
1,638,894,871
pytorch
DICOM files classification
Hello, I am working on a project where I have to classify DICOM files. I can see there are a lot of libraries that can handle the DICOM data type. From the research I have done, I see there are transformations to .npy and .png before the data is fed into a CNN. Which transformation is better? Can I feed the DICOM files raw (without any transformation) to a network?
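On feeding DICOM "raw": the network needs arrays/tensors, but no intermediate file format is required; pydicom exposes pixel data directly. A sketch (file path assumed):

```python
import pydicom
import torch

arr = pydicom.dcmread("slice.dcm").pixel_array   # NumPy array straight from DICOM
x = torch.from_numpy(arr.astype("float32"))
```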
0.84
t3_rax3ha
1,638,876,506
pytorch
KeyError
I am working on a problem in which I have to classify knee MRIs. It is a multi-class classification (grades 0-4), and the MRIs I have are DICOM files. I am using this code to inspect some attributes of the MRIs:

```python
import dicom
import os
import pandas as pd

data_dir = "D:/base_MRI/subjects"
patients = os.listdir(data_dir)
labels_df = pd.read_csv("C:/Users/User/Desktop/KL_grade.csv", index_col=False)

for patient in patients[:1]:
    label = labels_df._get_value(patient, 'grade')
    path = data_dir + patient
    slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
    print(len(slices), label)
```

I get this error:

```
KeyError: '9001104'
```

9001104 is the first MRI folder, containing 160 2D slices. The patients list is a list of all the MRIs. I think the problem is in labels_df; I have tried to set index_col=0, but then I get the same error. KL_grade.csv is the csv file with 2 columns: the subject_id, which is the same as each MRI folder name, and the grade, which is the label. Does anybody know how to fix this?
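A sketch of the likely fix (column names taken from the post): `_get_value(patient, 'grade')` looks `patient` up as a row label, but the ids live in the subject_id column; the path join is also missing a separator.

```python
import os

labels = dict(zip(labels_df["subject_id"].astype(str), labels_df["grade"]))
label = labels[patient]                  # patient is the folder name, e.g. '9001104'
path = os.path.join(data_dir, patient)   # data_dir + patient had no '/'
```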
1
t3_rajj17
1,638,829,545
pytorch
Model does not fit to ram
I am using a contrastive learning framework (see https://github.com/HobbitLong/SupContrast). First I initialize a CNN feature extractor (`network`), LeNet-5 here, and then I pass it to the full model (`model`). However, while I can initialize my network, the Google Colab session crashes with no more RAM, while my PC with 32GB runs just fine. Any idea how I can overcome this?

```python
print('1. net')   # it is printed
net = set_network(opt)
print('2. model') # it is printed
model = SupCon(net, head=opt.head, feat_dim=opt.feat_dim)
print('3. crit')  # it is not printed
criterion = SupConLoss(temperature=opt.temp)
```

And to give you an idea of the model:

```
CustomNet(
  (feature_extractor): Sequential(
    (0): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (5): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): ReLU()
    (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (8): Conv2d(16, 120, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): ReLU()
    (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
)
```

with 122880 neurons at the output.
0.81
t3_rafprd
1,638,819,562
pytorch
Convert unknown labels to yolov5
Hello data-science community,

I need your kind assistance. I own a dataset of more than **100k images with an unknown label format**, which is:

**angry_actor_104.jpg 0 28 113 226 141 22.9362 0**

It describes an image as follows:

**image_name face_id_in_image face_box_top face_box_left face_box_right face_box_bottom face_box_confidence expression_label**

Source of the dataset: [http://mmlab.ie.cuhk.edu.hk/projects/socialrelation/index.html](http://mmlab.ie.cuhk.edu.hk/projects/socialrelation/index.html)

My question is: how can this be converted into the **yolov5 format**? I have been looking this up for a long time and hope someone can help. Thank you very much in advance.

Best regards
Phil
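YOLOv5 expects one .txt per image with lines of the form `class x_center y_center width height`, all normalized to [0, 1] by the image dimensions. A hedged sketch of the per-line conversion: the directory names are illustrative, and using expression_label as the YOLO class id is an assumption:

    from pathlib import Path
    from PIL import Image

    def convert_line(line, img_dir=Path("images"), out_dir=Path("labels")):
        out_dir.mkdir(parents=True, exist_ok=True)
        # field order per the post: name, face_id, top, left, right, bottom, conf, label
        name, _, top, left, right, bottom, _, label = line.split()
        img_w, img_h = Image.open(img_dir / name).size
        top, left, right, bottom = map(float, (top, left, right, bottom))
        x_c = (left + right) / 2 / img_w     # normalized box center x
        y_c = (top + bottom) / 2 / img_h     # normalized box center y
        b_w = (right - left) / img_w         # normalized box width
        b_h = (bottom - top) / img_h         # normalized box height
        with (out_dir / (Path(name).stem + ".txt")).open("a") as f:  # append: one file per image
            f.write(f"{int(label)} {x_c:.6f} {y_c:.6f} {b_w:.6f} {b_h:.6f}\n")

    convert_line("angry_actor_104.jpg 0 28 113 226 141 22.9362 0")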
1
t3_ra3z9y
1,638,785,399
pytorch
Does the CUDA toolkit used with PyTorch need to be the exact version of CUDA?
I am installing PyTorch to test out models (I am using conda, if that matters), but I noticed the version of the CUDA toolkit (11.3) is lower than the CUDA I have installed (11.5), and I am wondering if there will be issues because of the version difference.
1
t3_r91s8h
1,638,659,668
pytorch
How to find the auto-determined learning rate from Pytorch lightning and Neptune?
I've started to learn and try PyTorch Lightning lately, together with the Neptune logger. An amazing feature it has is that it can auto-detect a learning rate (auto_lr_find=True) and batch size from the training data. However, as I check the Neptune logs, I cannot find the auto-detected learning rate. I couldn't find an option that would specifically allow me to save the found LR to the Neptune logs either. Does anyone know how I can find out this final learning rate that my model uses to return my results?
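A hedged sketch of one workaround (Lightning ~1.x and the newer Neptune client; exact APIs vary by version): run the LR finder explicitly instead of auto_lr_find, so the suggested value is in hand and can be logged manually. `model` and `neptune_logger` stand in for the poster's own objects:

    import pytorch_lightning as pl

    trainer = pl.Trainer(logger=neptune_logger)   # neptune_logger defined elsewhere
    lr_finder = trainer.tuner.lr_find(model)      # the range test behind auto_lr_find
    found_lr = lr_finder.suggestion()             # the auto-detected learning rate
    model.lr = found_lr                           # assumes the model reads self.lr
    neptune_logger.experiment["tuning/found_lr"] = found_lr  # record it by hand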
0.83
t3_r8zbr0
1,638,652,517
pytorch
What is the best / fastest way to store Pytorch Datasets?
Hi everyone, I am interested in the fastest way to store data (let's say ImageNet) to disk, so that it takes as little time as possible to load again through a torchvision dataset and a DataLoader's getitem. I have heard that tf.data is quite fast for TensorFlow. Might that be fast for PyTorch as well, or are there any other formats?
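Whichever format is chosen, the comparison is easy to measure on your own disk: time a full pass through the DataLoader for each candidate. A minimal harness, assuming `dataset` is any map-style dataset and the worker count is illustrative:

    import time
    from torch.utils.data import DataLoader

    def time_one_epoch(dataset, batch_size=64, num_workers=4):
        loader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
        start = time.perf_counter()
        for _ in loader:        # iterate once, discarding batches
            pass
        return time.perf_counter() - start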
0.88
t3_r8j64r
1,638,598,105
pytorch
Creating a model that uses different output layers based on the input
I want to have a single model that uses output layer A if input is type A, layer B if input is type B, ... Is there a way to do that, or would I have to have separate models?
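One way to keep this in a single model is a shared trunk with per-type output heads stored in an nn.ModuleDict, selected by a type key at forward time. A minimal sketch with illustrative layer sizes:

    import torch
    import torch.nn as nn

    class MultiHeadNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
            self.heads = nn.ModuleDict({
                "A": nn.Linear(64, 10),   # output layer for type-A inputs
                "B": nn.Linear(64, 3),    # output layer for type-B inputs
            })

        def forward(self, x, input_type: str):
            return self.heads[input_type](self.trunk(x))

    net = MultiHeadNet()
    out = net(torch.randn(4, 128), "A")   # routes through head A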
0.67
t3_r80pzu
1,638,542,929
pytorch
Resuming PyTorch training
Hi, I'm training NNs on Colab, and the notebook sometimes crashes so I have to train them all over again. How do I save everything and resume training where it left off?
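A minimal checkpointing sketch: save the model and optimizer state every epoch, then after a crash rebuild the objects, call load_checkpoint, and restart the loop from the returned epoch. The Drive path (so the file survives Colab crashes) and key names are illustrative:

    import torch

    CKPT = "/content/drive/MyDrive/checkpoint.pt"   # assumes Drive is mounted

    def save_checkpoint(model, optimizer, epoch, path=CKPT):
        torch.save({
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optim_state": optimizer.state_dict(),
        }, path)

    def load_checkpoint(model, optimizer, path=CKPT):
        ckpt = torch.load(path)
        model.load_state_dict(ckpt["model_state"])
        optimizer.load_state_dict(ckpt["optim_state"])
        return ckpt["epoch"] + 1   # epoch to resume from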
1
t3_r7v472
1,638,522,916
pytorch
Need idea for a regularization score
I have a pytorch model

    net_x = Model()

This model has a learnable parameter called "value":

    print(net_x.value)
    >>> 0.532

I want to penalize the model if this value is below 0.1 and add that regularization to the loss. The only thing I can think of is a step function (e.g., an "if statement"), but that's not differentiable. Any suggestions?
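One standard differentiable stand-in for a step is a hinge (ReLU) penalty: zero when the value is above the threshold, growing linearly below it. A sketch, where the weight is an illustrative constant and `task_loss` stands in for the existing loss:

    import torch.nn.functional as F

    threshold, weight = 0.1, 10.0                  # weight is an illustrative constant
    penalty = F.relu(threshold - net_x.value)      # > 0 only when value < threshold
    loss = task_loss + weight * penalty            # differentiable almost everywhere
    # A smooth variant: weight * F.softplus(threshold - net_x.value, beta=10)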
0.84
t3_r7bt87
1,638,464,241
pytorch
OAI Dataset
I have a project in which I will build a model that classifies knee MRIs from the OAI dataset. My university provided me with the dataset. The classification is about knee osteoarthritis: the model must assign a grade (from 0 to 4) to each MRI. I am facing a problem, as I am not able to find the labels in the MRI files that were given to me. Is the label somewhere in the metadata or the header of the DICOM files (MRIs) and I cannot find it, or did my professor forget to send me an extra file containing the labels?
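A quick way to check whether the grade is hiding in the header is to dump every DICOM element of one slice and scan it by eye. A minimal sketch, assuming pydicom is installed and with a hypothetical file path:

    import pydicom

    ds = pydicom.dcmread("path/to/one_slice.dcm")  # hypothetical path
    print(ds)  # prints every header element; look for anything grade-like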
1
t3_r6ouj6
1,638,391,880
pytorch
Sliding Window to Apply U-NET on Larger Image
I have a project I'm working on that has large image files that I'm doing semantic segmentation on (4501x4501 or 10001x10001 pixels), but due to hardware limitations I'm forced to chip the image into small 256x256 pieces. After each chip has been processed with the neural network I merge them together, but this has created seam lines because the edges are often predicted incorrectly. I'm looking into doing a sliding window, where a 256x256 window would move across the larger input image, run the trained neural network, and add up the probabilities for each class. After running on the whole image, the accumulated class scores would be argmaxed to extract the most likely class. I'm looking for example code or anything else that tries this approach, but I've been unable to find anything and I'm unsure how to implement it. Does anyone know of an example, or have any tips?
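A hedged sketch of the overlapped sliding-window idea: accumulate per-class logits and a coverage count, average, then argmax. It assumes the image dimensions are compatible with the window/stride (real code would pad the borders so the last row/column of windows reaches the edge), and the window/stride values are illustrative:

    import torch

    def sliding_window_predict(model, image, win=256, stride=128, n_classes=2):
        _, H, W = image.shape                          # image: (C, H, W) tensor
        logits = torch.zeros(n_classes, H, W)
        counts = torch.zeros(1, H, W)
        model.eval()
        with torch.no_grad():
            for top in range(0, H - win + 1, stride):
                for left in range(0, W - win + 1, stride):
                    tile = image[:, top:top+win, left:left+win].unsqueeze(0)
                    out = model(tile)[0]               # (n_classes, win, win) logits
                    logits[:, top:top+win, left:left+win] += out
                    counts[:, top:top+win, left:left+win] += 1
        return (logits / counts.clamp(min=1)).argmax(dim=0)   # (H, W) class map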
0.84
t3_r6a47j
1,638,347,440
pytorch
Is the memory of the GPU (4 GB on an RTX 3050 Ti) the upper limit of memory usable?
nan
1
t3_r67b9k
1,638,336,313
pytorch
Adafactor from Hugging Face transformers only works with Transformers - does it not work with ResNets and MAML with higher?
nan
1
t3_r5p2pk
1,638,284,362
pytorch
Prevent `CUDA error: out of memory` from happening in 1 line of code
Hi, folks who love PyTorch! I've been working on a fast PyTorch wrapper that prevents OOM error from happening. I've recently [shared](https://www.reddit.com/r/MachineLearning/comments/r4zaut/p_eliminate_pytorchs_cuda_error_out_of_memory/?utm_source=share&utm_medium=web2x&context=3) it with r/MachineLearning, and they quite like it. [Project Link](https://github.com/rentruewang/koila) This library tries to be very flexible, and it works with native PyTorch code. Please tell me what you think. Suggestions welcome!
0.86
t3_r5oo88
1,638,283,166
pytorch
What libraries do I need to train an audio classifier model using Pytorch?
The dataset contains mp4 videos.
1
t3_r4u01q
1,638,185,923
pytorch
How to merge the batched up data within DataLoader to batch them again properly?
I couldn't express the complete situation better in the title, but my situation is: I have a very large training set that doesn't fit in memory, and I have no GPU. Thus, to extract features from it, I've batched the training set dataframe and I'm obtaining the features inside the loop perfectly. However, I cannot create a tensor and merge all those batch features into a single tensor matrix, because it'll again not fit in my memory. So, I imagined an ideal situation where I convert the features inside the batching loop to tensors and create some kind of an object (a DataLoader maybe?) to which I can add the new feature tensors I obtain within the loop, so that in the end I have a single DataLoader with proper batches. Is it possible?

In case I explained it poorly, here is the algorithm:

    # x_train_df: my dataframe that contains training set data
    n = len(x_train_df)  # number of rows
    rough_batch_size = 1000  # number of rows in each call to partial_fit
    batch_size = 32  # This will be the DataLoader batch_size hopefully
    index = 0  # helper-var

    while index < n:
        partial_size = min(rough_batch_size, n - index)  # because the last loop is incomplete
        partial_x = x_train_df["data"][index : index + partial_size]  # batch
        features = get_features(partial_x)
        scaler.partial_fit(features)                      # partial_fit returns the scaler itself,
        normalized_features = scaler.transform(features)  # so transform gives the actual values
        index += partial_size

        # Now what should I do with the normalized_features?
        ?????

    # This is my eventual goal for the output
    # train_dataloader : dataloader object that has the complete training set
    # where the data is shuffled and batch_size is 32

So how can I obtain this single train_dataloader from the batched input data without crashing my CPU?
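One hedged approach: persist each normalized chunk to disk inside the loop (e.g., torch.save(torch.as_tensor(normalized_features), f"chunk_{index}.pt")), then serve the rows lazily with a map-style Dataset, so the full feature matrix never sits in memory at once. A naive sketch: re-reading a chunk per item is slow, and a real version would cache the most recently loaded chunk:

    import torch
    from torch.utils.data import Dataset, DataLoader

    class ChunkedFeatureDataset(Dataset):
        def __init__(self, chunk_paths):
            self.index = []                    # flat list of (path, row) pairs
            for p in chunk_paths:
                n_rows = torch.load(p).shape[0]
                self.index += [(p, i) for i in range(n_rows)]

        def __len__(self):
            return len(self.index)

        def __getitem__(self, idx):
            path, row = self.index[idx]
            return torch.load(path)[row]       # naive: reloads the chunk each time

    # chunk_paths is the list of files saved inside the batching loop
    train_dataloader = DataLoader(ChunkedFeatureDataset(chunk_paths),
                                  batch_size=32, shuffle=True)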
0.67
t3_r44sc6
1,638,107,638
pytorch
Can I make a tool that is written in C++/CUDA with libtorch on Ubuntu run natively on Windows?
The tool: [https://github.com/darglein/ADOP](https://github.com/darglein/ADOP)

By the way, I also can't code... :(
0.84
t3_r3sdcd
1,638,062,952
pytorch
Where we are headed and why it looks a lot like Julia (but not exactly like Julia) - compiler
nan
0.9
t3_r335sn
1,637,980,447
pytorch
How expensive is it to fine-tune BERT even with Pytorch lightning?
I want to use BERT for my text classification task but so far I've been failing due to the lack of a GPU. Due to my computational limitation, I've selected batch_size=32 for tokenization (didn't want to go smaller as it would create great noise), selected "bert-base-uncased", and I've wrapped the BERT fine-tuning with PyTorch Lightning. Since my computer has no GPU (MacBook Air with M1), I've been trying to work with the GPU Google Colab provides for free. Yet, I eventually get this error:

>RuntimeError: CUDA out of memory. Tried to allocate 170.00 MiB (GPU 0; 11.17 GiB total capacity; 10.24 GiB already allocated; 167.81 MiB free; 10.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Should I simply give up the fine-tuning and just go ahead with the pre-trained version of BERT? Are there any examples/studies that do that? Or is it quite uncommon not to fine-tune BERT?
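A middle ground between full fine-tuning and using a frozen pre-trained model is to freeze the encoder and train only the classification head (optionally unfreezing the last encoder layer), which usually fits on a free Colab GPU; shorter max sequence lengths and gradient accumulation also cut memory. A hedged sketch, where num_labels=2 is an assumption:

    from transformers import BertForSequenceClassification

    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)      # num_labels is an assumption
    for param in model.bert.parameters():
        param.requires_grad = False             # freeze the whole encoder
    for param in model.bert.encoder.layer[-1].parameters():
        param.requires_grad = True              # optionally unfreeze the last layer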
1
t3_r2y60c
1,637,964,848
pytorch
PyDreamer: model-based RL written in PyTorch + integrations with DM Lab and MineRL environments
nan
1
t3_r2w7d1
1,637,958,939
pytorch
The Sensory Neuron as a Transformer [Implementation]
nan
0.92
t3_r2liin
1,637,927,689
pytorch
Hey, I'm trying to find some materials about object detection in PyTorch but I'm having a hard time finding any.
If you have any material about object detection it would be very appreciated if you could share it. As a school project I'm trying to make my own model for object detection, and everywhere I go I see prebuilt models but without any explanations.
0.92
t3_r1zbgg
1,637,854,948
pytorch
What is the correct way to sum loss into a total loss and then to backprop?
I need to do a somewhat complex gradient update where I have to calculate a loss several times and backprop over it. It looks something like this:

    for i in range(3):
        opt.zero_grad()
        out = net(inp)
        loss = y - out
        loss.backward()
        opt.step()

I'm currently getting the error: "RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time"

From [here](https://github.com/davda54/sam/issues/10) I understand that I shouldn't use the same loss variable for both forward passes, but I'm not sure how else to do this. I thought that I could maybe create a variable called total_loss, add each loss to it, and then backprop over it after the iterations, but I'm not sure if that's the correct approach.
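For reference, a hedged sketch of the total_loss idea described above: keep each pass's loss in the graph and call backward once on the sum. Note that `y - out` must be reduced to a scalar before backward; an MSE-style reduction is assumed here, and net/inp/y/opt stand in for the poster's objects:

    total_loss = 0.0
    opt.zero_grad()
    for i in range(3):
        out = net(inp)
        total_loss = total_loss + (y - out).pow(2).mean()  # scalar loss per pass
    total_loss.backward()   # one backward through all three graphs
    opt.step()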
1
t3_r1p9cy
1,637,819,011
pytorch
Slowing process
Do you always avoid .numpy(), .cpu(), and .item()? How do you calculate, say, the accuracy of an epoch if you don't move to numpy? Or the running loss: do you calculate it as a tensor?
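A common pattern is to accumulate metrics as tensors on the GPU and call .item() only once per epoch, so the device-to-host sync doesn't stall every step. A hedged sketch with illustrative names:

    import torch

    def run_epoch(model, loader, criterion, device="cuda"):
        running_loss = torch.zeros((), device=device)
        correct = torch.zeros((), device=device)
        total = 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            out = model(x)
            loss = criterion(out, y)
            running_loss += loss.detach()          # stays on GPU, no sync
            correct += (out.argmax(1) == y).sum()  # also a GPU tensor
            total += y.size(0)
        # one device-to-host sync per epoch instead of one per batch:
        return (running_loss / len(loader)).item(), (correct / total).item()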
1
t3_r1dpff
1,637,783,801
pytorch
How to Build and Deploy an Image Recognition App using FastAPI and PyTorch? | BHIMRAJ YADAV
nan
1
t3_r11sid
1,637,748,166
pytorch
Finding why Pytorch Lightning made my training 4x slower.
nan
0.97
t3_r09e8q
1,637,659,142
pytorch
Customizing dataset
So I'm trying to extract 5-10 classes out of the COCO dataset. Any tips on how to do it in the PyTorch framework?
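One hedged way with torchvision: load the full CocoDetection dataset, keep only the images whose annotations contain one of the wanted category ids, and wrap the kept indices in a Subset. The paths and category ids below are illustrative:

    import torch
    from torchvision.datasets import CocoDetection

    wanted = {1, 2, 3, 16, 17}   # illustrative COCO category ids
    base = CocoDetection("coco/train2017",
                         "coco/annotations/instances_train2017.json")
    keep = [i for i, img_id in enumerate(base.ids)
            if any(ann["category_id"] in wanted
                   for ann in base.coco.loadAnns(base.coco.getAnnIds(img_id)))]
    subset = torch.utils.data.Subset(base, keep)   # only images with wanted classes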
0.75
t3_qzjvtq
1,637,582,444
pytorch
Custom pooling layer
I'm trying to implement a custom pooling layer in pytorch. How do I go about it? I don't wanna use maxpooling. I want to take the absolute difference between max and min values and output that to the next layer. Is there a tutorial of sorts for implementing custom pooling layers?
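The described layer can be composed from built-ins rather than written from scratch: min-pooling is just -max_pool2d(-x), so per-window |max - min| needs no custom kernel. A minimal sketch with illustrative kernel/stride values:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RangePool2d(nn.Module):
        def __init__(self, kernel_size=2, stride=2):
            super().__init__()
            self.k, self.s = kernel_size, stride

        def forward(self, x):
            mx = F.max_pool2d(x, self.k, self.s)      # per-window max
            mn = -F.max_pool2d(-x, self.k, self.s)    # per-window min
            return (mx - mn).abs()                    # |max - min| per window

    pool = RangePool2d()
    out = pool(torch.randn(1, 3, 8, 8))   # -> shape (1, 3, 4, 4)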
1
t3_qzfxa7
1,637,566,159
pytorch
Moving model to phone with multiple inputs
In Python you seem to be able to input label-keypair dictionaries. Moving to the phone, in Objective-C it says "expected tensor but got generaldict". Does anyone know the syntax?
0.8
t3_qy8irh
1,637,424,047
pytorch
If I want to download PyTorch's CUDA version, should I uninstall my PyTorch and install the CUDA version, or does it not matter?
One more thing: does it matter if I downloaded CUDA 11.5 but PyTorch uses 11.3?
1
t3_qy1fx7
1,637,397,619
pytorch
Using a model that was trained with DDP
I am working with a model that was trained using distributed data parallel. I am trying to use the model, but when I load the weights and try to actually feed input into the model, I get an error saying the default process group has not been initialized. I only have one GPU so I cannot use DDP, is there a workaround to this?
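A common workaround (hedged, assuming the checkpoint file holds a raw state_dict): DDP prefixes every parameter key with "module.", so strip that prefix and load the weights into the plain, unwrapped model instead of wrapping it in DDP at inference time. The path is illustrative:

    import torch

    state = torch.load("ddp_checkpoint.pt", map_location="cuda:0")  # path is illustrative
    state = {k.replace("module.", "", 1): v for k, v in state.items()}
    model.load_state_dict(state)   # model is the plain (non-DDP) network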
1
t3_qxv43n
1,637,373,581
pytorch
How can I use an animated mplfinance chart as an env?!
I have some Python files:

https://github.com/mablue/mplfinance/blob/anim/examples/mpf_animation_demo1.py

and

https://github.com/mablue/mplfinance/blob/anim/examples/mpf_animation_demo2.py

How can I use them as a PyTorch RL env?!
1
t3_qxjfk1
1,637,338,160
pytorch
How to Train State-Of-The-Art Models Using TorchVision’s Latest Primitives
nan
0.94
t3_qxcjzh
1,637,313,824