What is the fastest way to load data from multiple csv files
I am working with multiple csv files, each containing multiple 1D data. I have about 9000 such files and total combined data is about 40 GB. I have written a dataloader like this: class data_gen(torch.utils.data.Dataset): def __init__(self, files): self.files = files my_data = np.genfromtxt('/data/'+files, delimiter=',') self.dim = my_data.shape[1] self.data = [] def __getitem__(self, i): file1 = self.files my_data = np.genfromtxt('/data/'+file1, delimiter=',') self.dim = my_data.shape[1] for j in range(my_data.shape[1]): tmp = np.reshape(my_data[:,j],(1,my_data.shape[0])) tmp = torch.from_numpy(tmp).float() self.data.append(tmp) return self.data[i] def __len__(self): return self.dim The way I am loading the whole dataset into the dataloader is like through a for loop: for x_train in tqdm(train_files): train_dl_spec = data_gen(x_train) train_loader = torch.utils.data.DataLoader( train_dl_spec, batch_size=128, shuffle=True, num_workers=8, pin_memory=True) for data in train_loader: But this is working terribly slow. I was wondering if I could store all of that data in one file but I don’t have enough RAM. So is there a way around it? Let me know if there’s a way.
I've never used pytorch before, and I confess I don't really know what's going on. Nonetheless I'm almost certain you're using Dataset wrong. As I understand it, the Dataset is an abstraction of all the data where each index returns a sample. Say each of your 9000 files has 10 rows (samples), 21 would refer to the 3rd file and the 2nd row (using 0-indexing). Because you have so much data you don't want to load everything into memory. So the Dataset should manage just getting one value, and the DataLoader creates batches of the values. There's almost certainly some optimisation that can be applied to what I've done, but maybe this can start you off. I created the directory csvs with these files: ❯ cat csvs/1.csv 1,2,3 2,3,4 3,4,5 ❯ cat csvs/2.csv 21,21,21 34,34,34 66,77,88 Then I created this Dataset class. It takes a directory as input (where all the CSVs are stored). Then the only thing is stores in memory is the name of every file and the number of lines it has. When an item is requested we find out which file contains that index, and then return the Tensor for that line. By only ever iterating through files, we never store file contents in memory. An improvement here though would not to iterate over the list of files to find out which one is relevant, and to make use of generators and state when accessing consecutive indexes. (Because accessing when accessing index 8, in an 10 line file we iterate through the first 7 uselessly, which we can't help. But then when accessing index 9, it would be better to work out that we could just return the next one, rather than iterating through the first 8 lines again.) import numpy as np from functools import lru_cache from pathlib import Path from pprint import pprint from torch.utils.data import Dataset, DataLoader @lru_cache() def get_sample_count_by_file(path: Path) -> int: c = 0 with path.open() as f: for line in f: c += 1 return c class CSVDataset: def __init__(self, csv_directory: str, extension: str = ".csv"): self.directory = Path(csv_directory) self.files = sorted((f, get_sample_count_by_file(f)) for f in self.directory.iterdir() if f.suffix == extension) self._sample_count = sum(f[-1] for f in self.files) def __len__(self): return self._sample_count def __getitem__(self, idx): current_count = 0 for file_, sample_count in self.files: if current_count <= idx < current_count + sample_count: # stop when the index we want is in the range of the sample in this file break # now file_ will be the file we want current_count += sample_count # now file_ has sample_count samples file_idx = idx - current_count # the index we want to access in file_ with file_.open() as f: for i, line in enumerate(f): if i == file_idx: data = np.array([float(v) for v in line.split(",")]) return torch.from_numpy(data) Now we can use the DataLoader as I believe is intended: dataset = CSVDataset("csvs") loader = DataLoader(dataset, batch_size=4) pprint(list(enumerate(loader))) """ [(0, tensor([[ 1., 2., 3.], [ 2., 3., 4.], [ 3., 4., 5.], [21., 21., 21.]], dtype=torch.float64)), (1, tensor([[34., 34., 34.], [66., 77., 88.]], dtype=torch.float64))] """ You can see this correctly returns batches of data. Rather than printing this out you can process each batch and only store that batch in memory. See the docs for further information: https://pytorch.org/tutorials/recipes/recipes/custom_dataset_transforms_loader.html#part-3-the-dataloader
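As a rough sketch of the improvement mentioned above (it assumes the CSVDataset class from this answer is already defined, plus an import torch, which the snippet above relies on), you could precompute cumulative sample counts once and locate the right file with a binary search instead of scanning the file list on every access:

import bisect
import numpy as np
import torch

class CSVDatasetFast(CSVDataset):
    def __init__(self, csv_directory: str, extension: str = ".csv"):
        super().__init__(csv_directory, extension)
        # cumulative end-index of each file, e.g. [3, 6] for two 3-line files
        self._offsets = []
        total = 0
        for _, sample_count in self.files:
            total += sample_count
            self._offsets.append(total)

    def __getitem__(self, idx):
        # binary search for the file that contains this global index
        file_pos = bisect.bisect_right(self._offsets, idx)
        file_, _ = self.files[file_pos]
        start = self._offsets[file_pos - 1] if file_pos > 0 else 0
        file_idx = idx - start
        with file_.open() as f:
            for i, line in enumerate(f):
                if i == file_idx:
                    data = np.array([float(v) for v in line.split(",")])
                    return torch.from_numpy(data)

The line-by-line scan inside a single file remains; caching per-file line offsets would be the next step if this is still too slow.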
https://stackoverflow.com/questions/67941749/
Adding a feature to a neuron
Say I have the following model: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = self.fc1(x) print(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() opt = optim.Adam(net.parameters()) features = torch.rand((3,1)) #3 inputs, each of 1D I can print the value of my neurons (only the first layer here) with print(x): net(features) >>>tensor([[ 0.6703, 0.4484, -0.8529, 1.3119, 0.6741], [ 0.9112, 0.6496, -1.2960, 1.8264, 0.4547], [ 0.7483, 0.5135, -0.9963, 1.4785, 0.6031]], grad_fn=<AddmmBackward>) tensor([[0.0144], [0.0575], [0.0284]], grad_fn=<AddmmBackward>) How can I add a "feature" to each neuron that is a string with a name? e.g. print(x) >>> tensor([[ [0.6703, 'neuron_1'], [0.4484, 'neuron_2'], [-0.8529, 'neuron_3'], 1.3119, 0.6741],... etc. I'm not sure if I'll need to change the neuron class. I believe in the forward method I will then need to only take the first element of each neurons tensor: neuron_tensor = [neuron_value, neuron_name] Update 1: from @Aditya Singh Rathore comment it sounds like it might not be possible to have a string and a value in the same tensor. Is it possible then to have a value instead of a string to represent the neurons? From before neuron_tensor = [neuron_value, neuron_name] where neuron_name is a string. Is this possible instead? : neuron_tensor = [neuron_value, neuron_name] where neuron_name is just a value (e.g 1 for neuron 1, 2 for neuron 2)
What you want is possible, although I'm not sure what you exactly want to do with this. Basically you seem to want to include the "index" in the intermediate tensor for some purpose, and again discard it when passing to the next layer. class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = self.fc1(x) idx_tensor = (torch.arange(x.shape[1]) + 1).unsqueeze(0).repeat_interleave(repeats=x.shape[0], dim=0) x = torch.cat([x.unsqueeze(2), idx_tensor.unsqueeze(2)], dim=2) print(x) x = torch.relu(x[:, :, 0]) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x Running this gives: net = Model() opt = optim.Adam(net.parameters()) features = torch.rand((3,1)) net(features) >>>tensor([[[ 0.0817, 1.0000], [ 0.8084, 2.0000], [ 1.6118, 3.0000], [ 0.8658, 4.0000], [-0.1583, 5.0000]], [[ 0.2881, 1.0000], [ 0.6946, 2.0000], [ 1.3760, 3.0000], [ 0.6098, 4.0000], [-0.1240, 5.0000]], [[ 0.1919, 1.0000], [ 0.7476, 2.0000], [ 1.4859, 3.0000], [ 0.7291, 4.0000], [-0.1400, 5.0000]]], grad_fn=<CatBackward>) tensor([[-0.2841], [-0.2191], [-0.2495]], grad_fn=<AddmmBackward>) Note that a typical tensor cannot be both integer and float at the same time, so the 1, 2, 3 will be stored as float 1.000.., 2.000... etc. I suggest if your purpose is something like a fancy printing, then maybe look into torch's hook functions? For example, you can do something like: import pandas as pd def fc_hook_fn(module, input, output): print("\n" + "#" * 60) print(f"In layer {module}") print("#" * 60 + "\n") cols = [f"Neuron-{i + 1}" for i in range(output.shape[1])] idx = [f"Input-{i + 1}" for i in range(output.shape[0])] neuron_activations = pd.DataFrame(output.detach().numpy(), columns=cols, index=idx) print(neuron_activations) net.fc.register_forward_hook(fc_hook_fn) Now each time something passes through fc1, the function above will be triggered. You don't need to put your print(x) in the forward method. ############################################################ In layer Linear(in_features=1, out_features=5, bias=True) ############################################################ Neuron-1 Neuron-2 Neuron-3 Neuron-4 Neuron-5 Input-1 -0.948735 -0.901034 -0.290353 -0.082616 -0.405337 Input-2 -0.725904 -0.801648 -0.302922 -0.045514 -0.580485 Input-3 -0.829738 -0.847960 -0.297065 -0.062802 -0.498870
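As a small usage sketch for the hook approach (assuming the Model, net, features and fc_hook_fn defined above): the layers in this model are named fc1, fc2 and fc3, so the hook can be registered on each of them and fires once per layer during the forward pass:

# register the printing hook on every linear layer of the model
for name, layer in net.named_children():
    layer.register_forward_hook(fc_hook_fn)

net(features)  # fc_hook_fn now prints the activations of fc1, fc2 and fc3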
https://stackoverflow.com/questions/67941867/
PyTorch - Tensors multiplication along new dimension
Sorry if already asked, but I can't find the words to look for on Google. Let's say I have a tensor t1 of size [a,b] and a tensor t2 of size [c]. How can I output a tensor t3 of size [a,b,c], so that: t3[0, :, 0] = t1[0, :] * t2[0] t3[0, :, 1] = t1[0, :] * t2[1] t3[0, :, 2] = t1[0, :] * t2[2] ... t3[1, :, 0] = t1[1, :] * t2[0] t3[1, :, 1] = t1[1, :] * t2[1] t3[1, :, 2] = t1[1, :] * t2[2] and so on, without a for loop ? Thanks
Using torch.kron gives a tensor of shape a x b*c; torch.Tensor.reshape then maps it to a tensor of shape a x b x c: t3 = torch.kron(t1, t2).reshape(a, b, c)
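An equivalent way to get the same result, shown here only as a sketch for comparison, is plain broadcasting: insert a trailing dimension on t1 so that shapes (a, b, 1) and (c,) broadcast to (a, b, c):

import torch

a, b, c = 2, 3, 4
t1 = torch.randn(a, b)
t2 = torch.randn(c)

t3_kron = torch.kron(t1, t2).reshape(a, b, c)   # kron-based solution from the answer
t3_bcast = t1.unsqueeze(-1) * t2                # broadcasting: t3[i, j, k] = t1[i, j] * t2[k]

print(torch.allclose(t3_kron, t3_bcast))  # True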
https://stackoverflow.com/questions/67942180/
Pytorch: Is it possible to make a convolution module without bias have a bias again?
After instantiating a 2D convolution with conv = nn.Conv2d(8, 8, 3, bias=False), whose member bias should be None, is it possible to give conv a valid bias again (whether randomly initialized or with determined values)? I observed that bias in other default convolution modules is of type Parameter, so I suspect there are extra steps beyond simply conv.bias = torch.tensor(...) to make the new bias valid for conv.
Yes, it is possible to set the bias of the conv layer after instantiating it. You can use the nn.Parameter class to create a bias parameter and assign it to the conv object's bias attribute. To show this, I have created a simple Conv2d layer and assigned zeros to the weights and ones to the bias. conv = nn.Conv2d(1, 1, 1, bias=False) conv.weight.data = torch.zeros_like(conv.weight) conv.bias = nn.Parameter(torch.ones((1,))) inputs = torch.randn(1, 1, 1, 1) print(conv(inputs)) # tensor([[[[1.]]]], grad_fn=<ThnnConv2DBackward>)
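A quick check (a sketch, not part of the original answer) that the assigned bias really is registered as a parameter and receives gradients:

import torch
import torch.nn as nn

conv = nn.Conv2d(8, 8, 3, bias=False)
conv.bias = nn.Parameter(torch.zeros(8))  # one bias value per output channel

print([name for name, _ in conv.named_parameters()])  # ['weight', 'bias']
conv(torch.randn(1, 8, 16, 16)).sum().backward()
print(conv.bias.grad.shape)  # torch.Size([8])

Any optimizer created after this assignment (e.g. optim.SGD(conv.parameters(), ...)) will therefore update the new bias as well.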
https://stackoverflow.com/questions/67948404/
Loading or building the CUDA engine crashes occasionally after upgrading to TensorRT 7
I'm trying to run TensorRT inference in C++. Sometimes the code crashes when trying to build a new engine or load the engine from the file. It happens occasionally (sometimes it runs without any problem). I follow the below steps to prepare network: initLibNvInferPlugins(&gLogger.getTRTLogger(), ""); if (mParams.loadEngine.size() > 0) { std::vector<char> trtModelStream; size_t size{0}; std::ifstream file(mParams.loadEngine, std::ios::binary); if (file.good()) { file.seekg(0, file.end); size = file.tellg(); file.seekg(0, file.beg); trtModelStream.resize(size); file.read(trtModelStream.data(), size); file.close(); } IRuntime* infer_Runtime = nvinfer1::createInferRuntime(gLogger); if (mParams.dlaCore >= 0) { infer_Runtime->setDLACore(mParams.dlaCore); } mEngine = std::shared_ptr<nvinfer1::ICudaEngine>( infer_Runtime->deserializeCudaEngine(trtModelStream.data(), size, nullptr), samplesCommon::InferDeleter()); gLogInfo << "TRT Engine loaded from: " << mParams.loadEngine << endl; infer_Runtime->destroy(); if (!mEngine) { return false; } else { return true; } } auto builder = SampleUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger.getTRTLogger())); const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH); auto network = SampleUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch)); auto config = SampleUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig()); auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger.getTRTLogger())); mEngine = nullptr; parser->parseFromFile( locateFile(mParams.onnxFileName, mParams.dataDirs).c_str(), static_cast<int>(gLogger.getReportableSeverity())); // Calibrator life time needs to last until after the engine is built. std::unique_ptr<IInt8Calibrator> calibrator; config->setAvgTimingIterations(1); config->setMinTimingIterations(1); config->setMaxWorkspaceSize(4_GiB); builder->setMaxBatchSize(mParams.batchSize); mEngine = std::shared_ptr<nvinfer1::ICudaEngine>( builder->buildEngineWithConfig(*network, *config), samplesCommon::InferDeleter()); The error occurs here: [05/12/2021-16:46:42] [I] [TRT] Detected 1 inputs and 1 output network tensors. 16:46:42: The program has unexpectedly finished. This line crashes when loading existing engine: mEngine = std::shared_ptr<nvinfer1::ICudaEngine( infer_Runtime->deserializeCudaEngine(trtModelStream.data(), size, nullptr), samplesCommon::InferDeleter()); Or when building the engine: mEngine = std::shared_ptr<nvinfer1::ICudaEngine>( builder->buildEngineWithConfig(*network, *config), samplesCommon::InferDeleter()); More info: TensorRT 7.2.3 Ubuntu 18.04 cuDNN 8.1.1 CUDA 11.1 update1 ONNX 1.6.0 Pytorch 1.5.0
Finally got it! I rewrote the CMakeLists.txt, added all required libs and paths, and removed duplicate ones. The problem might have been a library conflict in cuBLAS.
https://stackoverflow.com/questions/67948757/
Force BERT transformer to use CUDA
I want to force the Huggingface transformer (BERT) to make use of CUDA. nvidia-smi showed that all my CPU cores were maxed out during the code execution, but my GPU was at 0% utilization. Unfortunately, I'm new to the Hugginface library as well as PyTorch and don't know where to place the CUDA attributes device = cuda:0 or .to(cuda:0). The code below is basically a customized part from german sentiment BERT working example class SentimentModel_t(pt.nn.Module): def __init__(self, model_name: str = "oliverguhr/german-sentiment-bert"): DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu" print(DEVICE) super(SentimentModel_t,self).__init__() self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE) self.tokenizer = BertTokenizerFast.from_pretrained(model_name) def predict_sentiment(self, texts: List[str])-> List[str]: texts = [self.clean_text(text) for text in texts] # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model. input_ids = self.tokenizer.batch_encode_plus(texts,padding=True, add_special_tokens=True, truncation=True, max_length=self.tokenizer.max_len_single_sentence) input_ids = pt.tensor(input_ids["input_ids"]) with pt.no_grad(): logits = self.model(input_ids) label_ids = pt.argmax(logits[0], axis=1) labels = [self.model.config.id2label[label_id] for label_id in label_ids.tolist()] return labels EDIT: After applying the suggestions of @KonstantinosKokos (see edited code above) I got a RuntimeError: Input, output and indices must be on the current device pointing to with pt.no_grad(): logits = self.model(input_ids) The full error code can be obtained down below: <ipython-input-15-b843edd87a1a> in predict_sentiment(self, texts) 23 24 with pt.no_grad(): ---> 25 logits = self.model(input_ids) 26 27 label_ids = pt.argmax(logits[0], axis=1) ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1364 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 1365 -> 1366 outputs = self.bert( 1367 input_ids, 1368 attention_mask=attention_mask, ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict) 859 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) 860 --> 861 embedding_output = self.embeddings( 862 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 863 ) ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in 
_call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 196 197 if inputs_embeds is None: --> 198 inputs_embeds = self.word_embeddings(input_ids) 199 token_type_embeddings = self.token_type_embeddings(token_type_ids) 200 ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/PycharmProjects/Test_project/venv/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1850 # remove once script supports set_grad_enabled 1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1853 1854
You can make the entire class inherit torch.nn.Module like so: class SentimentModel_t(torch.nn.Module): def __init__(self, ...): super(SentimentModel_t, self).__init__() ... Upon initializing your model you can then call .to(device) to cast it to the device of your choice, like so: sentiment_model = SentimentModel_t(...) sentiment_model.to('cuda') The .to() call recursively applies to all submodules of the class, model being one of them (Hugging Face models inherit torch.nn.Module, thus providing an implementation for to()). Note that this makes choosing the device in __init__() redundant: it's now an external context that you can switch to/from easily. Alternatively, you can hardcode the device by casting the contained BERT model directly to cuda (less elegant): class SentimentModel_t(): def __init__(self, ...): DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu" print(DEVICE) self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)
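The follow-up RuntimeError in the question ("Input, output and indices must be on the current device") comes from the input tensor still living on the CPU while the model is on the GPU. A minimal sketch of the fix, reusing the predict_sentiment method from the question and moving the inputs to whatever device the model lives on:

def predict_sentiment(self, texts):
    texts = [self.clean_text(text) for text in texts]
    encoded = self.tokenizer.batch_encode_plus(
        texts, padding=True, add_special_tokens=True, truncation=True,
        max_length=self.tokenizer.max_len_single_sentence)
    # move the inputs to the same device as the model's parameters
    device = next(self.model.parameters()).device
    input_ids = pt.tensor(encoded["input_ids"]).to(device)
    with pt.no_grad():
        logits = self.model(input_ids)
    label_ids = pt.argmax(logits[0], axis=1)
    return [self.model.config.id2label[label_id] for label_id in label_ids.tolist()]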
https://stackoverflow.com/questions/67948945/
Pytorch Data Generator for extracting 2D images from many 3D cubes
I'm struggling in creating a data generator in PyTorch to extract 2D images from many 3D cubes saved in .dat format There is a total of 200 3D cubes each having a 128*128*128 shape. Now I want to extract 2D images from all of these cubes along length and breadth. For example, a is a cube having size 128*128*128 So I want to extract all 2D images along length i.e., [:, i, :] which will get me 128 2D images along the length, and similarly i want to extract along width i.e., [:, :, i], which will give me 128 2D images along the width. So therefore i get a total of 256 2D images from 1 3D cube, and i want to repeat this whole process for all 200 cubes, there by giving me 51200 2D images. So far I've tried a very basic implementation which is working fine but is taking approximately 10 minutes to run. I want you guys to help me create a more optimal implementation keeping in mind time and space complexity. Right now my current approach has a time complexity of O(n2), can we dec it further to reduce the time complexity I'm providing below the current implementation from os.path import join as pjoin import torch import numpy as np import os from tqdm import tqdm from torch.utils import data class DataGenerator(data.Dataset): def __init__(self, is_transform=True, augmentations=None): self.is_transform = is_transform self.augmentations = augmentations self.dim = (128, 128, 128) seismicSections = [] #Input faultSections = [] #Ground Truth for fileName in tqdm(os.listdir(pjoin('train', 'seis')), total = len(os.listdir(pjoin('train', 'seis')))): unrolledVolSeismic = np.fromfile(pjoin('train', 'seis', fileName), dtype = np.single) #dat file contains unrolled cube, we need to reshape it reshapedVolSeismic = np.transpose(unrolledVolSeismic.reshape(self.dim)) #need to transpose the axis to get height axis at axis = 0, while length (axis = 1), and width(axis = 2) unrolledVolFault = np.fromfile(pjoin('train', 'fault', fileName),dtype=np.single) reshapedVolFault = np.transpose(unrolledVolFault.reshape(self.dim)) for idx in range(reshapedVolSeismic.shape[2]): seismicSections.append(reshapedVolSeismic[:, :, idx]) faultSections.append(reshapedVolFault[:, :, idx]) for idx in range(reshapedVolSeismic.shape[1]): seismicSections.append(reshapedVolSeismic[:, idx, :]) faultSections.append(reshapedVolFault[:, idx, :]) self.seismicSections = seismicSections self.faultSections = faultSections def __len__(self): return len(self.seismicSections) def __getitem__(self, index): X = self.seismicSections[index] Y = self.faultSections[index] return X, Y Please Help!!!
Why not store only the 3D data in memory and let the __getitem__ method "slice" it on the fly? class CachedVolumeDataset(Dataset): def __init__(self, ...): super(...) self._volumes_x = # a list of 200 128x128x128 volumes self._volumes_y = # a list of 200 128x128x128 volumes def __len__(self): return len(self._volumes_x) * (128 + 128) def __getitem__(self, index): # extract volume index from general index: vidx = index // (128 + 128) # extract slice index sidx = index % (128 + 128) if sidx < 128: # first dim x = self._volumes_x[vidx][:, :, sidx] y = self._volumes_y[vidx][:, :, sidx] else: sidx -= 128 # second dim x = self._volumes_x[vidx][:, sidx, :] y = self._volumes_y[vidx][:, sidx, :] return torch.squeeze(x), torch.squeeze(y)
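A sketch of how the dataset could be filled and consumed, reusing the .dat loading from the question (the train/seis and train/fault directory layout and the constructor signature are assumptions):

import os
from os.path import join as pjoin
import numpy as np
import torch
from torch.utils.data import DataLoader

def load_volume(path, dim=(128, 128, 128)):
    # same unroll / reshape / transpose as in the question
    return torch.from_numpy(np.transpose(np.fromfile(path, dtype=np.single).reshape(dim)))

volumes_x, volumes_y = [], []
for file_name in os.listdir(pjoin('train', 'seis')):
    volumes_x.append(load_volume(pjoin('train', 'seis', file_name)))
    volumes_y.append(load_volume(pjoin('train', 'fault', file_name)))

dataset = CachedVolumeDataset(volumes_x, volumes_y)  # assuming __init__ stores the two lists
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)

200 volumes of 128³ float32 values come to roughly 1.7 GB per list, so keeping both in RAM is usually feasible; only the 2D slices are materialized per batch.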
https://stackoverflow.com/questions/67956318/
How to perform bicubic upsampling of an image using PyTorch?
I have a png image. I want to upsample it using bicubic interpolation. I found this function in pytorch: nn.functional.upsample(mode="bicubic") https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html But how do I apply it to my png image? Should I turn my image into some torch tensor? I just haven't found any example of using this function on a png image.
You can do this: import torch import torchvision.transforms as transforms from PIL import Image t = transforms.ToTensor() img = Image.open("Table.png") b = torch.nn.functional.upsample(t(img).unsqueeze(0),(500,400),mode = "bicubic") You can also apply bicubic resampling directly with PIL: img = Image.open("Table.png") re = img.resize((400, 400),Image.BICUBIC)
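A fuller sketch (the file names are placeholders) that uses torch.nn.functional.interpolate instead of the deprecated upsample and writes the result back to a PNG:

import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

img = Image.open("Table.png").convert("RGB")
x = transforms.ToTensor()(img).unsqueeze(0)           # shape (1, 3, H, W), values in [0, 1]
up = F.interpolate(x, size=(500, 400), mode="bicubic", align_corners=False)
up = up.clamp(0, 1)                                   # bicubic can overshoot the [0, 1] range
transforms.ToPILImage()(up.squeeze(0)).save("Table_upsampled.png")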
https://stackoverflow.com/questions/67957965/
Save Pytorch model state_dict with datetime in filename
I've been trying to save the state_dict of a Pytorch model with torch.save(agent.qnetwork_local.state_dict(), filename) where filename = datetime.now().strftime('%d-%m-%y-%H:%M_dqnweights.pth') type(filename) returns str, which shouldn't be a problem with torch.save(), and it should output a non-empty file. Instead I get an empty file whose name contains just the date and time and nothing after that. Putting the date and time in the middle of the filename results in an empty file with everything after the date and time cut off. torch.save(agent.qnetwork_local.state_dict(), 'checkpoint1.pth') and any other hardcoded string works and gives me the expected non-empty file. What is going on and how do I fix this? I am running this code in a Python v3.6.8 virtualenv with Pytorch v1.8.1+cpu on Windows 10.
The colon was the problem in filename = datetime.now().strftime('%d-%m-%y-%H:%M_dqnweights.pth'), since the code was running on Windows and colons are not allowed in Windows filenames. Changing it to filename = datetime.now().strftime('%d-%m-%y-%H_%M_dqnweights.pth') works as expected.
https://stackoverflow.com/questions/67958129/
How do I check if a tokenizer/model is already saved
I am using HuggingFace Transformers with PyTorch. My modus operandi is to download a pre-trained model and save it in a local project folder. While doing so, I can see that the .bin file, which holds the model weights, is saved locally. However, I am also downloading and saving a tokenizer, for which I cannot see any associated file. So, how do I check whether a tokenizer is saved locally before downloading? Secondly, apart from the usual os.path.isfile(...) check, is there a better way to prioritize using a local copy from a given location before downloading?
I've used this code in the past for this purpose. You can adapt it to your setting. import os import urllib.request from tokenizers import BertWordPieceTokenizer from transformers import AutoTokenizer def download_vocab_files_for_tokenizer(tokenizer, model_type, output_path, vocab_exist_bool=False): vocab_files_map = tokenizer.pretrained_vocab_files_map vocab_files = {} for resource in vocab_files_map.keys(): download_location = vocab_files_map[resource][model_type] f_path = os.path.join(output_path, os.path.basename(download_location)) if vocab_exist_bool != True: urllib.request.urlretrieve(download_location, f_path) vocab_files[resource] = f_path return vocab_files model_type = 'bert-base-uncased' #initialize tokenizer tokenizer = AutoTokenizer.from_pretrained(model_type) #will do this part later #retrieve vocab file if it's not there output_path = os.getcwd()+'/vocab_files/' vocab_file_name = 'bert-base-uncased-vocab.txt' vocab_exist_bool = os.path.exists(output_path + vocab_file_name) #get vocab files vocab_files = download_vocab_files_for_tokenizer(tokenizer, model_type, output_path, vocab_exist_bool=vocab_exist_bool)
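A simpler variant that may be enough in many setups (a sketch; the directory name is an assumption): save the tokenizer once with save_pretrained and load it with local_files_only before falling back to a download:

import os
from transformers import AutoTokenizer

LOCAL_DIR = "./local_model/tokenizer"   # hypothetical project folder
MODEL_NAME = "bert-base-uncased"

if os.path.isdir(LOCAL_DIR):
    # never touches the network if the files are already there
    tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR, local_files_only=True)
else:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    os.makedirs(LOCAL_DIR, exist_ok=True)
    tokenizer.save_pretrained(LOCAL_DIR)   # writes vocab/config/tokenizer files locally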
https://stackoverflow.com/questions/67958756/
How to calculate the f1-score?
I have a pyTorch-code to train a model that should be able to detect placeholder-images among product-images. I didn't write the code by myself as I am very unexperienced with CNNs and Machine Learning. My boss told me to calculate the f1-score for that model and i found out that the formula for that is ((precision * recall)/(precision + recall)) but I don't know how I get precision and recall. Is someone able to tell me how I can get those two parameters from that following code? (Sorry for the long piece of code, but I didn't really know what is necessary and what isn't) from __future__ import print_function from __future__ import division import torch import torch.nn as nn import torch.optim as optim import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy print("PyTorch Version: ",torch.__version__) print("Torchvision Version: ",torchvision.__version__) data_dir = "data" # Models to choose from [resnet, alexnet, vgg, squeezenet, densenet, inception] model_name = "resnet" # Number of classes in the dataset [we have four classes A-Balik-Duz-Princess] num_classes = 2 # Batch size for training (change depending on how much memory you have) batch_size = 25 # Number of epochs to train for (This will need to be calculated in order to address under and over fitting issue) num_epochs = 20 # Flag for feature extracting. When False, we fine tune the whole model, # when True we only update the reshaped layer params feature_extract = True def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False): since = time.time() print("model is : ",model) val_acc_history = [] val_loss_history = [] train_acc_history = [] train_loss_history = [] best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients (This can be changed to the Adam and other optimizers) optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): # Get model outputs and calculate loss # Special case for inception because in training it has an auxiliary output. In train # mode we calculate the loss by summing the final output and the auxiliary output # but in testing we only consider the final output. 
if is_inception and phase == 'train': # From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958 outputs, aux_outputs = model(inputs) loss1 = criterion(outputs, labels) loss2 = criterion(aux_outputs, labels) loss = loss1 + 0.4*loss2 else: outputs = model(inputs) loss = criterion(outputs, labels) _, preds = torch.max(outputs, 1) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / len(dataloaders[phase].dataset) epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset) print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) if phase == 'val': val_acc_history.append(epoch_acc) val_loss_history.append(epoch_loss) if phase == 'train': train_acc_history.append(epoch_acc) train_loss_history.append(epoch_loss) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model, val_acc_history, train_acc_history,val_loss_history,train_loss_history def set_parameter_requires_grad(model, feature_extracting): if feature_extracting: for param in model.parameters(): param.requires_grad = False ############################################### ### Initialize and Reshape the Networks ############################################### def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True): # Initialize these variables which will be set in this if statement. Each of these # variables is model specific. 
model_ft = None input_size = 0 if model_name == "resnet": """ Resnet18 """ model_ft = models.resnet152(pretrained=use_pretrained) #we can select any possible variation of ResNet such as Resnet18, Resnet34, Resnet50, Resnet101, and Resnet152 set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, num_classes) input_size = 224 elif model_name == "alexnet": """ Alexnet """ model_ft = models.alexnet(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.classifier[6].in_features model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes) input_size = 224 elif model_name == "vgg": """ VGG11_bn """ model_ft = models.vgg11_bn(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.classifier[6].in_features model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes) input_size = 224 elif model_name == "squeezenet": """ Squeezenet """ model_ft = models.squeezenet1_0(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) model_ft.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1)) model_ft.num_classes = num_classes input_size = 224 elif model_name == "densenet": """ Densenet """ model_ft = models.densenet121(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.classifier.in_features model_ft.classifier = nn.Linear(num_ftrs, num_classes) input_size = 224 elif model_name == "inception": """ Inception v3 Be careful, expects (299,299) sized images and has auxiliary output """ model_ft = models.inception_v3(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) # Handle the auxilary net num_ftrs = model_ft.AuxLogits.fc.in_features model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes) # Handle the primary net num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs,num_classes) input_size = 299 else: print("Invalid model name, exiting...") exit() return model_ft, input_size # Initialize the model for this run model_ft, input_size = initialize_model(model_name, num_classes, feature_extract, use_pretrained=True) # Print the model we just instantiated #print(model_ft) ######################## ### LOAD DATA ######################## # Data augmentation and normalization for training # there are multiple approaches for data augmentation which can be added in the future # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(input_size), #transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } print("Initializing Datasets and Dataloaders...") # Create training and validation datasets image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} # Create training and validation dataloaders dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True, num_workers=4) for x in ['train', 'val']} # Detect if we have a GPU available device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ############################# ### Create the Optimizer ############################# # Send the model to GPU 
model_ft = model_ft.to(device) # Gather the parameters to be optimized/updated in this run. If we are # fine tuning we will be updating all parameters. However, if we are # doing feature extract method, we will only update the parameters # that we have just initialized, i.e. the parameters with requires_grad # is True. params_to_update = model_ft.parameters() print("Params to learn:") if feature_extract: params_to_update = [] for name,param in model_ft.named_parameters(): if param.requires_grad == True: params_to_update.append(param) print("\t",name) else: for name,param in model_ft.named_parameters(): if param.requires_grad == True: print("\t",name) # Observe that all parameters are being optimized we can add leaky ReLU and much more optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9) ########################### ### Run Training and Validation Step ########################### %time # Setup the loss fxn criterion = nn.CrossEntropyLoss() # Train and evaluate model_ft, hist, loss_t,vloss_acc, tloss_acc = train_model(model_ft, dataloaders_dict, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=(model_name=="inception"))
... # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) # Add these lines to obtain the f1-score of the current batch (put the import at the top of the file, and don't reuse the function name as a variable, otherwise the second call will fail) from sklearn.metrics import f1_score batch_f1 = f1_score(labels.cpu().numpy(), preds.cpu().numpy()) ...
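Since precision, recall and F1 are more meaningful over the whole validation set than per batch, a sketch of accumulating predictions after training (variable names follow the script above) could look like this:

from sklearn.metrics import precision_score, recall_score, f1_score

model_ft.eval()
all_preds, all_labels = [], []
with torch.no_grad():
    for inputs, labels in dataloaders_dict['val']:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model_ft(inputs)
        _, preds = torch.max(outputs, 1)
        all_preds.extend(preds.cpu().tolist())
        all_labels.extend(labels.cpu().tolist())

precision = precision_score(all_labels, all_preds)
recall = recall_score(all_labels, all_preds)
print("F1:", f1_score(all_labels, all_preds))  # equals 2 * precision * recall / (precision + recall)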
https://stackoverflow.com/questions/67959327/
How can I get torch-geometric to work using Nix?
I am trying to get the Python package torch-geometric to work using Nix (I am on NixOS). Currently, I use mach-nix to try and setup a Python environment. However, the difficulty is that some of the dependencies should be downloaded from a separate file server (not pypi), i.e. https://pytorch-geometric.com/whl/torch-1.8.0+cpu.html. I am first trying to setup an environment containing a single torch-geometric dependency: torch-sparse. Currently I have the following shell.nix: { pkgs ? import <nixpkgs> {} }: let mach-nix = import (builtins.fetchGit { url = "https://github.com/DavHau/mach-nix/"; ref = "refs/tags/3.3.0"; }) { python = "python38"; }; sparse = mach-nix.buildPythonPackage { pname = "torch_sparse"; version = "0.6.9"; requirements = '' torch scipy pytest pytest-cov pytest-runner ''; src = builtins.fetchGit { url = "https://github.com/rusty1s/pytorch_sparse"; ref = "refs/tags/0.6.9"; }; }; in mach-nix.mkPython { requirements = "torch-sparse"; packagesExtra = [ sparse ]; } Which, upon running nix-shell, fails with the following error message: running build_ext error: [Errno 2] No such file or directory: 'which' builder for '/nix/store/fs9nrrd2a233xp5d6njy6639yjbxp4g0-python3.8-torch_sparse-0.6.9.drv' failed with exit code 1 I tried adding the which package to either checkInputs and buildInputs, but that does not solve the problem. Evidently, I try to build the package directly from its GitHub repo, as I am unsure on how to reference a wheel package in mach-nix. I am relatively new to the NixOS environment, and, quite frankly, I am completely lost. How should I go about installing a Python package such as torch-sparse or torch-geometric? Am I even using the correct tools?
I have managed to come up with a working Nix expression. I will leave the answer here for future reference. Running the following expression using nix-shell will create a shell with torch-1.8.0 and torch-geometric-1.7.0 and their required dependencies. { pkgs ? import <nixpkgs> { } }: let python = pkgs.python38; pytorch-180 = let pyVerNoDot = builtins.replaceStrings [ "." ] [ "" ] python.pythonVersion; unsupported = throw "Unsupported system"; version = "1.8.0"; in python.pkgs.buildPythonPackage { inherit version; pname = "pytorch"; format = "wheel"; src = pkgs.fetchurl { name = "torch-${version}-cp38-cp38-linux_x86_64.whl"; url = "https://download.pytorch.org/whl/cu111/torch-${version}%2Bcu111-cp38-cp38-linux_x86_64.whl"; hash = "sha256-4NYiAkYfGXm3orLT8Y5diepRMAg+WzJelncy2zJp+Ho="; }; nativeBuildInputs = with pkgs; [ addOpenGLRunpath patchelf ]; propagatedBuildInputs = with python.pkgs; [ future numpy pyyaml requests typing-extensions ]; postInstall = '' # ONNX conversion rm -rf $out/bin ''; postFixup = let rpath = pkgs.lib.makeLibraryPath [ pkgs.stdenv.cc.cc.lib ]; in '' find $out/${python.sitePackages}/torch/lib -type f \( -name '*.so' -or -name '*.so.*' \) | while read lib; do echo "setting rpath for $lib..." patchelf --set-rpath "${rpath}:$out/${python.sitePackages}/torch/lib" "$lib" addOpenGLRunpath "$lib" done ''; pythonImportsCheck = [ "torch" ]; meta = with pkgs.lib; { description = "Open source, prototype-to-production deep learning platform"; homepage = "https://pytorch.org/"; changelog = "https://github.com/pytorch/pytorch/releases/tag/v${version}"; license = licenses.unfree; # Includes CUDA and Intel MKL. platforms = platforms.linux; maintainers = with maintainers; [ danieldk ]; }; }; sparse = with python.pkgs; buildPythonPackage rec { pname = "torch_sparse"; version = "0.6.9"; src = pkgs.fetchurl { name = "${pname}-${version}-cp38-cp38-linux_x86_64.whl"; url = "https://pytorch-geometric.com/whl/torch-1.8.0+cpu/${pname}-${version}-cp38-cp38-linux_x86_64.whl"; hash = "sha256-6dmZNQ0FlwKdfESKhvv8PPwzgsJFWlP8tYXWu2JLiMk="; }; format = "wheel"; propagatedBuildInputs = [ pytorch-180 scipy ]; # buildInputs = [ pybind11 ]; # nativeBuildInputs = [ pytest-runner pkgs.which ]; doCheck = false; postInstall = '' rm -rf $out/${python.sitePackages}/test ''; }; scatter = with python.pkgs; buildPythonPackage rec { pname = "torch_scatter"; version = "2.0.7"; src = pkgs.fetchurl { name = "${pname}-${version}-cp38-cp38-linux_x86_64.whl"; url = "https://pytorch-geometric.com/whl/torch-1.8.0+cpu/${pname}-${version}-cp38-cp38-linux_x86_64.whl"; hash = "sha256-MRoFretgyEpq+7aJZc0399Kd+f28Uhn5+CxW5ZIKwcg="; }; format = "wheel"; propagatedBuildInputs = [ pytorch-180 ]; doCheck = false; postInstall = '' rm -rf $out/${python.sitePackages}/test ''; }; cluster = with python.pkgs; buildPythonPackage rec { pname = "torch_cluster"; version = "1.5.9"; src = pkgs.fetchurl { name = "${pname}-${version}-cp38-cp38-linux_x86_64.whl"; url = "https://pytorch-geometric.com/whl/torch-1.8.0+cpu/${pname}-${version}-cp38-cp38-linux_x86_64.whl"; hash = "sha256-E2nywtiZ7m7VA1J7AY7gAHYvyN9H3zl/W0/WsZLzwF8="; }; format = "wheel"; propagatedBuildInputs = [ pytorch-180 ]; doCheck = false; postInstall = '' rm -rf $out/${python.sitePackages}/test ''; }; spline = with python.pkgs; buildPythonPackage rec { pname = "torch_spline_conv"; version = "1.2.1"; src = pkgs.fetchurl { name = "${pname}-${version}-cp38-cp38-linux_x86_64.whl"; url = 
"https://pytorch-geometric.com/whl/torch-1.8.0+cpu/${pname}-${version}-cp38-cp38-linux_x86_64.whl"; hash = "sha256-ghSzoxoqSccPAZzfcHJEPYySQ/KYqQ90mFsOdt1CjUw="; }; format = "wheel"; propagatedBuildInputs = [ pytorch-180 ]; doCheck = false; postInstall = '' rm -rf $out/${python.sitePackages}/test ''; }; python-louvain = with python.pkgs; buildPythonPackage rec { pname = "python-louvain"; version = "0.15"; src = fetchPypi { inherit pname version; sha256 = "1sqp97fwh4asx0jr72x8hil8z8fcg2xq92jklmh2m599pvgnx19a"; }; propagatedBuildInputs = [ numpy networkx ]; doCheck = false; }; googledrivedownloader = with python.pkgs; buildPythonPackage rec { pname = "googledrivedownloader"; version = "0.4"; src = fetchPypi { inherit pname version; sha256 = "0172l1f8ys0913wcr16lzx87vsnapppih62qswmvzwrggcrw2d2b"; }; doCheck = false; }; geometric = with python.pkgs; buildPythonPackage rec { pname = "torch_geometric"; version = "1.7.0"; src = fetchPypi { inherit pname version; sha256 = "1a7ym34ynhk5gb3yc5v4qkmkrkyjbv1fgisrsk0c9xay66w7nwz9"; }; propagatedBuildInputs = [ pytorch-180 numpy scipy tqdm networkx scikit-learn requests pandas rdflib jinja2 numba ase h5py python-louvain googledrivedownloader ]; nativeBuildInputs = [ pytest-runner ]; doCheck = false; # postInstall = '' # rm -rf $out/${python.sitePackages}/test # ''; }; python-with-pkgs = python.withPackages (ps: with ps; [ pytorch-180 scatter sparse cluster spline geometric ps ]); in pkgs.mkShell { buildInputs = [ python-with-pkgs ]; }
https://stackoverflow.com/questions/67967014/
Derivative of BatchNorm2d in PyTorch
In my network, I want to calculate the forward pass and backward pass of my network both in the forward pass. For this, I have to manually define all the backward pass methods of the forward pass layers. For the activation functions, that's easy. And also for the linear and conv layers, it worked well. But I'm really struggling with BatchNorm. As the BatchNorm paper only discusses the 1D case: So far, my implementation looks like this: def backward_batchnorm2d(input, output, grad_output, layer): gamma = layer.weight beta = layer.bias avg = layer.running_mean var = layer.running_var eps = layer.eps B = input.shape[0] # avg, var, gamma and beta are of shape [channel_size] # while input, output, grad_output are of shape [batch_size, channel_size, w, h] # for my calculations I have to reshape avg, var, gamma and beta to [batch_size, channel_size, w, h] by repeating the channel values over the whole image and batches dL_dxi_hat = grad_output * gamma dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True) dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B dL_dxi = dL_dxi_hat / torch.sqrt(var + eps) + 2.0 * dL_dvar * (input - avg) / B + dL_davg / B # dL_dxi_hat / sqrt() dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True) dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True) return dL_dxi, dL_dgamma, dL_dbeta When I check my gradients with torch.autograd.grad() I notice that dL_dgamma and dL_dbeta are correct, but dL_dxi is incorrect, (by a lot). But I can't find my mistake. Where is my mistake? For reference, here is the definition of BatchNorm: And here are the formulas for the derivatives for the 1D case:
def backward_batchnorm2d(input, output, grad_output, layer): gamma = layer.weight gamma = gamma.view(1,-1,1,1) # edit # beta = layer.bias # avg = layer.running_mean # var = layer.running_var eps = layer.eps B = input.shape[0] * input.shape[2] * input.shape[3] # edit # add new mean = input.mean(dim = (0,2,3), keepdim = True) variance = input.var(dim = (0,2,3), unbiased=False, keepdim = True) x_hat = (input - mean)/(torch.sqrt(variance + eps)) dL_dxi_hat = grad_output * gamma # dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True) # dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B dL_dvar = (-0.5 * dL_dxi_hat * (input - mean)).sum((0, 2, 3), keepdim=True) * ((variance + eps) ** -1.5) # edit dL_davg = (-1.0 / torch.sqrt(variance + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + (dL_dvar * (-2.0 * (input - mean)).sum((0, 2, 3), keepdim=True) / B) #edit dL_dxi = (dL_dxi_hat / torch.sqrt(variance + eps)) + (2.0 * dL_dvar * (input - mean) / B) + (dL_davg / B) # dL_dxi_hat / sqrt() # dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True) dL_dgamma = (grad_output * x_hat).sum((0, 2, 3), keepdim=True) # edit dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True) return dL_dxi, dL_dgamma, dL_dbeta Because you didn't upload your forward snipcode, so if your gamma has the shape size is 1, you need to reshape it to [1,gamma.shape[0],1,1]. The formula follows 1D where the scale factor is the sum of the batch size. However, in 2D, the summation should between 3 dimensions, so B = input.shape[0] * input.shape[2] * input.shape[3]. The running_mean and running_var only use in test/inference mode, we don't use them in training (you can find it in the paper). The mean and variance you need are computed from the input, you can store the mean, variance and x_hat = (x-mean)/sqrt(variance + eps) into your object layer or re-compute as I did in the code above # add new. Then replace them with the formula of dL_dvar, dL_davg, dL_dxi. your dL_dgamma should be incorrect since you multiplied the gradient of output by itself, it should be modified to grad_output * x_hat.
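A quick way to validate the manual gradients (just a sketch, assuming the corrected backward_batchnorm2d above is defined) is to compare them against autograd on a small BatchNorm2d layer in training mode:

import torch
import torch.nn as nn

layer = nn.BatchNorm2d(3)
layer.train()                                  # training mode: batch statistics are used
x = torch.randn(4, 3, 8, 8, requires_grad=True)
out = layer(x)
grad_out = torch.randn_like(out)

# reference gradients from autograd
dx_ref, dgamma_ref, dbeta_ref = torch.autograd.grad(
    out, (x, layer.weight, layer.bias), grad_outputs=grad_out)

# manual gradients
dx, dgamma, dbeta = backward_batchnorm2d(x, out, grad_out, layer)
print(torch.allclose(dx, dx_ref, atol=1e-5))
print(torch.allclose(dgamma.flatten(), dgamma_ref, atol=1e-5))
print(torch.allclose(dbeta.flatten(), dbeta_ref, atol=1e-5))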
https://stackoverflow.com/questions/67968913/
How to prevent gradient computations for certain elements of a tensor in Pytorch
To be clear, I am not Asking how to prevent gradients from being propagated to certain tensors (in this case you can just set requires_grad = False for that tensor). Asking how to prevent gradients from being propagated from an entire tensor (in that case you can just call tensor.detach(), see this question). I'm wondering how to forgo gradient computations for some elements of a loss tensor that give a NaN gradient every time -- essentially, to call .detach() for individual elements of a tensor. The way to do this in Tensorflow is using tf.stop_gradients, see this question. Some context: My neural network computes a distance matrix of its predicted coordinates, as follows. The entries of the distance matrix D are given by d_ij = || coordinates_i - coordinates_j ||. I want to backpropagate through the distance matrix creation step. However, the norm function includes a square root, which is not differentiable at 0 -- and the diagonal of the distance matrix is 0 by construction. Thus I get NaN gradients for the diagonal of the distance matrix. I would like to mask out the gradients on the diagonal of the distance matrix. Minimal working example: import torch def compute_distance_matrix(coordinates): L = len(coordinates) gram_matrix = torch.mm(coordinates, torch.transpose(coordinates, 0, 1)) gram_diag = torch.diagonal(gram_matrix, dim1=0, dim2=1) # gram_diag: L diag_1 = torch.matmul(gram_diag.unsqueeze(-1), torch.ones(1, L).to(coordinates.device)) # diag_1: L x L diag_2 = torch.transpose(diag_1, dim0=0, dim1=1) # diag_2: L x L distance_matrix = torch.sqrt(diag_1 + diag_2 - (2 * gram_matrix)) return distance_matrix # In reality, pred_coordinates is an output of the network, but we initialize it here for a minimal working example L = 10 pred_coordinates = torch.randn(L, 3, requires_grad=True) true_coordinates = torch.randn(L, 3, requires_grad=False) obj = torch.nn.MSELoss() optimizer = torch.optim.Adam([pred_coordinates]) for i in range(500): pred_distance_matrix = compute_distance_matrix(pred_coordinates) true_distance_matrix = compute_distance_matrix(true_coordinates) loss = obj(pred_distance_matrix, true_distance_matrix) loss.backward() print(loss.item()) optimizer.step() gives 1.2868314981460571 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan ...
I initialized a new matrix and used a mask to copy the values with differentiable gradients from the previous tensor (in this case, the non-diagonal entries), then applied the not-everywhere-differentiable operation (the square root) to the new tensor. This allowed the gradient to only flow back through the entries that had a positive mask. import torch def compute_distance_matrix(coordinates): # In reality, pred_coordinates is an output of the network, but we initialize it here for a minimal working example L = len(coordinates) gram_matrix = torch.mm(coordinates, torch.transpose(coordinates, 0, 1)) gram_diag = torch.diagonal(gram_matrix, dim1=0, dim2=1) # gram_diag: L diag_1 = torch.matmul(gram_diag.unsqueeze(-1), torch.ones(1, L).to(coordinates.device)) # diag_1: L x L diag_2 = torch.transpose(diag_1, dim0=0, dim1=1) # diag_2: L x L squared_distance_matrix = diag_1 + diag_2 - (2 * gram_matrix) distance_matrix = torch.zeros_like(squared_distance_matrix) mask = ~torch.eye(L, dtype=torch.bool).to(coordinates.device) distance_matrix[mask] = torch.sqrt( squared_distance_matrix.masked_select(mask) ) return distance_matrix # In reality, pred_coordinates is an output of the network, but we initialize it here for a minimal working example L = 10 pred_coordinates = torch.randn(L, 3, requires_grad=True) true_coordinates = torch.randn(L, 3, requires_grad=False) obj = torch.nn.MSELoss() optimizer = torch.optim.Adam([pred_coordinates]) for i in range(500): pred_distance_matrix = compute_distance_matrix(pred_coordinates) true_distance_matrix = compute_distance_matrix(true_coordinates) loss = obj(pred_distance_matrix, true_distance_matrix) loss.backward() print(loss.item()) optimizer.step() which gives: 1.222102403640747 1.2191187143325806 1.2162436246871948 1.2133947610855103 1.210543155670166 1.2076761722564697 1.204787015914917 1.2018715143203735 1.198927402496338 1.1959534883499146 1.1929489374160767 1.1899129152297974 1.1868458986282349 1.1837480068206787 1.180619239807129 1.1774601936340332 1.174271583557129 ...
https://stackoverflow.com/questions/67972521/
Why does my Pytorch tensor size change and contain NaNs after some batches?
I am Training a Pytorch model. After some time, even if on shuffle, the model contains, besides a few finite tensorrows only NaN values: tensor([[[ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], ..., [ 1.4641, 0.0360, -1.1528, ..., -2.3592, -2.6310, 6.3893], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan]]], device='cuda:0', grad_fn=<AddBackward0>) The detect_anomaly functions return: File "TestDownload.py", line 701, in <module> main(learning_rate, batch_size, epochs, experiment) File "TestDownload.py", line 635, in main train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment) File "TestDownload.py", line 486, in train output = F.log_softmax(output, dim=2) File "\lib\site-packages\torch\nn\functional.py", line 1672, in log_softmax ret = input.log_softmax(dim) (function _print_stack) Traceback (most recent call last): File "TestDownload.py", line 701, in <module> main(learning_rate, batch_size, epochs, experiment) File "TestDownload.py", line 635, in main train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment) File "TestDownload.py", line 490, in train loss.backward() File "\lib\site-packages\comet_ml\monkey_patching.py", line 317, in wrapper return_value = original(*args, **kwargs) File "\lib\site-packages\torch\tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "\lib\site-packages\torch\autograd\__init__.py", line 145, in backward Variable._execution_engine.run_backward( RuntimeError: Function 'LogSoftmaxBackward' returned nan values in its 0th output. in reference to the next line output = F.log_softmax(output, dim=2) It shows another error if I just do it with try-except: (when the loss function is running on a tensor containing NaNs) [W ..\torch\csrc\autograd\python_anomaly_mode.cpp:104] Warning: Error detected in CtcLossBackward. Traceback of forward call that caused the error: File "TestDownload.py", line 734, in <module> # In[ ]: File "TestDownload.py", line 667, in main test(model, device, test_loader, criterion, epoch, iter_meter, experiment) File "TestDownload.py", line 517, in train loss.backward() File "\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "\lib\site-packages\torch\nn\modules\loss.py", line 1590, in forward return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, self.blank, self.reduction, File "\lib\site-packages\torch\nn\functional.py", line 2307, in ctc_loss return torch.ctc_loss( (function _print_stack) Traceback (most recent call last): File "TestDownload.py", line 518, in train File "\lib\site-packages\comet_ml\monkey_patching.py", line 317, in wrapper return_value = original(*args, **kwargs) File "\lib\site-packages\torch\tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "\lib\site-packages\torch\autograd\__init__.py", line 145, in backward Variable._execution_engine.run_backward( RuntimeError: Function 'CtcLossBackward' returned nan values in its 0th output. 
A normal tensor should look like this: tensor([[[-3.3904, -3.4340, -3.3703, ..., -3.3613, -3.5098, -3.4344]], [[-3.3760, -3.2948, -3.2673, ..., -3.4039, -3.3827, -3.3919]], [[-3.3857, -3.3358, -3.3901, ..., -3.4686, -3.4749, -3.3826]], ..., [[-3.3568, -3.3502, -3.4416, ..., -3.4463, -3.4921, -3.3769]], [[-3.4379, -3.3508, -3.3610, ..., -3.3707, -3.4030, -3.4244]], [[-3.3919, -3.4513, -3.3565, ..., -3.2714, -3.3984, -3.3643]]], device='cuda:0', grad_fn=<TransposeBackward0>) Please notice the double brackets, if they are import. Code: for batch_idx, _data in enumerate(train_loader): spectrograms, labels, input_lengths, label_lengths = _data spectrograms, labels = spectrograms.to(device), labels.to(device) optimizer.zero_grad() output = model(spectrograms) output = F.log_softmax(output, dim=2) output = output.transpose(0, 1) # (time, batch, n_class) # X, 1, 29 loss = criterion(output, labels, input_lengths, label_lengths) loss.backward() optimizer.step() scheduler.step() iter_meter.step() Additionally, I tried to run it with a bigger batch size (current batch size:1, bigger batch size: 6) and it run without errors until 40% of the first epoch in which I got this error. Cuda run out of memory Also, I tried to normalize the data torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128, normalized=True) And reducing the learning rate from 5e-4 to 5e-5 did not help either. Additional information: My dataset contains nearly 300000 .wav files and the error came at 3-10% runtime in the first epoch. I appreciate any hints and I will gladly submit further information.
The source of the error can be a corrupted input or label, which would contain a NaN or inf value. You can check that there is no NaN value in a tensor with torch.isnan(tensor).any(), or that all values in a tensor are neither inf nor NaN with torch.isfinite(tensor).all()
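As a concrete illustration in the context of the training loop from the question, here is a minimal sketch; the variable names (train_loader, spectrograms, labels, ...) are taken from the question's code and the rest is only an assumption about where you would want the check:
import torch

for batch_idx, _data in enumerate(train_loader):
    spectrograms, labels, input_lengths, label_lengths = _data
    # flag corrupted batches before they ever reach the model / CTC loss
    if not torch.isfinite(spectrograms).all():
        print(f"non-finite values in inputs of batch {batch_idx}")
        continue
    if torch.isnan(labels.float()).any():
        print(f"NaN values in labels of batch {batch_idx}")
        continue
    # ... usual forward / backward pass ...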
https://stackoverflow.com/questions/67983039/
PyTorch DDP: Finding the cause of "Expected to mark a variable ready only once"
I'm extending a complex model (already with DistributedDataParallel with find_unused_parameters set to True) in PyTorch on detectron2. I've added a new layer generating some additional output to the original network - initially, that layer was frozen (requires_grad = False) and everything was working fine. I later decided to unfreeze this layer. Unfortunately, this results in this error on multiple GPUs: -- Process 0 terminated with the following error: Traceback (most recent call last): ... File "/xxx/trainer.py", line 585, in some_method: losses.backward() File "/xxx/python3.7/site-packages/torch/tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/xxx/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Clearly, the error is related to that unfrozen layer, since everything was working before unfreezing it. This means that I need to check every change I made. I wonder if there's a way/trick to determine which particular tensor/operation causes this behaviour? In other words - how can I speed up the debugging process?
With the help of the PyTorch community, I moved forward (see the original discussion here). I updated my PyTorch to 1.9.0 (was using 1.7.0 before). Now I the error is a bit more informative: RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations. Parameter at index 73 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging. After setting the environmental variable TORCH_DISTRIBUTED_DEBUG to DETAIL (this requires PyTorch 1.9.0!) I got the name of the problematic variable: Parameter at index 73 with name roi_heads.box_predictor.xxx.bias has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. So, technically, there was a problem with roi_heads.box_predictor.xxx. Could not find one. However, with version 1.9.0 the console also outputs this: [W reducer.cpp:1158] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) Turned out to be the root cause of the problem. Switching find_unused_parameters to False in the DDP constructor makes the training run normally. Not sure why this was the cause, but I don’t mind, since the network is trained now. Anyhow, I hope this will help someone who struggles with similar debugging problems.
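For reference, a minimal sketch of the two settings discussed above; model and local_rank are placeholders for whatever your trainer provides, not code taken from the question:
import os
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# must be set before the process group / DDP wrapper is created (needs PyTorch >= 1.9)
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"

ddp_model = DDP(
    model,                         # placeholder: the model built in this process
    device_ids=[local_rank],       # placeholder: this process's GPU index
    find_unused_parameters=False,  # the change that made training run normally
)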
https://stackoverflow.com/questions/68000761/
Vectorizing ARD (Automatic Relevance Determination) kernel implementation in Gaussian processes
I am trying to implement an ARD kernel with NumPy as given in the GPML book (M3 from Equation 5.2). I am struggling in vectorizing this equation for NxM kernel computation. I have tried the following non-vectorized version. Can someone help in vectorizing this in NumPy/PyTorch? import numpy as np N = 30 # Number of data points in X1 M = 40 # Number of data points in X2 D = 6 # Number of features (ARD dimensions) X1 = np.random.rand(N, D) X2 = np.random.rand(M, D) Lambda = np.random.rand(D, 1) L_inv = np.diag(np.random.rand(D)) sigma_f = np.random.rand() K = np.empty((N, M)) for n in range(N): for m in range(M): M3 = [email protected] + L_inv**2 d = (X1[n,:] - X2[m,:]).reshape(-1,1) K[n, m] = sigma_f**2 * np.exp(-0.5 * d.T@M3@d)
We can use the rules of broadcasting and the neat NumPy function einsum to vectorize array operations. In few words, broadcasting allows us to operate with arrays in one-liners by adding new dimensions to the resulting array, while einsum allows us to perform operations with multiple arrays by explicitly working in the index notation (instead of matrices). Luckily, no loops are necessary to calculate your kernel. Please see below the vectorized solution, ARD_kernel function, which is about 30x faster in my machine than the original loopy version. Now, einsum is usually as fast as it gets, but it's possible that there are faster methods though, I've not checked anything else (e.g. usual @ operator instead of einsum). Also, there is a missing term in the code (the Kronecker delta), I don't know if it was omitted in purpose (let me know if you have problems implementing it and I'll edit the answer). import numpy as np N = 300 # Number of data points in X1 M = 400 # Number of data points in X2 D = 6 # Number of features (ARD dimensions) np.random.seed(1) # Fix random seed for reproducibility X1 = np.random.rand(N, D) X2 = np.random.rand(M, D) Lambda = np.random.rand(D, 1) L_inv = np.diag(np.random.rand(D)) sigma_f = np.random.rand() # Loopy function def ARD_kernel_loops(X1, X2, Lambda, L_inv, sigma_f): K = np.empty((N, M)) M3 = [email protected] + L_inv**2 for n in range(N): for m in range(M): d = (X1[n,:] - X2[m,:]).reshape(-1,1) K[n, m] = np.exp(-0.5 * d.T@M3@d) return K * sigma_f**2 # Vectorized function def ARD_kernel(X1, X2, Lambda, L_inv, sigma_f): M3 = Lambda.squeeze()*Lambda + L_inv**2 # Use broadcasting to avoid transpose d = X1[:,None] - X2[None,...] # Use broadcasting to avoid loops # order=F for memory layout (as your arrays are (N,M,D) instead of (D,N,M)) return sigma_f**2 * np.exp(-0.5 * np.einsum("ijk,kl,ijl->ij", d, M3, d, order = 'F'))
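A quick sanity check (reusing the arrays and functions defined above) that the vectorized version reproduces the loopy one:
K_loops = ARD_kernel_loops(X1, X2, Lambda, L_inv, sigma_f)
K_vec = ARD_kernel(X1, X2, Lambda, L_inv, sigma_f)
print(np.allclose(K_loops, K_vec))  # should print True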
https://stackoverflow.com/questions/68002698/
Converted model from keras h5 to pytorch - fully connected layer mismatch
I have converted two models (vgg16 and resnet50) from Keras with TensorFlow backend (from as model.save file) into PyTorch using mmdnn. This was done with the following: mmconvert -sf keras -iw vgg.h5 -df pytorch -om keras_to_torch.pt A = imp.load_source('MainModel','/weights/keras_to_torch.py') model = torch.load('/weights/keras_to_torch.pt') Predicting on the same data set gave me a different set of results so I investigated further. I can see that the weights for all the convolutional layers are the same (after transposing), however the weights of the fully connected layers at the end are not. Is there a reason this should be? As i understand they should be equivalent
The problem must be in the way you defined your keras model, since I cannot replicate the issue using the h5 file that is provided using the MMdnn package. If you want to use the resnet50 and VGG19 model you can get the correct weights as follows: start MMdnn container as specified in the documentation download keras model for resnet50 mmdownload -f keras -n resnet50 -o ./ convert to pytorch model mmconvert -sf keras -iw ./imagenet_resnet50.h5 -df pytorch -om keras_to_torch.pt Then extract the produced numpy file, keras_to_torch.pt and keras_to_torch.py from the docker container (and imagenet_resnet50.h5 for comparison). In Python load the keras model with import keras model = load_model('imagenet_resnet50.h5') and the torch model using import imp import torch torch_weights = # path_to_the_numpy_weights A = imp.load_source('MainModel','keras_to_torch.py') weights_torch = A.load_weights(torch_weights) model_torch = A.KitModel(torch_weights) I also had to set allow_pickle = True in the load_weights(weight_file) function at the beginning of the keras_to_torch.py file. The torch.load('/weights/keras_to_torch.pt') variant threw an error for me unfortunately. Print the weights of the last densely connected layer # keras model model.layers[-1].weights # Output: #tensor([[-0.0149, 0.0113, -0.0507, ..., -0.0218, -0.0776, 0.0102], # [-0.0029, 0.0032, 0.0195, ..., 0.0362, 0.0035, -0.0332], # [-0.0175, 0.0081, 0.0085, ..., -0.0302, 0.0549, -0.0251], # ..., # [ 0.0253, 0.0630, 0.0204, ..., -0.0051, -0.0354, -0.0131], # [-0.0062, -0.0162, -0.0122, ..., 0.0138, 0.0409, -0.0186], # [-0.0267, 0.0131, -0.0185, ..., 0.0630, 0.0256, -0.0069]]) # torch model (make sure to transpose) model_torch.fc1000.weight.data.T # Output: #[<tf.Variable 'fc1000/kernel:0' shape=(2048, 1000) dtype=float32, numpy= # array([[-0.01490746, 0.0113374 , -0.05073728, ..., -0.02179668, # -0.07764222, 0.01018347], # [-0.00294467, 0.00319835, 0.01953556, ..., 0.03623696, # 0.00350259, -0.03321117], # [-0.01751374, 0.00807406, 0.00851311, ..., -0.03024036, # 0.05494978, -0.02511911], # ..., # [ 0.025289 , 0.0630148 , 0.02041481, ..., -0.00508354, # -0.03542514, -0.01306196], # [-0.00623157, -0.01624131, -0.01221174, ..., 0.01376359, # 0.04087579, -0.0185826 ], # [-0.02668471, 0.0130982 , -0.01847764, ..., 0.06304929 #... The weights of the keras and torch model coincide as desired (up to 4 digits or so). This solution works as long as you don't want to update the VGG and ResNet weights in keras before converting them to Pytorch. If you do need to update the model weights before converting you should share your code for creating the Keras model. You could further inspect how the imagenet_resnet50.h5 obtained with mmdownload model differs from the one you saved with model.save in keras and correct for any differences.
https://stackoverflow.com/questions/68002742/
How to test custom Faster RCNN model(using Detectron 2 and pytorch) on video?
I have trained a Faster RCNN model on a custom dataset for object detection and want to test it on Videos. I could test the results on images but am stuck on how to do that for a video. Here is the code for inference on images: cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") cfg.DATASETS.TEST = ("my_dataset_test", ) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set the testing threshold for this model predictor = DefaultPredictor(cfg) test_metadata = MetadataCatalog.get("my_dataset_test") from detectron2.utils.visualizer import ColorMode import glob for imageName in glob.glob('/content/test/*jpg'): im = cv2.imread(imageName) outputs = predictor(im) v = Visualizer(im[:, :, ::-1], metadata=test_metadata, scale=0.8 ) out = v.draw_instance_predictions(outputs["instances"].to("cpu")) cv2_imshow(out.get_image()[:, :, ::-1]) Please somebody let me know how tweak this code to work for detection on videos? Platform used: Google Colab Tech Stack:Detectron 2, Pytorch
Check this loop out:
from detectron2.utils.visualizer import ColorMode
import glob
import cv2
from google.colab.patches import cv2_imshow

cap = cv2.VideoCapture('/path/to/video')
while cap.isOpened():
    ret, frame = cap.read()
    # if frame is read correctly ret is True
    if not ret:
        break
    outputs = predictor(frame)
    v = Visualizer(frame[:, :, ::-1],
                   metadata=test_metadata,
                   scale=0.8)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
https://stackoverflow.com/questions/68006438/
TypeError: forward() takes 2 positional arguments but 4 were given, Pytorch
I am trying to write a GAN generator based on Densenet and Deconv method. I am new to PyTorch and unable to figure out TypeError: forward() takes 2 positional arguments but 4 were given. I tried the approach as suggested in Pytorch TypeError: forward() takes 2 positional arguments but 4 were given but I cannot figure out the solution. My code: class DenseLayer(nn.Module): def __init__(self, in_size, out_size, drop_rate=0.0): super(DenseLayer, self).__init__() self.bottleneck = nn.Sequential() # define bottleneck layers self.bottleneck.add_module('btch1', nn.BatchNorm2d(in_size)) self.bottleneck.add_module('relu1', nn.ReLU(inplace=True)) self.bottleneck.add_module('conv1', nn.ConvTranspose2d(in_size, int(out_size/4), kernel_size=1, stride=1, padding=0, bias=False)) self.basic = nn.Sequential() # define basic block self.basic.add_module('btch2', nn.BatchNorm2d(int(out_size/4))) self.basic.add_module('relu2', nn.ReLU(inplace=True)) self.basic.add_module('conv2', nn.ConvTranspose2d(int(out_size/4), out_size, kernel_size=3, stride=1, padding=1, bias=False)) self.droprate = drop_rate def forward(self, input): out = self.bottleneck(input) if self.droprate > 0: out = F.dropout(out, p=self.droprate, inplace=False, training=self.training) out = self.basic(out) if self.droprate > 0: out = F.dropout(out, p=self.droprate, inplace=False, training=self.training) return torch.cat((x,out), 1) class DenseBlock(nn.Module): def __init__(self, num_layers, in_size, growth_rate, block, droprate=0.0): super(DenseBlock, self).__init__() self.layer = self._make_layer(block, in_size, growth_rate, num_layers, droprate) def _make_layer(self, block, in_size, growth_rate, num_layers, droprate): layers = [] for i in range(num_layers): layers.append(block(in_size, in_size-i*growth_rate, droprate)) return nn.Sequential(*layers) def forward(self, input): return self.layer(input) class MGenDenseNet(nn.Module): def __init__(self, ngpu, growth_rate=32, block_config=(16,24,12,6), in_size=1024, drop_rate=0.0): super(MGenDenseNet, self).__init__() self.ngpu = ngpu self.features = nn.Sequential() self.features.add_module('btch0', nn.BatchNorm2d(in_size)) block = DenseLayer num_features = in_size for i, num_layers in enumerate(block_config): block = DenseBlock(num_layers=num_layers, in_size=num_features, growth_rate=growth_rate, block=block, droprate=drop_rate) ### Error thrown on this line self.features.add_module('denseblock{}'.format(i+1), block) num_features -= num_layers*growth_rate if i!=len(block_config)-1: trans = TransitionLayer(in_size=num_features, out_size=num_features*2, drop_rate=drop_rate) self.features.add_module('transitionblock{}'.format(i+1), trans) num_features *= 2 self.features.add_module('convfinal', nn.ConvTranspose2d(num_features, 3, kernel_size=7, stride=2, padding=3, bias=False)) self.features.add_module('Tanh', nn.Tanh()) def forward(self, input): return self.features(input) mGen = MGenDenseNet(ngpu).to(device) mGen.apply(weights_init) print(mGen)
class MGenDenseNet(nn.Module): def __init__(self, ngpu, growth_rate=32, block_config=(16,24,12,6), in_size=1024, drop_rate=0.0): super(MGenDenseNet, self).__init__() import pdb; pdb.set_trace() self.ngpu = ngpu self.features = nn.Sequential() self.features.add_module('btch0', nn.BatchNorm2d(in_size)) block_placeholder = DenseLayer <<<< num_features = in_size for i, num_layers in enumerate(block_config): block = DenseBlock(num_layers=num_layers, in_size=num_features, growth_rate=growth_rate, block=block_placeholder, droprate=drop_rate) <<<< look at change self.features.add_module('denseblock{}'.format(i+1), block) num_features -= num_layers*growth_rate self.features.add_module('convfinal', nn.ConvTranspose2d(num_features, 3, kernel_size=7, stride=2, padding=3, bias=False)) self.features.add_module('Tanh', nn.Tanh()) def forward(self, input): return self.features(input) It is because you define block as DenseLayer, then reassign block it to an initalized DenseBlock() and then pass that as block=block. So after one iteration through the for loop it is passing a DenseBlock() object instead of DenseLayer so it's wrongly using the forward pass. Just change block = DenseLayer to block_placeholder and use that variable instead. I spotted this by placing a debugger in your code and noticing that the DenseBlock line only fails on second call.
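If it helps to see the failure mode in isolation, here is a tiny self-contained example of the same shadowing pattern; Leaf and Wrapper are made-up stand-ins for DenseLayer and DenseBlock, not code from the question:
import torch.nn as nn

class Leaf(nn.Module):
    def __init__(self, in_size, out_size):   # a "layer class": the constructor takes sizes
        super().__init__()
    def forward(self, x):
        return x

class Wrapper(nn.Module):
    def __init__(self, block):                # expects a *class* and instantiates it
        super().__init__()
        self.inner = block(8, 8)              # fine for a class; for an instance this calls forward(8, 8)
    def forward(self, x):
        return self.inner(x)

block = Leaf                                  # same placeholder pattern as in the question
for i in range(2):
    # 2nd iteration fails: TypeError: forward() takes 2 positional arguments but 3 were given
    block = Wrapper(block=block)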
https://stackoverflow.com/questions/68011125/
How to solve gradient-exploding in YOLO v1
Now I am trying to train object detection - YOLOv1 using this code. At the beginning I was using momentum and weight_decay but the training loss after couples of epochs becomes NaN. As far as I know it's because of gradient exploding, so I have searched some ways to get rid of this NaN then I ignored momentum and weight decay. As a result I did not get any NaN, however my model could not converge as I expected. When I calculated mAP it was only 0.29. I am using VOC 2007 and 2012 data for training and as a test set VOC 2007 test. So my questions are followings: How can I get rid of NaN while training? Where can I get best training configurations? Is gradient exploding is normal in Object Detection task? Is it normal in YOLOv1 getting 1.1Gb weights after training? Would appreciate any suggestions here.
After checking your code, I saw that after the first epoch you set the learning rate to 0.01 until epoch 75. In my opinion, that large learning rate is the main reason your parameters ended up vanishing/exploding. Normally the learning rate sits around 0.001 and is scaled by factors like 2, 1 or 0.1. Follow the config in this repo (the most famous repo implementing YOLOv1 according to paperswithcode) and you can see their configuration setup. You can follow their hyper-parameters together with the momentum=0.9 and decay=0.0005 from your question. Note: please be careful that the batch norm momentum in Tensorflow = 1 - momentum in Pytorch. Finally, the number of parameters before and after training should be the same, so if your saved model is heavier/lighter after the training process, it means there is something wrong with your training code.
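As a sketch of what that could look like in PyTorch: only the momentum and weight-decay values below come from the discussion above, while the 1e-3 base rate and the milestone epochs are placeholder assumptions to be replaced by the linked repo's schedule:
import torch

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,                # keep the base rate small instead of jumping to 0.01
    momentum=0.9,
    weight_decay=0.0005,
)
# decay by a factor of 0.1 at a few chosen epochs rather than raising the rate
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[75, 105], gamma=0.1)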
https://stackoverflow.com/questions/68013574/
Extending Pytorch: Python vs. C++ vs. CUDA
I have been trying to implement a custom Conv2d module where grad_input (dx) and grad_weight (dw) are calculated by using different grad_output (dy) values. I implemented this by extending torch.autograd as in Pytorch tutorials. However I am confused by the information in this link. Is extending the autograd.Function not enough? What is the difference between writing a new autograd function in Python vs C++? How about the CUDA implementations in /torch/nn/blob/master/lib/THNN/generic/SpatialConvolutionMM.c where dx and dw calculated? Should I change them too? Here is my custom function: class myCustomConv2d(torch.autograd.Function): @staticmethod def forward(ctx, x, w, bias=None, stride=1, padding=0, dilation=1, groups=1): ctx.save_for_backward(x, w, bias) ctx.stride = stride ctx.padding = padding ctx.dilation = dilation ctx.groups = groups out = F.conv2d(x, w, bias, stride, padding, dilation, groups) return out @staticmethod def backward(ctx, grad_output): input, weight, bias = ctx.saved_tensors stride = ctx.stride padding = ctx.padding dilation = ctx.dilation groups = ctx.groups grad_input = grad_weight = grad_bias = None dy_for_inputs = myspecialfunction1(grad_output) dy_for_weights = myspecialfunction2(grad_output) grad_input = torch.nn.grad.conv2d_input(input.shape, weight, dy_for_inputs , stride, padding, dilation, groups) grad_weight = torch.nn.grad.conv2d_weight(input, weight.shape, dy_for_weights , stride, padding, dilation, groups) if bias is not None and ctx.needs_input_grad[2]: grad_bias = dy_for_weights .sum((0,2,3)).squeeze(0) return grad_input, grad_weight, grad_bias, None, None, None, None
Is extending the autograd.Function not enough?
It is enough if your code reuses Pytorch components wrapped within the Python interface (which seems to be the case). The gradient is composed automatically.
What is the difference between writing a new autograd function in Python vs C++?
Performance: the more custom your operation is (and the harder it is to compose it from existing Pytorch operations), the more performance improvement you would obtain.
How about the CUDA implementations in /torch/nn/blob/master/lib/THNN/generic/SpatialConvolutionMM.c where dx and dw are calculated? Should I change them too?
No need for that, unless you want to create specialized ops for CUDA.
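One way to make sure the Python-level extension is wired correctly (independent of performance) is torch.autograd.gradcheck. Note that your backward intentionally deviates from the analytic gradient, so this check is only meaningful if myspecialfunction1/2 are temporarily replaced by the identity; the sketch below assumes exactly that:
import torch

x = torch.randn(1, 2, 5, 5, dtype=torch.double, requires_grad=True)
w = torch.randn(3, 2, 3, 3, dtype=torch.double, requires_grad=True)

# with myspecialfunction1/2 set to identity, the custom op should match
# the numerical gradients of an ordinary convolution
torch.autograd.gradcheck(lambda x, w: myCustomConv2d.apply(x, w), (x, w), eps=1e-6, atol=1e-4)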
https://stackoverflow.com/questions/68019418/
Pairwise difference between tensors with PyTorch
I have the following problem: given two tensors X of shape (T,n,d) and Y of shape (R,m,d), I want to compute the tensor D of shape (T,R,n,m,d) such that, for all 0 <= t < T, 1 <= r < R, 1 <= i < n, 1 <= j < m, D[t,r,i,j] = X[t,i] - Y[r,j] Can we compute D with Pytorch without using any loop? I know that when given two tensors X of shape (n,d) and Y of shape (m,d) we can compute the tensor D of shape (n,m,d) such that, for all 1 <= i < n, 1 <= j < m, D[i,j] = X[i] - Y[j] using x.unsqueeze(1) - y What I am looking for is a similar trick for the initial problem.
I think the following snippet might achieve what you need : # reshapes X to (T, R, n, m, d) X_rs = X.view(T, 1, n, 1, d).expand(-1, R, -1, m, -1) # reshapes Y to (T, R, n, m, d) Y_rs = Y.view(1, R, 1, m, d).expand(T, -1, n, -1, -1) # Compute the coefficient-wise difference D = X_rs - Y_rs Note that the expand operation does not allocate new memory. It's just the generalization of the "trick" you mentioned
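For completeness, the same result can also be written directly with the unsqueeze/broadcasting trick from the question, without expand; the broadcasting happens inside the subtraction and, as above, no extra memory is allocated for the inputs:
# X: (T, n, d), Y: (R, m, d)  ->  D: (T, R, n, m, d)
D = X[:, None, :, None, :] - Y[None, :, None, :, :]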
https://stackoverflow.com/questions/68019935/
CNN model is not learn well
I started to learn CNN implementation in PyTorch, and I tried to build CNNs to process the grayscale images with 4 classes from 0 to 3. I got in the beginning accuracy around 0.55. The maximum accuracy I got is ~ 0.683%. I tried SGD and Adam optimizer with different values for lr and batch_size, but the accuracy is still low. I used data Augmentation to create more samples, around 4k. I cannot improve accuracy further and wondered if I could get some advices about what I need to change in CNN structure to increase accuracy. Loss starts around: Loss: [1.497] then decreases near: Loss: [0.001] then fluctuated up and down around this value. I spent time reading about similar problems but without luck. I am using nn.CrossEntropyLoss() for my loss_fn. I don't use softmax for dense layer. This is the Summary of the CNN model: ------------------------------------------------------------- Layer (type) Output Shape Param # ============================================================= Conv2d-1 [-1, 32, 128, 128] 320 ReLU-2 [-1, 32, 128, 128] 0 BatchNorm2d-3 [-1, 32, 128, 128] 64 MaxPool2d-4 [-1, 32, 64, 64] 0 Conv2d-5 [-1, 64, 64, 64] 18,496 ReLU-6 [-1, 64, 64, 64] 0 BatchNorm2d-7 [-1, 64, 64, 64] 128 MaxPool2d-8 [-1, 64, 32, 32] 0 Conv2d-9 [-1, 128, 32, 32] 73,856 ReLU-10 [-1, 128, 32, 32] 0 BatchNorm2d-11 [-1, 128, 32, 32] 256 MaxPool2d-12 [-1, 128, 16, 16] 0 Flatten-13 [-1, 32768] 0 Linear-14 [-1, 512] 16,777,728 ReLU-15 [-1, 512] 0 Dropout-16 [-1, 512] 0 Linear-17 [-1, 4] 2,052 ============================================================ I would appreciate the help.
How many images are in the train set? The test set? What is the size of the images? How would you rate the difficulty of classifying the images? Do you think it should be simple or difficult?
According to the numbers you have, you're overfitting, as your loss is near 0 (meaning not much will backpropagate to the weights, i.e. your model won't change anymore) and your 68.3% (it's a typo, right?) is from the test set (I suppose). So you don't have any problem training the network, which is a good point.
Then you can search for ways of countering overfitting online, and here are some "classical" possibilities:
- you may raise the dropout parameter (see the short sketch after this list)
- put some regularizer (L1 or L2) to constrain the learning
- early stopping using a validation set
- use a classical and/or lighter convolutional network (resnet, inception) with/without pretrained weights. The latter also depends on your image type (natural, biomedical ...)
- ... and a lot more, more or less difficult to implement
Also, technically you are already using a softmax layer, as it's included in the crossentropyloss of pytorch.
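To make the first two bullet points concrete, here is a minimal sketch. The 0.5 dropout probability and the 1e-4 weight decay are placeholder values to tune, not recommendations derived from your data, and the tiny model below is only a stand-in so the snippet runs:
import torch
import torch.nn as nn

# stand-in model; in your case keep your Network() and simply raise the p of its Dropout layer
model = nn.Sequential(nn.Linear(10, 4), nn.Dropout(p=0.5), nn.Linear(4, 1), nn.Sigmoid())

# L2 regularization: add weight decay to the RMSprop optimizer you already use
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001, weight_decay=1e-4)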
https://stackoverflow.com/questions/68021867/
Creating random linearly independent vectors
I can generate a random vector with a = torch.rand(10) >>> tensor([0.6432, 0.7512, 0.8835, 0.4268, 0.7681, 0.4709, 0.2722, 0.0510, 0.8463, 0.9003]) However, how can I generate N more vectors (e.g. N=5: b,c,d,e,f) that will be linearly independent of the other vectors (i.e a is independent of any of b,c,d,e,f, b is independent of any of a,c,d,e,f, etc.)?
I can think of two ways to achieve this. However, there are some restrictions to each of them. For an n-dimensional space, the first way is to create n random vectors and check for linear independence using determinants. Check out here for more information about the process. The second way is to generate vectors one by one. Create a random vector, then using this solution, create a new independent vector, then create the third one, and so on.
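A small sketch of the first approach (generate, then test); here the determinant test is swapped for an equivalent rank check with torch.linalg.matrix_rank, which also works when you stack fewer vectors than the dimension:
import torch

N = 6                        # a plus the 5 extra vectors b, c, d, e, f
dim = 10
vecs = torch.rand(N, dim)    # N random vectors of length 10
while torch.linalg.matrix_rank(vecs) < N:
    # random vectors are almost surely independent, so this loop essentially never repeats
    vecs = torch.rand(N, dim)
a, b, c, d, e, f = vecs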
https://stackoverflow.com/questions/68025957/
Elegant way to get a symmetric Torch Tensor over diagonal
Is there an elegant way to build a Torch.Tensor like this from a given set of values? Here is an 3x3 example, but in my application I would have a matrix of any odd-size. A function call gen_matrix([a, b, c, d, e, f]) should generate
You can use torch.triu_indices() to achieve this. Note that for a general N x N symmetric matrix, there can be atmost N(N+1)/2 unique elements which are distributed over the matrix. The vals tensor here stores the elements you want to build the symmetric matrix with. You can simply use your set of values in place of vals. Only constraint is, it should have N(N+1)/2 elements in it. Short answer: N = 5 # your choice goes here vals = torch.arange(N*(N+1)/2) + 1 # values A = torch.zeros(N, N) i, j = torch.triu_indices(N, N) A[i, j] = vals A.T[i, j] = vals Example: >>> N = 5 >>> vals = torch.arange(N*(N+1)/2) + 1 >>> A = torch.zeros(N, N) >>> i, j = torch.triu_indices(N, N) >>> A[i, j] = vals >>> A.T[i, j] = vals >>> vals tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.]) >>> A tensor([[ 1., 2., 3., 4., 5.], [ 2., 6., 7., 8., 9.], [ 3., 7., 10., 11., 12.], [ 4., 8., 11., 13., 14.], [ 5., 9., 12., 14., 15.]])
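Wrapping the same idea into the gen_matrix(...) function the question asks for (the matrix size N is inferred from the number of values, which must therefore be N(N+1)/2):
import torch

def gen_matrix(values):
    vals = torch.as_tensor(values, dtype=torch.float)
    # solve N(N+1)/2 = len(values) for N
    N = int((-1 + (1 + 8 * len(values)) ** 0.5) // 2)
    A = torch.zeros(N, N)
    i, j = torch.triu_indices(N, N)
    A[i, j] = vals
    A.T[i, j] = vals
    return A

print(gen_matrix([1, 2, 3, 4, 5, 6]))   # 3x3 symmetric matrix built from values a..f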
https://stackoverflow.com/questions/68027886/
How to convert Bounding Box coordinates to Yolo Coordinates with Python?
I need to convert the coordinates. I have this format <left> <top> <width> <height> And I need x_center y_center width height in this format. How can I do it?
The center is just the middle of your bounding box. So just add half of the bounding box width or height to your top-left coordinate. Width and height remain unchanged.
x_center = left + width / 2
y_center = top + height / 2
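Put into a small helper function; note that the YOLO label format usually also expects these values normalized by the image width/height, so that (optional, assumed) step is included behind the img_w/img_h arguments:
def to_yolo(left, top, width, height, img_w=None, img_h=None):
    x_center = left + width / 2
    y_center = top + height / 2
    if img_w and img_h:  # optional normalization to [0, 1]
        return x_center / img_w, y_center / img_h, width / img_w, height / img_h
    return x_center, y_center, width, height

print(to_yolo(50, 100, 20, 40, img_w=640, img_h=480))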
https://stackoverflow.com/questions/68034203/
PyTorch: how to select values from a multidimensional tensor using a multidimensional index?
Given a tensor input of shape (5, 2, 2): tensor([[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]], [[ 8, 9], [10, 11]], [[12, 13], [14, 15]], [[16, 17], [18, 19]]]) and a tensor index of shape (2,2): tensor([[4, 3], [4, 2]]) How do I obtain the following output: tensor([[16, 13], [18, 11]]) To put it into context: I have 5 input images of size 2x2 pixels (stored in input). Now I want to combine these 5 images into a single image of size 2x2 pixels, where index determines for each pixel from which input image it should be copied. Example: starting at the top-left index[0,0] == 4, I take pixel value input[4,0,0] == 16. Then continue to index[0,1] == 3, I take pixel value input[3,0,1] == 13 and so on.
The solution I found is almost the same as this one: indices = np.indices(input.shape) indices[0] = index input[tuple(indices)][0] which gives the desired output tensor([[16, 13], [18, 11]])
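An alternative that stays entirely in PyTorch is torch.gather along dimension 0; index only needs an extra leading dimension so it has the same number of dimensions as input:
out = input.gather(0, index.unsqueeze(0))[0]
# tensor([[16, 13],
#         [18, 11]])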
https://stackoverflow.com/questions/68035408/
PyTorch: .movedim() vs. .moveaxis() vs. .permute()
I'm completely new to PyTorch, and I was wondering if there's anything I'm missing when it comes to the .moveaxis() and .movedim() methods. The outputs are the exact same for the same arguments. Also can't both of these methods be replaced by .permute()? An example for reference: import torch mytensor = torch.randn(3,6,3,1,7,21,4) t_md = torch.movedim(mytensor, 2, 5) t_ma = torch.moveaxis(mytensor, 2, 5) print(t_md.shape, t_ma.shape) print(torch.allclose(t_md, t_ma)) t_p = torch.permute(mytensor, (0, 1, 3, 4, 5, 2, 6)) print(t_p.shape) print(torch.allclose(t_md, t_p))
Yes, moveaxis is an alias of movedim (analogous to swapaxes and swapdims).
Yes, this functionality can be achieved with permute, but moving one axis while keeping the relative positions of all others is a common enough use-case to warrant its own syntactic sugar.
The terminology is taken from numpy; from the docs: "Alias for torch.movedim(). This function is equivalent to NumPy’s moveaxis function."
https://stackoverflow.com/questions/68041894/
PyTorch Sampler: if specified, the next iteration samples the same subset or another subset?
If I have the code like: dataset = Dataset(...) sampler = RandomSampler(...) dataloader = DataLoader(..., sampler=sampler) Then whenever I call: for data, label in dataloader: ... The returned tuple data, label is the same subset or different subset compared to the last call?
It is a different subset compared to the last call. I modified the example here for your question:
data = torch.rand(10,1)
dataset = torch.utils.data.TensorDataset(torch.arange(len(data)), data)
index, _ = dataset[:]
sampler = torch.utils.data.RandomSampler(index)
loader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=3)
for i in range(2):
    for data, label in loader:
        print(data, label)
    print("------------")
https://stackoverflow.com/questions/68043971/
Accuracy value goes up and down on the training process
After training the network I noticed that accuracy goes up and down. Initially I thought it is caused by the learning rate, but it is set to quite small value. Please check the screenshot attached. Plot Accuracy Screenshot My network (in Pytorch) looks as follow: class Network(nn.Module): def __init__(self): super(Network,self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(3,16,kernel_size=3), nn.ReLU(), nn.MaxPool2d(2) ) self.layer2 = nn.Sequential( nn.Conv2d(16,32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2) ) self.layer3 = nn.Sequential( nn.Conv2d(32,64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2) ) self.fc1 = nn.Linear(17*17*64,512) self.fc2 = nn.Linear(512,1) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def forward(self,x): out = self.layer1(x) out = self.layer2(out) out = self.layer3(out) out = out.view(out.size(0),-1) out = self.relu(self.fc1(out)) out = self.fc2(out) out = torch.sigmoid(out) return out I am using RMSprop as optimizer and BCELoss as criterion. The learning rate is set to 0.001 Here is the training process: epochs = 15 itr = 1 p_itr = 100 model.train() total_loss = 0 loss_list = [] acc_list = [] for epoch in range(epochs): for samples, labels in train_loader: samples, labels = samples.to(device), labels.to(device) optimizer.zero_grad() output = model(samples) labels = labels.unsqueeze(-1) labels = labels.float() loss = criterion(output, labels) loss.backward() optimizer.step() total_loss += loss.item() scheduler.step() if itr%p_itr == 0: pred = torch.argmax(output, dim=1) correct = pred.eq(labels) acc = torch.mean(correct.float()) print('[Epoch {}/{}] Iteration {} -> Train Loss: {:.4f}, Accuracy: {:.3f}'.format(epoch+1, epochs, itr, total_loss/p_itr, acc)) loss_list.append(total_loss/p_itr) acc_list.append(acc) total_loss = 0 itr += 1 My dataset is quite small - 2000 train and 1000 validation (binary classification 0/1). I wanted to do the 80/20 split but I was asked to keep it like that. I was thinking that the architecture might be too complex for such a small dataset. Any hits what may cause such jumps in the training process?
Your code here is wrong:
pred = torch.argmax(output, dim=1)
This line is used for multiclass classification with Cross-Entropy Loss. Your task is binary classification, so the pred values are wrong. Change to:
if itr%p_itr == 0:
    pred = torch.round(output)
    ....
You can change your optimizer to Adam, SGD, or RMSprop to find the suitable optimizer that helps your model converge faster. Also change the forward() function:
def forward(self,x):
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.layer3(out)
    out = out.view(out.size(0),-1)
    out = self.relu(self.fc1(out))
    out = self.fc2(out)
    return self.sigmoid(out) # your forward is ok too, but this is cleaner
https://stackoverflow.com/questions/68045181/
RunTimeError during one hot encoding
I have a dataset where class values go from -2 to 2 by 1 step (i.e., -2,-1,0,1,2) and where 9 identifies the unlabelled data. Using one hot encode self._one_hot_encode(labels) I get the following error: RuntimeError: index 1 is out of bounds for dimension 1 with size 1 due to self.one_hot_labels = self.one_hot_labels.scatter(1, labels.unsqueeze(1), 1) The error should raise from [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 1, 1, 1, 1, 1, 1], where I have 9 in the mapping setting equal index 9 to 1. It is unclear to me how to fix it, even after going through past questions and answers to similar problems (e.g., index 1 is out of bounds for dimension 0 with size 1). The part of code involved in the error is the following: def _one_hot_encode(self, labels): # Get the number of classes classes = torch.unique(labels) classes = classes[classes != 9] # unlabelled self.n_classes = classes.size(0) # One-hot encode labeled data instances and zero rows corresponding to unlabeled instances unlabeled_mask = (labels == 9) labels = labels.clone() # defensive copying labels[unlabeled_mask] = 0 self.one_hot_labels = torch.zeros((self.n_nodes, self.n_classes), dtype=torch.float) self.one_hot_labels = self.one_hot_labels.scatter(1, labels.unsqueeze(1), 1) self.one_hot_labels[unlabeled_mask, 0] = 0 self.labeled_mask = ~unlabeled_mask def fit(self, labels, max_iter, tol): self._one_hot_encode(labels) self.predictions = self.one_hot_labels.clone() prev_predictions = torch.zeros((self.n_nodes, self.n_classes), dtype=torch.float) for i in range(max_iter): # Stop iterations if the system is considered at a steady state variation = torch.abs(self.predictions - prev_predictions).sum().item() prev_predictions = self.predictions self._propagate() Example of dataset: ID Target Weight Label Score Scale_Cat Scale_num 0 A D 65.1 1 87 Up 1 1 A X 35.8 1 87 Up 1 2 B C 34.7 1 37.5 Down -2 3 B P 33.4 1 37.5 Down -2 4 C B 33.1 1 37.5 Down -2 5 S X 21.4 0 12.5 NA 9 The source code I am using as reference is here: https://mybinder.org/v2/gh/thibaudmartinez/label-propagation/master?filepath=notebook.ipynb Full track of the error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-126-792a234f63dd> in <module> 4 label_propagation = LabelPropagation(adj_matrix_t) ----> 6 label_propagation.fit(labels_t) # causing error 7 label_propagation_output_labels = label_propagation.predict_classes() 8 <ipython-input-115-54a7dbc30bd1> in fit(self, labels, max_iter, tol) 100 101 def fit(self, labels, max_iter=1000, tol=1e-3): --> 102 super().fit(labels, max_iter, tol) 103 104 ## Label spreading <ipython-input-115-54a7dbc30bd1> in fit(self, labels, max_iter, tol) 58 Convergence tolerance: threshold to consider the system at steady state. 59 """ ---> 60 self._one_hot_encode(labels) 61 62 self.predictions = self.one_hot_labels.clone() <ipython-input-115-54a7dbc30bd1> in _one_hot_encode(self, labels) 42 labels[unlabeled_mask] = 0 43 self.one_hot_labels = torch.zeros((self.n_nodes, self.n_classes), dtype=torch.float) ---> 44 self.one_hot_labels = self.one_hot_labels.scatter(1, labels.unsqueeze(1), 1) 45 self.one_hot_labels[unlabeled_mask, 0] = 0 46 RuntimeError: index 1 is out of bounds for dimension 1 with size 1
I ran through your notebook (I think you changed the 9 to -1 for things to run) and saw that for this part of the code: # Learn with Label Propagation label_propagation = LabelPropagation(adj_matrix_t) print("Label Propagation: ", end="") label_propagation.fit(labels_t) label_propagation_output_labels = label_propagation.predict_classes() Which eventually calls: self.one_hot_labels = self.one_hot_labels.scatter(1, labels.unsqueeze(1), 1) Is where things were going wrong. Take a brief moment to read the pytorch manual on scatter here: torch Scatter and we learn that for scatter it's important to understand the dim, index, src and self matrixes. For one hot encoding, dim=1 or 0 doesn't matter and our src matrix is 1 (We'll look a little more into this later). You are now calling scatter on dimension 1 with an index matrix of [40,1] and a result(self) matrix of [40,5]. I see two issues here: You are using the literal category dummy variables (-2,-1,0,1,2) as the encoding indexes in your index matrix. Which will lead scatter to search for these indices in the src matrix. This is where the index out of bounds in coming from You mention that there are 6 classes of -2,-1,0,1,2 and 9 for unlabelled but you are one hot encoding on 5 classes. (Yes, I know you want the unlabeled class to be all zeros but that's a little difficult to achieve with scatter. I'll explain later). So how do we fix this? Issue 1: Let's start with a small example: index = torch.tensor([[5],[0],[3],[5],[1],[4]]); print(index.shape); print(index) result = torch.zeros(6, 6, dtype=src.dtype).scatter_(1, index, src); print(result.shape); print(result) This will give us torch.Size([6, 1]) tensor([[5], [0], [3], [5], [1], [4]]) torch.Size([6, 6]) tensor([[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1], [0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) Index matrix is 6 observations with 1 observed value (category) Self matrix is 6 observations with a 6 category one hot encoding vector The way that scatter(dim=1) creates the self matrix is torch first checks the row (observation) and then changes the value of that row to the value of the value stored in the src matrix at the same row but at the column of the value stored in index. self[i][index[i][j][k]][k] = src[i][j][k] So in your case you were trying to apply the value of 1 into a row in self[40,1] at the column of index[0](which is equal to 1). Giving you the error in the question. Although I checked your notebook and the error is index -1 is out of bounds for dimension 1 with size 5. They are both the same root cause. Issue 2: One-hot-encoding It is just easier to do complete one-hot instead of one-hot with cold encodings in this case. The reason being is that for one-hot with cold encodings, you need to create a 0 value in your src matrix for every unlabelled observation. Which is much more painful than just using a 1 for the src. Also reading this link: Is it valid to have full zeros for OHE? I think it makes more sense to use one-hot for every category. So, for the second issue we just need to simply map the categories in the indexes of the result/self matrix. Since we have 6 categories we just need to map them into 0,1,2,3,4,5. A simple lambda function would do the trick. 
I used a random sampler to get my data labels from a class list as shown below: (I randomly created 40 observations from 6 classes) classes = list([-2,-1,0,1,2,9]) labels = list() for i in range(0,40): labels.append(list([(lambda x: x+2 if x !=9 else 5)(random.sample(classes,1)[0])])) index_aka_labels = torch.tensor(labels) print(index_aka_labels) print(index_aka_labels.shape) torch.zeros(40, 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) Finally, we have achieved our desired result of OHE: tensor([[0, 0, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0], ... (40 observations) [0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1],
https://stackoverflow.com/questions/68045496/
How to add the last classification layer in EfficieNet pre-trained model in Pytorch?
I'm using the EfficientNet pre-trained model for my image classification project in Pytorch, and my purpose is to change the number of classes which is initially 1000 to 4. However, for that when I try adding a model._fc layer, I keep on seeing this error "EfficientNet' object has no attribute 'classifier". Here is my code (Config.NUM_CLASSES = 4): elif Config.MODEL_NAME == 'efficientnet-b3': from efficientnet_pytorch import EfficientNet model = EfficientNet.from_pretrained('efficientnet-b3') model._fc= torch.nn.Linear(in_features=model.classifier.in_features, **out_features=Config.NUM_CLASSES**, bias=True) The situation is different when I add model._fc to the end of the Resnet part, it clearly changes the number of output classes to 4 in Resnet-18. Here is the code for that: if Config.MODEL_NAME == 'resnet18': model = models.resnet50(pretrained=True) model.fc = torch.nn.Linear(in_features=model.fc.in_features, out_features=Config.NUM_CLASSES, bias=True) The solution is available for TensorFlow and Keras, and I would really appreciate it if anyone could help me with that in PyTorch. Regards, Far
The EfficientNet class doesn't have the attribute classifier, you need to change in_features=model.classifier.in_features to in_features=model._fc.in_features.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 4

#EfficientNet
from efficientnet_pytorch import EfficientNet
efficientnet = EfficientNet.from_pretrained('efficientnet-b3')
efficientnet._fc = torch.nn.Linear(in_features=efficientnet._fc.in_features, out_features=NUM_CLASSES, bias=True)

#mobilenet_v2
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier = nn.Sequential(nn.Dropout(p=0.2, inplace=False),
                                     nn.Linear(in_features=mobilenet.classifier[1].in_features, out_features=NUM_CLASSES, bias=True))

#inception_v3
inception = models.inception_v3(pretrained=True)
inception.fc = nn.Linear(in_features=inception.fc.in_features, out_features=NUM_CLASSES, bias=True)
https://stackoverflow.com/questions/68047331/
Tensorflow version of Pytorch Transforms
I have the following code that I use to prepare images before performing inference in a pytorch model: def image_loader(transform, image_name): image = Image.open(image_name) #transform image = transform(image).float() image = torch.tensor(image) image = image.unsqueeze(0) return image data_transforms = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) I've converted the model into a Tensorflow model, however, I'm unsure how I would do similar transformations to images before inference since there doesn't seem to be a tensorflow or keras equivalent. Any advice?
Here is some pointer, in pytorch you have from torchvision import transforms from PIL import Image import torch def image_loader(transform, image_name): image = Image.open(image_name).convert('RGB') image = transform(image).float() image = torch.tensor(image) image = image.unsqueeze(0) return image data_transforms = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) # check: visualize i = image_loader(data_transforms, '/content/1.png') i.shape plt.figure(figsize=(25,10)) subplot(121); imshow(np.array(i[0]).transpose(1, 2, 0)); And in tensorflow, you can achieve this as follows def transform(image, mean, std): for channel in range(3): image[:, :, channel] = (image[:, :, channel] - mean[channel]) \ / std[channel] return image def image_loader(image_name): image = Image.open(image_name).convert('RGB') image = transform(np.array(image) / 255, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) image = tf.cast(image, tf.float32) image = tf.expand_dims(image, 0) return image # check: visualize i = image_loader('/content/1.png') i.shape plt.figure(figsize=(25,10)) subplot(121); imshow(i[0]); This should output the same. Note, in the second case, we define the transform function, from another OP, here, it's fine, however, you can also check tf. keras...Normalization, see this answer for details.
https://stackoverflow.com/questions/68047460/
PyTorch ImageFolder: Which Index Does a Folder Get?
I want to create a confusion matrix as it is shown here, though I use a different dataset than FashinMNIST. Concretely, I want the axes to contain names such as "banana", "orange", "cucumber", etc. I have a folder where images of bananas, etc. are stored, which I load with PyTorch's ImageFolder class. Now, PyTorch automatically assings the numbers 0 to 9 to my classes (I have 10 classes in total), but I'm not sure whether the label "banana" always gets the same number 0. How do I find out, when using the ImageFolder class, which index belongs to which label (subfolder name)?
Class ImageFolder has attribute class_to_idx. Check the docs. x = torchvision.datasets.ImageFolder(root=path, transform=transform) print(x.class_to_idx)
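Since class_to_idx maps folder name -> index, and for the confusion-matrix axis labels you typically want the reverse direction, inverting the dict is a one-liner. ImageFolder assigns indices to the sorted folder names, so a given folder (e.g. "banana") keeps the same index as long as the set of folders does not change:
idx_to_class = {idx: name for name, idx in x.class_to_idx.items()}
print(idx_to_class)   # e.g. {0: 'banana', 1: 'cucumber', ...}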
https://stackoverflow.com/questions/68047990/
How can I see the first filters of a convolution layer
From this blog post, It was said that if you want to see kernel filter matrices directly, We can get the weight tensor and index into it accordingly. Now I want to see the first kernel of the first filters of my layer. From the blog, It was mentioned that I can use this conv1.weight[1,1,:,:] How do use this for the below model architecture. class Cifar10CnnModel(ImageClassificationBase): def __init__(self): super().__init__() self.network = nn.Sequential( nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 64 x 16 x 16 nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 128 x 8 x 8 nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # output: 256 x 4 x 4 nn.Flatten(), nn.Linear(256*4*4, 1024), nn.ReLU(), nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)) def forward(self, xb): return self.network(xb)
If you create a model by:
model = Cifar10CnnModel()
then your weights are stored in a sequential model and can be accessed by selecting the first element of the sequential container:
model.network[0].weight
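To see the first kernel of the first filter specifically (the analogue of the blog's conv1.weight[1,1,:,:], but with index 0 for "first"), you can index that weight tensor directly; for your first layer its shape is (32, 3, 3, 3), i.e. (out_channels, in_channels, kernel_height, kernel_width). This assumes the Cifar10CnnModel class from the question is already defined:
model = Cifar10CnnModel()
first_kernel = model.network[0].weight[0, 0, :, :]   # filter 0, input channel 0
print(first_kernel.shape)   # torch.Size([3, 3])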
https://stackoverflow.com/questions/68061991/
Is there a way to run a function before the optimizer updates the weights?
I'm going through PyTorch tutorial and just learned about optimizer.step and how it makes an update to the network's parameters (here). Is there a way to create a function that whenever theres a gradient updates to each learnable parameter (e.g weight), will take the weight value and the loss, and multiply that value by some percentage of that, say 90%? So if the update should be: w1 -= lr * loss_value = 1e-5 * 50 I want it to go through the function before the update and make it 1e-5 * 50 * 90% def func(loss_value, percentage): return loss_value * percentage #new update should be w1 -= loss_value * percentage Example model: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = self.fc1(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() opt = optim.Adam(net.parameters()) features = torch.rand((3,1)) opt.zero_grad() out = net(features) loss = torch.tensor(5) - torch.sum(out) loss.backward() # need to have the function change the value of the loss update before the optimizer? opt.step()
I got this bit of code from https://discuss.pytorch.org/t/how-to-modify-the-gradient-manually/7483/2 and edited it slightly:
loss.backward()
for p in model.parameters():
    weights = p.data
    scales = def_scales(weights)
    p.grad *= scales  # or whatever other operation
optimizer.step()
This goes through every parameter in the model (between loss.backward() and BEFORE the optimizer step) and adjusts its stored gradient BEFORE the weight update is applied. An example def_scales will look something like this (SUPER ugly), where vals are the compared parameter values, and scales are the desired loss scaling values:
def def_scales(weights, scales=[0.1,0.5,1,1], vals=[0,5,10,float('inf')]):
    out = torch.zeros_like(weights)
    for V,v in enumerate(vals[::-1]): # backwards because we're doing less than
        out[weights<=v] = scales[len(scales)-V-1] # might want to compare to abs
    return out
https://stackoverflow.com/questions/68062143/
RuntimeError: The size of tensor a (4144) must match the size of tensor b (256) at non-singleton dimension 3 site:stackoverflow.com
I am training a generator network with an image size (3, 256, 256). The network is as shown in the below # Number of channels in the training images. For color images this is 3 nc = 3 # Size of z latent vector (i.e. size of generator input) nz = 3 # Size of feature maps in generator ngf = 64 class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() self.main = nn.Sequential( # input is Z, going into a convolution nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True), # state size. (ngf*8) x 4 x 4 nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True), # state size. (ngf*4) x 8 x 8 nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True), # state size. (ngf*2) x 16 x 16 nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True), # state size. (ngf) x 32 x 32 nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False), nn.Tanh() # state size. (nc) x 64 x 64 ) def forward(self, input): return self.main(input) net_input_saved = net_input.detach().clone() noise = net_input.detach().clone() out_avg = None last_net = None psrn_noisy_last = 0 loss = [] psnr_noise = [] psnr_ground = [] i = 0 def closure(): global i, out_avg, psrn_noisy_last, last_net, net_input, loss if reg_noise_std > 0: net_input = net_input_saved + (noise.normal_() * reg_noise_std) #changing the input to the netwok out = net(net_input) # Smoothing if out_avg is None: out_avg = out.detach() else: out_avg = out_avg * exp_weight + out.detach() * (1 - exp_weight) # calculating average network output total_loss = mse(out, img_noisy_torch) total_loss.backward() loss.append(total_loss.item()) # caculating psrn psrn_noisy = compare_psnr(img_noisy_np, out.detach().cpu().numpy()[0]) # comparing psnr for the output image and the actual noisy image psrn_gt = compare_psnr(img_noisy_np, out.detach().cpu().numpy()[0]) # comparing psnr for the output image and the original image psrn_gt_sm = compare_psnr(img_np, out_avg.detach().cpu().numpy()[0]) # comparing psnr for the output average and the original image psnr_noise.append(psrn_noisy) psnr_ground.append(psrn_gt) if PLOT and i % show_every == 0: out_np = torch_to_np(out) # plotting the output image along the average image calculated print(f'\n\nAfter {i} iterations: ') print ('Iteration %05d Loss %f PSNR_noisy: %f PSRN_gt: %f PSNR_gt_sm: %f' % (i, total_loss.item(), psrn_noisy, psrn_gt, psrn_gt_sm), '\r', end='\n') plot_image_grid([np.clip(out_np, 0, 1), np.clip(torch_to_np(out_avg), 0, 1)], factor=figsize, nrow=1) # Backtracking if i % show_every: if psrn_noisy - psrn_noisy_last < -5: print('Falling back to previous checkpoint.') for new_param, net_param in zip(last_net, net.parameters()): net_param.data.copy_(new_param.cuda()) return total_loss*0 else: last_net = [x.detach().cpu() for x in net.parameters()] psrn_noisy_last = psrn_noisy i += 1 return total_loss p = get_params(OPT_OVER, net, net_input) optimize(OPTIMIZER, p, closure, LR, num_iter) when i try to train i am getting error /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([1, 3, 256, 256])) that is different to the input size (torch.Size([1, 3, 4144, 4144])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
return F.mse_loss(input, target, reduction=self.reduction) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-83-ed04cb48a6ab> in <module>() 67 68 p = get_params(OPT_OVER, net, net_input) ---> 69 optimize(OPTIMIZER, p, closure, LR, num_iter) 5 frames /content/utils/common_utils.py in optimize(optimizer_type, parameters, closure, LR, num_iter) 227 for j in range(num_iter): 228 optimizer.zero_grad() --> 229 closure() 230 optimizer.step() 231 else: <ipython-input-83-ed04cb48a6ab> in closure() 25 out_avg = out_avg * exp_weight + out.detach() * (1 - exp_weight) # calculating average network output 26 ---> 27 total_loss = mse(out, img_noisy_torch) 28 total_loss.backward() 29 /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 526 527 def forward(self, input: Tensor, target: Tensor) -> Tensor: --> 528 return F.mse_loss(input, target, reduction=self.reduction) 529 530 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction) 3087 reduction = _Reduction.legacy_get_string(size_average, reduce) 3088 -> 3089 expanded_input, expanded_target = torch.broadcast_tensors(input, target) 3090 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) 3091 /usr/local/lib/python3.7/dist-packages/torch/functional.py in broadcast_tensors(*tensors) 71 if has_torch_function(tensors): 72 return handle_torch_function(broadcast_tensors, tensors, *tensors) ---> 73 return _VF.broadcast_tensors(tensors) # type: ignore[attr-defined] 74 75 RuntimeError: The size of tensor a (4144) must match the size of tensor b (256) at non-singleton dimension 3 I understood this error is coming from mismatch of sizes in tensors, but i was unable to rectify the error. I am providing noise z of size torch.Size([1, 3, 256, 256]) same as the dimensions of the input image to the network but i am getting error.
I think the problem is the spatial size of net_input: the generator upsamples it, so a (1, 3, 256, 256) input does not come out as (1, 3, 256, 256) but as (1, 3, 4144, 4144), which is exactly the mismatch the MSE loss complains about. Try resizing net_input so that the generator's output matches img_noisy_torch.
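Working the sizes out from the layers in the question: the first ConvTranspose2d (kernel 4, stride 1, padding 0) maps a spatial size H to H + 3, and each of the following four (kernel 4, stride 2, padding 1) doubles it, so the output side is 16 * (H + 3). With H = 256 that gives 16 * 259 = 4144, which matches the error; to get a 256 x 256 output the latent input would have to be 13 x 13 (13 + 3 = 16 and 16 * 16 = 256). A quick check, assuming the Generator class and nz exactly as defined above:
import torch

net = Generator()
z = torch.randn(1, nz, 13, 13)
print(net(z).shape)   # expected: torch.Size([1, 3, 256, 256])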
https://stackoverflow.com/questions/68062571/
torch virtual env not working python 3.7 , what am i doing wrong?
so i installed pytorch using conda into a virtual env while referring to this video https://www.youtube.com/watch?v=vBfM5l9VK5c i have activated the env now inside jupyter notebook i run import torch print(torch.__version__) and it works but when ever i run this in .py file and run it through terminal it gives me this error import torch ModuleNotFoundError: No module named 'torch' if I try to pip install pytorch it says Requirement already satisfied: torchvision in c:\users\kiit\anaconda3\envs\torch\lib\site-packages (0.10.0) Requirement already satisfied: numpy in c:\users\kiit\anaconda3\envs\torch\lib\site-packages (from torchvision) (1.20.3) Requirement already satisfied: torch==1.9.0 in c:\users\kiit\anaconda3\envs\torch\lib\site-packages (from torchvision) (1.9.0) Requirement already satisfied: pillow>=5.3.0 in c:\users\kiit\anaconda3\envs\torch\lib\site-packages (from torchvision) (8.2.0) Requirement already satisfied: typing_extensions in c:\users\kiit\anaconda3\envs\torch\lib\site-packages (from torch==1.9.0->torchvision) (3.7.4.3) so what is going on exactly??
You might have more than one Python version installed on your system, i.e. one with conda and one installed separately. To check that, go to Control Panel --> Programs and Features. There you can figure out how many Python installations are present on your system. Delete the one you don't need and your problem will be resolved.
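As an additional hedged check (independent of the Control Panel route), you can print the interpreter path from both the notebook and the terminal and compare them; if they differ, the .py file is being run by a Python that has no torch installed:
import sys
print(sys.executable)   # e.g. ...\anaconda3\envs\torch\python.exe inside the working env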
https://stackoverflow.com/questions/68062606/
How to add mask to loss function in PyTorch
My PyTorch model outputs two images: op and psuedo-op, and I wish to backpropagate at only those pixels where loss(op_i, gt_i) < loss(psuedo-op_i, gt_i), where i is used to index pixels. Clearly, loss(op,gt).backward() cannot achieve this. So how can I do it? I have a rough solution in mind as follows: def loss(pred, target): """ shape of pred/target is BatchxChannelxHeightxWidth, where Channel=1 """ abs_diff = torch.abs(target - pred) l1_loss = abs_diff.mean(1, True) return l1_loss o1 = loss(op,gt) o2 = loss(psuedo-op,gt) o = torch.cat((o1,o2),dim=1) value, idx = torch.min(o,dim=1) NOW SOMEHOW USE IDX TO GENERATE MASK AND SELECTIVE BACKPROPAGATION Any other solution would also work if it allows me to backpropagate on o1 but only for those pixels where o1 < o2.
You can use the relu function for this purpose, I think. Since you need to backprop only on o1, you first need to detach the loss o2. And there is also a minus sign to correct the sign of the gradient. # This diff is 0 when o1 > o2, equal to o2-o1 otherwise o_diff = nn.functional.relu(o2.detach()-o1) # gradient of (-relu(b-x)) is 0 if b-x < 0, 1 otherwise (-o_diff).sum().backward() Here, using the relu as a kind of conditional on the sign of o2-o1 makes it very easy to zero out the gradients for the coefficients where o2-o1 is negative. I need to emphasize that since o2 is detached from the graph it is a constant with respect to your network, so it does not affect the gradient, and thus this operation achieves what you need: it is basically backpropagating d/dx(-relu(b - o1(x))), which is 0 if b < o1(x) and d/dx(o1(x)) otherwise (where b = o2 is constant).
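A self-contained sketch of the same idea with placeholder tensors (op, pseudo_op and gt here are random stand-ins, not the asker's actual outputs):
import torch
import torch.nn.functional as F

def per_pixel_l1(pred, target):
    # per-pixel L1 loss, shape Bx1xHxW, no reduction
    return torch.abs(target - pred).mean(1, keepdim=True)

op = torch.rand(2, 1, 4, 4, requires_grad=True)         # model output (placeholder)
pseudo_op = torch.rand(2, 1, 4, 4, requires_grad=True)  # second output (placeholder)
gt = torch.rand(2, 1, 4, 4)

o1 = per_pixel_l1(op, gt)
o2 = per_pixel_l1(pseudo_op, gt)

# relu(o2 - o1) equals o2 - o1 where o1 < o2 and 0 elsewhere; detaching o2 makes it a constant,
# so gradients flow only into o1 (and hence op) at the selected pixels
o_diff = F.relu(o2.detach() - o1)
(-o_diff).sum().backward()
print(op.grad)   # non-zero only at pixels where o1 < o2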
https://stackoverflow.com/questions/68063453/
PyTorch in-place operator issue in NumPy conversion
I converted a PyTorch Tensor into a NumPy ndarray, and as it's shown below 'a' and 'b' both have different ids so they point to different memory locations. I know in PyTorch underscore indicates in-place operation so a.add_(1) does not change the memory location of a, but why does it change the content of the array b (due to the difference in id)? import torch a = torch.ones(5) b = a.numpy() a.add_(1) print(id(a), id(b)) print(a) print(b) Results: 139851293933696 139851293925904 tensor([2., 2., 2., 2., 2.]) [2. 2. 2. 2. 2.] PyTorch documentation: Converting a torch Tensor to a NumPy array and vice versa is a breeze. The torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other. But why? (They have different IDs so must be independent) :(
If you use numpy.ndarray.__array_interface__ and torch.Tensor.data_ptr(), you can find the memory location of their data_array. a = torch.ones(5) b = a.numpy() a.add_(1) print(b.__array_interface__) # > {'data': (1848226926144, False), # 'strides': None, # 'descr': [('', '<f4')], # 'typestr': '<f4', # 'shape': (5,), # 'version': 3} print(a.data_ptr()) # > 1848226926144 a.data_ptr() == b.__array_interface__["data"][0] # > True Both the array and the tensor share the same data_array. We can reasonably conclude that the method torch.Tensor.numpy() creates a numpy array from the Tensor by passing it the reference to its data_array. b[0] = 4 print(a) # > tensor([4., 2., 2., 2., 2.])
https://stackoverflow.com/questions/68068651/
What is the name of different layers in my PyTorch model?
I have the following model in PyTorch: UNet3D( (encoders): ModuleList( (0): Encoder( (basic_module): DoubleConv( (SingleConv1): SingleConv( (groupnorm): GroupNorm(1, 5, eps=1e-05, affine=True) (conv): Conv3d(5, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) (SingleConv2): SingleConv( (groupnorm): GroupNorm(8, 32, eps=1e-05, affine=True) (conv): Conv3d(32, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) ) ) (1): Encoder( (pooling): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (basic_module): DoubleConv( (SingleConv1): SingleConv( (groupnorm): GroupNorm(8, 64, eps=1e-05, affine=True) (conv): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) (SingleConv2): SingleConv( (groupnorm): GroupNorm(8, 64, eps=1e-05, affine=True) (conv): Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) ) ) (2): Encoder( (pooling): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (basic_module): DoubleConv( (SingleConv1): SingleConv( (groupnorm): GroupNorm(8, 128, eps=1e-05, affine=True) (conv): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) (SingleConv2): SingleConv( (groupnorm): GroupNorm(8, 128, eps=1e-05, affine=True) (conv): Conv3d(128, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False) (ReLU): ReLU(inplace=True) ) ) ) Could someone please tell me what is the name of different layers here? For example, "encoders (0)"?. I want to extract intermediate layer output from model, so I need the name of each layer.
The names are given by what's inside the parentheses. Be aware that ModuleList is a list type, so the modules within are addressed by index. The PyTorch forums are usually quite good for that. This post describes how you can access and alter a layer, but the same applies to registering a forward hook. For instance, in your case model.encoders[0].basic_module will get you the basic_module in the first encoder.
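A hedged sketch of grabbing an intermediate output with a forward hook (assuming model is the UNet3D instance printed above):
activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# register the hook on the first encoder's basic_module
handle = model.encoders[0].basic_module.register_forward_hook(save_output("enc0_basic"))

# after a forward pass, activations["enc0_basic"] holds that layer's output
# out = model(some_input)
# handle.remove()  # detach the hook when done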
https://stackoverflow.com/questions/68069272/
Proper way to use Cross entropy loss with one hot vector in Pytorch
I've encountered many suggestions by searching online, but I don't understand the proper way to do it. Let's say the output of my model is 4 neurons, and the target labels are 1000 0100 0010 0001. In TensorFlow, I added a softmax layer at the end and used model.fit with CategoricalCrossEntropy. What is the proper way to do it in PyTorch? Right now, what I did (and it's working) is that I pass in my model output and the torch.argmax() of the target one-hot vector. For example, the model output is (0.7, 0.1, 0.1, 0.1) and the torch.argmax of the target vector is 0. On that I'm applying nn.CrossEntropyLoss(). Is this the proper way to do it?
Another way to do this would be to use BCELoss(), which is the same as cross-entropy loss except that a target vector in the range [0,1] is expected as well as an output vector. This just saves you having to do the torch.argmax() step.
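A hedged sketch of both options side by side (the tensors are made up; BCEWithLogitsLoss is used instead of BCELoss so no explicit sigmoid is needed):
import torch
import torch.nn as nn

logits = torch.randn(8, 4)                          # raw model outputs, no softmax
one_hot = torch.eye(4)[torch.randint(0, 4, (8,))]   # fake one-hot targets, shape (8, 4)

# Option 1: CrossEntropyLoss wants class indices and applies log-softmax internally
loss_ce = nn.CrossEntropyLoss()(logits, one_hot.argmax(dim=1))

# Option 2: a BCE-style loss takes the one-hot vector directly (sigmoid applied internally)
loss_bce = nn.BCEWithLogitsLoss()(logits, one_hot)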
https://stackoverflow.com/questions/68079750/
Pytorch with CUDA throws RuntimeError when using pack_padded_sequence
I am trying to train a BiLSTM-CRF on detecting new NER entities with Pytorch. To do so, I am using a snippet of code derivated from the Pytorch Advanced tutorial. This snippet implements batch training. I followed the READ-ME in order to present data as required. Everything works great on CPU, but when I'm trying to get it to GPU, the following error occur : --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-23-794982510db6> in <module> 4 batch_input, batch_input_lens, batch_mask, batch_target = batch_info 5 ----> 6 loss_train = model.neg_log_likelihood(batch_input, batch_input_lens, batch_mask, batch_target) 7 optimizer.zero_grad() 8 loss_train.backward() <ipython-input-11-e44ffbf7d75f> in neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target) 185 186 def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target): --> 187 feats = self.bilstm(batch_input, batch_input_lens, batch_mask) 188 gold_score = self.CRF.score_sentence(feats, batch_target) 189 forward_score = self.CRF.score_z(feats, batch_input_lens) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] <ipython-input-11-e44ffbf7d75f> in forward(self, batch_input, batch_input_lens, batch_mask) 46 batch_input = self.word_embeds(batch_input) # size: #batch * padding_length * embedding_dim 47 batch_input = rnn_utils.pack_padded_sequence( ---> 48 batch_input, batch_input_lens, batch_first=True) 49 batch_output, self.hidden = self.lstm(batch_input, self.hidden) 50 self.repackage_hidden(self.hidden) /opt/conda/lib/python3.7/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted) 247 248 data, batch_sizes = \ --> 249 _VF._pack_padded_sequence(input, lengths, batch_first) 250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None) 251 RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor` If I understand well, pack_padded_sequence need the tensor to be on CPU rather than GPU. Unfortunately the pack_padded_sequence is called by my forward function and I can't see any way to do so without going back to CPU for the whole training. Here is the complete code. 
Classes definitions : import torch import torch.nn as nn import torch.nn.utils.rnn as rnn_utils class BiLSTM(nn.Module): def __init__(self, vocab_size, tagset, embedding_dim, hidden_dim, num_layers, bidirectional, dropout, pretrained=None): super(BiLSTM, self).__init__() self.embedding_dim = embedding_dim self.hidden_dim = hidden_dim self.tagset_size = len(tagset) self.bidirectional = bidirectional self.num_layers = num_layers self.word_embeds = nn.Embedding(vocab_size+2, embedding_dim) if pretrained is not None: self.word_embeds = nn.Embedding.from_pretrained(pretrained) self.lstm = nn.LSTM( input_size=embedding_dim, hidden_size=hidden_dim // 2 if bidirectional else hidden_dim, num_layers=num_layers, dropout=dropout, bidirectional=bidirectional, batch_first=True, ) self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size) self.hidden = None def init_hidden(self, batch_size, device): init_hidden_dim = self.hidden_dim // 2 if self.bidirectional else self.hidden_dim init_first_dim = self.num_layers * 2 if self.bidirectional else self.num_layers self.hidden = ( torch.randn(init_first_dim, batch_size, init_hidden_dim).to(device), torch.randn(init_first_dim, batch_size, init_hidden_dim).to(device) ) def repackage_hidden(self, hidden): """Wraps hidden states in new Tensors, to detach them from their history.""" if isinstance(hidden, torch.Tensor): return hidden.detach_().to(device) else: return tuple(self.repackage_hidden(h) for h in hidden) def forward(self, batch_input, batch_input_lens, batch_mask): batch_size, padding_length = batch_input.size() batch_input = self.word_embeds(batch_input) # size: #batch * padding_length * embedding_dim batch_input = rnn_utils.pack_padded_sequence( batch_input, batch_input_lens, batch_first=True) batch_output, self.hidden = self.lstm(batch_input, self.hidden) self.repackage_hidden(self.hidden) batch_output, _ = rnn_utils.pad_packed_sequence(batch_output, batch_first=True) batch_output = batch_output.contiguous().view(batch_size * padding_length, -1) batch_output = batch_output[batch_mask, ...] out = self.hidden2tag(batch_output) return out def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target): loss = nn.CrossEntropyLoss(reduction='mean') feats = self(batch_input, batch_input_lens, batch_mask) batch_target = torch.cat(batch_target, 0).to(device) return loss(feats, batch_target) def predict(self, batch_input, batch_input_lens, batch_mask): feats = self(batch_input, batch_input_lens, batch_mask) val, pred = torch.max(feats, 1) return pred class CRF(nn.Module): def __init__(self, tagset, start_tag, end_tag, device): super(CRF, self).__init__() self.tagset_size = len(tagset) self.START_TAG_IDX = tagset.index(start_tag) self.END_TAG_IDX = tagset.index(end_tag) self.START_TAG_TENSOR = torch.LongTensor([self.START_TAG_IDX]).to(device) self.END_TAG_TENSOR = torch.LongTensor([self.END_TAG_IDX]).to(device) # trans: (tagset_size, tagset_size) trans (i, j) means state_i -> state_j self.trans = nn.Parameter( torch.randn(self.tagset_size, self.tagset_size) ) # self.trans.data[...] 
= 1 self.trans.data[:, self.START_TAG_IDX] = -10000 self.trans.data[self.END_TAG_IDX, :] = -10000 self.device = device def init_alpha(self, batch_size, tagset_size): return torch.full((batch_size, tagset_size, 1), -10000, dtype=torch.float, device=self.device) def init_path(self, size_shape): # Initialization Path - LongTensor + Device + Full_value=0 return torch.full(size_shape, 0, dtype=torch.long, device=self.device) def _iter_legal_batch(self, batch_input_lens, reverse=False): index = torch.arange(0, batch_input_lens.sum(), dtype=torch.long) packed_index = rnn_utils.pack_sequence( torch.split(index, batch_input_lens.tolist()) ) batch_iter = torch.split(packed_index.data, packed_index.batch_sizes.tolist()) batch_iter = reversed(batch_iter) if reverse else batch_iter for idx in batch_iter: yield idx, idx.size()[0] def score_z(self, feats, batch_input_lens): # 模拟packed pad过程 tagset_size = feats.shape[1] batch_size = len(batch_input_lens) alpha = self.init_alpha(batch_size, tagset_size) alpha[:, self.START_TAG_IDX, :] = 0 # Initialization for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens): feat = feats[legal_idx, ].view(legal_batch_size, 1, tagset_size) # # #batch * 1 * |tag| + #batch * |tag| * 1 + |tag| * |tag| = #batch * |tag| * |tag| legal_batch_score = feat + alpha[:legal_batch_size, ] + self.trans alpha_new = torch.logsumexp(legal_batch_score, 1).unsqueeze(2).to(device) alpha[:legal_batch_size, ] = alpha_new alpha = alpha + self.trans[:, self.END_TAG_IDX].unsqueeze(1) score = torch.logsumexp(alpha, 1).sum().to(device) return score def score_sentence(self, feats, batch_target): # CRF Batched Sentence Score # feats: (#batch_state(#words), tagset_size) # batch_target: list<torch.LongTensor> At least One LongTensor # Warning: words order = batch_target order def _add_start_tag(target): return torch.cat([self.START_TAG_TENSOR, target]).to(device) def _add_end_tag(target): return torch.cat([target, self.END_TAG_TENSOR]).to(device) from_state = [_add_start_tag(target) for target in batch_target] to_state = [_add_end_tag(target) for target in batch_target] from_state = torch.cat(from_state).to(device) to_state = torch.cat(to_state).to(device) trans_score = self.trans[from_state, to_state] gather_target = torch.cat(batch_target).view(-1, 1).to(device) emit_score = torch.gather(feats, 1, gather_target).to(device) return trans_score.sum() + emit_score.sum() def viterbi(self, feats, batch_input_lens): word_size, tagset_size = feats.shape batch_size = len(batch_input_lens) viterbi_path = self.init_path(feats.shape) # use feats.shape to init path.shape alpha = self.init_alpha(batch_size, tagset_size) alpha[:, self.START_TAG_IDX, :] = 0 # Initialization for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens): feat = feats[legal_idx, :].view(legal_batch_size, 1, tagset_size) legal_batch_score = feat + alpha[:legal_batch_size, ] + self.trans alpha_new, best_tag = torch.max(legal_batch_score, 1).to(device) alpha[:legal_batch_size, ] = alpha_new.unsqueeze(2) viterbi_path[legal_idx, ] = best_tag alpha = alpha + self.trans[:, self.END_TAG_IDX].unsqueeze(1) path_score, best_tag = torch.max(alpha, 1).to(device) path_score = path_score.squeeze() # path_score=#batch best_paths = self.init_path((word_size, 1)) for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens, reverse=True): best_paths[legal_idx, ] = best_tag[:legal_batch_size, ] # backword_path = viterbi_path[legal_idx, ] # 1 * |Tag| this_tag = best_tag[:legal_batch_size, ] # 1 * 
|legal_batch_size| backword_tag = torch.gather(backword_path, 1, this_tag).to(device) best_tag[:legal_batch_size, ] = backword_tag # never computing <START> # best_paths = #words return path_score.view(-1), best_paths.view(-1) class BiLSTM_CRF(nn.Module): def __init__(self, vocab_size, tagset, embedding_dim, hidden_dim, num_layers, bidirectional, dropout, start_tag, end_tag, device, pretrained=None): super(BiLSTM_CRF, self).__init__() self.bilstm = BiLSTM(vocab_size, tagset, embedding_dim, hidden_dim, num_layers, bidirectional, dropout, pretrained) self.CRF = CRF(tagset, start_tag, end_tag, device) def init_hidden(self, batch_size, device): self.bilstm.hidden = self.bilstm.init_hidden(batch_size, device) def forward(self, batch_input, batch_input_lens, batch_mask): feats = self.bilstm(batch_input, batch_input_lens, batch_mask) score, path = self.CRF.viterbi(feats, batch_input_lens) return path def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target): feats = self.bilstm(batch_input, batch_input_lens, batch_mask) gold_score = self.CRF.score_sentence(feats, batch_target) forward_score = self.CRF.score_z(feats, batch_input_lens) return forward_score - gold_score def predict(self, batch_input, batch_input_lens, batch_mask): return self(batch_input, batch_input_lens, batch_mask) Training cell : def prepare_sequence(seq, to_ix, device): idxs = [to_ix[w] for w in seq] return torch.tensor(idxs, dtype=torch.long).to(device) def prepare_labels(lab, tag_to_ix, device): idxs = [tag_to_ix[w] for w in lab] return torch.tensor(idxs, dtype=torch.long).to(device) class PadSequence: def __call__(self, batch): device = torch.device('cuda') # Let's assume that each element in "batch" is a tuple (data, label). # Sort the batch in the descending order sorted_batch = sorted(batch, key=lambda x: len(x[0]), reverse=True) # Get each sequence and pad it sequences = [x[0] for x in sorted_batch] sentence_in =[prepare_sequence(x, word_to_ix, device) for x in sequences] sequences_padded = torch.nn.utils.rnn.pad_sequence(sentence_in, padding_value = len(word_to_ix) +1, batch_first=True).to(device) lengths = torch.LongTensor([len(x) for x in sequences]).to(device) masks = [True if index_word!=len(word_to_ix)+1 else False for sentence in sequences_padded for index_word in sentence ] labels = [x[1] for x in sorted_batch] labels_in = [prepare_sequence(x, tag_to_ix, device) for x in labels] return sequences_padded, lengths, masks, labels_in { .... code to get the data formatted...} device = torch.device("cuda") batch_size = 64 START_TAG = "<START>" STOP_TAG = "<STOP>" EMBEDDING_DIM = 200 HIDDEN_DIM = 20 NUM_LAYER = 3 BIDIRECTIONNAL = True DROPOUT = 0.1 train_iter = DataLoader(dataset=training_data, collate_fn=PadSequence(), batch_size=64, shuffle=True) model = BiLSTM_CRF(len(word_to_ix), tagset, EMBEDDING_DIM, HIDDEN_DIM, NUM_LAYER, BIDIRECTIONNAL, DROPOUT, START_TAG, STOP_TAG, device ).to(device) optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4) model.init_hidden(batch_size, device) with tqdm(total=len(train_iter)) as progress_bar: for batch_info in train_iter: batch_input, batch_input_lens, batch_mask, batch_target = batch_info loss_train = model.neg_log_likelihood(batch_input, batch_input_lens, batch_mask, batch_target) optimizer.zero_grad() loss_train.backward() optimizer.step() progress_bar.update(1) # update progress
Within the PadSequence function (which acts as a collate_fn that gathers samples and makes a batch from them) you are explicitly casting to the cuda device, namely: class PadSequence: def __call__(self, batch): device = torch.device('cuda') # Left rest of the code for brevity ... lengths = torch.LongTensor([len(x) for x in sequences]).to(device) ... return sequences_padded, lengths, masks, labels_in You don't need to cast your data when creating the batch; we usually do that right before pushing the examples through the neural network. Also, you should at least define the device like this: device = torch.device('cuda' if torch.cuda.is_available() else "cpu") or, even better, leave the choice of device to the user in the part of the code where you set everything up.
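A hedged sketch of what the loop could then look like, moving data to the device just before the forward pass (names follow the question's code; note the error message itself says the lengths must stay a CPU tensor for pack_padded_sequence):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

for batch_info in train_iter:
    batch_input, batch_input_lens, batch_mask, batch_target = batch_info
    batch_input = batch_input.to(device)          # move only what the GPU needs
    # batch_input_lens stays on the CPU, as required by pack_padded_sequence
    loss_train = model.neg_log_likelihood(batch_input, batch_input_lens, batch_mask, batch_target)
    optimizer.zero_grad()
    loss_train.backward()
    optimizer.step()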
https://stackoverflow.com/questions/68086528/
How to split a Pytorch tensor into different dimensions?
I'm new to PyTorch. Let's say I have a tensor with shape torch.Size([1, 25200, 11]). I want to split it into 3 smaller tensors: the 1st with shape torch.Size([1, 3, 80, 80, 11]), the 2nd with torch.Size([1, 3, 40, 40, 11]), and the 3rd with torch.Size([1, 3, 20, 20, 11]). Really appreciate your help. Thanks. To explain those numbers: 80x80x3 = 19200, 40x40x3 = 4800, 20x20x3 = 1200; adding these results we get 25200. 1 is the batch size, and 11 is classes + xywh.
Something like this should work. import torch tensor = torch.ones((1, 25200, 11)) first_break = tensor[:, 0:19200, :].view((1, 3, 80, 80, 11)) second_break = tensor[:, 19200:19200+4800, :].view((1, 3, 40, 40, 11)) third_break = tensor[:, 19200+4800:19200+4800+1200, :].view((1, 3, 20, 20, 11)) If you give a bit more explanation and context the code could get cleaned up and not be so hardcoded, or maybe this gives you enough to run with.
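A hedged alternative that avoids the hand-written index arithmetic is torch.split with explicit section sizes:
import torch

tensor = torch.ones((1, 25200, 11))
sizes = [3 * 80 * 80, 3 * 40 * 40, 3 * 20 * 20]   # 19200, 4800, 1200

a, b, c = torch.split(tensor, sizes, dim=1)
first = a.reshape(1, 3, 80, 80, 11)
second = b.reshape(1, 3, 40, 40, 11)
third = c.reshape(1, 3, 20, 20, 11)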
https://stackoverflow.com/questions/68086675/
pytorch dataloader: concatenate batch along one dimension of the dataloader output
My dataset's __getitem__ function returns a torch.stft() M x N x D tensor, with N being the audio input series, which has variable length. Each item is read inside the __getitem__ function. I would like to have batches concatenated along the second dimension (N), so that by iterating over the dataloader I would get data shaped as M x (N x batch_size) x D. Is there a possible solution to this problem?
You can do this with a custom collate function, passed to the DataLoader: import torch from torch.utils.data import DataLoader M = 20 D = 12 N = 30 a = torch.rand((M,N,D)) b = torch.rand((M,N,D)) def my_collate(batch): c = torch.stack(batch, dim=1) return c.permute(0, 2, 1, 3) c = my_collate([a,b]) # output shape MxNxBxD-> torch.Size([20, 30, 2, 12]) And then to pass to the DataLoader: loader = DataLoader(dataset=datasetObject, batch_size=1, collate_fn=my_collate)
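If the goal is literally an M x (N_total) x D tensor, and especially if N differs between items (as the question says), a hedged variant that concatenates instead of stacking could look like this:
import torch
from torch.utils.data import DataLoader

def cat_collate(batch):
    # each item is M x N_i x D; cat along dim=1 tolerates different N_i
    return torch.cat(batch, dim=1)

a = torch.rand(20, 30, 12)
b = torch.rand(20, 25, 12)
print(cat_collate([a, b]).shape)   # torch.Size([20, 55, 12])

# loader = DataLoader(dataset=datasetObject, batch_size=2, collate_fn=cat_collate)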
https://stackoverflow.com/questions/68087353/
PyTorch transformer argument "dim_feedforward"
I would like to understand what exactly is going on with this argument. I have read that the feed-forward sub-layer inside the transformer layer is a "pointwise" feed-forward layer. What does "pointwise" mean in this context? A feed-forward layer takes 2 args: input features and output features. This argument can't be the output features, since no matter what value I use for it the output of the transformer layer always has the same shape. It also can't be the input features, since those are determined by the self-attention sublayer. MOST IMPORTANTLY: where is the argument for the size of the tensors for the attention, the ones that translate the input into queries, keys and values?
"Position-wise", or "point-wise", means the feed-forward network (FFN) takes each position of a sequence, say, each word of a sentence, as its input. So the point-wise FFN is a shared FFN that processes each word one by one. (and 3.) That's right. It is neither the input features (determined by the self-attention sublayer) nor the output features (the same value as the input features). It is actually the hidden features. The thing is, this particular FFN in the transformer encoder has two linear layers, according to the implementation of TransformerEncoderLayer: # Implementation of Feedforward model self.linear1 = Linear(d_model, dim_feedforward, **factory_kwargs) self.dropout = Dropout(dropout) self.linear2 = Linear(dim_feedforward, d_model, **factory_kwargs) So dim_feedforward is the number of features in the hidden layer of the FFN. Usually, its value is set to be several times larger than d_model (2048 by default).
https://stackoverflow.com/questions/68087780/
How does pad_packed_sequence work in pytorch?
Output of LSTM in PyTorch: I gave the input as a packed sequence (bidirectional LSTM), and according to the documents only output is packed while h_n and c_n are returned as tensors? After applying the pad_packed_sequence function to output to unpack it, how do I get the hidden states as a tensor? I saw somewhere this code: pad_packed_sequence(output)[0]; why do we have to take the 0-index here? Also, for the last hidden and cell state I get tensors using h_n[0], h_n[1] and c_n[0], c_n[1]. In this case the 0 and 1 indexing is done to get the forward and backward hidden and cell states. I don't understand the 0-indexing for output, and why are h_n and c_n not returned as packed sequences as well?
We take index 0 because pad_packed_sequence returns a tuple of (output, output_lengths), and we only want the padded output tensor.
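A minimal sketch of why the [0] index is needed (sizes here are made up):
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(input_size=4, hidden_size=6, bidirectional=True)
x = torch.rand(5, 3, 4)                  # (seq_len, batch, features)
lengths = torch.tensor([5, 4, 2])        # already sorted, so the enforce_sorted default is fine

packed = pack_padded_sequence(x, lengths)
packed_out, (h_n, c_n) = lstm(packed)    # output is packed, h_n/c_n are plain tensors

output, out_lengths = pad_packed_sequence(packed_out)
print(output.shape)    # torch.Size([5, 3, 12]) -> this is pad_packed_sequence(...)[0]
print(out_lengths)     # tensor([5, 4, 2])
print(h_n.shape)       # torch.Size([2, 3, 6]) -> h_n[0]/h_n[1] are forward/backward states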
https://stackoverflow.com/questions/68092328/
PyTorch throws OSError on Detectron2LayoutModel()
I've been trying to read pdf pages as an image, for extraction purposes. I found that layoutparser serves this purpose by identifying blocks of text. However, when I try to Create a Detectron2-based Layout Detection Model, I encounter the following error: codeblock: model = lp.Detectron2LayoutModel( config_path ='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config', label_map = {0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"}, extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8] ) error: OSError Traceback (most recent call last) <ipython-input-16-893fdc4d537c> in <module> 2 config_path ='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config', 3 label_map = {0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"}, ----> 4 extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8] 5 ) 6 . . . d:\softwares\python3\lib\site-packages\portalocker\utils.py in _get_fh(self) 269 def _get_fh(self) -> typing.IO: 270 '''Get a new filehandle''' --> 271 return open(self.filename, self.mode, **self.file_open_kwargs) 272 273 def _get_lock(self, fh: typing.IO) -> typing.IO: OSError: [Errno 22] Invalid argument: 'C:\\Users\\user/.torch/iopath_cache\\s/nau5ut6zgthunil\\config.yaml?dl=1.lock' I checked the destination path folder, and surprisingly, there is no config.yaml file, which can be the reason why the error shows up. I tried uninstalling and re-installing PyTorch in anticipation that the .yaml files would be installed correctly. Unfortunately, the problem remains the same. I would appreciate a solution for this, or an alternative suggestion if exists.
I found the solution: adding the config path of tesseract.exe to pytesseract_cmd so the CLI can run behind the scenes in Jupyter: pytesseract.pytesseract.tesseract_cmd = r'path\to\folder\Tesseract_OCR\tesseract.exe' Then calling the Detectron2LayoutModel didn't throw an error. I referred to this thread: Pytesseract : "TesseractNotFound Error: tesseract is not installed or it's not in your path", how do I fix this?
https://stackoverflow.com/questions/68094922/
Required image size when using pytorch pretrain model to predict
I used the ResNet-18 from PyTorch to predict an image. I've read that (224, 224) is the image size for this model, but when I tried resizing the image to (124, 124) or (324, 324), it still worked. Can anybody tell me why?
The implementation of ResNet variants on PyTorch comes with an AdaptiveAvgPool2d layer before the fully connected layer, ensuring that the output features are always of the correct shape for the fully connected layer, regardless of input size. In addition, the input size of 224x224 is recommended to prevent suboptimal amounts of padding.
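A quick hedged check of that behaviour (pretrained weights are not needed just to verify shapes):
import torch
from torchvision import models

model = models.resnet18(pretrained=False)
model.eval()

for size in (124, 224, 324):
    with torch.no_grad():
        out = model(torch.randn(1, 3, size, size))
    print(size, out.shape)   # always torch.Size([1, 1000]) thanks to AdaptiveAvgPool2d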
https://stackoverflow.com/questions/68096686/
PyTorch Column-wise reduction using multiplication
I'd like to reduce columns from a Torch tensor by multiplying all values from the same row. So, for instance: x = torch.tensor([[1,1,1],[1,1,0],[1,1,2], [1,2,2]]) Shape is 4*3. After the reduction, I'd like to have a tensor of shape 4 with each value being the product of each column ie. x_reduced = torch.tensor([1,0,2,4]) Is there a torch operator to do that easily?
Yep, the function call is simple: torch.prod(x,dim = 1).
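Using the question's tensor as a quick check:
import torch

x = torch.tensor([[1, 1, 1], [1, 1, 0], [1, 1, 2], [1, 2, 2]])
x_reduced = torch.prod(x, dim=1)
print(x_reduced)   # tensor([1, 0, 2, 4])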
https://stackoverflow.com/questions/68102018/
BERT model loss function from one hot encoded labels
For the line: loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) I have labels hot encoded such that it is a tensor of 32x17, since the batch size is 32 and there are 17 classes for the text categories. However, BERT model only takes for the label with a single dimension vector. Hence, I get the error: Expected input batch_size (32) to match target batch_size (544) The 544 is the product of 32x17. However, my question is how could I use one hot encoded labels to get the loss value in each iteration? I could use just label encoded labels, but that would not really be suitable for unordered labels. # BERT training loop for _ in trange(epochs, desc="Epoch"): ## TRAINING # Set our model to training mode model.train() # Tracking variables tr_loss = 0 nb_tr_examples, nb_tr_steps = 0, 0 # Train the data for one epoch for step, batch in enumerate(train_dataloader): # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Clear out the gradients (by default they accumulate) optimizer.zero_grad() # Forward pass loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) train_loss_set.append(loss.item()) # Backward pass loss.backward() # Update parameters and take a step using the computed gradient optimizer.step() # Update tracking variables tr_loss += loss.item() nb_tr_examples += b_input_ids.size(0) nb_tr_steps += 1 print("Train loss: {}".format(tr_loss/nb_tr_steps))
As stated in the comment, BERT for sequence classification expects the target tensor to be a [batch]-sized tensor with values spanning the range [0, num_labels). A one-hot encoded tensor can be converted by argmaxing it over the label dimension, i.e. labels=b_labels.argmax(dim=1).
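A small illustrative snippet (a made-up batch of 4 one-hot labels over 17 classes):
import torch

b_labels = torch.zeros(4, 17)
b_labels[torch.arange(4), torch.tensor([3, 0, 16, 7])] = 1.0   # fake one-hot labels

class_indices = b_labels.argmax(dim=1)
print(class_indices)   # tensor([ 3,  0, 16,  7]) -- the shape the model expects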
https://stackoverflow.com/questions/68104425/
TypeError: tensor() got an unexpected keyword argument 'names'
So I started reading Deep Learning with PyTorch and got to the point of setting names for the dimensions inside the tensor, to make it more friendly, but as soon as I use the names argument, I get the error: TypeError: tensor() got an unexpected keyword argument 'names' Can anyone help me out? The code is simple: import torch weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels']) weights_named I just want to run this, to see how to set names for the dimensions. Thanks in advance.
This is due to your PyTorch version. Upgrade your PyTorch version and it should work. In my case, import torch torch.__version__ #1.7 weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels']) # __main__:1: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Named tensors and the functions supporting them are available on the latest 1.9 release, but on 1.7 (my version) they are still experimental.
https://stackoverflow.com/questions/68105949/
Problem with batch_encode_plus method of tokenizer
I am encountering a strange issue in the batch_encode_plus method of the tokenizers. I have recently switched from transformer version 3.3.0 to 4.5.1. (I am creating my databunch for NER). I have 2 sentences whom I need to encode, and I have a case where the sentences are already tokenized, but since both the sentences differs in length so I need to pad [PAD] the shorter sentence in order to have my batch of uniform lengths. Here is the code below of I did with 3.3.0 version of transformers from transformers import AutoTokenizer pretrained_model_name = 'distilbert-base-cased' tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, add_prefix_space=True) sentences = ["He is an uninvited guest.", "The host of the party didn't sent him the invite."] # here we have the complete sentences encodings = tokenizer.batch_encode_plus(sentences, max_length=20, padding=True) batch_token_ids, attention_masks = encodings["input_ids"], encodings["attention_mask"] print(batch_token_ids[0]) print(tokenizer.convert_ids_to_tokens(batch_token_ids[0])) # And the output # [101, 1124, 1110, 1126, 8362, 1394, 5086, 1906, 3648, 119, 102, 0, 0, 0, 0] # ['[CLS]', 'He', 'is', 'an', 'un', '##in', '##vi', '##ted', 'guest', '.', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]'] # here we have the already tokenized sentences encodings = tokenizer.batch_encode_plus(batch_token_ids, max_length=20, padding=True, truncation=True, is_split_into_words=True, add_special_tokens=False, return_tensors="pt") batch_token_ids, attention_masks = encodings["input_ids"], encodings["attention_mask"] print(batch_token_ids[0]) print(tokenizer.convert_ids_to_tokens(batch_token_ids[0])) # And the output tensor([ 101, 1124, 1110, 1126, 8362, 1394, 5086, 1906, 3648, 119, 102, 0, 0, 0, 0]) ['[CLS]', 'He', 'is', 'an', 'un', '##in', '##vi', '##ted', 'guest', '.', '[SEP]', '[PAD]', [PAD]', '[PAD]', '[PAD]'] But if I try to mimic the same behavior in transformer version 4.5.1, I get different output from transformers import AutoTokenizer pretrained_model_name = 'distilbert-base-cased' tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, add_prefix_space=True) sentences = ["He is an uninvited guest.", "The host of the party didn't sent him the invite."] # here we have the complete sentences encodings = tokenizer.batch_encode_plus(sentences, max_length=20, padding=True) batch_token_ids, attention_masks = encodings["input_ids"], encodings["attention_mask"] print(batch_token_ids[0]) print(tokenizer.convert_ids_to_tokens(batch_token_ids[0])) # And the output #[101, 1124, 1110, 1126, 8362, 1394, 5086, 1906, 3648, 119, 102, 0, 0, 0, 0] #['[CLS]', 'He', 'is', 'an', 'un', '##in', '##vi', '##ted', 'guest', '.', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]'] # here we have the already tokenized sentences, Note we cannot pass the batch_token_ids # to the batch_encode_plus method in the newer version, so need to convert them to token first tokens1 = tokenizer.tokenize(sentences[0], add_special_tokens=True) tokens2 = tokenizer.tokenize(sentences[1], add_special_tokens=True) encodings = tokenizer.batch_encode_plus([tokens1, tokens2], max_length=20, padding=True, truncation=True, is_split_into_words=True, add_special_tokens=False, return_tensors="pt") batch_token_ids, attention_masks = encodings["input_ids"], encodings["attention_mask"] print(batch_token_ids[0]) print(tokenizer.convert_ids_to_tokens(batch_token_ids[0])) # And the output (not the desired one) tensor([ 101, 1124, 1110, 1126, 8362, 108, 108, 1107, 108, 108, 191, 1182, 108, 108, 21359, 
1181, 3648, 119, 102]) ['[CLS]', 'He', 'is', 'an', 'un', '#', '#', 'in', '#', '#', 'v', '##i', '#', '#', 'te', '##d', 'guest', '.', '[SEP]'] Not sure how to handle this, or what I am doing wrong here.
You need a non-fast tokenizer to use a list of integer tokens. tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, add_prefix_space=True, use_fast=False) The use_fast flag has been enabled by default in later versions. From the HuggingFace documentation: batch_encode_plus(batch_text_or_text_pairs: ...) batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers, also List[List[int]], List[Tuple[List[int], List[int]]])
https://stackoverflow.com/questions/68113075/
torch.Linear weight doesn't update
#import blah blah #active funtion Linear = torch.nn.Linear(6,1) sig = torch.nn.Sigmoid() #optimizer optim = torch.optim.SGD(Linear.parameters() ,lr = 0.001) #input #x => (891,6) #output y = y.reshape(891,1) #cost function loss_f = torch.nn.BCELoss() for iter in range (10): for i in range (1000): optim.zero_grad() forward = sig(Linear(x)) > 0.5 forward = forward.to(torch.float32) forward.requires_grad = True loss = loss_f(forward, y) loss.backward() optim.step() in this code, I want to update Linear.weight and Linear.bias but It doesn't work,, I think my code doesn't know what is weight and bias so, I tried to change optim = torch.optim.SGD(Linear.parameters() ,lr = 0.001) to optim = torch.optim.SGD([Linear.weight, Linear.bias] ,lr = 0.001) but It still didn't work,, // I wanna explain more detail in my problem but my English level is so low sorry
The BCELoss is defined as loss(x, y) = -[y * log(x) + (1 - y) * log(1 - x)] (averaged over the batch). As you can see, the input x values are probabilities. However, your use of sig(Linear(x)) > 0.5 is wrong. Moreover, sig(Linear(x)) > 0.5 returns a tensor with no autograd, and it breaks the computation graph. You are explicitly setting requires_grad=True; however, since the graph is broken it cannot reach the linear layers during backpropagation, and so their weights are not learned/changed. Correct sample usage: import torch import numpy as np Linear = torch.nn.Linear(6,1) sig = torch.nn.Sigmoid() #optimizer optim = torch.optim.SGD(Linear.parameters() ,lr = 0.001) # Sample data x = torch.rand(891,6) y = torch.rand(891,1) loss_f = torch.nn.BCELoss() for iter in range (10): optim.zero_grad() output = sig(Linear(x)) loss = loss_f(sig(Linear(x)), y) loss.backward() optim.step() print (Linear.bias.item()) Output: 0.10717090964317322 0.10703673213720322 0.10690263658761978 0.10676861554384232 0.10663467645645142 0.10650081932544708 0.10636703670024872 0.10623333603143692 0.10609971731901169 0.10596618056297302
https://stackoverflow.com/questions/68113728/
Unable to solve the "Trying to backward through the graph a second time" Error in PyTorch
I am trying to do a simple weight update using the optimizer like below: x = torch.rand(10, requires_grad=True) y = x * 15. + 10. optimizer = torch.optim.Adam loss = torch.nn.MSELoss() def train(x, y, loss, ep, opti): w = torch.rand(1, dtype=torch.float32, requires_grad=True) b = torch.rand(1, dtype=torch.float32, requires_grad=True) op = opti([w, b]) for e in range(ep): y_hat = x.multiply(w) + b l = loss(y_hat, y) print(f'Epoch: {e}, loss: {l}') l.backward() op.step() op.zero_grad() return w, b w_hat, b_hat = train(x, y, loss, 10, optimizer) However, I am getting the "Trying to backward through the graph a second time" error even though I am not aware why, as I am zeroing the gradients at each step. Do you have any suggestions?
The reason is x: it is created with requires_grad=True, so y = x * 15. + 10. is part of a computation graph that is freed after the first backward() call, and the next iteration tries to backpropagate through it again. Please change the first line to x = torch.rand(10)
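An equivalent hedged fix, if x must stay trainable for some other reason, is to detach the target so it carries no graph that gets freed after the first backward:
x = torch.rand(10, requires_grad=True)
y = (x * 15. + 10.).detach()   # y is now a constant target with no stale graph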
https://stackoverflow.com/questions/68121561/
Why is Normalization causing my network to have exploding gradients in training?
I've built a network (In Pytorch) that performs well for image restoration purposes. I'm using an autoencoder with a Resnet50 encoder backbone, however, I am only using a batch size of 1. I'm experimenting with some frequency domain stuff that only allows me to process one image at a time. I have found that my network performs reasonably well, however, it only behaves well if I remove all batch normalization from the network. Now of course batch norm is useless for a batch size of 1 so I switched over to group norm, designed for this purpose. However, even with group norm, my gradient explodes. The training can go very well for 20 - 100 epochs and then game over. Sometimes it recovers and explodes again. I should also say that in training, every new image fed in is given a wildly different amount of noise to train for random noise amounts. This has been done before but perhaps coupled with a batch size of 1 it could be problematic. I'm scratching my head at this one and I'm wondering if anyone has suggestions. I've dialed in my learning rate and clipped the max gradients but this isn't really solving the actual issue. I can post some code but I'm not sure where to start and hoping someone could give me a theory. Any ideas? Thanks!
To answer my own question, my network was unstable in training because a batch size of 1 makes the data too different from batch to batch, or, as the papers like to put it, too high an internal covariate shift. Not only were my images drawn from a very large, varied dataset, but they were also rotated and flipped randomly. As well as this, a random Gaussian noise level between 0 and 30 was chosen for each image, so one image may have little to no noise while the next may be barely distinguishable in some cases. In the above question I mentioned group norm; my network is complex and some of the code is adapted from other work, and there were still batch norm functions hidden in my code that I missed. I removed them. I'm still not sure why BN made things worse. Following this I reimplemented group norm with groups of size 32 and things are training much more nicely now. In short, removing the extra BN and adding group norm helped.
https://stackoverflow.com/questions/68122785/
Concatenating ResNet-50 predictions PyTorch
I am using a pre-trained ResNet-50 model where the last dense is removed and the output from the average pooling layer is flattened. This is done for feature extraction purposes. The images are read from folder after being resized to (300, 300); it's RGB images. torch version: 1.8.1 & torchvision version: 0.9.1 with Python 3.8. The code is as follows: model_resnet50 = torchvision.models.resnet50(pretrained = True) # To remove last dense layer from pre-trained model, Use code- model_resnet50_modified = torch.nn.Sequential(*list(model_resnet50.children())[:-1]) # Using 'AdaptiveAvgPool2d' layer, the predictions have shape- model_resnet50_modified(images).shape # torch.Size([32, 2048, 1, 1]) # Add a flatten layer after 'AdaptiveAvgPool2d(output_size=(1, 1))' layer at the end- model_resnet50_modified.flatten = nn.Flatten() # Sanity check- make predictions using a batch of images- predictions = model_resnet50_modified(images) predictions.shape # torch.Size([32, 2048]) I want to now feed batches of images to this model and concatenate the predictions made by the model (32, 2048) vertically. # number of images in training and validation sets- len(dataset_train), len(dataset_val) # (22500, 2500) There are a total of 22500 + 2500 = 25000 images. So the final table/matrix should have the shape: (25000, 2048) -> number of images = 25000 and number of extracted features = 2048. I tried running a toy code using np.vstack() as follows: x = np.random.random_sample(size = (1, 3)) x.shape # (1, 3) x # array([[0.52381798, 0.12345404, 0.1556422 ]]) for i in range(5): y = np.random.random_sample(size = (1, 3)) np.vstack((x, y)) x # array([[0.52381798, 0.12345404, 0.1556422 ]]) Solution(s)? Thanks!
If you want to stack the results in a Tensor: results = torch.empty((0, 2048)) results = results.to(device) results = torch.cat((results, predictions), 0)
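A hedged sketch of the full accumulation loop (assuming a DataLoader named train_loader that yields (images, labels) batches; model_resnet50_modified is the modified backbone from the question):
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_resnet50_modified = model_resnet50_modified.to(device)
model_resnet50_modified.eval()

features = torch.empty((0, 2048), device=device)
with torch.no_grad():
    for images, _ in train_loader:
        images = images.to(device)
        preds = model_resnet50_modified(images)          # shape (batch, 2048)
        features = torch.cat((features, preds), dim=0)

print(features.shape)   # e.g. torch.Size([22500, 2048])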
https://stackoverflow.com/questions/68126473/
Pytorch custom model automatically stored in cuda
I built a custom NN model like so: class MyNNet(torch.nn.Module): def __init__(self, inp_dim, n_classes): super(MyNNet, self).__init__() self.flat = torch.nn.Flatten() self.l1 = torch.nn.Linear(inp_dim * inp_dim, 32) self.l2 = torch.nn.Linear(32, 16) self.l3 = torch.nn.Linear(16, n_classes) def forward(self, X): out = self.flat(X) out = F.relu(self.l1(out)) out = F.relu(self.l2(out)) return self.l3(out) And a simple training script that updates the model parameters: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = MyNNet(28, 10) model.to(device) optimizer = torch.optim.Adam(model.parameters()) loss = torch.nn.CrossEntropyLoss() epochs = 20 for e in range(epochs): train_l = 0. for i, (s, c) in enumerate(train_loader): s.to(device) c.to(device) y_hat = model(s) l = loss(y_hat, c) train_l += l l.backward() optimizer.step() optimizer.zero_grad() print(f'Epoch: {e}, AvgLoss: {train_l / len(train_loader)}') As in the script I store the model to cuda and so I do with each batch of the dataset (MNIST). However the folllowing error appears: Expected all tensors to be on the same device, but found at least two devices but when I comment model.to(device), then the script works. Does this mean PyTorch stores the custom models automatically into cuda? Thanks.
Unlike Modules (where .to(...) works in-place), when moving Tensors to a device, you need to reassign them: s = s.to(device) c = c.to(device)
https://stackoverflow.com/questions/68131434/
Pip not recognizing PyTorch with ROCm installation
On the PyTorch website it lists two blocks of commands for the ROCm version installation. The first one, that installs torch itself, goes well, but when I try to import it shows this message. ImportError: libtinfo.so.5: cannot open shared object file: No such file or directory Also, when trying to install the torchvision package with the second block of commands, it shows a similar error. ModuleNotFoundError: No module named 'torch' This only happens for with the ROCm compute platform. Installing with CUDA works just fine, but unfortunately I don't have a NVidia GPU.
I believe it is a bug that hasn't been fixed. You can make a local symbolic link named libtinfo.6.so pointing to /usr/lib/libtinfo5.so, in the same folder as libtaichi_core.so. This should solve it.
https://stackoverflow.com/questions/68134969/
Pytorch nn.Linear different output for same input
For learning purposes, I'm trying to build a simple perceptron with pytorch which should not be trained but just give the output for set weights. Here's the code: import torch.nn from torch import tensor class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = torch.nn.Linear(3,1) self.relu = torch.nn.ReLU() # force weights to equal one with torch.no_grad(): self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight)) def forward(self, x): x = self.fc1(x) output = self.relu(x) return output net = Net() test_tensor = tensor([1, 1, 1]) print(net(test_tensor.float()).item()) I expect this single layer neural network to output 3. And that is roughly(!) the output for every execution, but it ranges from 2.5 to 3.5. Where does randomness enter the model?
Q: Where does randomness enter the model? It comes from the bias init. As you can see here, the bias is not initialized to zero as you expected. You can fix it this way: import torch from torch import nn class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = torch.nn.Linear(3,1) self.relu = torch.nn.ReLU() # force weights to equal one with torch.no_grad(): torch.nn.init.ones_(self.fc1.weight) torch.nn.init.zeros_(self.fc1.bias) def forward(self, x): x = self.fc1(x) output = self.relu(x) return output x = torch.tensor([1., 1., 1.]) Net()(x) # >>> tensor([3.], grad_fn=<ReluBackward0>)
https://stackoverflow.com/questions/68136623/
An error 'Cache may be out of date, try `force_reload=True`.' comes up even though I have included `force_reload=True` in the code block?
My Heroku App gives an Internal Server Error (500) when I try to get an inference for a model. With the command heroku logs --tail The following error comes up ( This is part of the error received ) 2021-06-25T13:13:01.052585+00:00 heroku[web.1]: State changed from up to starting 2021-06-25T13:13:02.131624+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2021-06-25T13:13:02.333580+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [8] [INFO] Worker exiting (pid: 8) 2021-06-25T13:13:02.333668+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [4] [INFO] Handling signal: term 2021-06-25T13:13:02.333748+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [7] [INFO] Worker exiting (pid: 7) 2021-06-25T13:13:02.734851+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [4] [INFO] Shutting down: Master 2021-06-25T13:13:02.814358+00:00 heroku[web.1]: Process exited with status 0 2021-06-25T13:13:32.771092+00:00 heroku[web.1]: Starting process with command `gunicorn app:app` 2021-06-25T13:13:37.235228+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Starting gunicorn 20.1.0 2021-06-25T13:13:37.235870+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Listening at: http://0.0.0.0:17520 (4) 2021-06-25T13:13:37.236182+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Using worker: sync 2021-06-25T13:13:37.248134+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [7] [INFO] Booting worker with pid: 7 2021-06-25T13:13:37.304799+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [8] [INFO] Booting worker with pid: 8 2021-06-25T13:13:37.739229+00:00 heroku[web.1]: State changed from starting to up 2021-06-25T13:14:08.000000+00:00 app[api]: Build succeeded 2021-06-25T13:48:08.838837+00:00 heroku[web.1]: Idling 2021-06-25T13:48:08.857160+00:00 heroku[web.1]: State changed from up to down 2021-06-25T13:48:10.249612+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2021-06-25T13:48:10.325914+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [4] [INFO] Handling signal: term 2021-06-25T13:48:10.336944+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [7] [INFO] Worker exiting (pid: 7) 2021-06-25T13:48:10.337795+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [8] [INFO] Worker exiting (pid: 8) 2021-06-25T13:48:11.241652+00:00 app[web.1]: [2021-06-25 13:48:11 +0000] [4] [INFO] Shutting down: Master 2021-06-25T13:48:11.421533+00:00 heroku[web.1]: Process exited with status 0 2021-06-26T07:29:48.172006+00:00 heroku[web.1]: Unidling 2021-06-26T07:29:48.174345+00:00 heroku[web.1]: State changed from down to starting 2021-06-26T07:30:11.641635+00:00 heroku[web.1]: Starting process with command `gunicorn app:app` 2021-06-26T07:30:13.993835+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Starting gunicorn 20.1.0 2021-06-26T07:30:13.994244+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Listening at: http://0.0.0.0:34971 (4) 2021-06-26T07:30:13.994334+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Using worker: sync 2021-06-26T07:30:14.001212+00:00 app[web.1]: [2021-06-26 07:30:14 +0000] [7] [INFO] Booting worker with pid: 7 2021-06-26T07:30:14.034313+00:00 app[web.1]: [2021-06-26 07:30:14 +0000] [8] [INFO] Booting worker with pid: 8 2021-06-26T07:30:15.444809+00:00 heroku[web.1]: State changed from starting to up 2021-06-26T07:30:16.251487+00:00 heroku[router]: at=info method=GET path="/" host=safety-helmet-object-detection.herokuapp.com request_id=1a73ef92-3214-4814-a6e6-0a078b49091e fwd="122.200.18.98" dyno=web.1 connect=1ms service=10ms status=200 bytes=2567 protocol=https 
2021-06-26T07:30:16.253872+00:00 app[web.1]: 10.43.233.218 - - [26/Jun/2021:07:30:16 +0000] "GET / HTTP/1.1" 200 2412 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:30:16.635010+00:00 heroku[router]: at=info method=GET path="/" host=safety-helmet-object-detection.herokuapp.com request_id=8f5913f4-2fc5-4875-a490-18c10c068f80 fwd="122.200.18.98" dyno=web.1 connect=1ms service=10ms status=200 bytes=2567 protocol=http 2021-06-26T07:30:16.635538+00:00 app[web.1]: 10.47.181.239 - - [26/Jun/2021:07:30:16 +0000] "GET / HTTP/1.1" 200 2412 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:30:16.913529+00:00 heroku[router]: at=info method=GET path="/static/style.css" host=safety-helmet-object-detection.herokuapp.com request_id=46585aa5-e4b8-465a-b088-6026bec294e2 fwd="122.200.18.98" dyno=web.1 connect=1ms service=8ms status=200 bytes=696 protocol=http 2021-06-26T07:30:16.913943+00:00 app[web.1]: 10.47.181.239 - - [26/Jun/2021:07:30:16 +0000] "GET /static/style.css HTTP/1.1" 200 0 "http://safety-helmet-object-detection.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:30:17.013092+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:30:17 +0000] "GET /static/pytorch.png HTTP/1.1" 200 0 "http://safety-helmet-object-detection.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:30:17.013496+00:00 heroku[router]: at=info method=GET path="/static/pytorch.png" host=safety-helmet-object-detection.herokuapp.com request_id=e334b2ec-032c-4392-99f5-7ff57c368de6 fwd="122.200.18.98" dyno=web.1 connect=1ms service=16ms status=200 bytes=11679 protocol=http 2021-06-26T07:30:17.371585+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:30:17 +0000] "GET /favicon.ico HTTP/1.1" 404 232 "http://safety-helmet-object-detection.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:30:17.372076+00:00 heroku[router]: at=info method=GET path="/favicon.ico" host=safety-helmet-object-detection.herokuapp.com request_id=5a0b6062-de2b-465c-80f9-ccc556342ef2 fwd="122.200.18.98" dyno=web.1 connect=1ms service=2ms status=404 bytes=393 protocol=http 2021-06-26T07:31:30.049426+00:00 app[web.1]: Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /app/.cache/torch/hub/master.zip 2021-06-26T07:31:37.289772+00:00 app[web.1]: Exception on / [POST] 2021-06-26T07:31:37.289821+00:00 app[web.1]: Traceback (most recent call last): 2021-06-26T07:31:37.289821+00:00 app[web.1]: File "/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 40, in _create 2021-06-26T07:31:37.289822+00:00 app[web.1]: model = attempt_load(fname, map_location=torch.device('cpu')) # download/load FP32 model 2021-06-26T07:31:37.289837+00:00 app[web.1]: File "/app/.cache/torch/hub/ultralytics_yolov5_master/models/experimental.py", line 119, in attempt_load 2021-06-26T07:31:37.289838+00:00 app[web.1]: ckpt = torch.load(attempt_download(w), map_location=map_location) # load 2021-06-26T07:31:37.289838+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py", line 579, in load 2021-06-26T07:31:37.289839+00:00 app[web.1]: with _open_file_like(f, 'rb') as 
opened_file: 2021-06-26T07:31:37.289839+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py", line 230, in _open_file_like 2021-06-26T07:31:37.289839+00:00 app[web.1]: return _open_file(name_or_buffer, mode) 2021-06-26T07:31:37.289840+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py", line 211, in __init__ 2021-06-26T07:31:37.289840+00:00 app[web.1]: super(_open_file, self).__init__(open(name, mode)) 2021-06-26T07:31:37.289840+00:00 app[web.1]: FileNotFoundError: [Errno 2] No such file or directory: 'best.pt' 2021-06-26T07:31:37.289840+00:00 app[web.1]: 2021-06-26T07:31:37.289841+00:00 app[web.1]: The above exception was the direct cause of the following exception: 2021-06-26T07:31:37.289841+00:00 app[web.1]: 2021-06-26T07:31:37.289841+00:00 app[web.1]: Traceback (most recent call last): 2021-06-26T07:31:37.289842+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/flask/app.py", line 2070, in wsgi_app 2021-06-26T07:31:37.289842+00:00 app[web.1]: response = self.full_dispatch_request() 2021-06-26T07:31:37.289842+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/flask/app.py", line 1515, in full_dispatch_request 2021-06-26T07:31:37.289842+00:00 app[web.1]: rv = self.handle_user_exception(e) 2021-06-26T07:31:37.289842+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/flask/app.py", line 1513, in full_dispatch_request 2021-06-26T07:31:37.289843+00:00 app[web.1]: rv = self.dispatch_request() 2021-06-26T07:31:37.289843+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/flask/app.py", line 1499, in dispatch_request 2021-06-26T07:31:37.289843+00:00 app[web.1]: return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) 2021-06-26T07:31:37.289843+00:00 app[web.1]: File "/app/app.py", line 23, in predict 2021-06-26T07:31:37.289844+00:00 app[web.1]: model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt', force_reload=True).autoshape() # force_reload = recache latest code 2021-06-26T07:31:37.289844+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/torch/hub.py", line 339, in load 2021-06-26T07:31:37.289844+00:00 app[web.1]: model = _load_local(repo_or_dir, model, *args, **kwargs) 2021-06-26T07:31:37.289845+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.9/site-packages/torch/hub.py", line 368, in _load_local 2021-06-26T07:31:37.289845+00:00 app[web.1]: model = entry(*args, **kwargs) 2021-06-26T07:31:37.289845+00:00 app[web.1]: File "/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 65, in custom 2021-06-26T07:31:37.289845+00:00 app[web.1]: return _create(path, autoshape=autoshape, verbose=verbose, device=device) 2021-06-26T07:31:37.289846+00:00 app[web.1]: File "/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 60, in _create 2021-06-26T07:31:37.289846+00:00 app[web.1]: raise Exception(s) from e 2021-06-26T07:31:37.289851+00:00 app[web.1]: Exception: Cache may be out of date, try `force_reload=True`. See https://github.com/ultralytics/yolov5/issues/36 for help. 
2021-06-26T07:31:37.290810+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:31:37 +0000] "POST / HTTP/1.1" 500 290 "http://safety-helmet-object-detection.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" 2021-06-26T07:31:37.291579+00:00 heroku[router]: at=info method=POST path="/" host=safety-helmet-object-detection.herokuapp.com request_id=15ab238a-5ef8-40e5-86e9-159f6b4de20f fwd="122.200.18.98" dyno=web.1 connect=1ms service=8230ms status=500 bytes=463 protocol=http Main Errors are FileNotFoundError: [Errno 2] No such file or directory: 'best.pt' The above exception was the direct cause of the following exception: Exception: Cache may be out of date, try force_reload=True. See https://github.com/ultralytics/yolov5/issues/36 for help. (app.py, that is the code I am working with is in the same directory as best.pt) The code I wrote is as follows: """ Web App to perform inference on a YOLOv5s custom model """ import io from PIL import Image from pathlib import Path from flask import Flask, render_template, request, redirect import torch app = Flask(__name__) @app.route("/", methods=["GET", "POST"]) def predict(): if request.method == "POST": if "file" not in request.files: return redirect(request.url) model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt', force_reload=True).autoshape() # force_reload = True re cache last code (But it doesn't work here) model.eval() file = request.files["file"] if not file: return img_bytes = file.read() img = Image.open(io.BytesIO(img_bytes)) results = model(img, size=640) results.display(save=True, save_dir = Path('static')) return redirect("static/image0.jpg") return render_template("index.html") if __name__ == "__main__": app.run()
I fixed this issue, The FileNotFoundError: [Errno 2] No such file or directory: 'best.pt' was the main error, Heroku couldn't figure that path out, so I included this custom model in a folder called static. My new code block is: model = torch.hub.load('ultralytics/yolov5', 'custom', path='static/best.pt', force_reload=True).autoshape() where app.py and static are in the same directory.
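As a hedged variant of that fix (not from the original post), the weights path can also be resolved relative to app.py itself, so it no longer depends on the dyno's working directory; the folder and filename below are just the ones mentioned above:

from pathlib import Path
import torch

# Build an absolute path next to app.py instead of relying on the process cwd
WEIGHTS = Path(__file__).resolve().parent / "static" / "best.pt"

model = torch.hub.load(
    'ultralytics/yolov5', 'custom',
    path=str(WEIGHTS),       # absolute path, independent of where Heroku starts the process
    force_reload=True,
)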
https://stackoverflow.com/questions/68140388/
What is the `node_dim` argument referring to in the message passing class?
In the PyTorch geometric tutorial for creating Message Passing Networks they have this paragraph at the start when explaining what the class does: MessagePassing(aggr="add", flow="source_to_target", node_dim=-2): Defines the aggregation scheme to use ("add", "mean" or "max") and the flow direction of message passing (either "source_to_target" or "target_to_source"). Furthermore, the node_dim attribute indicates along which axis to propagate. I don't understand what this node_dim is referring to, and why it is -2. I have looked at the documentation for the MessagePassing class and it says there that it is the axis which to propagate -- this still doesn't really clarify what we are doing here and why the default is -2 (presumably that is how you propagate information at a node level). Could someone offer some explanation of this to me please?
After referring to here and here, I think the thing related to it is the output of the 'message' function. In most cases, the shape of the output is [edge_num, emb_out], and if we set the node_dim as -2, it means that we will aggregate along the edge_num using indices of the target nodes. This is exactly the process that aggregates the information from source nodes. The result after aggregation is [node_num, emb_out].
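A small plain-PyTorch sketch of that idea (an illustration only, not PyG's actual implementation; the sizes and edge indices below are made up): the messages have shape [edge_num, emb_out], dim -2 of that tensor is the edge dimension, and summing along it with the target-node indices produces the [node_num, emb_out] result.

import torch

num_nodes, emb_out = 4, 3
target_index = torch.tensor([0, 0, 1, 3, 3])            # target node of each edge
messages = torch.randn(len(target_index), emb_out)      # shape [edge_num, emb_out]

# "add" aggregation along node_dim=-2 (here dim 0, the edge dimension):
aggregated = torch.zeros(num_nodes, emb_out)
aggregated.index_add_(0, target_index, messages)        # result has shape [node_num, emb_out]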
https://stackoverflow.com/questions/68145649/
If I have multiple losses that I sum together, do I have to declare separate loss functions?
I'm trying to train a model that uses multiple losses. I've noticed that in some implementations a single loss function is declared as loss_fct = nn.LossFunction() and used multiple times to calculate separate losses, and those losses are summed and backpropagated. For example: loss_fct = nn.LossFunction() loss1 = loss_fct(pred1, labels1) loss2 = loss_fct(pred2, labels2) total_loss = loss1 + loss2 total_loss.backward() optimizer.step() Is this okay or should I be declaring separate loss function objects for each loss? If this is okay, why is it okay? It just seems counterintuitive to me because they're separate losses that we're optimizing with different purposes. Summing and backpropagating once makes somewhat sense, but I can't wrap my head around using a single loss object.
The reason you have to create an instance of some loss function at all is that some loss functions have optional parameters that affect how they're computed. For example MSELoss supports different reduction modes. If you're computing the same loss function (with the same parametrization of the loss function itself) on each of your outputs then there is no need to instantiate multiple instances of the loss function.
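A minimal sketch of that point with dummy tensors (assuming MSELoss with its default reduction, not tied to any particular model): the loss module keeps no per-call state, so one instance can safely score both outputs.

import torch
import torch.nn as nn

loss_fct = nn.MSELoss()                                   # a single, stateless instance
pred1 = torch.randn(4, 2, requires_grad=True)
pred2 = torch.randn(4, 2, requires_grad=True)
labels1, labels2 = torch.randn(4, 2), torch.randn(4, 2)

total_loss = loss_fct(pred1, labels1) + loss_fct(pred2, labels2)
total_loss.backward()                                     # gradients reach both predictions
print(pred1.grad is not None, pred2.grad is not None)     # True True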
https://stackoverflow.com/questions/68147008/
Equality for Matrices/Tensors of Different Dimensions
Suppose you have two matrices A of shape [n,k] and B of shape [m,k]. Your desired output is C of shape [n,m], where each entry [i,j] corresponds to the equality of row A[i] and row B[j]. Implementing this straightforwardly would look something like this: for i in range(A.shape[0]): for j in range(B.shape[0]): C[i, j] = th.eq(A[i, :], B[j, :]).all().float() Example: A = [[0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.], [0., 0., 0., 0., 0., 0.]] and B = [[0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.], [0., 0., 1., 0., 0., 0.], [0., 0., 0., 0., 0., 0.]] the desired output would be: C = [[1., 0., 0., 1.], [0., 1., 0., 0.], [1., 0., 0., 1.]] The straightforward implementation is quite slow; do you have any ideas how to improve this on CUDA hardware with PyTorch, preferably without for-loops?
You can use broadcasting: c = (A[:, None, :] == B[None, ...]).all(dim=2).float()
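For reference, a quick check with the matrices from the question showing how the one-liner broadcasts:

import torch

A = torch.tensor([[0., 0., 0., 0., 0., 0.],
                  [1., 1., 1., 1., 1., 1.],
                  [0., 0., 0., 0., 0., 0.]])
B = torch.tensor([[0., 0., 0., 0., 0., 0.],
                  [1., 1., 1., 1., 1., 1.],
                  [0., 0., 1., 0., 0., 0.],
                  [0., 0., 0., 0., 0., 0.]])

# A[:, None, :] has shape (n, 1, k) and B[None, ...] has shape (1, m, k);
# the comparison broadcasts to (n, m, k) and .all(dim=2) reduces over k.
C = (A[:, None, :] == B[None, ...]).all(dim=2).float()
print(C)
# tensor([[1., 0., 0., 1.],
#         [0., 1., 0., 0.],
#         [1., 0., 0., 1.]])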
https://stackoverflow.com/questions/68147150/
conv2d getting bad input
I have trained a classifier and now trying to load it and run some predictions I am getting an error that is provided below .... return self._conv_forward(input, self.weight, self.bias) File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 439, in _conv_forward return F.conv2d(input, weight, bias, self.stride, TypeError: conv2d() received an invalid combination of arguments - got (list, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of: * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (list, Parameter, Parameter, tuple, tuple, tuple, int) * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (list, Parameter, Parameter, tuple, tuple, tuple, int) Here is the code import torch import torch.nn as nn import numpy as np from PIL import Image from torchvision.transforms import transforms from torch.utils.data import DataLoader Transformer - used to encode images transformer = transforms.Compose([ transforms.RandomHorizontalFlip(0.5), transforms.ToTensor(), ]) Getting a file and converting to Tensor def get_file_as_tensor(file_path): with np.load(file_path) as f: melspec_image_array = f['arr_0'] image = Image.fromarray(melspec_image_array, mode='RGB') image_tensor = transformer(image).div_(255).float() return image_tensor.clone().detach() Prediction function that is on top of the stack because the error occures when I run model([tensor]) def predict(tensor, model): yhat = model([tensor]) yhat = yhat.clone().detach() return yhat class ConvBlock(nn.Module): def __init__(self, in_channels, out_channels): super().__init__() self.conv1 = nn.Sequential( nn.Conv2d(in_channels, out_channels, 3, 1, 1), nn.BatchNorm2d(out_channels), nn.ReLU(), ) self.conv2 = nn.Sequential( nn.Conv2d(out_channels, out_channels, 3, 1, 1), nn.ReLU(), nn.Dropout(0.5) ) self._init_weights() def _init_weights(self): for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight) if m.bias is not None: nn.init.zeros_(m.bias) elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.zeros_(m.bias) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = F.avg_pool2d(x, 2) return x class Classifier(nn.Module): def __init__(self, num_classes=10): super().__init__() self.conv = nn.Sequential( ConvBlock(in_channels=3, out_channels=64), ConvBlock(in_channels=64, out_channels=128), ConvBlock(in_channels=128, out_channels=256), ConvBlock(in_channels=256, out_channels=512), ) self.fc = nn.Sequential( nn.Dropout(0.4), nn.Linear(512, 128), nn.PReLU(), #nn.BatchNorm1d(128), nn.Dropout(0.2), nn.Linear(128, num_classes), ) def forward(self, x): x = self.conv(x) x = torch.mean(x, dim=3) x, _ = torch.max(x, dim=2) x = self.fc(x) return x PATH = "models/model.pt" model = Classifier() model.load_state_dict(torch.load(PATH)) model.eval() cry_file_path = "processed_np/car_file.npz" car_tensor = get_file_as_tensor(car_file_path) no_car_file_path = "raw_negative_processed/nocar-1041.npz" no_car_tensor = get_file_as_tensor(no_car_file_path) car_prediction = predict(car_tensor, model) no_cry_prediction = predict(no_car_tensor, model) print("car", car_prediction) print("no car", no_car_prediction) The code is self explanatory but SO keeps asking for more text Would really appreciate some help as I am new to 
ML
def predict(tensor, model): yhat = model(tensor.unsqueeze(0)) yhat = yhat.clone().detach() return yhat You should use this method definition instead of yours.
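For context, a small self-contained sketch (not the poster's model) of why the change matters: wrapping the tensor in a Python list is what produces the (list, Parameter, ...) combination in the error, while unsqueeze(0) adds the batch dimension conv2d expects.

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.rand(3, 28, 28)          # a single image tensor, no batch dimension

# conv([x])                        # fails: conv2d() receives a list, not a Tensor
out = conv(x.unsqueeze(0))         # OK: (1, 3, 28, 28) -> (1, 8, 28, 28)
print(out.shape)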
https://stackoverflow.com/questions/68148804/
Binary mask of top n-th quantile in a batch of 2D tensors, but with individual n for each tensor
I have a tensor A of shape (100, 16, 16) and tensor B of shape (100), where 100 is the batch size. I want to create a binary mask of A that has shape (100, 16, 16), where in each element (element has shape (1, 16, 16)) of the mask, the value is 1 if the element is greater than the computed quantile value, else 0. Each element in tensor B indicates the percentile value for each individual element in A, in sequence. If B is simply a scalar, I can use: flat_A = torch.reshape(A, (100, -1)) quants = torch.quantile(flat_A, B, dim=1) quants = torch.reshape(quants, (100, 1, 1)) mask = torch.where(A >= quants, 1, 0) # quants will have shape (100, 1, 1) The question is: if B is a 1D tensor of shape (100) like I said above, how can I compute the percentile value for each individual element in A? I tried the following, but the results did not look like what I expected: >>> torch.quantile(flat_A, B, dim=1).shape torch.Size([100, 100]) >>> torch.quantile(flat_A, B, dim=0).shape torch.Size([100, 256]) I think the result's shape should be (100), so I can use mask = torch.where(A >= quants, 1, 0), or maybe I misunderstand it? For more context, this question is also the extension of the scalar B value question I had previously here.
This is one way using torch.quantile() function. Note that here I am using tensors of shape (5, 2, 2) instead of (100, 16, 16) for simplicity. import torch # Generate some data of shape (5, 2, 2) A = torch.arange(5 * 2 * 2).reshape(5, 2, 2) + 1.0 B = torch.linspace(0, 1, 5) # 5 quantile values for each element in A Af = A.reshape(A.shape[0], -1) # flattens A to a 2D tensor quantiles = torch.quantile(Af, B, dim = 1, keepdim = True) quants = quantiles[torch.arange(A.shape[0]), torch.arange(A.shape[0]), 0] mask = (A >= quants[:, None, None]).type(torch.uint8) Here the tensor quantiles is of shape torch.Size([5, 5, 1]) because it stores the thresholds for each quantile value in B for each element in A (or row in Af). Since we have 5 quantile values, we get 5 thresholds for each element in A. For instance, quantiles[i, j, 0] has the threshold for B[i]th quantile of A[j] or Af[j], and you essentially need the values quantiles[k, k, 0] for k in range of batch size or 5 here. Now to satisfy the requirement that you need thresholds for corresponding quantiles in B and elements in A, simply index out the diagonal elements from quantiles and populate quants that has shape torch.Size([5]). Finally to get the mask, compare A with the corresponding thresholds for each element. Note that this uses a broadcasted elementwise comparison with the thresholds. mask has the required shape of torch.Size([5, 2, 2]).
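A quick way to sanity-check the vectorized result, reusing A, B and mask from the snippet above (a sketch added here, not part of the original answer):

mask_loop = torch.stack([A[i] >= torch.quantile(A[i].reshape(-1), float(B[i]))
                         for i in range(A.shape[0])])
print(torch.equal(mask.bool(), mask_loop))   # expected: True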
https://stackoverflow.com/questions/68148903/
How to extract overlapping patches from a 3D volume and recreate the input shape from the patches?
PyTorch offers the torch.Tensor.unfold operation, which can be chained across arbitrarily many dimensions to extract overlapping patches. How can we reverse the patch extraction so that the patches are combined back into the input shape? The focus is 3D volumetric images with 1 channel (biomedical). Extracting is possible with unfold; how can we combine the patches if they overlap?
To extract (overlapping-) patches and to reconstruct the input shape we can use the torch.nn.functional.unfold and the inverse operation torch.nn.functional.fold. These methods only process 4D tensors or 2D images, however you can use these methods to process one dimension at a time. Few notes: This way requires fold/unfold methods from pytorch, unfortunately I have yet to find a similar method in the TF api. We start with 2D then 3D then 4D to show the incremental differences, you can extend to arbitrarily many dimensions (probably write a loop instead of hardcoding each dimension like i did) We can extract patches in 2 ways, their output is the same. The methods are called extract_patches_Xd and extract_patches_Xds where X is the number of dimensions. The latter uses torch.Tensor.unfold() and has less lines of code. (output is the same, except it cannot use dilation) The methods extract_patches_Xd and combine_patches_Xd are inverse methods and the combiner reverses the steps from the extracter step by step. The lines of code are followed by a comment stating the dimensionality such as (B, C, T, D, H, W). The following are used: B: Batch size C: Channels T: Time Dimension D: Depth Dimension H: Height Dimension W: Width Dimension x_dim_in: In the extraction method, this is the number input pixels in dimension x. In the combining method, this is the number of number of sliding windows in dimension x. x_dim_out: In the extraction method, this is the number of sliding windows in dimension x. In the combining method, this is the number output pixels in dimension x. I have a public notebook to try out the code I have tried out basic 2D, 3D and 4D tensors as shown below. However, my code is not infallible and I appreciate feedback when tested on other inputs. The get_dim_blocks() method is the function given on the pytorch docs website to compute the output shape of a convolutional layer. Note that if you have overlapping patches and you combine them, the overlapping elements will be summed. If you would like to get the initial input again there is a way. Create similar sized tensor of ones as the patches with torch.ones_like(patches_tensor). Combine the patches into full image with same output shape. (this creates a counter for overlapping elements). Divide the Combined image with the Combined ones, this should reverse any double summation of elements. First (2D): The torch.nn.functional.fold and torch.nn.functional.unfold methods can be used directly. 
import torch def extract_patches_2ds(x, kernel_size, padding=0, stride=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding, padding) if isinstance(stride, int): stride = (stride, stride) channels = x.shape[1] x = torch.nn.functional.pad(x, padding) # (B, C, H, W) x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]) # (B, C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1]) # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1]) return x def extract_patches_2d(x, kernel_size, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding) if isinstance(stride, int): stride = (stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels = x.shape[1] h_dim_in = x.shape[2] w_dim_in = x.shape[3] h_dim_out = get_dim_blocks(h_dim_in, kernel_size[0], padding[0], stride[0], dilation[0]) w_dim_out = get_dim_blocks(w_dim_in, kernel_size[1], padding[1], stride[1], dilation[1]) # (B, C, H, W) x = torch.nn.functional.unfold(x, kernel_size, padding=padding, stride=stride, dilation=dilation) # (B, C * kernel_size[0] * kernel_size[1], h_dim_out * w_dim_out) x = x.view(-1, channels, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out) # (B, C, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out) x = x.permute(0,1,4,5,2,3) # (B, C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1]) # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1]) return x def combine_patches_2d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding) if isinstance(stride, int): stride = (stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels = x.shape[1] h_dim_out, w_dim_out = output_shape[2:] h_dim_in = get_dim_blocks(h_dim_out, kernel_size[0], padding[0], stride[0], dilation[0]) w_dim_in = get_dim_blocks(w_dim_out, kernel_size[1], padding[1], stride[1], dilation[1]) # (B * h_dim_in * w_dim_in, C, kernel_size[0], kernel_size[1]) x = x.view(-1, channels, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1]) # (B, C, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1]) x = x.permute(0,1,4,5,2,3) # (B, C, kernel_size[0], kernel_size[1], h_dim_in, w_dim_in) x = x.contiguous().view(-1, channels * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in) # (B, C * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in) x = torch.nn.functional.fold(x, (h_dim_out, w_dim_out), kernel_size=(kernel_size[0], kernel_size[1]), padding=padding, stride=stride, dilation=dilation) # (B, C, H, W) return x a = torch.arange(1, 65, dtype=torch.float).view(2,2,4,4) print(a.shape) print(a) b = extract_patches_2d(a, 2, padding=1, stride=2, dilation=1) # b = extract_patches_2ds(a, 2, 
padding=1, stride=2) print(b.shape) print(b) c = combine_patches_2d(b, 2, (2,2,4,4), padding=1, stride=2, dilation=1) print(c.shape) print(c) print(torch.all(a==c)) Output (2D) torch.Size([2, 2, 4, 4]) tensor([[[[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.]], [[17., 18., 19., 20.], [21., 22., 23., 24.], [25., 26., 27., 28.], [29., 30., 31., 32.]]], [[[33., 34., 35., 36.], [37., 38., 39., 40.], [41., 42., 43., 44.], [45., 46., 47., 48.]], [[49., 50., 51., 52.], [53., 54., 55., 56.], [57., 58., 59., 60.], [61., 62., 63., 64.]]]]) torch.Size([18, 2, 2, 2]) tensor([[[[ 0., 0.], [ 0., 1.]], [[ 0., 0.], [ 2., 3.]]], [[[ 0., 0.], [ 4., 0.]], [[ 0., 5.], [ 0., 9.]]], [[[ 6., 7.], [10., 11.]], [[ 8., 0.], [12., 0.]]], [[[ 0., 13.], [ 0., 0.]], [[14., 15.], [ 0., 0.]]], [[[16., 0.], [ 0., 0.]], [[ 0., 0.], [ 0., 17.]]], [[[ 0., 0.], [18., 19.]], [[ 0., 0.], [20., 0.]]], [[[ 0., 21.], [ 0., 25.]], [[22., 23.], [26., 27.]]], [[[24., 0.], [28., 0.]], [[ 0., 29.], [ 0., 0.]]], [[[30., 31.], [ 0., 0.]], [[32., 0.], [ 0., 0.]]], [[[ 0., 0.], [ 0., 33.]], [[ 0., 0.], [34., 35.]]], [[[ 0., 0.], [36., 0.]], [[ 0., 37.], [ 0., 41.]]], [[[38., 39.], [42., 43.]], [[40., 0.], [44., 0.]]], [[[ 0., 45.], [ 0., 0.]], [[46., 47.], [ 0., 0.]]], [[[48., 0.], [ 0., 0.]], [[ 0., 0.], [ 0., 49.]]], [[[ 0., 0.], [50., 51.]], [[ 0., 0.], [52., 0.]]], [[[ 0., 53.], [ 0., 57.]], [[54., 55.], [58., 59.]]], [[[56., 0.], [60., 0.]], [[ 0., 61.], [ 0., 0.]]], [[[62., 63.], [ 0., 0.]], [[64., 0.], [ 0., 0.]]]]) torch.Size([2, 2, 4, 4]) tensor([[[[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.]], [[17., 18., 19., 20.], [21., 22., 23., 24.], [25., 26., 27., 28.], [29., 30., 31., 32.]]], [[[33., 34., 35., 36.], [37., 38., 39., 40.], [41., 42., 43., 44.], [45., 46., 47., 48.]], [[49., 50., 51., 52.], [53., 54., 55., 56.], [57., 58., 59., 60.], [61., 62., 63., 64.]]]]) tensor(True) Second (3D): Now it becomes interesting: We need to use 2 fold and unfold where we first apply the fold to the D dimension and leave the W and H untouched by setting kernel to 1, padding to 0, stride to 1 and dilation to 1. After we review the tensor and fold over the H and W dimensions. The unfolding happens in reverse, starting with H and W, then D. 
def extract_patches_3ds(x, kernel_size, padding=0, stride=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding, padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride) channels = x.shape[1] x = torch.nn.functional.pad(x, padding) # (B, C, D, H, W) x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2]) # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2]) # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2]) return x def extract_patches_3d(x, kernel_size, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels = x.shape[1] d_dim_in = x.shape[2] h_dim_in = x.shape[3] w_dim_in = x.shape[4] d_dim_out = get_dim_blocks(d_dim_in, kernel_size[0], padding[0], stride[0], dilation[0]) h_dim_out = get_dim_blocks(h_dim_in, kernel_size[1], padding[1], stride[1], dilation[1]) w_dim_out = get_dim_blocks(w_dim_in, kernel_size[2], padding[2], stride[2], dilation[2]) # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out) # (B, C, D, H, W) x = x.view(-1, channels, d_dim_in, h_dim_in * w_dim_in) # (B, C, D, H * W) x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1)) # (B, C * kernel_size[0], d_dim_out * H * W) x = x.view(-1, channels * kernel_size[0] * d_dim_out, h_dim_in, w_dim_in) # (B, C * kernel_size[0] * d_dim_out, H, W) x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2])) # (B, C * kernel_size[0] * d_dim_out * kernel_size[1] * kernel_size[2], h_dim_out, w_dim_out) x = x.view(-1, channels, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out) # (B, C, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out) x = x.permute(0, 1, 3, 6, 7, 2, 4, 5) # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2]) # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2]) return x def combine_patches_3d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels 
= x.shape[1] d_dim_out, h_dim_out, w_dim_out = output_shape[2:] d_dim_in = get_dim_blocks(d_dim_out, kernel_size[0], padding[0], stride[0], dilation[0]) h_dim_in = get_dim_blocks(h_dim_out, kernel_size[1], padding[1], stride[1], dilation[1]) w_dim_in = get_dim_blocks(w_dim_out, kernel_size[2], padding[2], stride[2], dilation[2]) # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out) x = x.view(-1, channels, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2]) # (B, C, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2]) x = x.permute(0, 1, 5, 2, 6, 7, 3, 4) # (B, C, kernel_size[0], d_dim_in, kernel_size[1], kernel_size[2], h_dim_in, w_dim_in) x = x.contiguous().view(-1, channels * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in) # (B, C * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in) x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2])) # (B, C * kernel_size[0] * d_dim_in, H, W) x = x.view(-1, channels * kernel_size[0], d_dim_in * h_dim_out * w_dim_out) # (B, C * kernel_size[0], d_dim_in * H * W) x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1)) # (B, C, D, H * W) x = x.view(-1, channels, d_dim_out, h_dim_out, w_dim_out) # (B, C, D, H, W) return x a = torch.arange(1, 129, dtype=torch.float).view(2,2,2,4,4) print(a.shape) print(a) # b = extract_patches_3d(a, 2, padding=1, stride=2) b = extract_patches_3ds(a, 2, padding=1, stride=2) print(b.shape) print(b) c = combine_patches_3d(b, 2, (2,2,2,4,4), padding=1, stride=2) print(c.shape) print(c) print(torch.all(a==c)) Output (3D) (I had to limit the characters please look at the notebook) Third (4D) We add a time dimension to the 3D volume. We start the folding with just the T dimension, leaving D, H and W alone similarly to the 3D version. Then we fold over D leaving H and W. Finally we do H and W. The unfolding happens in reverse again. Hopefully by now you notice a pattern and you can add arbitrarily many dimensions and start folding one by one. The unfolding happens in reverse again. 
def extract_patches_4ds(x, kernel_size, padding=0, stride=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding, padding, padding, padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride, stride) channels = x.shape[1] x = torch.nn.functional.pad(x, padding) # (B, C, T, D, H, W) x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2]).unfold(5, kernel_size[3], stride[3]) # (B, C, t_dim_out, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) # (B * t_dim_out, d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) return x def extract_patches_4d(x, kernel_size, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation, dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels = x.shape[1] t_dim_in = x.shape[2] d_dim_in = x.shape[3] h_dim_in = x.shape[4] w_dim_in = x.shape[5] t_dim_out = get_dim_blocks(t_dim_in, kernel_size[0], padding[0], stride[0], dilation[0]) d_dim_out = get_dim_blocks(d_dim_in, kernel_size[1], padding[1], stride[1], dilation[1]) h_dim_out = get_dim_blocks(h_dim_in, kernel_size[2], padding[2], stride[2], dilation[2]) w_dim_out = get_dim_blocks(w_dim_in, kernel_size[3], padding[3], stride[3], dilation[3]) # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out) # (B, C, T, D, H, W) x = x.view(-1, channels, t_dim_in, d_dim_in * h_dim_in * w_dim_in) # (B, C, T, D * H * W) x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1)) # (B, C * kernel_size[0], t_dim_out * D * H * W) x = x.view(-1, channels * kernel_size[0] * t_dim_out, d_dim_in, h_dim_in * w_dim_in) # (B, C * kernel_size[0] * t_dim_out, D, H * W) x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1)) # (B, C * kernel_size[0] * t_dim_out * kernel_size[1], d_dim_out * H * W) x = x.view(-1, channels * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, h_dim_in, w_dim_in) # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, H, W) x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3])) # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out * kernel_size[2] * kernel_size[3], h_dim_out * w_dim_out) x = x.view(-1, channels, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out) # (B, C, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out) x = x.permute(0, 1, 3, 5, 8, 9, 2, 4, 6, 7) # (B, C, t_dim_out, d_dim_out, 
h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) # (B * t_dim_out * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) return x def combine_patches_4d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1): if isinstance(kernel_size, int): kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size) if isinstance(padding, int): padding = (padding, padding, padding, padding) if isinstance(stride, int): stride = (stride, stride, stride, stride) if isinstance(dilation, int): dilation = (dilation, dilation, dilation, dilation) def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1): dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1 return dim_out channels = x.shape[1] t_dim_out, d_dim_out, h_dim_out, w_dim_out = output_shape[2:] t_dim_in = get_dim_blocks(d_dim_out, kernel_size[0], padding[0], stride[0], dilation[0]) d_dim_in = get_dim_blocks(d_dim_out, kernel_size[1], padding[1], stride[1], dilation[1]) h_dim_in = get_dim_blocks(h_dim_out, kernel_size[2], padding[2], stride[2], dilation[2]) w_dim_in = get_dim_blocks(w_dim_out, kernel_size[3], padding[3], stride[3], dilation[3]) # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out) x = x.view(-1, channels, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) # (B, C, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3]) x = x.permute(0, 1, 6, 2, 7, 3, 8, 9, 4, 5) # (B, C, kernel_size[0], t_dim_in, kernel_size[1], d_dim_in, kernel_size[2], kernel_size[3], h_dim_in, w_dim_in) x = x.contiguous().view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in * w_dim_in) # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in, w_dim_in) x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3])) # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in, H, W) x = x.view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * h_dim_out * w_dim_out) # (B, C * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * H * W) x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1)) # (B, C * kernel_size[0] * t_dim_in, D, H * W) x = x.view(-1, channels * kernel_size[0], t_dim_in * d_dim_out * h_dim_out * w_dim_out) # (B, C * kernel_size[0], t_dim_in * D * H * W) x = torch.nn.functional.fold(x, output_size=(t_dim_out, d_dim_out * h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1)) # (B, C, T, D * H * W) x = x.view(-1, channels, t_dim_out, d_dim_out, h_dim_out, w_dim_out) # (B, C, T, D, H, W) return x a = torch.arange(1, 129, dtype=torch.float).view(2,2,2,2,4,2) print(a.shape) print(a) # b = extract_patches_4d(a, 2, padding=1, stride=2) b = extract_patches_4ds(a, 2, padding=1, stride=2) print(b.shape) print(b) c = combine_patches_4d(b, 2, (2,2,2,2,4,2), 
padding=1, stride=2) print(c.shape) print(c) print(torch.all(a==c)) Output (4D) (I had to limit the characters please look at the notebook)
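As noted near the top of this answer, overlapping patches get summed when folded back; here is a short sketch of the ones-trick that recovers the exact input (my addition, reusing the 3D helpers defined above, with stride smaller than the kernel size so the patches really overlap):

a3 = torch.arange(1, 129, dtype=torch.float).view(2, 2, 2, 4, 4)
patches = extract_patches_3d(a3, 2, padding=1, stride=1)                 # overlapping patches
summed = combine_patches_3d(patches, 2, a3.shape, padding=1, stride=1)   # overlaps get summed
counts = combine_patches_3d(torch.ones_like(patches), 2, a3.shape, padding=1, stride=1)  # how often each voxel was summed
recovered = summed / counts                                              # undo the double counting
print(torch.allclose(recovered, a3))   # expected: True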
https://stackoverflow.com/questions/68150248/
using pytorch for Gradient Descent
my code: import torch import torch.nn as nn import torch.nn.functional as F class MultivariateLinearRegressionModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(3,1) def forward(self,x): # print(1) return self.linear(x) x_train = torch.FloatTensor([[73,80,75], [93,88,93], [89,91,90], [96,98,100], [73,66,70]]) y_train = torch.FloatTensor([[152],[185],[180],[196], [142]]) model = MultivariateLinearRegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr = 1e-5) # print(222) ep = 2000 for epoch in range(ep+1): hypothesis = model(x_train) cost = F.mse_loss(hypothesis, y_train) if epoch % 100 == 0: print('Epoch {:4d}/{} Cost: {:.6f}'.format( epoch, 2000, cost.item() )) optimizer.zero_grad() cost.backward() optimizer.step() my problem: this code is my own MultivariateLinearRegressionModel. But in the for loop hypothesis = model(x_train) why this code is same with hypothesis = model.forward(x_train) ?? i don't know why this 2 code statement is same. is this a python grammar??
Because your model MultivariateLinearRegressionModel inherits from nn.Module, whenever you call model(x_train) it will automatically execute the forward function defined in the MultivariateLinearRegressionModel class. That's why model(x_train) and model.forward(x_train) give the same result.
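A tiny self-contained illustration of that dispatch (not the asker's model): nn.Module.__call__ runs any registered hooks and then calls forward, so both calls return the same values, but model(x) is the form that keeps hooks working.

import torch
import torch.nn as nn

class Tiny(nn.Module):
    def forward(self, x):
        return x * 2

m = Tiny()
x = torch.tensor([1.0, 2.0])
print(m(x))            # goes through nn.Module.__call__, which calls m.forward(x)
print(m.forward(x))    # same numbers, but bypasses the hook machinery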
https://stackoverflow.com/questions/68151719/
How do I convert a Detectron2 model into another deeplearning framework?
I would like to convert a detectron2 model into another deep learning framework, i.e. PyTorch, TensorFlow or ONNX. How do I do this conversion? I can run inference on the detectron2 model with the cfg (which I believe means config in detectron2 lingo). The goal is to eventually run the Detectron2 model on an Nvidia Jetson board, so the model needs to be converted.
Since v0.4 you can deploy detectron2 models to torchscript and ONNX. There is more information about it in the documentation (and also example code).
https://stackoverflow.com/questions/68152372/
Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(float))
I am learning this new ONNX framework that allows us to deploy the deep learning (and others) model into production. However, there is one thing I am missing. I thought that the main reason for having such a framework is so that for inference purposes e.g. when we have a trained model and want to use it in a different venv (where for example we cannot have PyTorch) the model still can be used. I have preped a "from scratch" example here: # Modules import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader, TensorDataset import torchvision import onnx import onnxruntime import matplotlib.pyplot as plt import numpy as np # %config Completer.use_jedi = False # MNIST Example dataset train_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST( 'data', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), ])), batch_size=800) # Take data and labels "by hand" inputs_batch, labels_batch = next(iter(train_loader)) # Simple Model class CNN(nn.Module): def __init__(self, in_channels, num_classes): super(CNN, self).__init__() self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels = 10, kernel_size = (3, 3), stride = (1, 1), padding=(1, 1)) self.pool = nn.MaxPool2d(kernel_size=(2, 2), stride = (2, 2)) self.conv2 = nn.Conv2d(in_channels = 10, out_channels=16, kernel_size = (3, 3), stride = (1, 1), padding=(1, 1)) self.fc1 = nn.Linear(16*7*7, num_classes) def forward(self, x): x = F.relu(self.conv1(x)) x = self.pool(x) x = F.relu(self.conv2(x)) x = self.pool(x) x = x.reshape(x.shape[0], -1) x = self.fc1(x) return x # Training setting device = 'cpu' batch_size = 64 learning_rate = 0.001 n_epochs = 10 # Dataset prep dataset = TensorDataset(inputs_batch, labels_batch) TRAIN_DF = DataLoader(dataset = dataset, batch_size = batch_size, shuffle = True) # Model Init model = CNN(in_channels=1, num_classes=10) optimizer = optim.Adam(model.parameters(), lr = learning_rate) # Training Loop for epoch in range(n_epochs): for data, labels in TRAIN_DF: model.train() # Send Data to GPU data = data.to(device) # Send Data to GPU labels = labels.to(device) # data = data.reshape(data.shape[0], -1) # Forward pred = model(data) loss = F.cross_entropy(pred, labels) # Backward optimizer.zero_grad() loss.backward() optimizer.step() # Check Accuracy def check_accuracy(loader, model): num_correct = 0 num_total = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device) y = y.to(device) # x = x.reshape(x.shape[0], -1) scores = model(x) _, pred = scores.max(1) num_correct += (pred == y).sum() num_total += pred.size(0) print(F"Got {num_correct} / {num_total} with accuracy {float(num_correct)/float(num_total)*100: .2f}") check_accuracy(TRAIN_DF, model) # Inference with ONNX # Create Artifical data of the same size img_size = 28 dummy_data = torch.randn(1, img_size, img_size) dummy_input = torch.autograd.Variable(dummy_data).unsqueeze(0) input_name = "input" output_name = "output" model_eval = model.eval() torch.onnx.export( model_eval, dummy_input, "model_CNN.onnx", input_names=["input"], output_names=["output"], ) # Take Random Image from Training Data X_pred = data[4].unsqueeze(0) # Convert the Tensor image to PURE numpy and pretend we are working in venv where we only have numpy - NO PYTORCH X_pred_np = X_pred.numpy() X_pred_np = np.array(X_pred_np) IMG_Rando = np.random.rand(1, 1, 28, 28) np.shape(X_pred_np) == np.shape(IMG_Rando) ort_session = onnxruntime.InferenceSession( 
"model_CNN.onnx" ) def to_numpy(tensor): return ( tensor.detach().gpu().numpy() if tensor.requires_grad else tensor.cpu().numpy() ) # compute ONNX Runtime output prediction # WORKS # ort_inputs = {ort_session.get_inputs()[0].name: X_pred_np} # DOES NOT WORK ort_inputs = {ort_session.get_inputs()[0].name: IMG_Rando} # WORKS # ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(X_pred)} ort_outs = ort_session.run(None, ort_inputs) ort_outs Firstly, we create a simple model and train it on the MNIST dataset. Then we export the trained model using the ONNX framework. Now, when I want to classify an image using the X_pred_np It works even though it is a "pure" NumPy, which is what I want. However, I suspect that this particular case works only because it has been derived from the PyTorch tensor object, and thus "under the hood" it still has PyTorch attributes. While when I try to inference on the random "pure" NumPy object IMG_Rando, there seems to be a problem: Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(float)). Referring that PyTorch form is needed. Is there a way how to be able to use only numpy Images for the ONNX predictions?. So the inference can be performed in separated venv where no pytorch is installed? Secondly, is there a way that ONNX would remember the actual classes? In this particular case, the index corresponds to the label of the image. However, in animal classification, ONNX would not provide us with the "DOG" and "CAT" and other labels but would only provide us the index of the predicted label. Which we would need to run throw our own "prediction dictionary" so we know that the fifth label is associated with "cat" and sixth label is associated with "dog" etc.
As an improvement to the accepted answer, the idiomatic way to generate random numbers in Numpy is now by using a Generator. This offers the benefit of being able to create the array in the right type directly, rather than using the expensive astype operation, which copies the array (as in the accepted answer). Thus, the improved solution would look like: rng = np.random.default_rng() # set seed if desired IMG_Rando = rng.random((1, 1, 28, 28), dtype=np.float32)
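Putting it together as a pure-NumPy inference call (a sketch reusing the model path and input shape from the question; note that ONNX only returns raw scores, so the index-to-class-name mapping still has to live in your own code, e.g. a plain Python dict saved alongside the model):

import numpy as np
import onnxruntime

rng = np.random.default_rng(0)
img = rng.random((1, 1, 28, 28), dtype=np.float32)        # float32 from the start, no astype copy

sess = onnxruntime.InferenceSession("model_CNN.onnx")
scores = sess.run(None, {sess.get_inputs()[0].name: img})[0]
predicted_index = int(np.argmax(scores))                  # map this index to a label yourself
print(predicted_index)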
https://stackoverflow.com/questions/68152634/
Index tensor must have the same number of dimensions as self tensor
I have a dataset which looks like ID Target Weight Score Scale_Cat Scale_num 0 A D 65.1 87 Up 1 1 A X 35.8 87 Up 1 2 B C 34.7 37.5 Down -2 3 B P 33.4 37.5 Down -2 4 C B 33.1 37.5 Down -2 5 S X 21.4 12.5 NA 9 This dataset consists of nodes (ID) and targets (neighbors) and it has been used as sample for testing label propagation. Classes/Labels are within the column Scale_num and can take values from -2 to 2 at step by one. The label 9 means unlabelled and it is the label that I would like to predict using label propagation algorithm. Looking for some example on Google about label propagation, I have found this code useful (difference is in label assignment, since in my df I have already information on data which have labelled - from -2 to 2 at step by 1, and unlabelled, i.e. 9): https://mybinder.org/v2/gh/thibaudmartinez/label-propagation/master?filepath=notebook.ipynb However, trying to use my classes instead of (-1,0,1) as in the original code, I have got some errors. A user has provided some help here: RunTimeError during one hot encoding, for fixing a RunTimeError, unfortunately still without success. In the answer provided on that link, 40 obs and labels are randomly generated. import random labels = list() for i in range(0,40): labels.append(list([(lambda x: x+2 if x !=9 else 5)(random.sample(classes,1)[0])])) index_aka_labels = torch.tensor(labels) torch.zeros(40, 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) The error I am getting, still a RunTimeError, seems to be still due to a wrong encoding. What I tried is the following: import random labels = list(df['Scale_num']) index_aka_labels = torch.tensor(labels) torch.zeros(len(df), 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) getting the error ---> 7 torch.zeros(len(df), 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) RuntimeError: Index tensor must have the same number of dimensions as self tensor For sure, I am missing something (e.g., the way to use classes and labels as well as src, which has never been defined in the answer provided in that link). The two functions in the original code which are causing the error are as follows: def _one_hot_encode(self, labels): # Get the number of classes classes = torch.unique(labels) # probably this should be replaced classes = classes[classes != -1] # unlabelled. In my df the unlabelled class is identified by 9 self.n_classes = classes.size(0) # One-hot encode labeled data instances and zero rows corresponding to unlabeled instances unlabeled_mask = (labels == -1) # In my df the unlabelled class is identified by 9 labels = labels.clone() # defensive copying labels[unlabeled_mask] = 0 self.one_hot_labels = torch.zeros((self.n_nodes, self.n_classes), dtype=torch.float) self.one_hot_labels = self.one_hot_labels.scatter(1, labels.unsqueeze(1), 1) self.one_hot_labels[unlabeled_mask, 0] = 0 self.labeled_mask = ~unlabeled_mask def fit(self, labels, max_iter, tol): self._one_hot_encode(labels) self.predictions = self.one_hot_labels.clone() prev_predictions = torch.zeros((self.n_nodes, self.n_classes), dtype=torch.float) for i in range(max_iter): # Stop iterations if the system is considered at a steady state variation = torch.abs(self.predictions - prev_predictions).sum().item() prev_predictions = self.predictions self._propagate() I would like to understand how to use in the right way my classes/labels definition and info from my df in order to run the label propagation algorithm with no errors.
I suspect it's complaining about index_aka_labels lacking the singleton dimension. Note that in your example which works: import random labels = list() for i in range(0,40): labels.append(list([(lambda x: x+2 if x !=9 else 5)(random.sample(classes,1)[0])])) index_aka_labels = torch.tensor(labels) torch.zeros(40, 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) If you run index_aka_labels.shape, it returns (40,1). When you just turn your pandas series into a tensor, however, it will return a tensor of shape (M) (where M is the length of the series). If you simply run: import random labels = list(df['Scale_num']) index_aka_labels = torch.tensor(labels)[:,None] #create another dimension torch.zeros(len(df), 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1) the error should disappear. One more thing, you are not converting your labels into indices as you did in the top example. To do that, you can run: import random labels = list(df['Scale_num']) index_aka_labels = torch.tensor(labels)[:,None] #create another dimension index_aka_labels = index_aka_labels + 2 # labels are [-2,-1,0,1,2] and convert them to [0,1,2,3,4] index_aka_labels[index_aka_labels==11] = 5 #convert label 9 to index 5 torch.zeros(len(df), 6, dtype=src.dtype).scatter_(1, index_aka_labels, 1)
https://stackoverflow.com/questions/68152842/
Pytorch broadcasting command not found
I have the following nested for loop in my code, and it is slowing down my complete execution. For a torch tensor extended_output with shape [batchSize,nClass*repeat] and another torch tensor output with dimension [batchSize,nClass], I want the aggregation to happen as follows: for q in range(nClass): for u in range(repeat): output[:,q]=output[:,q]+extended_output[:,(q+u*nClass)] Here, nClass and repeat are integer variables with values 1400 and 8 respectively. Can this nested for loop be avoided using PyTorch broadcasting? Any help will be highly useful. A sample working code might look like this: import torch nClass=1400 repeat=8 batchSize=64 output=torch.zeros([batchSize,nClass]) extended_output=torch.rand([batchSize,nClass*repeat]) for q in range(nClass): for u in range(repeat): output[:,q]=output[:,q]+extended_output[:,(q+u*nClass)]
Sorry for the short and probably over-simplified example. I fear a bigger one would be much more difficult to visualize. But I hope this suits your purpose. Here's what I would do: import torch nClass = 3 repeat = 2 batchSize = 4 torch.manual_seed(0) output = torch.zeros([batchSize,nClass]) extended_output = torch.rand([batchSize,nClass*repeat]) for q in range(nClass): for u in range(repeat): output[:,q]=output[:,q]+extended_output[:,(q+u*nClass)] idxs = (torch.arange(repeat)*nClass).unsqueeze(0) idxs = idxs + torch.arange(nClass).unsqueeze(1) output_vectorized = extended_output[:, idxs].sum(2) output: extended_output = tensor([[0.4963, 0.7682, 0.0885, 0.1320, 0.3074, 0.6341], [0.4901, 0.8964, 0.4556, 0.6323, 0.3489, 0.4017], [0.0223, 0.1689, 0.2939, 0.5185, 0.6977, 0.8000], [0.1610, 0.2823, 0.6816, 0.9152, 0.3971, 0.8742]]) output = tensor([[0.6283, 1.0756, 0.7226], [1.1224, 1.2453, 0.8573], [0.5408, 0.8665, 1.0939], [1.0762, 0.6794, 1.5558]]) output_vectorized = tensor([[0.6283, 1.0756, 0.7226], [1.1224, 1.2453, 0.8573], [0.5408, 0.8665, 1.0939], [1.0762, 0.6794, 1.5558]])
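A further simplification one might consider (not part of the original answer, reusing the variables from the snippet above): column q + u*nClass is element (u, q) of a (repeat, nClass) block layout, so a reshape plus a sum over the repeat axis gives the same aggregation with no index arithmetic at all.

output_reshaped = extended_output.view(batchSize, repeat, nClass).sum(dim=1)
print(torch.allclose(output, output_reshaped))   # expected: True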
https://stackoverflow.com/questions/68158155/
Can torch.where() be used in an equivalent broadcasting form?
I have the following for loop in my code, and it is slowing down my complete execution. for q in range(batchSize): temp=torch.where((composition_matrix == pred[q]).all(dim=1))[0] if len(temp)==0: output[q]=0 else: output[q]=int(temp[0]) Here, composition_matrix is a [14000,2] dimensional PyTorch tensor with only positive integers as cell values, pred is a [batchSize,2] dimensional torch tensor, and output is a [batchSize] dimensional torch tensor. This for loop is slowing my code a lot and I am unable to find an equivalent broadcasting solution for this code segment. Does a broadcasting solution exist to eliminate this for loop? I shall be grateful for any help. A minimal reproducible example is: import torch composition_matrix=torch.randint(3, 10, (14000,2)) batchSize=64 pred=torch.randint(3, 10, (batchSize,2)) output=torch.zeros([batchSize]) for q in range(batchSize): temp=torch.where((composition_matrix == pred[q]).all(dim=1))[0] if len(temp)==0: output[q]=0 else: output[q]=int(temp[0])
To make it simple, you first need to understand what the operation is essentially doing. You've got two tensors. Tensor A is of shape (14000, 2) and tensor B is of shape (64, 2). The operation you want to do is: For each row B[i] in B, compare that B[i] (of shape (2,) with A (of shape (14000, 2)). If B[i] occurs within A, set output[i] = index of first occurrence. This can actually be done in two lines of code (maybe even one line): comp = (composition_matrix[:, None, :] == pred).all(dim=-1) output = torch.argmax(comp.float(), axis=0) The first line creates comp, the broadcasted comparison of composition_matrix and pred, a boolean tensor of shape (14000, 64). The second line needs to find the "index of the first match". This can be done quite simply with argmax: it will return the index of the first "1" (or if all the values are "0", will return the first index, ie, 0). (Note that torch does not support argmax for "bool" tensors, and so comp needed to be cast to another data type.)
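A quick sanity check of those two lines against the loop from the question (a sketch reusing composition_matrix, pred and output from above; results depend on the random data):

output_loop = torch.zeros(pred.shape[0])
for q in range(pred.shape[0]):
    temp = torch.where((composition_matrix == pred[q]).all(dim=1))[0]
    output_loop[q] = 0 if len(temp) == 0 else int(temp[0])
print(torch.equal(output.float(), output_loop))   # expected: True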
https://stackoverflow.com/questions/68158357/
Numerical inconsistency between loop and builtin function
I'm trying to compute the sum of an array of random numbers, but there seems to be an inconsistency between the results when I do it one element at a time and when I use the built-in function. Furthermore, the error seems to increase when I decrease the data precision. import torch columns = 43*22 rows = 44 torch.manual_seed(0) array = torch.rand([rows,columns], dtype = torch.float64) array_sum = 0 for i in range(rows): for j in range(columns): array_sum += array[i, j] torch.abs(array_sum - array.sum()) results in: tensor(3.6380e-10, dtype=torch.float64) using dtype = torch.float32 results in: tensor(0.1426) using dtype = torch.float16 results in (a whopping!): tensor(18784., dtype=torch.float16) I find it hard to believe no one has ever asked about it, yet I haven't found a similar question on SO. Can anyone please help me find an explanation or the source of this error?
The first mistake is this: you should change the summation line to array_sum += float(array[i, j]) For float64 this causes no problems, for the other values it is a problem, the explenation will follow. To start with: when doing floating point arithmetic, you should always keep in mind that there are small mistakes due to rounding errors. The most simple way to see this is in a python shell: >>> .1+.1+.1-.3 5.551115123125783e-17 But how do you take these errors into account? When summing n positive integers to a total tot, the analysis is fairly simple and it the rule is: error(tot) < tot * n * machine_epsilon Where the factor n is usually a gross over-estimation and the machine_epsilon is dependant on the type (representation size) of floating point-number. And is approximatly: float64: 2*10^-16 float32: 1*10^-7 float16: 1*10^-3 And one would generally expect as an error approximately within a reasonable factor of tot*machine_epsilon. And for my tests with float16 we get (with always +-40000 variables summing to a total of +- 20000): error(float64) = 3*10^-10 ≈ 80* 20000 * 2*10^-16 error(float32) = 1*10^-1 ≈ 50* 20000 * 1*10^-7 which is acceptable. Then there is another problem with the float 16. There is the machine epsilon = 1e-4 and you can see the problem with >>> ar = torch.ones([1], dtype=float16) >>> ar tensor([2048.], dtype=torch.float16) >>> ar[0] += .5 >>> ar tensor([2048.], dtype=torch.float16) Here the problem is that when the value 2048 is reached, the value is not precise enough to be able to add a value 1 or less. More specifically: with a float16 you can 'represent' the value 2048, and you can represent the value 2050, but nothing in between because it has too little bits for that precision. By keeping the sum in a float64 variable, you overcome this problem. Fixing this we get for float16: error(float16) = 16 ≈ 8* 20000 * 1*10^-4 Which is large, but acceptable as a value relative to 20000 represented in float16. If you ask yourself, which of the two methods is 'right' then the answer is none of the two, they are both approximations with the same precision, but a different error. But as you probably guessed using the sum() method is faster, better and more reliable.
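A self-contained sketch of the float16 saturation described above (sizes chosen purely for illustration): once the running float16 sum reaches 2048, adding 1.0 no longer changes it, while a float64 accumulator keeps the full total.

import torch

ones = torch.ones(5000, dtype=torch.float16)

running = torch.zeros((), dtype=torch.float16)
for v in ones:
    running += v                    # stalls: 2048 + 1 rounds back to 2048 in float16

accumulated = sum(float(v) for v in ones)   # accumulate in a Python float (float64)

print(running.item())               # 2048.0
print(accumulated)                  # 5000.0
print(ones.sum().item())            # close to 5000.0: the built-in sum does not accumulate naively one element at a time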
https://stackoverflow.com/questions/68162677/
How to use Pytorch LinAlg Solver?
I'm trying to replicate the first example on https://pytorch.org/docs/stable/generated/torch.linalg.solve.html import torch import time Acuda = torch.randn(2,3,3,device='cuda') bcuda = torch.randn(2,3,4,device='cuda') t1 = time.time() torch.linalg.torch.solve(Acuda,bcuda) print('torch took: ',time.time()-t1) As a result I'm getting: Traceback (most recent call last): File "linalg_solver_test.py", line 10, in <module> torch.linalg.torch.solve(Acuda,bcuda) RuntimeError: A must be batches of square matrices, but they are 4 by 3 matrices My PyTorch version is 1.7.1. In contrast to the example on the documentation page, I'm using torch.linalg.torch.solve, as torch.linalg.solve does not exist in that version.
You should use the latest PyTorch 1.9 for LinAlg, because it explicitly mentions "Major improvements to support scientific computing, including torch.linalg" (https://github.com/pytorch/pytorch/releases/tag/v1.9.0) PyTorch 1.7.1 is rather old. Looks like this version's LinAlg solver doesn't support non-square matrices.
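For reference, a minimal sketch of the same call on PyTorch >= 1.9, where torch.linalg.solve exists and expects A to be batches of square matrices (falling back to CPU if no GPU is available):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
A = torch.randn(2, 3, 3, device=device)    # batches of square matrices
b = torch.randn(2, 3, 4, device=device)    # matching right-hand sides

X = torch.linalg.solve(A, b)               # solves A @ X = b for each batch
print(torch.allclose(A @ X, b, atol=1e-4)) # True for well-conditioned A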
https://stackoverflow.com/questions/68164046/
Manually convert PyTorch weights to tf.keras weights for a convolutional layer
I'm trying to convert pytorch model to tf.keras model including weights conversion and came across an output missmatch between libraries' outputs. Here I define two convolutional layers, which should be identical torch_layer = torch.nn.Conv2d( in_channels=3, out_channels=64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), dilation=1, groups=1, bias=False, padding_mode='zeros' ) tf_layer = tf.keras.layers.Conv2D( filters=64, kernel_size=(7, 7), strides=(2, 2), padding='same', dilation_rate=(1, 1), groups=1, activation=None, use_bias=False ) # define model to specify input channel size tf_model = tf.keras.Sequential([tf.keras.layers.Input((256, 256, 3), batch_size=1), tf_layer]) now I have torch weights and I convert them to tf.keras format # output_channels, input_channels, x, y torch_weights = np.random.rand(64, 3, 7, 7) # x, y, input_channels, output_channels tf_weights = np.transpose(torch_weights, (2, 3, 1, 0)) # assign weights torch_layer.weight = torch.nn.Parameter(torch.Tensor(torch_weights)) tf_model.layers[0].set_weights([tf_weights]) now I define input and the outputs are different (shape is the same, values are different), what am I doing wrong? torch_inputs = np.random.rand(1, 3, 256, 256) tf_inputs = np.transpose(torch_inputs, (0, 2, 3, 1)) torch_output = torch_layer(torch.Tensor(torch_inputs)) tf_output = tf_model.layers[0](tf_inputs)
In tensorflow, set_weights is basically used for outputs from get_weights, so it is better to use assign to avoid making mistakes. Besides, 'same' padding in tensorflow is a little bit complicated. For details, see my SO answer. It depends on input_shape, kernel_size and strides. In your example here, it is translated to torch.nn.ZeroPad2d((2,3,2,3)) in pytorch. Example codes: from tensorflow to pytorch np.random.seed(88883) #initialize the layers respectively torch_layer = torch.nn.Conv2d( in_channels=3, out_channels=64, kernel_size=(7, 7), stride=(2, 2), bias=False ) torch_model = torch.nn.Sequential( torch.nn.ZeroPad2d((2,3,2,3)), torch_layer ) tf_layer = tf.keras.layers.Conv2D( filters=64, kernel_size=(7, 7), strides=(2, 2), padding='same', use_bias=False ) #setting weights in torch layer and tf layer respectively torch_weights = np.random.rand(64, 3, 7, 7) tf_weights = np.transpose(torch_weights, (2, 3, 1, 0)) with torch.no_grad(): torch_layer.weight = torch.nn.Parameter(torch.Tensor(torch_weights)) tf_layer(np.zeros((1,256,256,3))) tf_layer.kernel.assign(tf_weights) #prepare inputs and do inference torch_inputs = torch.Tensor(np.random.rand(1, 3, 256, 256)) tf_inputs = np.transpose(torch_inputs.numpy(), (0, 2, 3, 1)) with torch.no_grad(): torch_output = torch_model(torch_inputs) tf_output = tf_layer(tf_inputs) np.allclose(tf_output.numpy() ,np.transpose(torch_output.numpy(),(0, 2, 3, 1))) #True Edit: from pytorch to tensorflow torch_layer = torch.nn.Conv2d( in_channels=3, out_channels=64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) tf_layer=tf.keras.layers.Conv2D( filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid', use_bias=False ) tf_model = tf.keras.Sequential([ tf.keras.layers.ZeroPadding2D((3, 3)), tf_layer ])
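Not part of the original answer, but a rough sketch of how the weight assignment and check for this pytorch-to-tensorflow direction could mirror the first example, using the names defined just above:

torch_weights = np.random.rand(64, 3, 7, 7)
tf_weights = np.transpose(torch_weights, (2, 3, 1, 0))

with torch.no_grad():
    torch_layer.weight = torch.nn.Parameter(torch.Tensor(torch_weights))

# build the tf model once so the kernel variable exists, then assign the weights
tf_model(np.zeros((1, 256, 256, 3)))
tf_layer.kernel.assign(tf_weights)

torch_inputs = torch.Tensor(np.random.rand(1, 3, 256, 256))
tf_inputs = np.transpose(torch_inputs.numpy(), (0, 2, 3, 1))

with torch.no_grad():
    torch_output = torch_layer(torch_inputs)
tf_output = tf_model(tf_inputs)

np.allclose(tf_output.numpy(), np.transpose(torch_output.numpy(), (0, 2, 3, 1)))
#True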
https://stackoverflow.com/questions/68165375/
CUDA error: device-side assert triggered on Colab
I am trying to initialize a tensor on Google Colab with GPU enabled.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = torch.tensor([1,2], device=device)

But I am getting this strange error.

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1

Even setting that environment variable to 1 does not show any further details. Has anyone ever had this issue?
I tried your code and it did not give me an error, but I can say that the best practice for debugging a CUDA runtime error like device-side assert triggered is usually to switch Colab to CPU and recreate the error. That will give you a more useful traceback. Most of the time these CUDA runtime errors are caused by some index mismatch, for example training a network with 10 output nodes on a dataset with 15 labels. The other thing about this CUDA error is that once you get it, you will receive it for every subsequent operation you do with torch tensors, which forces you to restart your notebook. So I suggest you restart your notebook, get a more accurate traceback by moving to CPU, and check the rest of your code, especially any place where you train a model on a set of targets.
https://stackoverflow.com/questions/68166721/
Complicated vector multiplication without iterating through the vector
I'm trying to calculate a loss value in a variation of multiclass classification. I have my y tensor (the values correspond to the classes): y = torch.tensor([ 1, 0, 2]) My y_pred is a 3x3 matrix of probability distributions: y_pred = torch.tensor([[0.4937, 0.2657, 0.2986], [0.2553, 0.3845, 0.4384], [0.2510, 0.3498, 0.2630]]) The complication is that I also have a distance matrix (each class has some distance to other classes): d_mtx = torch.tensor([[0, 0.7256, 0.7433], [0.6281, 0, 0.1171], [0.7580, 0.2513, 0]]) The loss that I'm trying to calculate is: loss = 0 for class_value in range(len(y)): dis = torch.dot(d_mtx[y[class_value]], y_pred[class_value]) loss += dis Is there a way to calculate it efficiently without the iteration? Update 1: Tried @Yahia Zakaria approach and it works if my y_pred has the same size as my d_mtx, but otherwise I get an error: RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 0 For example: y = torch.tensor([ 1, 0, 2, 1]) y_pred = torch.tensor([[0.4937, 0.2657, 0.2986], [0.2553, 0.3845, 0.4384], [0.2510, 0.3498, 0.2630], [0.2510, 0.3498, 0.2630]]) d_mtx = torch.tensor([[0, 0.7256, 0.7433], [0.6281, 0, 0.1171], [0.7580, 0.2513, 0]])
You could do it like that: loss = (d_mtx[y] * y_pred).sum() This solution assumes the y is of type torch.int64 which is valid for the example you have shown.
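A quick sanity check (a sketch using the tensors from the question's update) that the vectorized expression matches the loop:

loss_vec = (d_mtx[y] * y_pred).sum()

loss_loop = 0
for class_value in range(len(y)):
    loss_loop += torch.dot(d_mtx[y[class_value]], y_pred[class_value])

print(torch.isclose(loss_vec, loss_loop))  # tensor(True)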
https://stackoverflow.com/questions/68167918/
Why do we need to save PyTorch models with a .net extension?
I'm new to PyTorch and I am working on a Character_Level_LSTM_Exercise. Why do they save the model with a .net extension in the model name? I searched for an explanation but didn't find a good one.

# change the name, for saving multiple files
model_name = 'rnn_x_epoch.net'

checkpoint = {'n_hidden': net.n_hidden,
              'n_layers': net.n_layers,
              'state_dict': net.state_dict(),
              'tokens': net.chars}

with open(model_name, 'wb') as f:
    torch.save(checkpoint, f)
You can use whatever extension you like! Just make sure to be consistent. The documentation recommends using the .pt extension: https://pytorch.org/docs/stable/generated/torch.save.html For more explanation and more extension options see Soumith Chintala's comment.
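For example, saving and loading the same checkpoint with a .pt name works exactly the same way (a small sketch reusing the checkpoint dictionary and net from the question):

torch.save(checkpoint, 'rnn_x_epoch.pt')

checkpoint = torch.load('rnn_x_epoch.pt')
net.load_state_dict(checkpoint['state_dict'])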
https://stackoverflow.com/questions/68177394/
Computing gradient in Tensorflow vs PyTorch
I am trying to compute the gradient for a loss of a simple linear model. However, I face the problem that while using TensorFlow the gradient is computed as 'none'. Why is this happening and how to compute the gradient using TensorFlow? import numpy as np import tensorflow as tf inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') inputs = tf.convert_to_tensor(inputs) targets = tf.convert_to_tensor(targets) w = tf.random.normal(shape=(2, 3)) b = tf.random.normal(shape=(2,)) print(w, b) def model(x): return tf.matmul(x, w, transpose_b = True) + b def mse(t1, t2): diff = t1-t2 return tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32') with tf.GradientTape() as tape: pred = model(inputs) loss = mse(pred, targets) print(tape.gradient(loss, [w, b])) Here is the working code using PyTorch. The gradients are computed as expected. import torch inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') inputs = torch.from_numpy(inputs) targets = torch.from_numpy(targets) w = torch.randn(2, 3, requires_grad = True) b = torch.randn(2, requires_grad = True) def model(x): return x @ w.t() + b def mse(t1, t2): diff = t1 - t2 return torch.sum(diff * diff) / diff.numel() pred = model(inputs) loss = mse(pred, targets) loss.backward() print(w.grad) print(b.grad)
Your code doesn't work because in tensorflow, gradients are only computed for tf.Variables. When you create a layer, TF automatically marks its weights and biases as a variable (unless you specify trainable=False). So, in order to make your code work, all you need to do is wrap your w and b with tf.Variable w = tf.Variable(tf.random.normal(shape=(2, 3)), name='w') b = tf.Variable(tf.random.normal(shape=(2,)), name='b') Use these lines to define your weights and biases, and you will get actual values in your final print.
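If for some reason you want to keep w and b as plain tensors, you can also tell the tape to watch them explicitly (a small sketch of the alternative, reusing the names from the question):

with tf.GradientTape() as tape:
    tape.watch([w, b])        # watch non-variable tensors
    pred = model(inputs)
    loss = mse(pred, targets)

print(tape.gradient(loss, [w, b]))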
https://stackoverflow.com/questions/68179640/
Getting a specific vector from a matrix using elements of other vectors
I have 2 vectors: x = torch.tensor([ 0, 1, 0, 0, 2, 2, 3, 3, 0, 0, 2, 2, 4, 4, 0, 5, 0, 4, 6, 7, 6, 8, 6, 9, 6, 10, 5, 11, 4, 11, 8, 12, 10, 12, 9, 12, 13, 14, 15, 16, 17, 14, 17, 18, 19, 16, 19, 18]) y = torch.tensor([ 1, 0, 2, 3, 0, 3, 2, 0, 2, 4, 0, 4, 2, 0, 5, 0, 4, 0, 7, 6, 8, 6, 9, 6, 10, 6, 11, 5, 11, 4, 12, 8, 12, 10, 12, 9, 14, 13, 16, 15, 14, 17, 18, 17, 16, 19, 18, 19]) I also have a matrix: mtx = torch.rand(20,20) Is there a way to get the corresponding [i,i] vector from the matrix using the 2 vectors? That is, x[0] = 0, y[0] = 1, so the first element of the vector will be mtx[0,1]. x[1] = 1, y[1] = 0, so the second element of the vector will be mtx[1,0], and so on. I'm looking for an answer without iterating through the vectors (e.g for element in x...?
You can use fancy (advanced) indexing:

output_vector = mtx[x, y]

where output_vector[i] is mtx[x[i], y[i]], i.e. the mtx value at the index pair given by the i-th elements of x and y, as requested.
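A quick check with the tensors from the question:

output_vector = mtx[x, y]
print(output_vector.shape)            # torch.Size([48])
print(output_vector[0] == mtx[0, 1])  # tensor(True)
print(output_vector[1] == mtx[1, 0])  # tensor(True)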
https://stackoverflow.com/questions/68185054/
Can someone please explain the functionality of "torch.bernoulli"?
I have a question about torch.bernoulli. According to the documentation:

Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range 0 ≤ input_i ≤ 1. The i-th element of the output tensor will draw a value 1 according to the i-th probability value given in input.

I don't quite get it. For example, if we have torch.bernoulli([0.599, 0.0846, 0.0179, 0.0742, 0.0742]), once it returns tensor([1., 0., 0., 0., 0.]). Another time it returns tensor([0., 0., 0., 1., 0.]). I don't understand why it behaves like that. The documentation states "...for drawing the binary random number...", but the whole process is vague to me. Can someone explain, please?
Let's simplify your example to the input tensor [0.60, 0.07, 0.12]. Now, on any given draw, there is a ... 60% chance that element 1 will be 1 7% chance that element 2 will be 1 12% chance that element 3 will be 1 ... and any elements that aren't 1 are 0. These events are independent. For instance, there is a chance of 0.60 * 0.07 * 0.12 that all three elements will be 1. There is a 0.40 * 0.93 * 0.88 chance that all three will be 0. Does that help clear it up?
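You can see this empirically by drawing many times and averaging; the observed frequency of 1s converges to the input probabilities (a small sketch using the simplified tensor from above):

import torch

probs = torch.tensor([0.60, 0.07, 0.12])
draws = torch.bernoulli(probs.expand(100000, 3))  # 100000 independent draws per element
print(draws.mean(dim=0))                          # roughly tensor([0.6000, 0.0700, 0.1200])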
https://stackoverflow.com/questions/68185761/
Problem with installing pytorch with pip3: -f option requires 1 argument
I am trying to install torch in linux with cuda version 11.1 I checked this: Start Locally | PyTorch It says that the code is pip3 install --user torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f However, this line gives this error -f option requires 1 argument Can someone help?
You must have missed the URL that the -f option expects as its argument: https://download.pytorch.org/whl/torch_stable.html Use the command below (with the cu111 builds, since you have CUDA 11.1):

pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
https://stackoverflow.com/questions/68186036/
Loss of accuracy when inferring with a pytorch rnn model
I am training a pytorch RNN model and have multiple csv files to train and infer on. If I train on file #1 and infer on file #1 I get ~100% accurate predictions. If I train on file #1 and infer on, say, file #4 or file #2, then accuracy drops to ~80%. Here’s what I am doing:

1. Read the file and separate the features (X) and labels (y) into two dataframes.
2. The range of my values, both features and labels, is high, so I apply a scaling transformation.
3. Then I split the data into train and test.
4. Call model.train() and run the train data through the rnn model.
5. Call model.eval() and get the predictions from the model with the test data.
6. Reverse-scale the predictions.
7. Calculate mean-square error.

So far this is all good. My MSE is very, very low, which is good. After training, I need to infer on a randomly selected file. Here’s what I am doing for inference:

1. Read the single file and separate the features (X) and labels (y) into two dataframes.
2. Apply the scaling transformation.
3. Call model.eval().
4. Get the predictions.
5. Reverse-scale the predictions.

If the inference file is the same as the trained file, accuracy is close to 100%. If I use a different file for inference, why does the accuracy drop? Am I doing something wrong? Unfortunately, I cannot share the code due to confidentiality.
With the additional information provided in the comment, I would say it is most likely a problem with over-fitting, rather than any mistake in implementation. Your model is learning the class distribution of file #1, which is then useful to predict the test set of file #1, but which does not translate to the other test sets. To solve this, my suggestion would be to sample a training set from all the available files, such that it more closely resembles the distribution found in the collection of test sets, rather than a single test set. Delving into other RNN over-fitting solutions might be worthwhile too.
https://stackoverflow.com/questions/68189796/
How to batch matrix-vector multiplication (one matrix, many vectors) in pytorch without duplicating the matrix in memory
I have n vectors of size d and a single d x d matrix J. I'd like to compute the n matrix-vector multiplications of J with each of the n vectors. For this, I'm using pytorch's expand() to get a broadcast of J, but it seems that when computing the matrix vector product, pytorch instantiates a full n x d x d tensor in the memory. e.g. the following code device = torch.device("cuda:0") n = 100_000_000 d = 10 x = torch.randn(n, d, dtype=torch.float32, device=device) J = torch.randn(d, d, dtype=torch.float32, device=device).expand(n, d, d) y = torch.sign(torch.matmul(J, x[..., None])[..., 0]) raises RuntimeError: CUDA out of memory. Tried to allocate 37.25 GiB (GPU 0; 11.00 GiB total capacity; 3.73 GiB already allocated; 5.69 GiB free; 3.73 GiB reserved in total by PyTorch) which means that pytorch, unnecessarily, tries to allocate space for n copies of the matrix J How can I perform this task in a vectorized way (the matrices are small, so I don't want to loop over each matrix-vector multiplication) without exhausting my GPU's memory?
I think this will solve it: import torch x = torch.randn(n, d) J = torch.randn(d, d) # no need to expand y = torch.matmul(J, x.T).T Verifying using your expression: Jex = J.expand(n, d, d) y1 = torch.matmul(Jex, x[..., None])[..., 0] y = torch.matmul(J, x.T).T torch.allclose(y1, y) # using allclose for float values # tensor(True)
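If you prefer to avoid the explicit transposes, the same product can be written with the matrix on the right (a small equivalent sketch; torch.sign is then applied as in the question):

y = torch.sign(x @ J.T)  # identical to torch.sign(torch.matmul(J, x.T).T)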
https://stackoverflow.com/questions/68192626/
An easy way to create a torch tensor from multiple elements of tuple through concatenate
Inputs I have list as follow r1 = [([[[1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6]]], [[[7, 8], [7, 8]], [[9, 10], [9, 10]]]), ([[[11, 12, 13], [11, 12, 13]], [[14, 15, 16], [14, 15, 16]]], [[[17, 18], [17, 18]], [[19, 20], [19, 20]]])] I'm going to make 2 torch tensors from the input above. my desired output is as follow output output = [tensor([[[ 1, 2, 3], [ 1, 2, 3]], [[ 4, 5, 6], [ 4, 5, 6]], [[11, 12, 13], [11, 12, 13]], [[14, 15, 16], [14, 15, 16]]]), tensor([[[ 7, 8], [ 7, 8]], [[ 9, 10], [ 9, 10]], [[17, 18], [17, 18]], [[19, 20], [19, 20]]])] My code is as follows. output = [] for i in range(len(r1[0])): templates = [] for j in range(len(r1)): templates.append(torch.tensor(r1[j][i])) template = torch.cat(templates) output.append(template) Is there a simpler or easier way to get the result I want?
This will do:

output = [torch.tensor([*a, *b]) for a, b in zip(*r1)]

It concatenates the corresponding items of the two lists first and then creates the tensor (torch.tensor also keeps the integer dtype shown in the desired output, whereas torch.Tensor would give floats).
https://stackoverflow.com/questions/68194352/
Pytorch Dataloader for reading a large parquet/csv file
I am trying to get Pytorch to train records of a single parquet file, without having to read the entire file in memory at once since it won't fit in memory. Since the file is stored remotely, I would rather keep it as a single file, as training using IO for many files is extremely expensive. How can I use Pytorch's IterableDataset or Dataset to read smaller chunks of the file during training when I want to specify the number of batches in the DataLoader? I know that the map-style Dataset won't work in this case since I need everything in one file rather than reading the index of each file. I managed to implement this in Tensorflow using tfio.IODataset and tf.data.Dataset, but I can't find the equivalent way to implement it in Pytorch.
I found a workaround using torch.utils.data.Dataset, but the data must be manipulated with dask beforehand so that each partition corresponds to one user, stored as its own parquet file that is only read later when needed. In the following code, the labels and the data are stored separately for the multivariate timeseries classification problem (but it can easily be adapted to other tasks as well):

import dask.dataframe as dd
import pandas as pd
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader, IterableDataset, Dataset

# Break the file down into per-user partitions
raw_ddf = dd.read_parquet("data.parquet") # Read huge file using dask
raw_ddf = raw_ddf.set_index("userid") # set the userid as index
userids = raw_ddf.index.unique().compute().values.tolist() # get a list of indices
new_ddf = raw_ddf.repartition(divisions = userids) # repartition by userids
new_ddf.to_parquet("my_folder") # this will save each user as its own parquet file within "my_folder"

# Dask to read the partitions
train_ddf = dd.read_parquet("my_folder/*.parquet") # read all files

# Read labels file
labels_df = pd.read_csv("label.csv")
y_labels = np.array(labels_df["class"])

# Define the Dataset class
class UsersDataset(Dataset):
    def __init__(self, dask_df, labels):
        self.dask_df = dask_df
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        X_df = self.dask_df.get_partition(idx).compute()
        X = np.row_stack([X_df])
        X_tensor = torch.tensor(X, dtype=torch.float32)
        y = self.labels[idx]
        y_tensor = torch.tensor(y, dtype=torch.long)
        sample = (X_tensor, y_tensor)
        return sample

# Create a Dataset object
user_dataset = UsersDataset(dask_df=train_ddf, labels=y_labels)

# Create a DataLoader object
dataloader = DataLoader(user_dataset, batch_size=4, shuffle=True, num_workers=0)

# Print output of the first batch to ensure it works
for i_batch, sample_batched in enumerate(dataloader):
    print("Batch number ", i_batch)
    print(sample_batched[0]) # print X
    print(sample_batched[1]) # print y
    # stop after first batch.
    if i_batch == 0:
        break

I would like to know how I can adapt my approach when using >= 2 workers to read the data, without duplicate entries. Any insights on this are greatly appreciated.
https://stackoverflow.com/questions/68199072/
Pytorch - building from source - CMAKE_BUILD_WITH_INSTALL_RPATH
I am building PyTorch from source as my GPU card is not supported by the packages in pip or conda. I am in Ubuntu 20.04. I followed the instructions at : I get this error. I searched the internet but there are no useful pointers. There are numerous errors like below: CMake Error at modules/observers/CMakeLists.txt:12 (add_library): The install of the caffe2_observers target requires changing an RPATH from the build tree, but this is not supported with the Ninja generator unless on an ELF-based platform. The CMAKE_BUILD_WITH_INSTALL_RPATH variable may be set to avoid this relinking step. Any idea how this could be resolved. I already tried setting up an environment variable as below but it did not help. $ export CMAKE_BUILD_WITH_INSTALL_RPATH=On $ echo ${CMAKE_BUILD_WITH_INSTALL_RPATH} On
I found that installing cmake solves the problem.

sudo apt install cmake

I could not work out exactly why it worked. I was on Ubuntu 20.04; maybe it had a cmake that was too old and running the above command fetches a newer version. I thought this is still useful information for someone facing the same issue, so I'm noting it down here.
https://stackoverflow.com/questions/68202182/
Pytorch CNN Model: Dimension out of range Error
I keep getting this error "IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)" when I use cross entropy loss my CNN model. My layers look like this: x= self.conv1(x) x= F.relu(x) x= self.pool1(x) x= F.relu(x) x= self.conv3(x) x= F.relu(x) x= self.conv4(x) x= F.relu(x) x= self.pool2(x) x = torch.flatten(x, 1) x= self.lin1(x) x= F.relu(x) x= self.lin2(x) x= F.relu(x) x= self.lin3(x) x= F.relu(x) x= self.sftmax(x) My code for training is pretty much the one on the [Pytorch website]. where x_train is of shape (300, 1000) and y_train has 300 labels. I want to feed in one array in x_train at a time, then optimize based on its corresponding label in y_train. When I am training, the outputs= net(Input) works without error. For the first epoch, I get an output like tensor([[[0.1072, 0.2725, 0.2963, 0.2395, 0.3821]]], dtype=torch.float64, grad_fn=<UnsqueezeBackward0>). Then I pass in the Label, which is tensor([5.], dtype=torch.float64). After this first epoch though, I get an error "IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)". Is this related to the shape of the input and outputs? I'm not sure what is wrong. Any advice would be appreciated!
I believe the issue might be due to your usage of squeeze(0) right before you pass the outputs to the loss function. Note that the docs below say the output tensor and input tensor will share the same memory space. https://pytorch.org/docs/stable/generated/torch.squeeze.html

Edit: After testing with the Colab attached to the example, it seems that is not the case. However, I noticed that your labels tensor is of size one, whereas the example has label arrays that match the size of the output space. I believe you might be extracting a single value out of the training set as opposed to the expected tensor size? Essentially the output should match the training set output in size so the loss between the two can be calculated.
https://stackoverflow.com/questions/68202291/
What is the fastest way to use Pytorch Conv2d to apply multiple filters to every layer in multiple CT scans?
Assume an input data set contains CT scans of 100 patients, each scan containing 16 layers and each layer containing 512 x 512 pixels. I want to apply eight 3x3 convolution filters to each layer in every CT scan. So, the input array has shape [100, 16, 512, 512] and the kernels array has shape [8, 3, 3]. After the convolutions are applied, the goal is an output array with a shape [100, 16, 8, 512, 512]. The following code uses Pytorch Conv2d function to achieve this; however, I want to know if the groups parameter (and/or other means) can somehow eliminate the need for the loop. for layer_index in range(0, number_of_layers): # Getting current ct scan layer for all patients # ct_scans dimensions are: [patient, scan layer, pixel row, pixel column] # ct_scans shape: [100, 16, 512, 512] image_stack = ct_scans[:, layer_index, :, :] # Converting from numpy to tensor format image_stack_t = torch.from_numpy(image[:, None, :, :]) # Applying convolution to create 8 filtered versions of current scan layer across all patients # shape of kernels is: [8, 3, 3] filtered_image_stack_t = conv2d(image_stack_t, kernels, padding=1, groups=1) # Converting from tensor format back to numpy format filtered_image_stack = filtered_image_stack_t.numpy() # Amassing filtered ct scans for all patients back into one array # filtered_ct_scans dimensions are: [patient, ct scan layer, filter number, pixel row, pixel column] # filtered_ct_scans shape is: [100, 16, 8, 512, 512] filtered_ct_scans[:, layer_index, :, :, :] = filtered_image_stack So far, my attempts to use anything other than groups=1 leads to errors. I also found the following similar posts; however, they don't address my specific question. How to use groups parameter in PyTorch conv2d function with batch? How to use groups parameter in PyTorch conv2d function
You do not need to use grouped convolutions. Reshaping your input appropriately is all that is needed.

import torch
import torch.nn.functional as F

ct_scans = torch.randn((100,16,512,512))
kernels = torch.randn((8,1,3,3))

B,L,H,W = ct_scans.shape          # (batch, layers, height, width)
ct_scans = ct_scans.view(-1,H,W)  # fold the layer dimension into the batch dimension
ct_scans.unsqueeze_(1)            # add a single input channel: (B*L, 1, H, W)
out = F.conv2d(ct_scans, kernels, padding=1)  # padding=1 keeps the 512x512 spatial size
out = out.view(B,L,*out.shape[1:])            # back to (100, 16, 8, 512, 512)
print(out)
https://stackoverflow.com/questions/68203955/
What information does Pytorch nn.functional.interpolate use?
I have a tensor img in PyTorch of size bx2xhxw and want to upsample it using torch.nn.functional.interpolate. But while interpolation I do not wish channel 1 to use information from channel 2. To do this should I do, img2 = torch.rand(b,2,2*h,2*w) # create a random torch tensor. img2[:,0,:,:] = nn.functional.interpolate(img[:,0,:,:], [2*h,2*w], mode='bilinear', align_corners=True) img2[:,1,:,:] = nn.functional.interpolate(img[:,1,:,:], [2*h,2*w], mode='bilinear', align_corners=True) img=img2 or simply using img = nn.functional.interpolate(img, [2*h,2*w], mode='bilinear', align_corners=True) will solve my purpose.
You should use (2). There is no communication in the first and second dimensions (batch and channel respectively) for all types of interpolation (1D, 2D, 3D), as they should be. Simple example: import torch import torch.nn.functional as F b = 2 c = 4 h = w = 8 a = torch.randn((b, c, h, w)) a_upsample = F.interpolate(a, [h*2, w*2], mode='bilinear', align_corners=True) a_mod = a.clone() a_mod[:, 0] *= 1000 a_mod_upsample = F.interpolate(a_mod, [h*2, w*2], mode='bilinear', align_corners=True) print(torch.isclose(a_upsample[:,0], a_mod_upsample[:,0]).all()) print(torch.isclose(a_upsample[:,1], a_mod_upsample[:,1]).all()) print(torch.isclose(a_upsample[:,2], a_mod_upsample[:,2]).all()) print(torch.isclose(a_upsample[:,3], a_mod_upsample[:,3]).all()) Output: tensor(False) tensor(True) tensor(True) tensor(True) One can tell that a large change in the first channel has no effect in other channels.
https://stackoverflow.com/questions/68204161/
AttributeError: Caught AttributeError in DataLoader worker process 0
I have done some googling, and this error seems to commonly appear but I am unsure about how to solve it/ I am currently doing the PyTorch torchvision tutorial (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html) for segmentation. However, I am getting thrown the titular error: Traceback (most recent call last): File "main.py", line 139, in <module> main() File "main.py", line 131, in main print_freq=10) File "/engine.py", line 26, in train_one_epoch for images, targets in metric_logger.log_every(data_loader, print_freq, header): File "/utils.py", line 180, in log_every for obj in iterable: File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 425, in reraise raise self.exc_type(msg) AttributeError: Caught AttributeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 311, in __getitem__ return self.dataset[self.indices[idx]] File "main.py", line 64, in __getitem__ img, target = self.transforms(img.target) File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 546, in __getattr__ raise AttributeError(name) AttributeError: target I believe I have followed the tutorial completely, copying their code - I'm a bit unsure, therefore, why this error is arising. For a beginner who's trying to learn how to use ML in computer vision it's especially difficult to figure out what's going on here. I don't know if this is important, but I'm running Python3 on Ubuntu. Many thanks in advance for your help! :)
I think you mistyped the , as a . in the __getitem__ function of PennFudanDataset.

Your version:

if self.transforms is not None:
    img, target = self.transforms(img.target)

Tutorial:

if self.transforms is not None:
    img, target = self.transforms(img, target)
https://stackoverflow.com/questions/68207469/
Extracting hidden representations from an autoencoder using Pytorch
After having trained an AutoEncoder with PyTorch, how can I extract the low-dimensional embeddings of input features at some hidden-level?
You can just define your model such that it optionally returns the intermediate tensor calculated during the forward pass. Simple example:

class Autoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 3))  # reduce to the 3-dimensional embedding
        self.decoder = nn.Sequential(
            nn.Linear(3, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, input_size),
            nn.ReLU())  # expand back to the input size

    def forward(self, x, return_encoding=False):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        if return_encoding:
            return decoded, encoded
        return decoded
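A quick usage sketch (the sizes are made up): after training, call the model with return_encoding=True and keep the second return value.

model = Autoencoder(input_size=20, hidden_size=10)
x = torch.randn(5, 20)
with torch.no_grad():
    _, embeddings = model(x, return_encoding=True)
print(embeddings.shape)  # torch.Size([5, 3])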
https://stackoverflow.com/questions/68211353/
TypeError: __array__() takes 1 positional argument but 2 were given
I've been doing the pytorch tutorial (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html) and have been getting this error that I don't know how to fix. The full error is below: Traceback (most recent call last): File "main.py", line 146, in <module> main() File "main.py", line 138, in main train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) File "/engine.py", line 26, in train_one_epoch for images, targets in metric_logger.log_every(data_loader, print_freq, header): File "/utils.py", line 180, in log_every for obj in iterable: File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 425, in reraise raise self.exc_type(msg) TypeError: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 311, in __getitem__ return self.dataset[self.indices[idx]] File "main.py", line 64, in __getitem__ img, target = self.transforms(img, target) File "/transforms.py", line 26, in __call__ image, target = t(image, target) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/transforms.py", line 50, in forward image = F.to_tensor(image) File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 129, in to_tensor np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True) TypeError: __array__() takes 1 positional argument but 2 were given I believe it means somewhere I'm using an array with 2 arguments which isn't allowed, but I don't really know where abouts that is happening - perhaps in one of their pre written libraries? I can share the code in full if desired, but thought its a bit unwieldy. Does anyone know what might be causing this error?
I had the same error when using: torch==1.9.0 torchvision==0.10.0 In my requirements.txt file I downgraded the torch library, which forced me to downgrade torchvision, and that fixed the error for me. The library versions I ended up using that did not raise the error were: torch==1.8.1 torchvision==0.9.1
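For example, pinning those versions directly (either in requirements.txt or on the command line):

pip install torch==1.8.1 torchvision==0.9.1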
https://stackoverflow.com/questions/68211850/
What does the group_ids parameter of the TimeSeriesDataSet class specifically do in PyTorch Forecasting?
I am currently working with PyTorch Forecasting and I want to create a dataset with TimeSeriesDataSet. My original data lies in a pandas DataFrame and looks like this:

date        amount  location
2014-01-01  5       A
2014-01-01  7       B
...         ...     ...
2017-12-30  4       H
2017-12-31  8       I

So in total I have nine different unique values in "location" and an amount for each location per date. Now I am wondering what the group_ids parameter of the TimeSeriesDataSet class does and what its exact behaviour is. I am not really getting the idea from the documentation. Thanks a lot in advance!
A time-series dataset usually contains multiple time-series for different entities/individuals. group_ids is a list of columns which uniquely determine entities with associated time series. In your example it would be location: group_ids (List[str]) – list of column names identifying a time series. This means that the group_ids identify a sample together with the time_idx. If you have only one timeseries, set this to the name of column that is constant.
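As a rough sketch (the constructor takes further options, and the lengths here are made-up values), creating the dataset for your frame could look like this; note that TimeSeriesDataSet needs an integer time index column:

import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

# df is the DataFrame from the question
df["date"] = pd.to_datetime(df["date"])
df["time_idx"] = (df["date"] - df["date"].min()).dt.days

dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="amount",
    group_ids=["location"],   # each location is treated as its own time series
    max_encoder_length=30,
    max_prediction_length=7,
)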
https://stackoverflow.com/questions/68211994/
torchaudio "RuntimeError: Error loading audio file: failed to open file" for wav file
I was following this tutorial However, when I ran the code under "Training", it gave me the following error. RuntimeError: Error loading audio file: failed to open file /Users/leonardchoo/Desktop/dev_m1/audio_cnn/UrbanSound8K/fold10/30344-3-0-1.wav The problem is, each time I run this, the error occurs at a different file. error at... # 1st run fold2/106015-5-0-7.wav # 2nd run fold2/76086-4-0-22.wav # 3rd run fold9/14111-4-0-6.wav I couldn't find anything like this on the web. I am genuinely confused by this. My entire code can be found in this COLAB Notebook The dataset is from here --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-2d83571e984b> in <module> 59 60 num_epochs = 2 # Just for demo, adjust this higher. ---> 61 training(myModel, train_dl, num_epochs) <ipython-input-14-2d83571e984b> in training(model, train_dl, num_epochs) 19 20 # Repeat for each batch in the training set ---> 21 for i, data in enumerate(train_dl): 22 # Get the input features and target labels, and put them on the GPU 23 inputs, labels = data[0].to(device), data[1].to(device) /usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py in __next__(self) 519 if self._sampler_iter is None: 520 self._reset() --> 521 data = self._next_data() 522 self._num_yielded += 1 523 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py in _next_data(self) 559 def _next_data(self): 560 index = self._next_index() # may raise StopIteration --> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 562 if self._pin_memory: 563 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.9/site-packages/torch/utils/data/dataset.py in __getitem__(self, idx) 309 310 def __getitem__(self, idx): --> 311 return self.dataset[self.indices[idx]] 312 313 def __len__(self): <ipython-input-8-8743d21efeae> in __getitem__(self, idx) 32 class_id = self.df.loc[idx, 'classID'] 33 ---> 34 aud = AudioUtil.open(audio_file) 35 # Some sounds have a higher sample rate, or fewer channels compared to the 36 # majority. So make all sounds have the same number of channels and same <ipython-input-7-f64618dd0374> in open(audio_file) 13 @staticmethod 14 def open(audio_file): ---> 15 sig, sr = torchaudio.load(audio_file) 16 return (sig, sr) 17 /usr/local/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file /Users/leonardchoo/Desktop/dev_m1/audio_cnn/UrbanSound8K/fold10/30344-3-0-1.wav
It was a silly mistake... I was missing /audio in the path. I did not realize this was the case because the file it errored on changed every time.
https://stackoverflow.com/questions/68212306/
What is causing the allocation of 12 GB of memory and the CUDA out of memory error? Model, data or something else?
I am trying to build a 3D CNN based video classifier using Pytorch. When i try to run a single datapoint i run into this error: CUDA out of memory. Tried to allocate 1.20 GiB (GPU 0; 14.76 GiB total capacity; 12.60 GiB already allocated; 1.09 GiB free; 12.61 GiB reserved in total by PyTorch) My data of 1000 videos has a size of around 90MB on disk. I think i am only loading model and data onto GPU. I am not able to understand what is causing the already allocated 12.6 GiB of memory? is my model too big or it has something to do with data loaded or something else? Here is a snippet of my model. It is based on C3D by Tran et al, 2015. self.conv1 = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2)) self.conv2 = nn.Conv3d(64, 128, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.pool2 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)) self.conv3a = nn.Conv3d(128, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.conv3b = nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.pool3 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)) self.conv4a = nn.Conv3d(256, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.conv4b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.pool4 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)) self.conv5a = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.conv5b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)) self.pool5 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=(0, 1, 1)) self.fc6 = nn.Linear(373248, 4096) self.fc7 = nn.Linear(4096, 128) self.fc8 = nn.Linear(128, 2) self.dropout = nn.Dropout(p=0.5) self.relu = nn.ReLU() self.softmax = nn.Softmax() Also, i am using Dataset, DataLoader and DeviceDataLoader to load the data. Even if they load all the data at once, what is causing to inflate 90MB of data to 12.6GiB.
It's important to note that the real reason you have out of memory issues most of the time is not necessarily the inherent size of the model itself (though it is directly related to this). Even 100,000,000 parameters, in single floating point precision, take only about 0.4 GB to store. (Now, in your case, the model actually is that large: the fc6 layer alone, nn.Linear(373248, 4096), has roughly 1.5 billion weights, which is about 6 GB in single precision, and its gradient buffer doubles that — but in most models this wouldn't be the case.) Generally, you'll have memory issues because the forward pass stores the intermediate activations needed to compute gradients during the backward pass, and the backward pass allocates a gradient buffer for every parameter. This can temporarily double (or more) the number of floating point values being stored in memory, so the combined total of your model, your data, and the gradient/computation graph exceeds the available GPU memory. As suggested above, your solutions are to:

reduce your batch size
downsample your data (if possible)
reduce the complexity of your model

(As an aside, I'm not totally sure how you are intending to train this model, but you cannot possibly hope to feed a whole video to your model as input. Aside from the memory issues, it would be extremely difficult for the model to learn the temporal relationships necessary to make sense of the data. Conventional video processing techniques would probably pass a single frame or a few frames at a time to a model and use a transformer, LSTM, or other sequence-to-output model to learn the temporal context.)
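If you want to check this yourself, a quick way to count the parameter memory (assuming model is an instance of your network):

n_params = sum(p.numel() for p in model.parameters())
print(n_params)                       # dominated by the fc6 layer
print(n_params * 4 / 1024**3, "GiB")  # fp32 bytes for the weights alone; gradients add the same again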
https://stackoverflow.com/questions/68215724/
What is the output of pytorch RNN?
I have a simple rnn code below. rnn = nn.RNN(1, 1, 1, bias = False, batch_first = True) t = torch.ones(size = (1, 2, 1)) output, hidden = rnn(t) print(rnn.weight_ih_l0) print(rnn.weight_hh_l0) print(output) print(hidden) # Outputs Parameter containing: tensor([[0.7199]], requires_grad=True) Parameter containing: tensor([[0.4698]], requires_grad=True) tensor([[[0.6168], [0.7656]]], grad_fn=<TransposeBackward1>) tensor([[[0.7656]]], grad_fn=<StackBackward>) tensor([[[0.7656]]], grad_fn=<StackBackward>) My understanding from the PyTorch documentation is that the output from above is the hidden state. So, I tried to manually calculate the output using the below hidden_state1 = torch.tanh(t[0][0] * rnn.weight_ih_l0) print(hidden_state1) hidden_state2 = torch.tanh(t[0][1] * rnn.weight_ih_l0 + hidden_state1 * rnn.weight_hh_l0) print(hidden_state2) tensor([[0.6168]], grad_fn=<TanhBackward>) tensor([[0.7656]], grad_fn=<TanhBackward>) The result was correct. hidden_state1 and hidden_state2 match the output. Shouldn’t the hidden_states get multiplied with output weights to get the output? I checked for weights connecting from hidden state to output. But there are no weights at all. If the objective of rnn is to calculate only hidden states, Could anyone tell me how to get the output?
Shouldn’t the hidden_states get multiplied with output weights to get the output?

Yes and no. It depends on your problem formulation. Suppose you are dealing with a case where only the output from the last timestep matters. In that case it really doesn't make sense to multiply the hidden state by an output weight in each unit. That's why pytorch only gives you the hidden output as an abstract value; after that you can really go wild and do whatever you want with the hidden states according to your problem. In your particular case, suppose you want to apply another linear layer to the output at each timestep. You can do so simply by defining a linear layer and propagating the output of the hidden unit through it.

# Linear layer
# hidden_feature_size = 1 in your case
lin_layer = nn.Linear(hidden_feature_size, output_feature_size)

# output from first timestep
lin_layer(output[0])

# output from second timestep
lin_layer(output[1])
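Since nn.Linear operates on the last dimension, you can also project all timesteps in one call (a small sketch using the names above):

all_outputs = lin_layer(output)  # shape: (batch, seq_len, output_feature_size)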
https://stackoverflow.com/questions/68220175/
To calculate euclidean distance between vectors in a torch tensor with multiple dimensions
There is a random initialized torch tensor of the shape as below. Inputs tensor1 = torch.rand((4,2,3,100)) tensor2 = torch.rand((4,2,3,100)) tensor1 and tensor2 are torch tensors with 24 100-dimensional vectors, respectively. I want to get a tensor with a shape of torch.size([4,2,3]) by obtaining the Euclidean distance between vectors with the same index of two tensors. I used dist = torch.nn.functional.pairwise_distance(tensor1, tensor2) to get the results I wanted. However, the pairwise_distance function calculates the euclidean distance for the second dimension of the tensor. So dist shape is torch.size([4,3,100]). I have performed transpose several times to solve these problems. My code is as follows. tensor1 = tensor1.transpose(1,3) tensor2 = tensor2.transpose(1,3) dist = torch.nn.functional.pairwise_distance(tensor1, tensor2) dist = dist.transpose(1,2) Is there a simpler or easier way to get the result I want?
Here ya go dist = (tensor1 - tensor2).pow(2).sum(3).sqrt() Basically that's what Euclidean distance is. Subtract -> power by 2 -> sum along the unfortunate axis you want to eliminate-> square root
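Equivalently, the built-in norm gives the same result (a small sketch):

dist = torch.linalg.norm(tensor1 - tensor2, dim=3)  # shape: torch.Size([4, 2, 3])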
https://stackoverflow.com/questions/68220457/