Questions regarding image processing for pre-trained image classifier in PyTorch
I am trying to use a popular pre-trained VGG model for image classification in PyTorch and noticed that, in most programs, the image is resized to 256 and then center-cropped to 224 during pre-processing. I am curious why we resize to 256 first and then crop, instead of resizing directly to 224. transforms = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
For image classification tasks, typically the object of interest is located in the center of the image. It is thus common practice (for inference) to take a central crop of the image cutting away some border (this does not apply in general, however, as exact preprocessing depends strongly on how the network was trained). As per the "why cropping and not resizing directly", this is a byproduct of data augmentation during training: taking a random crop of the image is a very common data augmentation technique. At inference time, resizing the whole image to the input size instead of applying a crop influences the scale of the objects in the image, which negatively affects the network's performance (because you're evaluating on data that has a "format" different from the one you trained on and CNNs are not scale-invariant).
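To make the training/inference asymmetry concrete, here is a minimal sketch of the two pipelines as they are commonly written with torchvision; the augmentation choices and the 256/224 sizes are the usual ImageNet-style defaults, not something taken from the question's code:

import torchvision.transforms as T

normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

# Training: random crops act as data augmentation at the scale the network expects.
train_tf = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    normalize,
])

# Inference: deterministic resize to 256 followed by a 224 center crop,
# so the object scale roughly matches what the network saw during training.
eval_tf = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    normalize,
])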
https://stackoverflow.com/questions/60543974/
Strange Move Assignment Operator Signature
I came across an unfamiliar move assignment operator signature in PyTorch's tensor backend (ATen, source). Just out of curiosity, what does the && do at the end of Tensor & Tensor::operator=(Tensor && rhs) && While I'm familiar with move semantics and the usual copy/move constructor and assignment operator signatures, I could not find any documentation online about the syntax above. I would be grateful if someone could explain what this operator does, how it differs from the usual move assignment operator, and when it should be used.
Objects of a class used as expressions can be rvalues or lvalues. The move assignment operator is a member function of a class. This declaration Tensor & Tensor::operator=(Tensor && rhs) && means that this move assignment operator is called for rvalue object of the class. Here is a demonstrative program. #include <iostream> struct A { A & operator =( A && ) & { std::cout << "Calling the move assignment operator for an lvalue object\n"; return *this; } A & operator =( A && ) && { std::cout << "Calling the move assignment operator for an rvalue object\n"; return *this; } }; int main() { A a; a = A(); A() = A(); return 0; } The program output is Calling the move assignment operator for an lvalue object Calling the move assignment operator for an rvalue object That is in this statement a = A(); the left hand operand of the assignment is an lvalue. In this statement A() = A(); the left hand operand of the assignment is rvalue (a temporary object).
https://stackoverflow.com/questions/60554813/
Verify convolution theorem using pytorch
Basically this theorem is formulated as below: F(f*g) = F(f)xF(g) I know this theorem but I just simply cannot reproduce the result by using pytorch. Below is a reproducable code: import torch import torch.nn.functional as F # calculate f*g f = torch.ones((1,1,5,5)) g = torch.tensor(list(range(9))).view(1,1,3,3).float() conv = F.conv2d(f, g, bias=None, padding=2) # calculate F(f*g) F_fg = torch.rfft(conv, signal_ndim=2, onesided=False) # calculate F x G f = f.squeeze() g = g.squeeze() # need to pad into at least [w1+w2-1, h1+h2-1], which is 7 in our case. size = f.size(0) + g.size(0) - 1 f_new = torch.zeros((7,7)) g_new = torch.zeros((7,7)) f_new[1:6,1:6] = f g_new[2:5,2:5] = g F_f = torch.rfft(f_new, signal_ndim=2, onesided=False) F_g = torch.rfft(g_new, signal_ndim=2, onesided=False) FxG = torch.mul(F_f, F_g) print(FxG - F_fg) here is the result for print(FxG - F_fg) tensor([[[[[ 0.0000e+00, 0.0000e+00], [ 4.1426e+02, 1.7270e+02], [-3.6546e+01, 4.7600e+01], [-1.0216e+01, -4.1198e+01], [-1.0216e+01, -2.0223e+00], [-3.6546e+01, -6.2804e+01], [ 4.1426e+02, -1.1427e+02]], ... [[ 4.1063e+02, -2.2347e+02], [-7.6294e-06, 2.2817e+01], [-1.9024e+01, -9.0105e+00], [ 7.1708e+00, -4.1027e+00], [-2.6739e+00, -1.1121e+01], [ 8.8471e+00, 7.1710e+00], [ 4.2528e+01, 9.7559e+01]]]]]) and you can see that the difference is not always 0. can someone tell me why and how to do this properly? Thanks
So I took a closer look at what you've done so far. I've identified three sources of error in your code. I'll try to sufficiently address each of them here. 1. Complex arithmetic PyTorch doesn't currently support multiplication of complex numbers (AFAIK). The FFT operation simply returns a tensor with a real and imaginary dimension. Instead of using torch.mul or the * operator we need to explicitly code complex multiplication. (a + ib) * (c + id) = (a*c - b*d) + i(a*d + b*c) 2. The definition of convolution The definition of "convolution" often used in CNN literature is actually different from the definition used when discussing the convolution theorem. I won't go into detail, but the theoretical definition flips the kernel before sliding and multiplying. Instead, the convolution operation in pytorch, tensorflow, caffe, etc... doesn't do this flipping. To account for this we can simply flip g (both horizontally and vertically) before applying the FFT. 3. Anchor position The anchor-point when using the convolution theorem is assumed to be the upper left corner of the padded g. Again, I won't go into detail about this but it's how the math works out. The second and third point may be easier to understand with an example. Suppose you used the following g [1 2 3] [4 5 6] [7 8 9] instead of g_new being [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 1 2 3 0 0] [0 0 4 5 6 0 0] [0 0 7 8 9 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] it should actually be [5 4 0 0 0 0 6] [2 1 0 0 0 0 3] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [8 7 0 0 0 0 9] where we flip the kernel vertically and horizontally, then apply circular shift so that the center of the kernel is in the upper left corner. I ended up rewriting most of your code and generalizing it a bit. The most complex operation is defining g_new properly. I decided to use a meshgrid and modulo arithmetic to simultaneously flip and shift the indices. If something here doesn't make sense to you please leave a comment and I'll try to clarify. 
import torch import torch.nn.functional as F def conv2d_pyt(f, g): assert len(f.size()) == 2 assert len(g.size()) == 2 f_new = f.unsqueeze(0).unsqueeze(0) g_new = g.unsqueeze(0).unsqueeze(0) pad_y = (g.size(0) - 1) // 2 pad_x = (g.size(1) - 1) // 2 fcg = F.conv2d(f_new, g_new, bias=None, padding=(pad_y, pad_x)) return fcg[0, 0, :, :] def conv2d_fft(f, g): assert len(f.size()) == 2 assert len(g.size()) == 2 # in general not necessary that inputs are odd shaped but makes life easier assert f.size(0) % 2 == 1 assert f.size(1) % 2 == 1 assert g.size(0) % 2 == 1 assert g.size(1) % 2 == 1 size_y = f.size(0) + g.size(0) - 1 size_x = f.size(1) + g.size(1) - 1 f_new = torch.zeros((size_y, size_x)) g_new = torch.zeros((size_y, size_x)) # copy f to center f_pad_y = (f_new.size(0) - f.size(0)) // 2 f_pad_x = (f_new.size(1) - f.size(1)) // 2 f_new[f_pad_y:-f_pad_y, f_pad_x:-f_pad_x] = f # anchor of g is 0,0 (flip g and wrap circular) g_center_y = g.size(0) // 2 g_center_x = g.size(1) // 2 g_y, g_x = torch.meshgrid(torch.arange(g.size(0)), torch.arange(g.size(1))) g_new_y = (g_y.flip(0) - g_center_y) % g_new.size(0) g_new_x = (g_x.flip(1) - g_center_x) % g_new.size(1) g_new[g_new_y, g_new_x] = g[g_y, g_x] # take fft of both f and g F_f = torch.rfft(f_new, signal_ndim=2, onesided=False) F_g = torch.rfft(g_new, signal_ndim=2, onesided=False) # complex multiply FxG_real = F_f[:, :, 0] * F_g[:, :, 0] - F_f[:, :, 1] * F_g[:, :, 1] FxG_imag = F_f[:, :, 0] * F_g[:, :, 1] + F_f[:, :, 1] * F_g[:, :, 0] FxG = torch.stack([FxG_real, FxG_imag], dim=2) # inverse fft fcg = torch.irfft(FxG, signal_ndim=2, onesided=False) # crop center before returning return fcg[f_pad_y:-f_pad_y, f_pad_x:-f_pad_x] # calculate f*g f = torch.randn(11, 7) g = torch.randn(5, 3) fcg_pyt = conv2d_pyt(f, g) fcg_fft = conv2d_fft(f, g) avg_diff = torch.mean(torch.abs(fcg_pyt - fcg_fft)).item() print('Average difference:', avg_diff) Which gives me Average difference: 4.6866085767760524e-07 This is very close to zero. The reason we don't get exactly zero is simply due to floating point errors.
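As a side note for readers on newer PyTorch versions (an assumption about your environment; the answer above targets the old torch.rfft API): torch.rfft/torch.irfft were later removed in favor of the torch.fft module, which returns true complex tensors, so the manual complex multiplication is no longer needed. A rough sketch of the equivalent frequency-domain step:

import torch

def conv2d_fft_step(f_new, g_new):
    # f_new and g_new are the padded / circularly shifted arrays built
    # exactly as in conv2d_fft above.
    F_f = torch.fft.rfft2(f_new)
    F_g = torch.fft.rfft2(g_new)
    FxG = F_f * F_g                           # complex tensors multiply directly
    return torch.fft.irfft2(FxG, s=f_new.shape)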
https://stackoverflow.com/questions/60561933/
Pytorch error: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDATensorId' backend
I am training a CNN on CUDA GPU which takes 3D medical images as input and outputs a classifier. I suspect there may be a bug in pytorch. I am running pytorch 1.4.0. The GPU is 'Tesla P100-PCIE-16GB'. When I run the model on CUDA I get the error Traceback (most recent call last): File "/home/ub/miniconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-55-cc0dd3d9cbb7>", line 1, in <module> net(cc) File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "<ipython-input-2-19e11966d1cd>", line 181, in forward out = self.layer1(x) File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 480, in forward self.padding, self.dilation, self.groups) RuntimeError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDATensorId' backend. 'aten::slow_conv3d_forward' is only available for these backends: [CPUTensorId, VariableTensorId]. To replicate the issue: #input is a 64,64,64 3d image batch with 2 channels class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv3d(2, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool3d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv3d(32, 64, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool3d(kernel_size=2, stride=2)) self.drop_out = nn.Dropout() self.fc1 = nn.Linear(16 * 16*16 * 64, 1000) self.fc2 = nn.Linear(1000, 2) # self.softmax = nn.LogSoftmax(dim=1) def forward(self, x): # print(out.shape) out = self.layer1(x) # print(out.shape) out = self.layer2(out) # print(out.shape) out = out.reshape(out.size(0), -1) # print(out.shape) out = self.drop_out(out) # print(out.shape) out = self.fc1(out) # print(out.shape) out = self.fc2(out) # out = self.softmax(out) # print(out.shape) return out net = Convnet() input = torch.randn(16, 2, 64, 64, 64) net(input)
Initially, I was thinking the error message indicates that 'aten::slow_conv3d_forward' is not implemented with GPU (CUDA). But after looked at your network, it does not make sense to me, since Conv3D is a very basic op, and Pytorch team should implement this in CUDA. Then I dived a bit about the source code, finding that the input is not a CUDA tensor, which causes the problem. Here is a working sample: import torch from torch import nn #input is a 64,64,64 3d image batch with 2 channels class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv3d(2, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool3d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv3d(32, 64, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool3d(kernel_size=2, stride=2)) self.drop_out = nn.Dropout() self.fc1 = nn.Linear(16 * 16*16 * 64, 1000) self.fc2 = nn.Linear(1000, 2) # self.softmax = nn.LogSoftmax(dim=1) def forward(self, x): # print(out.shape) out = self.layer1(x) # print(out.shape) out = self.layer2(out) # print(out.shape) out = out.reshape(out.size(0), -1) # print(out.shape) out = self.drop_out(out) # print(out.shape) out = self.fc1(out) # print(out.shape) out = self.fc2(out) # out = self.softmax(out) # print(out.shape) return out net = ConvNet() input = torch.randn(16, 2, 64, 64, 64) net.cuda() input = input.cuda() # IMPORTANT to reassign your tensor net(input) Remember when you put a model from CPU to GPU, you can directly call .cuda(), but if you put a tensor from CPU to GPU, you will need to reassign it, such as tensor = tensor.cuda(), instead of only calling tensor.cuda(). Hope that helps. Output: tensor([[-0.1588, 0.0680], [ 0.1514, 0.2078], [-0.2272, -0.2835], [-0.1105, 0.0585], [-0.2300, 0.2517], [-0.2497, -0.1019], [ 0.1357, -0.0475], [-0.0341, -0.3267], [-0.0207, -0.0451], [-0.4821, -0.0107], [-0.1779, 0.1247], [ 0.1281, 0.1830], [-0.0595, -0.1259], [-0.0545, 0.1838], [-0.0033, -0.1353], [ 0.0098, -0.0957]], device='cuda:0', grad_fn=<AddmmBackward>)
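A device-agnostic variant of the same fix (a minimal sketch reusing the ConvNet class defined above; it falls back to the CPU when no GPU is available):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = ConvNet().to(device)              # modules are moved in place
x = torch.randn(16, 2, 64, 64, 64)
x = x.to(device)                        # tensors must be reassigned
out = net(x)
print(out.shape)                        # torch.Size([16, 2])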
https://stackoverflow.com/questions/60563115/
Pytorch Quantization RuntimeError: Trying to create tensor with negative dimension
I am trying out the pytorch quantization module. When doing static post training quantization I follow the next procedure detailed in the documentation: adding QuantStub and DeQuantStub modules Fuse operations Specify qauntization config torch.quantization.prepare() Calibrate the model by running inference against a calibration dataset torch.quantization.convert() However, when calibrating the model after preparing it the program breaks. The error appears at the last fully connected layers. It seems that the observers introduced in the graph are trying to create an histogram of negative dimension. Here is the error: x = self.fc(x) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward input = module(input) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 552, in __call__ hook_result = hook(self, input, result) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/quantize.py", line 74, in _observer_forward_hook return self.activation_post_process(output) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/observer.py", line 805, in forward self.bins) File "/home/juan/miniconda3/envs/sparse/lib/python3.6/site-packages/torch/quantization/observer.py", line 761, in _combine_histograms histogram_with_output_range = torch.zeros((Nbins * downsample_rate)) RuntimeError: Trying to create tensor with negative dimension -4398046511104: [-4398046511104] The fully connected are built as class LinearReLU(nn.Sequential): def __init__(self, in_neurons, out_neurons): super(LinearReLU, self).__init__( nn.Linear(in_neurons, out_neurons), nn.ReLU(inplace=False) ) They are appended in the fc(x) as fc = nn.Sequential(*([LinearReLU, LinearReLU, ...]). However, I suspect that it has something to do with the reshape between the convolutions and the fully connected layers. x = x.reshape(-1, size) Until now I have not been able to solve this error. Thanks in advance
For anybody who has the same problem: the explanation is in this line of the PyTorch quantization documentation: View-based operations like view(), as_strided(), expand(), flatten(), select(), python-style indexing, etc - work as on regular tensor (if quantization is not per-channel) The problem was using reshape together with per-channel quantization. If I instead take the mean over the last two dimensions (rather than reshaping), there is no problem.
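For reference, a sketch of that workaround with an illustrative (N, C, H, W) shape, since the real model's dimensions aren't shown; averaging over the spatial dimensions acts like a global average pool and replaces the view-based reshape:

import torch

x = torch.randn(8, 64, 7, 7)        # illustrative (N, C, H, W) output of the conv stack
pooled = x.mean(dim=[2, 3])         # reduce over H and W -> shape (N, C)
print(pooled.shape)                 # torch.Size([8, 64])
# `pooled` can now be fed to the fully connected layers instead of the
# view-based x.reshape(-1, size) that broke under per-channel quantization.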
https://stackoverflow.com/questions/60567538/
Pytorch model stuck at 0.5 though loss decreases consistently
This is using PyTorch I have been trying to implement UNet model on my images, however, my model accuracy is always exact 0.5. Loss does decrease. I have also checked for class imbalance. I have also tried playing with learning rate. Learning rate affects loss but not the accuracy. My architecture below ( from here ) """ `UNet` class is based on https://arxiv.org/abs/1505.04597 The U-Net is a convolutional encoder-decoder neural network. Contextual spatial information (from the decoding, expansive pathway) about an input tensor is merged with information representing the localization of details (from the encoding, compressive pathway). Modifications to the original paper: (1) padding is used in 3x3 convolutions to prevent loss of border pixels (2) merging outputs does not require cropping due to (1) (3) residual connections can be used by specifying UNet(merge_mode='add') (4) if non-parametric upsampling is used in the decoder pathway (specified by upmode='upsample'), then an additional 1x1 2d convolution occurs after upsampling to reduce channel dimensionality by a factor of 2. This channel halving happens with the convolution in the tranpose convolution (specified by upmode='transpose') Arguments: in_channels: int, number of channels in the input tensor. Default is 3 for RGB images. Our SPARCS dataset is 13 channel. depth: int, number of MaxPools in the U-Net. During training, input size needs to be (depth-1) times divisible by 2 start_filts: int, number of convolutional filters for the first conv. up_mode: string, type of upconvolution. Choices: 'transpose' for transpose convolution """ class UNet(nn.Module): def __init__(self, num_classes, depth, in_channels, start_filts=16, up_mode='transpose', merge_mode='concat'): super(UNet, self).__init__() if up_mode in ('transpose', 'upsample'): self.up_mode = up_mode else: raise ValueError("\"{}\" is not a valid mode for upsampling. Only \"transpose\" and \"upsample\" are allowed.".format(up_mode)) if merge_mode in ('concat', 'add'): self.merge_mode = merge_mode else: raise ValueError("\"{}\" is not a valid mode for merging up and down paths.Only \"concat\" and \"add\" are allowed.".format(up_mode)) # NOTE: up_mode 'upsample' is incompatible with merge_mode 'add' if self.up_mode == 'upsample' and self.merge_mode == 'add': raise ValueError("up_mode \"upsample\" is incompatible with merge_mode \"add\" at the moment " "because it doesn't make sense to use nearest neighbour to reduce depth channels (by half).") self.num_classes = num_classes self.in_channels = in_channels self.start_filts = start_filts self.depth = depth self.down_convs = [] self.up_convs = [] # create the encoder pathway and add to a list for i in range(depth): ins = self.in_channels if i == 0 else outs outs = self.start_filts*(2**i) pooling = True if i < depth-1 else False down_conv = DownConv(ins, outs, pooling=pooling) self.down_convs.append(down_conv) # create the decoder pathway and add to a list # - careful! 
decoding only requires depth-1 blocks for i in range(depth-1): ins = outs outs = ins // 2 up_conv = UpConv(ins, outs, up_mode=up_mode, merge_mode=merge_mode) self.up_convs.append(up_conv) self.conv_final = conv1x1(outs, self.num_classes) # add the list of modules to current module self.down_convs = nn.ModuleList(self.down_convs) self.up_convs = nn.ModuleList(self.up_convs) self.reset_params() @staticmethod def weight_init(m): if isinstance(m, nn.Conv2d): #https://prateekvjoshi.com/2016/03/29/understanding-xavier-initialization-in-deep-neural-networks/ ##Doc: https://pytorch.org/docs/stable/nn.init.html?highlight=xavier#torch.nn.init.xavier_normal_ init.xavier_normal_(m.weight) init.constant_(m.bias, 0) def reset_params(self): for i, m in enumerate(self.modules()): self.weight_init(m) def forward(self, x): encoder_outs = [] # encoder pathway, save outputs for merging for i, module in enumerate(self.down_convs): x, before_pool = module(x) encoder_outs.append(before_pool) for i, module in enumerate(self.up_convs): before_pool = encoder_outs[-(i+2)] x = module(before_pool, x) # No softmax is used. This means we need to use # nn.CrossEntropyLoss is your training script, # as this module includes a softmax already. x = self.conv_final(x) return x Parameters are : device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') x,y = train_sequence[0] ; batch_size = x.shape[0] model = UNet(num_classes = 2, depth=5, in_channels=5, merge_mode='concat').to(device) optim = torch.optim.Adam(model.parameters(),lr=0.01, weight_decay=1e-3) criterion = nn.BCEWithLogitsLoss() #has sigmoid internally epochs = 1000 The function for training is : import torch.nn.functional as f def train_model(epoch,train_sequence): """Train the model and report validation error with training error Args: model: the model to be trained criterion: loss function data_train (DataLoader): training dataset """ model.train() for idx in range(len(train_sequence)): X, y = train_sequence[idx] images = Variable(torch.from_numpy(X)).to(device) # [batch, channel, H, W] masks = Variable(torch.from_numpy(y)).to(device) outputs = model(images) print(masks.shape, outputs.shape) loss = criterion(outputs, masks) optim.zero_grad() loss.backward() # Update weights optim.step() # total_loss = get_loss_train(model, data_train, criterion) My function for calculating loss and accuracy is below: def get_loss_train(model, train_sequence): """ Calculate loss over train set """ model.eval() total_acc = 0 total_loss = 0 for idx in range(len(train_sequence)): with torch.no_grad(): X, y = train_sequence[idx] images = Variable(torch.from_numpy(X)).to(device) # [batch, channel, H, W] masks = Variable(torch.from_numpy(y)).to(device) outputs = model(images) loss = criterion(outputs, masks) preds = torch.argmax(outputs, dim=1).float() acc = accuracy_check_for_batch(masks.cpu(), preds.cpu(), images.size()[0]) total_acc = total_acc + acc total_loss = total_loss + loss.cpu().item() return total_acc/(len(train_sequence)), total_loss/(len(train_sequence)) Edit : Code which runs (calls) the functions: for epoch in range(epochs): train_model(epoch, train_sequence) train_acc, train_loss = get_loss_train(model,train_sequence) print("Train Acc:", train_acc) print("Train loss:", train_loss) Can someone help me identify as why is accuracy always exact 0.5? 
Edit-2: As asked accuracy_check_for_batch function is here: def accuracy_check_for_batch(masks, predictions, batch_size): total_acc = 0 for index in range(batch_size): total_acc += accuracy_check(masks[index], predictions[index]) return total_acc/batch_size and def accuracy_check(mask, prediction): ims = [mask, prediction] np_ims = [] for item in ims: if 'str' in str(type(item)): item = np.array(Image.open(item)) elif 'PIL' in str(type(item)): item = np.array(item) elif 'torch' in str(type(item)): item = item.numpy() np_ims.append(item) compare = np.equal(np_ims[0], np_ims[1]) accuracy = np.sum(compare) return accuracy/len(np_ims[0].flatten())
I found the mistake. model = UNet(num_classes = 2, depth=5, in_channels=5, merge_mode='concat').to(device) should be model = UNet(num_classes = 1, depth=5, in_channels=5, merge_mode='concat').to(device) because I am using BCEWithLogitsLoss, which expects a single logit per pixel with the binary mask as the target.
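With a single output channel, accuracy should then be computed by thresholding the sigmoid of the logits rather than taking an argmax over channels. A minimal sketch with toy tensors (in the real code the logits come from model(images) and the masks are assumed to be 0/1 floats of the same shape):

import torch

logits = torch.randn(4, 1, 32, 32)                   # stand-in for model(images)
masks = torch.randint(0, 2, (4, 1, 32, 32)).float()  # stand-in for the ground-truth masks

probs = torch.sigmoid(logits)        # BCEWithLogitsLoss applies this internally for the loss
preds = (probs > 0.5).float()        # hard 0/1 prediction per pixel
acc = (preds == masks).float().mean().item()
print(acc)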
https://stackoverflow.com/questions/60572542/
How to create a train-val split in custom image datasets using PyTorch?
I want to create a train+val split from my original training set. The directory is split into train and test. I load the original train set and want to split it into train and val sets so I can evaluate validation loss during training using a train_loader and a val_loader. There's not a lot of documentation about this that explains things clearly.
Check out the answer here. I've posted it below as well. ====================================================== The data is read using ImageFolder. Task is binary image classification with 498 images in the dataset which are equally distributed among both classes (249 images each). img_dataset = ImageFolder(..., transforms=t) 1. SubsetRandomSampler dataset_size = len(img_dataset) dataset_indices = list(range(dataset_size)) np.random.shuffle(dataset_indices) val_split_index = int(np.floor(0.2 * dataset_size)) train_idx, val_idx = dataset_indices[val_split_index:], dataset_indices[:val_split_index] train_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(val_idx) train_loader = DataLoader(dataset=img_dataset, shuffle=False, batch_size=8, sampler=train_sampler) validation_loader = DataLoader(dataset=img_dataset, shuffle=False, batch_size=1, sampler=val_sampler) 2. random_split Here, out of the 498 total images, 400 get randomly assigned to train and the rest 98 to validation. dataset_train, dataset_valid = random_split(img_dataset, (400, 98)) train_loader = DataLoader(dataset=dataset_train, shuffle=True, batch_size=8) val_loader = DataLoader(dataset=dataset_valid, shuffle=False, batch_size=1) 3. WeightedRandomSampler if someone stumbled here searching for WeightedRandomSampler, check @ptrblck's answer here for a good explanation of what is happening below. Now, how does WeightedRandomSampler fit in creating train+val set? Because unlike SubsetRandomSampler or random_split(), we're not splitting for train and val here. We're simply ensuring that each batch gets equal number of classes during training. So, my guess is we need to use WeightedRandomSampler after random_split() or SubsetRandomSampler. But this wouldn't ensure that train and val have similar ratio between classes. target_list = [] for _, t in imgdataset: target_list.append(t) target_list = torch.tensor(target_list) target_list = target_list[torch.randperm(len(target_list))] # get_class_distribution() is a function that takes in a dataset and # returns a dictionary with class count. In this case, the # get_class_distribution(img_dataset) returns the following - # {'class_0': 249, 'class_0': 249} class_count = [i for i in get_class_distribution(img_dataset).values()] class_weights = 1./torch.tensor(class_count, dtype=torch.float) class_weights_all = class_weights[target_list] weighted_sampler = WeightedRandomSampler( weights=class_weights_all, num_samples=len(class_weights_all), replacement=True )
https://stackoverflow.com/questions/60577242/
TypeError: unsupported operand type(s) for +: 'Tensor' and 'dict'
I am new to the world of neural networks and I am trying to implement a CNN generator from this model and these equations (N=32) in order to make motion generation. I wrote the following code, where H_txt is a dictionary containing, as keys, the name of my clips and as values, a vector representing the action shown in the clip, and z is a white gaussian noise of dimension (1, 256). N=32 class CNNGenerator(nn.Module): def __init__(self, htxt = H_txt): super(CNNGenerator, self).__init__() self.htxt = htxt self.conv1 = nn.Conv1d(1, 1, 3) self.conv2 = nn.Conv1d(1, 1, 3) self.conv3 = nn.Conv1d(1, 1, 3) self.conv4 = nn.Conv1d(4, 4, 3) self.conv5 = nn.Conv1d(2, 2, 3) self.conv6 = nn.Conv1d(8, 8, 3) self.conv7 = nn.Conv1d(4, 4, 3) self.conv8 = nn.Conv1d(16, 16, 3) self.conv9 = nn.Conv1d(8, 8, 3) self.conv10 = nn.Conv1d(32, 32, 3) self.conv11 = nn.Conv1d(16, 16, 3) self.conv12 = nn.Conv1d(32, 32, 3) self.conv13 = nn.Conv1d(1, 1, 3) self.conv14 = nn.Conv1d(2, 2, 3) self.conv15 = nn.Conv1d(2, 2, 3) self.conv16 = nn.Conv1d(4, 4, 3) self.conv17 = nn.Conv1d(4, 4, 3) self.conv18 = nn.Conv1d(8, 8, 3) self.conv19 = nn.Conv1d(8, 8, 3) self.conv20 = nn.Conv1d(16, 16, 3) self.conv21 = nn.Conv1d(16, 16, 3) self.conv22 = nn.Conv1d(32, 32, 3) self.conv23 = nn.Conv1d(32, 32, 3) def forward(self, x): x[0] = self.conv1(F.relu(self.conv2(z) + self.htxt)) x[1] = self.conv3(F.relu(self.conv4(z) + self.htxt)) x[2] = self.conv5(F.relu(self.conv6(z) + self.htxt)) x[3] = self.conv7(F.relu(self.conv8(z) + self.htxt)) x[4] = self.conv9(F.relu(self.conv10(z) + self.htxt)) x[5] = self.conv11(F.relu(self.conv12(z) + self.htxt)) h = np.zeros(np.log2(N)) h[0] = x[0] h[1] = nn.AdaptiveAvgPool1d(2*h[0]) + self.conv13(F.relu(nn.AdaptiveAvgPool1d(2*(self.conv14(h[0])+x[1])))) h[2] = nn.AdaptiveAvgPool1d(2*h[1]) + self.conv15(F.relu(nn.AdaptiveAvgPool1d(2*(self.conv16(h[1])+x[2])))) h[3] = nn.AdaptiveAvgPool1d(2*h[2]) + self.conv17(F.relu(nn.AdaptiveAvgPool1d(2*(self.conv18(h[2])+x[3])))) h[4] = nn.AdaptiveAvgPool1d(2*h[3]) + self.conv19(F.relu(nn.AdaptiveAvgPool1d(2*(self.conv20(h[3])+x[4])))) h[5] = nn.AdaptiveAvgPool1d(2*h[4]) + self.conv21(F.relu(nn.AdaptiveAvgPool1d(2*(self.conv22(h[4])+x[5])))) A = self.conv23(h[np.log2(N)]) return A def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = CNNGenerator() z= torch.randn(1, 1, 256) #k=256 out = net(z) print(out) When I run my code, I get the following error message, coming from my forward function : TypeError: unsupported operand type(s) for +: 'Tensor' and 'dict' My code doesn't like the fact that I try to sum a tensor with a dictionary. I haven't found any solution online, so I was wondering if it was possible to sum my tensor with a dictionary? Is there a function that can convert my dictionary into a tensor? I tried to convert the dictionary into an array with np.asarray() but I got an error message saying I can't use a numpy function for this. Thanks a lot for reading my message
Your question is missing details about your variables, but based on the error I can give you an answer. You are adding a dictionary and a tensor, which is what produces the error. If you want to add the values of the dictionary to the tensor, you must first convert the dictionary into a tensor. Also, there is no reason to add the dictionary directly, because the keys are not needed here. If instead you want to concatenate the values of the dictionary (after converting them to a tensor) along a desired axis, use the torch.cat function. To add the values in the dictionary to the tensor, get the values from the dict and convert them to a tensor with something like torch.Tensor(list(htxt.values()))
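A small, self-contained sketch of that conversion (the toy dictionary stands in for H_txt, whose real contents are not shown in the question):

import torch

H_txt = {"clip_a": [0.1, 0.2, 0.3], "clip_b": [0.4, 0.5, 0.6]}   # toy stand-in

htxt = torch.tensor(list(H_txt.values()), dtype=torch.float32)
print(htxt.shape)                    # torch.Size([2, 3]) -> (num_clips, feature_dim)

z = torch.randn(2, 3)
print(z + htxt)                      # elementwise tensor + tensor now works
print(torch.cat((z, htxt), dim=0))   # or concatenate along a chosen axis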
https://stackoverflow.com/questions/60582195/
Can autograd in pytorch handle a repeated use of a layer within the same module?
I have a layer layer in an nn.Module and use it two or more times during a single forward step. The output of this layer is later inputted to the same layer. Can pytorch's autograd compute the grad of the weights of this layer correctly? def forward(x): x = self.layer(x) x = self.layer(x) return x Complete example: import torch import torch.nn as nn import torch.nn.functional as F class net(nn.Module): def __init__(self,in_dim,out_dim): super(net,self).__init__() self.layer = nn.Linear(in_dim,out_dim,bias=False) def forward(self,x): x = self.layer(x) x = self.layer(x) return x input_x = torch.tensor([10.]) label = torch.tensor([5.]) n = net(1,1) loss_fn = nn.MSELoss() out = n(input_x) loss = loss_fn(out,label) n.zero_grad() loss.backward() for param in n.parameters(): w = param.item() g = param.grad print('Input = %.4f; label = %.4f'%(input_x,label)) print('Weight = %.4f; output = %.4f'%(w,out)) print('Gradient w.r.t. the weight is %.4f'%(g)) print('And it should be %.4f'%(4*(w**2*input_x-label)*w*input_x)) Output: Input = 10.0000; label = 5.0000 Weight = 0.9472; output = 8.9717 Gradient w.r.t. the weight is 150.4767 And it should be 150.4766 In this example, I have defined a module with only one linear layer (in_dim=out_dim=1 and no bias). w is the weight of this layer; input_x is the input value; label is the desired value. Since the loss is chosen as MSE, the formula for the loss is ((w^2)*input_x-label)^2 Computing by hand, we have dw/dx = 2*((w^2)*input_x-label)*(2*w*input_x) The output of my example above shows that autograd gives the same result as computed by hand, giving me a reason to believe that it can work in this case. But in a real application, the layer may have inputs and outputs of higher dimensions, a nonlinear activation function after it, and the neural network could have multiple layers. What I want to ask is: can I trust autograd to handle such situation, but a lot more complicated than that in my example? How does it work when a layer is called iteratively?
This will work just fine. From the perspective of the autograd engine this isn't a cyclic application since the resulting computation graph will unwrap the repeated computation as a linear sequence. To illustrate this, for a single layer you might have:

x -----> layer --------+
           ^           |
           |  2 times  |
           +-----------+

From the autograd perspective this looks like:

x ---> layer ---> layer ---> layer

Here layer is the same layer copied 3 times over the graph. This means when computing the gradient for the layer's weights they will be accumulated from all the three stages. So when using backward:

x ---> layer ---> layer ---> layer ---> loss_func
                                            |
       lback <--- lback <--- lback <--------+
         |          |          |
         |          v          |
         +---> weights_grad <--+

Here lback represents the local derivative of the layer forward transformation which uses the upstream gradient as an input. Each one adds to the layer's weights_grad. Recurrent Neural Networks use this repeated application of layers (cells) at their basis. See for example this tutorial about Classifying Names with a Character-Level RNN.
https://stackoverflow.com/questions/60584600/
Parallel analog to torch.nn.Sequential container
Just wondering, why can't I find such a container in torch.nn? nn.Sequential is pretty convenient: it allows defining networks in one place, clearly and visually, but it is restricted to very simple ones! With a parallel analog (and a little help from "identity" nodes for residual connections) it would form a complete method for constructing any feedforward net in a combinatorial way. Am I missing something?
Well, maybe it shouldn't be in standard module collection, just because it can be defined really simple: class ParallelModule(nn.Sequential): def __init__(self, *args): super(ParallelModule, self).__init__( *args ) def forward(self, input): output = [] for module in self: output.append( module(input) ) return torch.cat( output, dim=1 ) Inheriting "Parallel" from "Sequential" is ideologically bad, but works well. Now one can define networks like pictured, with following code: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.net = nn.Sequential( nn.Conv2d( 1, 32, 3, padding=1 ), nn.ReLU(), nn.Conv2d( 32, 64, 3, padding=1 ), nn.ReLU(), nn.MaxPool2d( 3, stride=2 ), nn.Dropout2d( 0.25 ), ParallelModule( nn.Conv2d( 64, 64, 1 ), nn.Sequential( nn.Conv2d( 64, 64, 1 ), nn.ReLU(), ParallelModule( nn.Conv2d( 64, 32, (3,1), padding=(1,0) ), nn.Conv2d( 64, 32, (1,3), padding=(0,1) ), ), ), nn.Sequential( nn.Conv2d( 64, 64, 1 ), nn.ReLU(), nn.Conv2d( 64, 64, 3, padding=1 ), nn.ReLU(), ParallelModule( nn.Conv2d( 64, 32, (3,1), padding=(1,0) ), nn.Conv2d( 64, 32, (1,3), padding=(0,1) ), ), ), nn.Sequential( #PrinterModule(), nn.AvgPool2d( 3, stride=1, padding=1 ), nn.Conv2d( 64, 64, 1 ), ), ), nn.ReLU(), nn.Conv2d( 256, 64, 1 ), nn.ReLU(), nn.Conv2d( 64, 128, 3, padding=1 ), nn.ReLU(), nn.MaxPool2d( 3, stride=2 ), nn.Dropout2d( 0.5 ), nn.Flatten(), nn.Linear( 4608, 128 ), nn.ReLU(), nn.Linear( 128, 10 ), nn.LogSoftmax( dim=1 ), ) def forward(self, x): return self.net.forward( x )
https://stackoverflow.com/questions/60586559/
Appending missing values according to index
Say I have a tensor values = torch.tensor([5, 3, 2, 8]) and a corresponding index for the values index = torch.tensor([0, 2, 4, 5]) and suppose I want to insert a fixed value (100) at the missing indices (1 and 3), such that I get values = torch.tensor([5, 100, 3, 100, 2, 8]) Is there a vectorized way to do it in PyTorch (or numpy)?
You can fill it with 100 first and then fill it with the original values. in pytorch import torch result = torch.empty(6, dtype = torch.int32).fill_(100) values = torch.tensor([5, 3, 2, 8], dtype = torch.int32) index = torch.tensor([0, 2, 4, 5]) result[index] = values print(result) in numpy import numpy as np result = np.full((6,), 100) index = np.array([0, 2, 4, 5]) values = np.array([5, 3, 2, 8]) result[index] = values print(result)
https://stackoverflow.com/questions/60586665/
Getting the index of successive minimum comparisons
Say I have a tensor values = torch.tensor([5., 4., 8., 3.]) and I want to take the minimum over every 2 successive values, meaning min(5., 4.) = 4. min(8., 3.) = 3. Is there a vectorized way of doing it while still obtaining the relative index of the minimum? Meaning what I want as output is: min_index = [1, 1] #min_index[0] == 1 as 4. is the minimum of (5., 4.) and is at index 1 of (5., 4.) #min_index[1] == 1 as 3. is the minimum of (8., 3.) and is at index 1 of (8., 3.)
I think reshaping your tensor will make it a lot easier. After that the torch.min automatically returns the minimum values and indexes. import torch values = torch.tensor([5., 4., 8., 3.]) values_reshaped = values.reshape(-1,2) # works for any length minimums, index = torch.min(values_reshaped, axis = -1) print(minimums) # tensor of the minimum values print(index) # tensor of indexes
https://stackoverflow.com/questions/60586724/
How to perform advanced indexing in PyTorch?
Is there a way of doing the following without looping? S, N, H = 9, 7, 4 a = torch.randn(S, N, H) # tensor with integer values between 1, S of shape (N,) lens = torch.randint(1, S + 1, (N,)) res = torch.zeros(N, H) for i in range(N): res[i] = a[lens[i] - 1, i, :]
Yes, I believe this works. import torch S, N, H = 9, 7, 4 a = torch.randn(S, N, H) # tensor with integer values between 1, S of shape (N,) lens = torch.randint(0, S, (N,)) i = torch.tensor(range(0,7)) res = torch.zeros(N, H) res = a[lens, i, :] print(res) And why did you make lens range from 1 to S and then do lens[i]-1? I just changed it so lens ranges from 0 to S - 1 for convenience. However, if you do need lens to range from 1 to S, you can change res = a[lens, i, :] to res = a[lens-1, i, :]
https://stackoverflow.com/questions/60590722/
Any guarantees that Torch won't mess up with an already allocated CUDA array?
Assume we allocated some array on our GPU through other means than PyTorch, for example by creating a GPU array using numba.cuda.device_array. Will PyTorch, when allocating later GPU memory for some tensors, accidentally overwrite the memory space that is being used for our first CUDA array? In general, since PyTorch and Numba use the same CUDA runtime and thus I assume the same mechanism for memory management, are they automatically aware of memory regions used by other CUDA programs or does each one of them see the entire GPU memory as his own? If it's the latter, is there a way to make them aware of allocations by other CUDA programs? EDIT: figured this would be an important assumption: assume that all allocations are done by the same process.
Will PyTorch, when allocating later GPU memory for some tensors, accidentally overwrite the memory space that is being used for our first CUDA array? No. are they automatically aware of memory regions used by other CUDA programs ... They are not "aware", but each process gets its own separate context ... ... or does each one of them see the entire GPU memory as his own? .... and contexts have their own address spaces and isolation. So neither, but there is no risk of memory corruption. If it's the latter, is there a way to make them aware of allocations by other CUDA programs? If by "aware" you mean "safe", then that happens automatically. If by "aware" you imply some sort of interoperability, then that is possible on some platforms, but it is not automatic. ... assume that all allocations are done by the same process. That is a different situation. In general, the same process implies a shared context, and shared contexts share a memory space, but all the normal address space protection rules and facilities apply, so there is not a risk of loss of safety.
https://stackoverflow.com/questions/60593317/
Loading & Freezing a Pretrained Model to Combine with a New Network
I have a pretrained model and would like to build a classifier on top of it. I’m trying to load and freeze the weights of the pretrained model, and pass its outputs to the new classifier, which I’d like to optimise. Here is what I have so far, I’m a little stuck on a TypeError: forward() missing 1 required positional argument: 'x' error from the nn.Sequential line: import model #model.py contains the architecture of the pretrained model class Classifier(nn.Module): def __init__(self): ... def forward(self, x): ... net = model.Model() net.load_state_dict(checkpoint["net"]) for c in net.children(): for param in child.parameters(): params.requires_grad = False model = nn.Sequential(nn.ModuleList(net()), Classifier())
I finally solved this issue after a discussion with @ptrblck from the PyTorch Forums. The solution is similar to Shai's answer, only that because net contains an instance of the model.Model class, one should do model = nn.Sequential(net, Classifier()) instead, without calling nn.ModuleList().
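Putting the pieces together, a sketch of the whole pattern (model.Model, checkpoint and Classifier are the names from the question; the learning rate is illustrative, and the freezing loop is written out correctly since the snippet in the question mixed up its loop variables):

import torch
import torch.nn as nn
import model                              # model.py from the question

net = model.Model()
net.load_state_dict(checkpoint["net"])    # checkpoint loaded as in the question

for param in net.parameters():            # freeze the pretrained weights
    param.requires_grad = False

full_model = nn.Sequential(net, Classifier())

# Hand the optimizer only the trainable (classifier) parameters.
trainable = [p for p in full_model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)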
https://stackoverflow.com/questions/60595325/
Get value of variable index in particular dimension
Say I have a tensor value = torch.tensor([ [[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]], ]) essentially with shape (2,2,3). Now say I have an index = [1, 0], which means I want to take: # row 1 of [[0, 0, 0], [1, 1, 1]], giving me: [1, 1, 1] # row 0 of [[2, 2, 2], [3, 3, 3]], giving me: [2, 2, 2] So that the final output is: output = torch.tensor([[1, 1, 1], [2, 2, 2]]) Is there a vectorized way to achieve this?
You can use advanced indexing. I can't find a good PyTorch document about this, but I believe it works the same as NumPy, so here's the NumPy documentation about indexing. import torch value = torch.tensor([ [[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]], ]) index = [1, 0] i = range(0,2) result = value[i, index] # same as result = value[i, index, :] print(result)
https://stackoverflow.com/questions/60596339/
Is there any way to use both a GPU accelerator and Torch in google cloud AI platform for model deployment?
I already have a torch model (BERT), and I'd like to use the ai-platform service to get online predictions using a GPU, but I can't figure out how to do it. The following command, without an accelerator, works: gcloud alpha ai-platform versions create {VERSION} --model {MODEL_NAME} --origin=gs://{BUCKET}/models/ --python-version=3.5 --runtime-version=1.14 --package-uris=gs://{BUCKET}/packages/my-torch-package-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.0.0-cp35-cp35m-linux_x86_64.whl --machine-type=mls1-c4-m4 --prediction-class=predictor.CustomModelPrediction However, if I try to add the accelerator parameter: --accelerator=^:^count=1:type=nvidia-tesla-k80 I get the following error message: ERROR: (gcloud.alpha.ai-platform.versions.create) INVALID_ARGUMENT: Field: version.machine_type Error: GPU accelerators are not supported on the requested machine type: mls1-c4-m4 - '@type': type.googleapis.com/google.rpc.BadRequest fieldViolations: - description: 'GPU accelerators are not supported on the requested machine type: mls1-c4-m4' field: version.machine_type But if I use a different machine type, that I know I can use with an accelerator, I get the following error: ERROR: (gcloud.alpha.ai-platform.versions.create) FAILED_PRECONDITION: Field: framework Error: Machine type n1-highcpu-4 does not support CUSTOM_CLASS. - '@type': type.googleapis.com/google.rpc.BadRequest fieldViolations: - description: Machine type n1-highcpu-4 does not support CUSTOM_CLASS. field: framework It's like any machine that supports GPU accelerators doesn't support custom classes (required AFAIK to use Torch), and any machine that supports custom classes doesn't support GPU accelerators. Any way to make it work? There are a bunch of tutorials on how to use ai-platform with Torch, but I can't see the point of using gcloud to train and predict if you have to do everything on the CPU so that feels very odd to me.
As of now, using Custom Prediction Routines is in Beta. In addition, using machine types other than mls1-c1-m2 is also in Beta. Nevertheless, as you can see in the previously referenced link, GPUs are not available for mls1-like machines. At the same time, these are the only machine types that allow models outside TensorFlow. In summary, deploying your prediction model in Torch with a GPU is probably not a feasible option right now.
https://stackoverflow.com/questions/60598419/
How to implement differentiable hamming loss in pytorch?
How to implement a differentiable loss function that counts the number of wrong predictions? output = [1,0,4,10] target = [1,2,4,15] loss = np.count_nonzero(output != target) / len(output) # [0,1,0,1] -> 2 / 4 -> 0.5 I have tried a few implementations but they are not differentiable. RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn def hamming_loss(output, target): #loss = torch.tensor(torch.nonzero(output != target).size(0)).double() / target.size(0) #loss = torch.sum((output != target), dim=0).double() / target.size(0) loss = torch.mean((output != target).double()) return loss Maybe there is some similar but differential loss function?
Why don't you replace your discrete predictions (e.g., [1, 0, 4, 10]) with "soft" predictions, i.e. a probability for each label (the output becomes a 4 x (num labels) matrix of probability vectors)? Once you have "soft" predictions, you can compute the cross-entropy loss between the predicted output probabilities and the desired targets.
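A minimal sketch of that suggestion (the label-space size of 16 is an arbitrary choice so that the target value 15 from the question is representable):

import torch
import torch.nn as nn

num_labels = 16
logits = torch.randn(4, num_labels, requires_grad=True)   # "soft" predictions, e.g. a model's output
target = torch.tensor([1, 2, 4, 15])

loss = nn.CrossEntropyLoss()(logits, target)              # differentiable surrogate
loss.backward()

# The hard hamming loss can still be reported (but not trained on):
hamming = (logits.argmax(dim=1) != target).float().mean()
print(hamming)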
https://stackoverflow.com/questions/60599365/
How to collect the predictions for each observation in pytorch?
I'm trying to build an RNN myself using a loop rather than the nn.RNN module, but I would still like to use PyTorch's backward for backpropagation. Each observation makes one prediction, which is then used for subsequent predictions. How should each prediction be stored so that we can calculate the loss and backpropagate? Can we just create a normal list and append each prediction to it? Or will that not allow backpropagation?
Yes. That would work. PyTorch creates a computational graph with nodes for all the operations that you perform during the forward pass. So, for each operation in the for loop, no matter how you store the outputs (or even if you discard them), you should still be able to call backward on the loss as the information required for computing gradients is already there in the graph. Here's a toy example: import torch from torch import nn import torch.nn.functional as F class RNN(nn.Module): def __init__(self, in_dim, hidden_dim, out_dim): super().__init__() self.wh = nn.Linear(hidden_dim, hidden_dim) self.wx = nn.Linear(in_dim, hidden_dim) self.wo = nn.Linear(hidden_dim, out_dim) self.hidden_dim = hidden_dim def forward(self, xs, hidden): outputs = [] hiddens = [] hiddens.append(hidden) for i, x in enumerate(xs): hiddens.append(torch.tanh(self.wx(x) + self.wh(hiddens[i]))) outputs.append(F.log_softmax(self.wo(hiddens[i+1]), dim=0)) return outputs, hiddens def init_hidden(self): return torch.zeros(self.hidden_dim) # Initialize the input and output x = torch.tensor([[1., 0., 0.], [0., 1., 0.], [1., 0., 0.]]) y = torch.tensor([1]) # Initialize the network, hidden state and loss function rnn = RNN(3, 10, 2) hidden = rnn.init_hidden() nll = nn.NLLLoss() # Forward pass outputs, hidden_states = rnn(x, hidden) # Compute loss loss = nll(outputs[-1].unsqueeze(0), y) # Call Backward on the loss loss.backward() # Inspect the computed gradients print(rnn.wh.weight.grad) # Gives a tensor of shape (10, 10) in this case
https://stackoverflow.com/questions/60605649/
Pytorch ImageNet dataset
I am unable to download the original ImageNet dataset from their official website. However, I found out that PyTorch has ImageNet as one of its torchvision datasets. Q1. Is that the original ImageNet dataset? Q2. How do I get the classes for the dataset, like it's done in CIFAR-10: classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
The torchvision.datasets.ImageNet is just a class which allows you to work with the ImageNet dataset. You have to download the dataset yourself (e.g. from http://image-net.org/download-images) and pass the path to it as the root argument to the ImageNet class object. Note that the option to download it directly by passing the flag download=True is no longer possible: if download is True: msg = ("The dataset is no longer publicly accessible. You need to " "download the archives externally and place them in the root " "directory.") raise RuntimeError(msg) elif download is False: msg = ("The use of the download flag is deprecated, since the dataset " "is no longer publicly accessible.") warnings.warn(msg, RuntimeWarning) (source) If you just need to get the class names and the corresponding indices without downloading the whole dataset (e.g. if you are using a pretrained model and want to map the predictions to labels), then you can download them e.g. from here or from this github gist.
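For example, assuming the archives have already been placed under a local directory (the path below is a placeholder), the class names are then available on the dataset object:

import torchvision
import torchvision.transforms as T

dataset = torchvision.datasets.ImageNet(
    root="/path/to/imagenet",        # directory containing the downloaded archives
    split="val",
    transform=T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()]),
)

print(len(dataset.classes))          # 1000
print(dataset.classes[0])            # a tuple of synonyms, e.g. ('tench', 'Tinca tinca')
# dataset.class_to_idx maps each name to its integer label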
https://stackoverflow.com/questions/60607824/
BertForSequenceClassification vs. BertForMultipleChoice for sentence multi-class classification
I'm working on a text classification problem (e.g. sentiment analysis), where I need to classify a text string into one of five classes. I just started using the Huggingface Transformer package and BERT with PyTorch. What I need is a classifier with a softmax layer on top so that I can do 5-way classification. Confusingly, there seem to be two relevant options in the Transformer package: BertForSequenceClassification and BertForMultipleChoice. Which one should I use for my 5-way classification task? What are the appropriate use cases for them? The documentation for BertForSequenceClassification doesn't mention softmax at all, although it does mention cross-entropy. I am not sure if this class is only for 2-class classification (i.e. logistic regression). Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). The documentation for BertForMultipleChoice mentions softmax, but the way the labels are described, it sound like this class is for multi-label classification (that is, a binary classification for multiple labels). Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. Thank you for any help.
The answer to this lies in the (admittedly very brief) description of what the tasks are about: [BertForMultipleChoice] [...], e.g. for RocStories/SWAG tasks. When looking at the paper for SWAG, it seems that the task is actually learning to choose from varying options. This is in contrast to your "classical" classification task, in which the "choices" (i.e., classes) do not vary across your samples, which is exactly what BertForSequenceClassification is for. Both variants can in fact handle an arbitrary number of classes (in the case of BertForSequenceClassification), respectively choices (for BertForMultipleChoice), via changing the labels parameter in the config. But, since it seems like you are dealing with a case of "classical classification", I suggest using the BertForSequenceClassification model. Briefly addressing the missing Softmax in BertForSequenceClassification: since classification tasks can compute loss across classes independent of the sample (unlike multiple choice, where your distribution is changing), this allows you to use Cross-Entropy Loss, which factors in Softmax in the backpropagation step for increased numerical stability.
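A minimal sketch of the 5-way setup with BertForSequenceClassification (the checkpoint name, example sentence and label are illustrative; the exact return type of the forward call differs slightly across transformers versions):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=5)

inputs = tokenizer("the movie was surprisingly good", return_tensors="pt")
labels = torch.tensor([3])                      # illustrative gold class in [0, 4]

outputs = model(**inputs, labels=labels)
loss, logits = outputs[:2]                      # cross-entropy loss and (1, 5) logits
probs = torch.softmax(logits, dim=-1)           # softmax only needed for reporting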
https://stackoverflow.com/questions/60610280/
Freeze only some lines of a torch.nn.Embedding object
I am quite a newbie to Pytorch, and I am trying to implement a sort of "post-training" procedure on embeddings. I have a vocabulary with a set of items, and I have learned one vector for each of them. I keep the learned vectors in a nn.Embedding object. What I'd like to do now is to add a new item to the vocabulary without updating the already learned vectors. The embedding for the new item would be initialized randomly, and then trained while keeping all the other embeddings frozen. I know that in order to prevent a nn.Embedding to be trained, I need to set to False its requires_grad variable. I have also found this other question that is similar to mine. The best answer proposes to either store the frozen vectors and the vector to train in different nn.Embedding objects, the former with requires_grad = False and the latter with requires_grad = True or store the frozen vectors and the new one in the same nn.Embedding object, computing the gradient on all vectors, but descending it is only on the dimensions of the vector of of the new item. This, however, leads to a relevant degradation in performances (which I want to avoid, of course). My problem is that I really need to store the vector for the new item in the same nn.Embedding object as the frozen vectors of the old items. The reason for this constraint is the following: when building my loss function with the embeddings of the items (old and new), I need to lookup the vectors based on the ids of the items, and for performances reasons I need to use Python slicing. In other words, given a list of item ids item_ids, I need to do something like vecs = embedding[item_ids]. If I used two different nn.Embedding items for the old items and the and new one I would need to use an explicit for-loop with if-else conditions, which would lead to worse performances. Is there any way I can do this?
If you look at the implementation of nn.Embedding it uses the functional form of embedding in the forward pass. Therefore, I think you could implement a custom module that does something like this: import torch from torch.nn.parameter import Parameter import torch.nn.functional as F weights_freeze = torch.rand(10, 5) # Don't make parameter weights_train = Parameter(torch.rand(2, 5)) weights = torch.cat((weights_freeze, weights_train), 0) idx = torch.tensor([[11, 1, 3]]) lookup = F.embedding(idx, weights) # Desired result print(lookup) lookup.sum().backward() # 11 corresponds to idx 1 in weights_train so this has grad print(weights_train.grad)
https://stackoverflow.com/questions/60615832/
Pytorch Autograd gives different gradients when using .clamp instead of torch.relu
I'm still working on my understanding of the PyTorch autograd system. One thing I'm struggling at is to understand why .clamp(min=0) and nn.functional.relu() seem to have different backward passes. It's especially confusing as .clamp is used equivalently to relu in PyTorch tutorials, such as https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-nn. I found this when analysing the gradients of a simple fully connected net with one hidden layer and a relu activation (linear in the outputlayer). to my understanding the output of the following code should be just zeros. I hope someone can show me what I am missing. import torch dtype = torch.float x = torch.tensor([[3,2,1], [1,0,2], [4,1,2], [0,0,1]], dtype=dtype) y = torch.ones(4,4) w1_a = torch.tensor([[1,2], [0,1], [4,0]], dtype=dtype, requires_grad=True) w1_b = w1_a.clone().detach() w1_b.requires_grad = True w2_a = torch.tensor([[-1, 1], [-2, 3]], dtype=dtype, requires_grad=True) w2_b = w2_a.clone().detach() w2_b.requires_grad = True y_hat_a = torch.nn.functional.relu(x.mm(w1_a)).mm(w2_a) y_a = torch.ones_like(y_hat_a) y_hat_b = x.mm(w1_b).clamp(min=0).mm(w2_b) y_b = torch.ones_like(y_hat_b) loss_a = (y_hat_a - y_a).pow(2).sum() loss_b = (y_hat_b - y_b).pow(2).sum() loss_a.backward() loss_b.backward() print(w1_a.grad - w1_b.grad) print(w2_a.grad - w2_b.grad) # OUT: # tensor([[ 0., 0.], # [ 0., 0.], # [ 0., -38.]]) # tensor([[0., 0.], # [0., 0.]]) #
The reason is that clamp and relu produce different gradients at 0. Checking with a scalar tensor x = 0 the two versions: (x.clamp(min=0) - 1.0).pow(2).backward() versus (relu(x) - 1.0).pow(2).backward(). The resulting x.grad is 0 for the relu version but it is -2 for the clamp version. That means relu chooses x == 0 --> grad = 0 while clamp chooses x == 0 --> grad = 1.
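A quick way to reproduce that check (a sketch, not part of the original answer), assuming a scalar input at exactly 0:
import torch
import torch.nn.functional as F

for fn in (lambda t: t.clamp(min=0), F.relu):
    x = torch.zeros(1, requires_grad=True)
    (fn(x) - 1.0).pow(2).sum().backward()
    print(x.grad)  # per the answer above: -2 for the clamp version, 0 for the relu version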
https://stackoverflow.com/questions/60618346/
Torch squeeze and the batch dimension
does anyone here know if the torch.squeeze function respects the batch (e.g. first) dimension? From some inline code it seems it does not.. but maybe someone else knows the inner workings better than I do. Btw, the underlying problem is that I have tensor of shape (n_batch, channel, x, y, 1). I want to remove the last dimension with a simple function, so that I end up with a shape of (n_batch, channel, x, y). A reshape is of course possible, or even selecting the last axis. But I want to embed this functionality in a layer so that I can easily add it to a ModuleList or Sequence object. EDIT: just found out that for Tensorflow (2.5.0) the function tf.linalg.diag DOES respect batch dimension. Just a FYI that it might differ per function you are using
No! squeeze doesn't respect the batch dimension. It's a potential source of error if you use squeeze when the batch dimension may be 1. Rule of thumb is that only classes and functions in torch.nn respect batch dimensions by default. This has caused me headaches in the past. I recommend using reshape or only using squeeze with the optional input dimension argument. In your case you could use .squeeze(4) to only remove the last dimension. That way nothing unexpected happens. Squeeze without the input dimension has led me to unexpected results, specifically when: the input shape to the model may vary; the batch size may vary; or nn.DataParallel is being used (in which case the batch size for a particular instance may be reduced to 1).
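A small illustration of the pitfall and of wrapping the safe call as a layer (a sketch, not from the original answer):
import torch

x = torch.rand(1, 3, 8, 8, 1)    # batch size happens to be 1
print(x.squeeze().shape)         # torch.Size([3, 8, 8]): the batch dimension is gone too
print(x.squeeze(4).shape)        # torch.Size([1, 3, 8, 8]): only the last dimension removed

class SqueezeLast(torch.nn.Module):
    # drop only the trailing dimension, so it can go into a Sequential or ModuleList
    def forward(self, x):
        return x.squeeze(-1)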
https://stackoverflow.com/questions/60619886/
PyTorch Inequality Gradient
I wrote a PyTorch model roughly as follows: import torch.nn as nn class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.layer1 = nn.Sequential(nn.Linear(64 * 64, 16), nn.LeakyReLU(0.2)) self.layer2 = nn.Sequential(nn.Linear(16, 32), nn.LeakyReLU(0.2)) self.layer3 = nn.Sequential(nn.Linear(32, 64), nn.LeakyReLU(0.2)) self.layer4 = nn.Sequential(nn.Linear(64, 15), nn.Tanh()) def forward(self, x): return (self.layer4(self.layer3(self.layer2(self.layer1(x)))) < 0).float() Notice what I want to do: I want forward to return a tensor of 0s and 1s. However, this does not train, probably because the derivative of the inequality is zero. How can I make a model like this one train, for example, if I want to do image segmentation?
As you said, you can't train something like x<0. You should be fine even if you get rid of the <0 part and use return self.layer4(self.layer3(self.layer2(self.layer1(x)))) as long as you are using the appropriate loss. I think what you would want to use is nn.BCEWithLogitsLoss. In that case you should get the Tanh out of the last layer since nn.BCEWithLogitsLoss internally computes with sigmoid. (There are options of using nn.BCELoss() with sigmoid at the last layer, or even sticking with Tanh, but I don't think there's a reason to take the long way.) So in the training phase, the neural network tries as hard as it can to fit the output to 0s and 1s. After that, in the testing phase, you take the output of the layer and apply some kind of threshold to change the values to precisely 1s and 0s (like you did with (output<0).float()). You will find useful sources if you search for multilabel classification.
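A rough, self-contained sketch of that training/inference split (the layer sizes follow the question; the dummy data and the 0.5 threshold are assumptions, not from the original answer):
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64 * 64, 16), nn.LeakyReLU(0.2),
                    nn.Linear(16, 32), nn.LeakyReLU(0.2),
                    nn.Linear(32, 64), nn.LeakyReLU(0.2),
                    nn.Linear(64, 15))            # no Tanh and no "< 0": the net returns raw logits
criterion = nn.BCEWithLogitsLoss()                # applies the sigmoid internally

x = torch.rand(8, 64 * 64)                        # dummy input batch
target = torch.randint(0, 2, (8, 15)).float()     # dummy binary labels, same shape as the output

loss = criterion(net(x), target)                  # training phase: fit the logits to the 0/1 targets
loss.backward()

preds = (torch.sigmoid(net(x)) > 0.5).float()     # testing phase: threshold to hard 0s and 1s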
https://stackoverflow.com/questions/60626560/
Getting 'tensor is not a torch image' for data type
As the question suggests, I'm trying to convert images to tensor. X, y = train_sequence[idx] images = Variable(torch.from_numpy(X)).to(device) # [batch, channel, H, W] masks = Variable(torch.from_numpy(y)).to(device) print(type(images)) ## Output: <class 'torch.Tensor'> images = transforms.Normalize((0.5, 0.5, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5,0.5, 0.5))(images) masks = transforms.Normalize((0.5), (0.5))(masks) But I get the error at ---> 19 images = transforms.Normalize((0.5, 0.5, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5,0.5, 0.5))(images) TypeError: tensor is not a torch image.
This is because as of now, torchvision.transforms.Normalize only supports images with 2 or 3 channels, without the batch dimension, i.e. (C, H, W). So instead of passing in a 4D tensor, something like this would work: image = torch.ones(3, 64, 64) image = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))(image) Also, because the 0.5 values represent the mean and standard deviation of the image channels, there should normally only be 3 channels (the mean and std are given per channel, not for the batch dimension), so instead of using a tuple of length 5, do (0.5, 0.5, 0.5).
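For example (a sketch, not from the original answer, assuming standard 3-channel images), a single image can be passed directly, while a batched tensor can be normalized image by image:
import torch
from torchvision import transforms

norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))

img = torch.rand(3, 64, 64)          # single image of shape (C, H, W): works
out = norm(img)

batch = torch.rand(4, 3, 64, 64)     # batched images: normalize one by one
out_batch = torch.stack([norm(im) for im in batch])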
https://stackoverflow.com/questions/60626832/
Index out of range error while training dataset
I am trying to train MaskRCNN to detect and segment apples using the dataset from this paper, github link to code being used I am simply following the instructions as provided in the ReadMe file.. Here is the output on console (venv) PS > python train_rcnn.py --data_path 'D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\datasets\apples-minneapple' --model mrcnn --epochs 50 --output-dir 'D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\samples\apples' mrcnn Namespace(batch_size=2, data_path='D:\\Research Report\\tensorflow\\Mask_RCNN-TRIALS\\Mask_RCNN-master\\datasets\\apples-minneapple', dataset='AppleDataset', device='cuda', epochs=50, lr=0.02, lr_gamma=0.1, lr_step_size=8, lr_steps=[8, 11], model='mrcnn', momentum=0.9, output_dir='D:\\Research Report\\tensorflow\\Mask_RCNN-TRIALS\\Mask_RCNN-master\\samples\\apples', print_freq=20, resume='', weight_decay=0.0001, workers=4) Loading data Creating data loaders Creating model Start training Epoch: [0] [ 0/335] eta: 1:00:28 lr: 0.000080 loss: 2.4100 (2.4100) loss_classifier: 0.8481 (0.8481) loss_box_reg: 0.4164 (0.4164) loss_objectness: 0.9299 (0.9299) loss_rpn_box_reg: 0.2157 (0.2157) time: 10.8327 data: 7.9925 max mem: 2733 Epoch: [0] [ 20/335] eta: 0:06:18 lr: 0.001276 loss: 1.4465 (1.4728) loss_classifier: 0.5526 (0.5496) loss_box_reg: 0.3586 (0.3572) loss_objectness: 0.2666 (0.3418) loss_rpn_box_reg: 0.2233 (0.2242) time: 0.7204 data: 0.0132 max mem: 3247 Epoch: [0] [ 40/335] eta: 0:04:48 lr: 0.002473 loss: 0.9622 (1.2287) loss_classifier: 0.2927 (0.4276) loss_box_reg: 0.3188 (0.3314) loss_objectness: 0.1422 (0.2491) loss_rpn_box_reg: 0.2168 (0.2207) time: 0.7408 data: 0.0210 max mem: 3282 Epoch: [0] [ 60/335] eta: 0:04:05 lr: 0.003669 loss: 0.7924 (1.0887) loss_classifier: 0.2435 (0.3654) loss_box_reg: 0.2361 (0.2983) loss_objectness: 0.1289 (0.2105) loss_rpn_box_reg: 0.1898 (0.2144) time: 0.7244 data: 0.0127 max mem: 3432 Epoch: [0] [ 80/335] eta: 0:03:37 lr: 0.004865 loss: 0.7438 (1.0117) loss_classifier: 0.2565 (0.3376) loss_box_reg: 0.2193 (0.2799) loss_objectness: 0.0776 (0.1835) loss_rpn_box_reg: 0.1983 (0.2108) time: 0.7217 data: 0.0127 max mem: 3432 Epoch: [0] [100/335] eta: 0:03:14 lr: 0.006062 loss: 0.7373 (0.9490) loss_classifier: 0.2274 (0.3156) loss_box_reg: 0.2193 (0.2654) loss_objectness: 0.0757 (0.1643) loss_rpn_box_reg: 0.1867 (0.2037) time: 0.7291 data: 0.0132 max mem: 3432 Epoch: [0] [120/335] eta: 0:02:54 lr: 0.007258 loss: 0.8275 (0.9243) loss_classifier: 0.2689 (0.3094) loss_box_reg: 0.2315 (0.2602) loss_objectness: 0.0867 (0.1539) loss_rpn_box_reg: 0.1883 (0.2008) time: 0.7270 data: 0.0134 max mem: 3432 Epoch: [0] [140/335] eta: 0:02:35 lr: 0.008455 loss: 0.7886 (0.9057) loss_classifier: 0.2573 (0.3029) loss_box_reg: 0.2246 (0.2539) loss_objectness: 0.0724 (0.1455) loss_rpn_box_reg: 0.2459 (0.2035) time: 0.7170 data: 0.0124 max mem: 3432 Epoch: [0] [160/335] eta: 0:02:17 lr: 0.009651 loss: 0.7588 (0.8878) loss_classifier: 0.2341 (0.2948) loss_box_reg: 0.2226 (0.2486) loss_objectness: 0.1032 (0.1427) loss_rpn_box_reg: 0.2020 (0.2016) time: 0.7139 data: 0.0118 max mem: 3432 Epoch: [0] [180/335] eta: 0:02:01 lr: 0.010847 loss: 0.7340 (0.8744) loss_classifier: 0.2331 (0.2898) loss_box_reg: 0.2120 (0.2441) loss_objectness: 0.1086 (0.1392) loss_rpn_box_reg: 0.1993 (0.2012) time: 0.7800 data: 0.0584 max mem: 3432 Epoch: [0] [200/335] eta: 0:01:45 lr: 0.012044 loss: 0.8106 (0.8694) loss_classifier: 0.2616 (0.2873) loss_box_reg: 0.2208 (0.2411) loss_objectness: 0.1117 (0.1397) 
loss_rpn_box_reg: 0.1927 (0.2014) time: 0.7344 data: 0.0143 max mem: 3432 Epoch: [0] [220/335] eta: 0:01:29 lr: 0.013240 loss: 0.8191 (0.8610) loss_classifier: 0.2581 (0.2848) loss_box_reg: 0.2140 (0.2382) loss_objectness: 0.0860 (0.1362) loss_rpn_box_reg: 0.2177 (0.2018) time: 0.7213 data: 0.0126 max mem: 3432 Epoch: [0] [240/335] eta: 0:01:13 lr: 0.014437 loss: 0.7890 (0.8590) loss_classifier: 0.2671 (0.2842) loss_box_reg: 0.2094 (0.2357) loss_objectness: 0.1175 (0.1360) loss_rpn_box_reg: 0.2256 (0.2030) time: 0.7576 data: 0.0564 max mem: 3432 Epoch: [0] [260/335] eta: 0:00:57 lr: 0.015633 loss: 0.8631 (0.8587) loss_classifier: 0.2900 (0.2849) loss_box_reg: 0.2089 (0.2337) loss_objectness: 0.0925 (0.1350) loss_rpn_box_reg: 0.2271 (0.2050) time: 0.7371 data: 0.0220 max mem: 3432 Epoch: [0] [280/335] eta: 0:00:42 lr: 0.016830 loss: 0.8464 (0.8580) loss_classifier: 0.2679 (0.2840) loss_box_reg: 0.2156 (0.2321) loss_objectness: 0.0940 (0.1346) loss_rpn_box_reg: 0.2345 (0.2073) time: 0.7379 data: 0.0143 max mem: 3432 Epoch: [0] [300/335] eta: 0:00:27 lr: 0.018026 loss: 0.7991 (0.8519) loss_classifier: 0.2485 (0.2819) loss_box_reg: 0.2125 (0.2305) loss_objectness: 0.0819 (0.1315) loss_rpn_box_reg: 0.2217 (0.2080) time: 0.8549 data: 0.1419 max mem: 3450 Epoch: [0] [320/335] eta: 0:00:11 lr: 0.019222 loss: 0.6906 (0.8432) loss_classifier: 0.2362 (0.2791) loss_box_reg: 0.2036 (0.2285) loss_objectness: 0.0662 (0.1285) loss_rpn_box_reg: 0.1801 (0.2070) time: 0.7257 data: 0.0238 max mem: 3450 Epoch: [0] [334/335] eta: 0:00:00 lr: 0.020000 loss: 0.7822 (0.8441) loss_classifier: 0.2501 (0.2785) loss_box_reg: 0.2224 (0.2285) loss_objectness: 0.1135 (0.1296) loss_rpn_box_reg: 0.1948 (0.2075) time: 0.7249 data: 0.0139 max mem: 3450 Epoch: [0] Total time: 0:04:18 (0.7707 s / it) Traceback (most recent call last): File "train_rcnn.py", line 143, in <module> main(args) File "train_rcnn.py", line 109, in main evaluate(model, data_loader_test, device=device) File "C:\Users\___\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\autograd\grad_mode.py", line 49, in decorate_no_grad return func(*args, **kwargs) File "D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\samples\apples\utility\engine.py", line 78, in evaluate coco = get_coco_api_from_dataset(data_loader.dataset) File "D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\samples\apples\utility\coco_utils.py", line 205, in get_coco_api_from_dataset return convert_to_coco_api(dataset) File "D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\samples\apples\utility\coco_utils.py", line 154, in convert_to_coco_api img, targets = ds[img_idx] File "D:\Research Report\tensorflow\Mask_RCNN-TRIALS\Mask_RCNN-master\samples\apples\data\apple_dataset.py", line 22, in __getitem__ mask_path = os.path.join(self.root_dir, "masks", self.masks[idx]) IndexError: list index out of range This is the file that is run in order to train the network import datetime import os import time import torch import torch.utils.data import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor from data.apple_dataset import AppleDataset from utility.engine import train_one_epoch, evaluate import utility.utils as utils import utility.transforms as T ###################################################### # Train either a Faster-RCNN or Mask-RCNN predictor # using the MinneApple dataset ###################################################### def 
get_transform(train): transforms = [] transforms.append(T.ToTensor()) if train: transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) def get_maskrcnn_model_instance(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return model def get_frcnn_model_instance(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model def main(args): print(args) device = args.device # Data loading code print("Loading data") num_classes = 2 dataset = AppleDataset(os.path.join(args.data_path, 'train'), get_transform(train=True)) dataset_test = AppleDataset(os.path.join(args.data_path, 'test'), get_transform(train=False)) print("Creating data loaders") data_loader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, collate_fn=utils.collate_fn) data_loader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1, shuffle=False, num_workers=args.workers, collate_fn=utils.collate_fn) print("Creating model") # Create the correct model type if args.model == 'maskrcnn': model = get_maskrcnn_model_instance(num_classes) else: model = get_frcnn_model_instance(num_classes) # Move model to the right device model.to(device) params = [p for p in model.parameters() if p.requires_grad] optimizer = torch.optim.SGD(params, lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay) # lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=args.lr_step_size, gamma=args.lr_gamma) lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_steps, gamma=args.lr_gamma) if args.resume: checkpoint = torch.load(args.resume, map_location='cpu') model.load_state_dict(checkpoint['model']) optimizer.load_state_dict(checkpoint['optimizer']) lr_scheduler.load_state_dict(checkpoint['lr_scheduler']) print("Start training") start_time = time.time() for epoch in range(args.epochs): train_one_epoch(model, optimizer, data_loader, device, epoch, args.print_freq) lr_scheduler.step() if args.output_dir: torch.save(model.state_dict(), os.path.join(args.output_dir, 'model_{}.pth'.format(epoch))) # evaluate after every epoch evaluate(model, data_loader_test, device=device) total_time = time.time() - start_time total_time_str = str(datetime.timedelta(seconds=int(total_time))) print('Training time {}'.format(total_time_str)) if __name__ == "__main__": import argparse parser = argparse.ArgumentParser(description='PyTorch Detection Training') parser.add_argument('--data_path', default='~~~~', help='dataset') parser.add_argument('--dataset', 
default='AppleDataset', help='dataset') parser.add_argument('--model', default='maskrcnn', help='model') parser.add_argument('--device', default='cuda', help='device') parser.add_argument('-b', '--batch-size', default=2, type=int) parser.add_argument('--epochs', default=13, type=int, metavar='N', help='number of total epochs to run') parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 16)') parser.add_argument('--lr', default=0.02, type=float, help='initial learning rate') parser.add_argument('--momentum', default=0.9, type=float, metavar='M', help='momentum') parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float, metavar='W', help='weight decay (default: 1e-4)', dest='weight_decay') parser.add_argument('--lr-step-size', default=8, type=int, help='decrease lr every step-size epochs') parser.add_argument('--lr-steps', default=[8, 11], nargs='+', type=int, help='decrease lr every step-size epochs') parser.add_argument('--lr-gamma', default=0.1, type=float, help='decrease lr by a factor of lr-gamma') parser.add_argument('--print-freq', default=20, type=int, help='print frequency') parser.add_argument('--output-dir', default='.', help='path where to save') parser.add_argument('--resume', default='', help='resume from checkpoint') args = parser.parse_args() print(args.model) assert(args.model in ['mrcnn', 'frcnn']) if args.output_dir: utils.mkdir(args.output_dir) main(args) Apple_dataset.py is as follows import os import numpy as np import torch from PIL import Image ##################################### # Class that takes the input instance masks # and extracts bounding boxes on the fly ##################################### class AppleDataset(object): def __init__(self, root_dir, transforms): self.root_dir = root_dir self.transforms = transforms # Load all image and mask files, sorting them to ensure they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root_dir, "images")))) self.masks = list(sorted(os.listdir(os.path.join(root_dir, "masks")))) def __getitem__(self, idx): # Load images and masks img_path = os.path.join(self.root_dir, "images", self.imgs[idx]) mask_path = os.path.join(self.root_dir, "masks", self.masks[idx]) img = Image.open(img_path).convert("RGB") mask = Image.open(mask_path) # Each color of mask corresponds to a different instance with 0 being the background # Convert the PIL image to np array mask = np.array(mask) obj_ids = np.unique(mask) # Remove background id obj_ids = obj_ids[1:] # Split the color-encoded masks into a set of binary masks masks = mask == obj_ids[:, None, None] # Get bbox coordinates for each mask num_objs = len(obj_ids) boxes = [] h, w = mask.shape for ii in range(num_objs): pos = np.where(masks[ii]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) if xmin == xmax or ymin == ymax: continue xmin = np.clip(xmin, a_min=0, a_max=w) xmax = np.clip(xmax, a_min=0, a_max=w) ymin = np.clip(ymin, a_min=0, a_max=h) ymax = np.clip(ymax, a_min=0, a_max=h) boxes.append([xmin, ymin, xmax, ymax]) # Convert everything into a torch.Tensor boxes = torch.as_tensor(boxes, dtype=torch.float32) # There is only one class (apples) labels = torch.ones((num_objs,), dtype=torch.int64) masks = torch.as_tensor(masks, dtype=torch.uint8) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # All instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes 
target["labels"] = labels target["masks"] = masks target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) def get_img_name(self, idx): return self.imgs[idx] How do i fix the index going out of range??? OR what is the underlying issue that needs to be addressed here?? EDIT1: ok.. so whats happening here is i have two folders "train" and "test".. the training folder has the images and masks, while test folder has only images.. the apple_dataset.py is written such that its looking for masks folder in both train and test folders.. i think i need to change the code such that it looks for masks in only the train folder and not the test set
Fixed this by creating a dummy folder called 'masks' in the 'test' folder: just copy-paste the one from 'train' with all the masks. The train and predict scripts won't really use it, so there shouldn't be any issues here. Also look at this issue for more changes that need to be made.
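Since evaluate() needs ground-truth masks to compute its metrics, another option (a sketch reusing AppleDataset, get_transform and args from the script above; not part of the original answer) is to hold out part of the annotated train split for evaluation instead of pointing at the unannotated test folder:
import os
import torch

# two views of the annotated train split, with train/eval transforms
dataset = AppleDataset(os.path.join(args.data_path, 'train'), get_transform(train=True))
dataset_test = AppleDataset(os.path.join(args.data_path, 'train'), get_transform(train=False))

# hold out the last 50 annotated images for evaluation (the split size is arbitrary)
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])
# then build data_loader and data_loader_test from these two datasets as in main()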
https://stackoverflow.com/questions/60627275/
cmake - linking static library pytorch cannot find its internal functions during build
I'm trying to build a program using cmake. For several reasons, the program must be built using static libraries rather than dynamic libraries, and I need to use PyTorch so this is what I've done: Downloaded and installed PyTorch static library (I've found libtorch.a in the proper path, in /home/me/pytorch/torch/lib) Made CMakeLists.txt with the following contents: cmake_minimum_required(VERSION 3.5.1 FATAL_ERROR) project(example-app LANGUAGES CXX) find_package(Torch REQUIRED) add_executable(example-app example-app.cpp argparse/argparse.cpp) target_link_libraries(example-app "${TORCH_LIBRARIES}" -static -fopenmp) set_property(TARGET example-app PROPERTY CXX_STANDARD 14) FYI, example-app.cpp is the file with the main function, and argparse/ is a directory with some source code for functions called in example-app.cpp It works until cmake -DCMAKE_PREFIX_PATH=/home/me/pytorch/torch .., but the following build incurs some errors, saying it could not find the reference to some functions, namely functions starting with fbgemm::. fbgemm is (as long as I know) some sort of GEMM library used in implementing PyTorch. It seems to me that while linking the static PyTorch library, its internal libraries like fbgemm stuff have not been linked properly, but I'm not an expert on cmake and honestly not entirely sure. Am I doing something wrong, or is there a workaround for this problem? Any help or push in the right direction would be greatly appreciated. P.S. The exact error has not been posted because it is way too long, but it consists of mostly undefined reference to ~ errors. If looking at the error message might be helpful for some people, I'd be happy to edit the question and post it. building and running the file works fine if I remove the parts that require the library's functions from the code without commenting out #include <torch/torch.h> from example-app.cpp.
Lately went through similar process with static linking of PyTorch and to be honest it wasn't too pretty. I will outline the steps I have undertaken (you can find exact source code in torchlambda, here is CMakeLists.txt (it also includes AWS SDK and AWS Lambda static builds), here is a script building pytorch from source ( cloning and building via /scripts/build_mobile.sh with only CPU support)), though it's only with CPU support (though similar steps should be fine if you need CUDA, it will get you started at least). Pytorch static library Pre-built static PyTorch First of all, you need pre-built static library files (all of them need to be static, hence no .so, only those with .a extension are suitable). Tbh I've been looking for those provided by PyTorch on installation page, yet there is only shared version. In one GitHub issue I've found a way to download them as follows: Instead of downloading (here via wget) shared libraries: $ wget https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.4.0.zip you rename shared to static (as described in this issue), so it would become: $ wget https://download.pytorch.org/libtorch/cu101/libtorch-static-with-deps-1.4.0.zip Yet, when you download it there is no libtorch.a under lib folder (didn't find libcaffe2.a either as indicated by this issue), so what I was left with was building explicitly from source. If you have those files somehow (if so, please provide where you got them from please), you can skip the next step. Building from source For CPU version I have used /pytorch/scripts/build_mobile.sh file, you can base your version off of this if GPU support is needed (maybe you only have to pass -DUSE_CUDA=ON to this script, not sure though). Most important is cmake's -DBUILD_SHARED_LIBS=OFF in order to build everything as static library. You can also check script from my tool which passes arguments to build_mobile.sh as well. Running above will give you static files in /pytorch/build_mobile/install by default where there is everything you need. CMake Now you can copy above build files to /usr/local (better not to unless you are using Docker as torchlambda) or set path to it from within your CMakeLists.txt like this: set(LIBTORCH "/path/to/pytorch/build_mobile/install") # Below will append libtorch to path so CMake can see files set(CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH};${LIBTORCH}") Now the rest is fine except target_link_libraries, which should be (as indicated by this issue, see related issues listed there for additional reference) used with -Wl,--whole-archive linker flag, which brought me to this: target_link_libraries(example-app PRIVATE -lm -Wl,--whole-archive "${TORCH_LIBRARIES}" -Wl,--no-whole-archive -lpthread ${CMAKE_DL_LIBS}) You may not need either of -lm, -lpthread or ${CMAKE_DL_LIBS}, though I needed it when building on Amazon Linux AMI. Building Now you are off to building your application. Standard libtorch way should be fine but here is another command I used: mkdir build && \ cd build && \ cmake .. && \ cmake --build . --config Release Above will create build folder where example-app binary should be now safely located. Finally use ld build/example-app to verify everything from PyTorch was statically linked, see aforementioned issue point 5., your output should look similar.
https://stackoverflow.com/questions/60629537/
Vectorized way to shuffle a given tensor using pytorch
I have a tensor A of shape (1,12,2,2) as follows: tensor([[[[ 1., 3.], [ 9., 11.]], [[ 2., 4.], [10., 12.]], [[ 5., 7.], [13., 15.]], [[ 6., 8.], [14., 16.]], [[17., 19.], [25., 27.]], [[18., 20.], [26., 28.]], [[21., 23.], [29., 31.]], [[22., 24.], [30., 32.]], [[33., 35.], [41., 43.]], [[34., 36.], [42., 44.]], [[37., 39.], [45., 47.]], [[38., 40.], [46., 48.]]]]) I want to shuffle it using pytorch to produce the following tensor B of shape (1,3,4,4): tensor([[[[ 1., 6., 3., 8.], [21., 34., 23., 36.], [ 9., 14., 11., 16.], [29., 42., 31., 44.]], [[ 2., 17., 4., 19.], [22., 37., 24., 39.], [10., 25., 12., 27.], [30., 45., 32., 47.]], [[ 5., 18., 7., 20.], [33., 38., 35., 40.], [13., 26., 15., 28.], [41., 46., 43., 48.]]]]) I have implemented this using two for loops as follows: B = torch.zeros(1,3,4,4, dtype=torch.float) ctr = 0 for i in range(2): for j in range(2): B[:,:,i:4:2,j:4:2] = A[:,ctr:ctr+3,:,:] ctr = ctr+3 I'm looking for any way to implement this in a vectorized manner in pytorch without these for loops. Maybe using functions like .permute(), etc.
Just generalising the above solution for any upsampling factor 'r' like in pixel shuffle: B = A.reshape(-1,r,3,s,s).permute(2,3,0,4,1).reshape(1,3,r*s,r*s) Here 's' is the spatial resolution of each channel in 'A' and 'r' is the upsampling factor. For the particular case, r=2 and s=2. This solution should work for arbitrary values of 'r' with an appropriate size of 'A'. So for the problem at hand, s=2, r=2 and the solution becomes B = A.reshape(-1,2,3,2,2).permute(2,3,0,4,1).reshape(1,3,4,4) as posted by @ddoGas. Similarly, if 'A' had been of size (1, 192, 356, 532) and you wanted to upsample by r=8, do B = A.reshape(-1,8,3,356,532).permute(2,3,0,4,1).reshape(1,3,2848,4256)
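A quick sanity check (a sketch, not part of the original answer) that the vectorized version matches the original double loop for r=2, s=2:
import torch

A = torch.randn(1, 12, 2, 2)
r, s = 2, 2

B_vec = A.reshape(-1, r, 3, s, s).permute(2, 3, 0, 4, 1).reshape(1, 3, r * s, r * s)

B_loop = torch.zeros(1, 3, r * s, r * s)
ctr = 0
for i in range(r):
    for j in range(r):
        B_loop[:, :, i::r, j::r] = A[:, ctr:ctr + 3, :, :]
        ctr += 3

print(torch.allclose(B_vec, B_loop))  # True: same result as the loop in the question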
https://stackoverflow.com/questions/60633781/
What input shapes are required for a classifier that is using a pre-trained network (Pytorch)?
I'm fairly new to deeplearning, python, and pytorch so please bear with me! I'm trying to understand Transfer Learning in Pytorch using two different Pretrained Networks: Vgg11 and Densenet121. I've run data of shape (3 x 224 x 224) through the "features" part of the above networks, and the output shapes are as follows: Vgg11 features output shape: 512 x 7 x 7 Densenet121 features output shape: 1024 x 7 x7 Now, I'm trying to make my own Classifier to use instead of the Pre-trained one. Upon checking both pre-trained classifiers, I see the Vgg11 classifier has in the first layer: (0): Linear(in_features=25088, out_features=4096, bias=True) While the Densenet121 has in the first layer: (classifier): Linear(in_features=1024, out_features=1000, bias=True)) The Vgg one makes sense, since if you flatten the output of the "features" part, you get 512 x 7 x 7 = 25,088. How does the Densenet one have only 1024 dimensions? If you flatten the output of its "features" part, you get 1024 x 7 x 7 = 50,176 Are there steps that I am missing for either of them? Are there ways to check the input and output shapes of each layer and find out exactly what's happening? Thank you.
As mentioned in Table 1 in the DenseNet paper, DenseNet-121 uses something called Global Average Pooling, which is an extreme way of pooling where a tensor of dimensions d x h x w is reduced to d x 1 x 1.
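A quick way to see the effect (a sketch, not from the original answer): global average pooling collapses each 7 x 7 feature map to a single value, so a 1024 x 7 x 7 feature tensor becomes a 1024-dimensional vector per image, which matches the classifier's in_features=1024.
import torch
import torch.nn.functional as F

features = torch.rand(1, 1024, 7, 7)              # DenseNet-121 "features" output
pooled = F.adaptive_avg_pool2d(features, (1, 1))  # global average pooling
flat = torch.flatten(pooled, 1)
print(flat.shape)                                 # torch.Size([1, 1024])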
https://stackoverflow.com/questions/60646996/
Resizing PyTorch tensor with grad to smaller size
I am trying to downsize a tensor from let's say (3,3) to (1, 1), but I want to keep the original tensor: import torch a = torch.rand(3, 3) a_copy = a.clone() a_copy.resize_(1, 1) I need requires_grad=True in my initial tensor but PyTorch forbids this me from trying to resize the copy: a = torch.rand(3, 3, requires_grad=True) a_copy = a.clone() a_copy.resize_(1, 1) Throws an error: Traceback (most recent call last): File "pytorch_test.py", line 7, in <module> a_copy.resize_(1, 1) RuntimeError: cannot resize variables that require grad Clone and Detach I tried to .clone() and .detach() as well: a = torch.rand(3, 3, requires_grad=True) a_copy = a.clone().detach() with torch.no_grad(): a_copy.resize_(1, 1) which gives this error instead: Traceback (most recent call last): File "pytorch_test.py", line 14, in <module> a_copy.resize_(1, 1) RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach(). If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset) without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block. For example, change: x.data.set_(y) to: with torch.no_grad(): x.set_(y) This behaviour had been stated in the docs and #15070. With no_grad() So, following what they said in the error message, I removed .detach() and used no_grad() instead: a = torch.rand(3, 3, requires_grad=True) a_copy = a.clone() with torch.no_grad(): a_copy.resize_(1, 1) But it still gives me an error about grad: Traceback (most recent call last): File "pytorch_test.py", line 21, in <module> a_copy.resize_(1, 1) RuntimeError: cannot resize variables that require grad Similar questions I have looked at Resize PyTorch Tensor but it the tensor in that example retains all original values. I have also looked at Pytorch preferred way to copy a tensor which is the method I am using to copy the tensor. I am using PyTorch version 1.4.0
I think you should first detach and then clone: a = torch.rand(3, 3, requires_grad=True) a_copy = a.detach().clone() a_copy.resize_(1, 1) Note: a.detach() returns a new tensor detached from the current graph (it doesn't detach a itself from the graph as a.detach_() does). But because it shares storage with a, you also should clone it. In this way, whatever you do with the a_copy won't affect a. However I am not sure why a.detach().clone() works but a.clone().detach() gives error. Edit: The following code also works (which is probably a better solution): a = torch.rand(3, 3, requires_grad=True) with torch.no_grad(): a_copy = a.clone() a_copy.resize_(1, 1)
https://stackoverflow.com/questions/60664524/
How to sort the pytorch tensors by specific key value?
I'm new to Pytorch. Given a tensor set, I need to sort these tensors by the key value. For example, A = [[0.9133, 0.5071, 0.6222, 3.], [0.5951, 0.9315, 0.6548, 1.], [0.7704, 0.0720, 0.0330, 2.]] My expected result after sorting is: A' = [[0.5951, 0.9315, 0.6548, 1.], [0.7704, 0.0720, 0.0330, 2.], [0.9133, 0.5071, 0.6222, 3.]] I tried to use sorted function in python, but it was time-consuming in my training process. How to achieve it more efficiently? Thanks!
%%timeit -r 10 -n 10 A[A[:,-1].argsort()] 38.6 µs ± 23 µs per loop (mean ± std. dev. of 10 runs, 10 loops each) %%timeit -r 10 -n 10 sorted(A, key = lambda x: x[-1]) 69.6 µs ± 34.8 µs per loop (mean ± std. dev. of 10 runs, 10 loops each) Both output tensor([[0.5951, 0.9315, 0.6548, 1.0000], [0.7704, 0.0720, 0.0330, 2.0000], [0.9133, 0.5071, 0.6222, 3.0000]]) Then there is %%timeit -r 10 -n 10 a, b = torch.sort(A, dim=-2) The slowest run took 8.45 times longer than the fastest. This could mean that an intermediate result is being cached. 14.3 µs ± 18.1 µs per loop (mean ± std. dev. of 10 runs, 10 loops each) with a as the sorted tensor and b as the indices. Note, however, that torch.sort(A, dim=-2) sorts every column independently, so the rows are no longer kept together; if you need whole rows ordered by the key column, use the argsort approach above.
https://stackoverflow.com/questions/60665864/
How can I have a PyTorch Conv1d work over a vector?
I understand Conv1d strides in one dimension. But my input is of shape [64, 20, 161], where 64 is the batches, 20 is the sequence length and 161 is the dimension of my vector. I'm not sure how to set up my Conv1d to stride over the vector. I'm trying: self.conv1 = torch.nn.Conv1d(batch_size, 20, 161, stride=1) but getting: RuntimeError: Given groups=1, weight of size 20 64 161, expected input[64, 20, 161] to have 64 channels, but got 20 channels instead
According to the documentation: torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') in_channels is the number of channels in your input; "number of channels" is usually a computer vision term, and in your case this number is 20. out_channels is the number of output channels; it depends on how much output you want. For 1D convolution, you can think of the number of channels as the "number of input vectors" and the "number of output feature vectors". The size (not number) of the output feature vectors is decided by the other parameters like kernel_size, stride, padding and dilation. An example usage (kernel_size is required; 3 here is just an example value): t = torch.randn((64, 20, 161)) conv = torch.nn.Conv1d(20, 100, kernel_size=3) conv(t) Note: You never specify batch size in torch.nn modules, the first dimension is always assumed to be the batch size.
https://stackoverflow.com/questions/60671530/
how to get value 0 or 1 in siamese neural network?
I'm trying to use siamese neural network. Here I want to compare 2 types of images and get the results of the score, This is the code for the test model to produce the score in this case i use pytorch model = Siamese() # Load state_dict model.load_state_dict(torch.load('/Users/tania/Desktop/TA/model/model-batch-1001.pth')) # Create the preprocessing transformation from torchvision import transforms transforms = transforms.ToTensor() # load image(s) from PIL import Image x1 = Image.open('table.PNG') x2 = Image.open('table.PNG') # Transform x1 = transforms(x1) x2 = transforms(x2) x1 = torch.stack([x1]) x2 = torch.stack([x2]) model.eval() # Get prediction output = model(x1,x2) print (output) so i got the score like this, the score is -14.1640 basically in siamese if the image is the same then it produces a value of 1 and if different it will produce a value of 0 how do I get the value of 0 or 1 so that I know whether the image is the same or not? please help me, i am newbie in neural network
To get an output between 0 and 1, you need to transform your values with an activation function, in order to map them to a probability. This can be done with the Sigmoid function, defined as sigmoid(x) = 1 / (1 + exp(-x)). It returns a probability with range (0,1) (exclusive), where values 0 < y < 0.5 can be interpreted as a negative label, whereas values 0.5 <= y < 1 can be interpreted as being positive. In PyTorch, this can be implemented: output = model(x1,x2) output = torch.sigmoid(output) Hope this helps!
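To turn that probability into a hard 0/1 decision, one option (a sketch reusing the model, x1 and x2 from the question; the 0.5 threshold is an assumption, not from the original answer) is:
import torch

output = model(x1, x2)           # raw score from the Siamese network, as in the question
prob = torch.sigmoid(output)     # probability in (0, 1)
same = (prob >= 0.5).float()     # 1.0 if the pair is judged "same", otherwise 0.0
print(prob.item(), same.item())  # .item() assumes a single pair, i.e. a one-element tensor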
https://stackoverflow.com/questions/60679416/
Inference time varies over different GPUs using Torch
I get a bug when running the below inference code. In the function recognize(), it takes 0.4s to finish prediction. It takes another 3s to return the result preds_str to the caller function. I found that if I set gpu_id=0 in file config, it returns instantly. How can I fix this bug? Thanks in advance. def recognize(imgs, model, demo_loader): t = time() model.eval() with torch.no_grad(): for image_tensors, image_path_list in demo_loader: batch_size = image_tensors.size(0) image = image_tensors.to(config.device) # For max length prediction length_for_pred = torch.IntTensor([config.batch_max_length] * batch_size).to(config.device) text_for_pred = torch.LongTensor(batch_size, config.batch_max_length + 1).fill_(0).to(config.device) preds = model(image, text_for_pred, is_train=False) _, preds_index = preds.max(2) preds_str = converter.decode(preds_index, length_for_pred) print('time elapsed before return:'time()-t) #0.4s return preds_str def main(): model = Model() self.model.cuda(config.device) model = torch.nn.DataParallel(model, device_ids=[config.device], output_device=[config.device]).to(config.device) model.load_state_dict(torch.load(config.saved_model, map_location=config.device)) AlignCollate_demo = AlignCollate(imgH=config.imgH, imgW=config.imgW, keep_ratio_with_pad=config.PAD) imgs_dataset = ImageDataset(imgs) demo_loader = torch.utils.data.DataLoader(imgs_dataset, batch_size=config.batch_size,shuffle=False,num_workers=int(config.workers),collate_fn=AlignCollate_demo, pin_memory=True) start_time = time() # imgs = [img1, img2, ....] preds_str = recognize(imgs, model, demo_loader) print('time elapsed after return', time()-start_time) #3.4s Config file: class ConfigWordRecognizer: gpu_id = 1 #troublesome line here device = torch.device('cuda:{}'.format(gpu_id) if torch.cuda.is_available() else 'cpu') imgH = 32 imgW = 100 batch_size = 80 workers = 8 batch_max_length = 25
I found the solution from this post. I set CUDA_VISIBLE_DEVICES=1, gpu_id=0. Then, I remove model = torch.nn.DataParallel(model, device_ids=[config.device], output_device=[config.device]).to(config.device) and change model.load_state_dict(torch.load(config.saved_model, map_location=config.device)) to model.load_state_dict(self.copyStateDict(torch.load(self.config.saved_model, map_location=self.config.device))) Copy stateDict function: def copyStateDict(self, state_dict): if list(state_dict.keys())[0].startswith("module"): start_idx = 1 else: start_idx = 0 new_state_dict = OrderedDict() for k, v in state_dict.items(): name = ".".join(k.split(".")[start_idx:]) new_state_dict[name] = v return new_state_dict The model works well on gpu1. But I still don't understand why if I set 'gpu_id=0', it works well on gpu0 without copyStateDict
https://stackoverflow.com/questions/60681049/
How to apply OpenCV filters on Pytorch dataset?
I have used below steps to pre-process single image using OpenCV. Now, I want to apply these pre-processing steps to my entire dataset before training the model in Pytorch. How can this be done? im = cv2.imread(image_path) im_nonoise = cv2.medianBlur(im, 3) imgray = cv2.cvtColor(im_nonoise,cv2.COLOR_BGR2GRAY) clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) cl1 = clahe.apply(imgray) ret,thresh = cv2.threshold(cl1,110,255,0) image, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) img = cv2.drawContours(image, contours, -1, (250,100,120)) And I load the data using data = datasets.ImageFolder(train_dir,transform=transform) train_loader = torch.utils.data.DataLoader(data,batch_size=batch_size,sampler=train_sampler)
You can build your own dataset class (derived from ImageFolder) and overload only the __getitem__ method (note the imports the snippet needs): import cv2 import torch import torchvision from torchvision import datasets from torchvision.datasets.folder import default_loader class MySpecialDataset(datasets.ImageFolder): def __init__(self, root, loader=default_loader, is_valid_file=None): super(MySpecialDataset, self).__init__(root=root, loader=loader, is_valid_file=is_valid_file) def __getitem__(self, index): image_path, target = self.samples[index] # do your magic here im = cv2.imread(image_path) im_nonoise = cv2.medianBlur(im, 3) imgray = cv2.cvtColor(im_nonoise,cv2.COLOR_BGR2GRAY) clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) cl1 = clahe.apply(imgray) ret,thresh = cv2.threshold(cl1,110,255,0) image, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) img = cv2.drawContours(image, contours, -1, (250,100,120)) # you need to convert img from np.array to torch.tensor # this has to be done CAREFULLY! sample = torchvision.transforms.ToTensor()(img) return sample, target Once you have this dataset, you can use it with the basic pytorch's DataLoader: data = MySpecialDataset(train_dir) train_loader = torch.utils.data.DataLoader(data,batch_size=batch_size,sampler=train_sampler)
https://stackoverflow.com/questions/60686273/
AttributeError: ‘NoneType’ object has no attribute ‘register_forward_hook’
I’m trying to register a forward hook function to the last conv layer of my network. I first printed out the names of the modules via: for name, _ in model.named_modules(): print(name) Which gave me "0.conv" as the module name. However, when I tried to do the following, the above error was triggered by line 4: def hook_feature(module, in_, out_): features.append(out_.cpu().data.numpy()) model._modules.get("0.conv").register_forward_hook(hook_feature) Here is my named_modules() output: ... 0.decoder1.dec1conv2 0.decoder1.dec1norm2 0.decoder1.dec1relu2 0.conv 1 1.outc1 1.outc2 What am I doing wrong and how do I fix it? Thanks!
Your modules are stored in a hierarchical way. To get to '0.conv', you need to go one level at a time: model._modules["0"]._modules.get("conv").register_forward_hook(hook_feature)
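An equivalent lookup (a sketch, not part of the original answer) that reuses the dotted names printed by named_modules(), together with the model and hook_feature from the question:
layer = dict(model.named_modules())["0.conv"]
layer.register_forward_hook(hook_feature)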
https://stackoverflow.com/questions/60689003/
Why does a torch gradient increase linearly every time the function is backpropagated?
I am trying to understand how PyTorch backpropagation works, using the following code. import torch import numpy x = torch.tensor(numpy.e, requires_grad=True) y = torch.log(x) y.backward() print(x.grad) The result is tensor(0.3679), as expected, which is 1 / x, which is the derivative of log(x) w.r.t. x with x = numpy.e. However, if I run the last 3 lines again WITHOUT re-assigning x, i.e. do y = torch.log(x) y.backward() print(x.grad) then I will get tensor(0.7358), which is twice the previous result. Why does this happen?
Gradients are accumulated until cleared. From the docs (emphasis mine): This function accumulates gradients in the leaves - you might need to zero them before calling it. This zeroing can be done by way of x.grad.zero_() or, in the case of a torch.optim.Optimizer, optim.zero_grad().
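For example (a sketch, not part of the original answer), clearing the gradient between the two backward calls restores the expected value:
import numpy
import torch

x = torch.tensor(numpy.e, requires_grad=True)

torch.log(x).backward()
print(x.grad)        # tensor(0.3679)

x.grad.zero_()       # clear the accumulated gradient before the next backward
torch.log(x).backward()
print(x.grad)        # tensor(0.3679) again, instead of 0.7358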
https://stackoverflow.com/questions/60696153/
RuntimeError occurs in PyTorch backward function
I am trying to calculate the grad of a variable in PyTorch. However, there was a RuntimeError which tells me that the shape of output and grad must be the same. However, in my case, the shape of output and grad cannot be the same. Here is my code to reproduce: import numpy as np import torch from torch.autograd import Variable as V ne = 3 m, n = 79, 164 G = np.random.rand(m, n).astype(np.float64) w = np.random.rand(n, n).astype(np.float64) z = -np.random.rand(n).astype(np.float64) G = V(torch.from_numpy(G)) w = V(torch.from_numpy(w)) z = V(torch.from_numpy(z), requires_grad=True) e, v = torch.symeig(torch.diag(2 * z - torch.sum(w, dim=1)) + w, eigenvectors=True, upper=False) ssev = torch.sum(torch.pow(e[-ne:] * v[:, -ne:], 2), dim=1) out = torch.sum(torch.matmul(G, ssev.reshape((n, 1)))) out.backward(z) print(z.grad) The error message is: RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([164]) and output[0] has a shape of torch.Size([]) Similar calculation is allowed in TensorFlow and I can successfully get the gradient I want: import numpy as np import tensorflow as tf m, n = 79, 164 G = np.random.rand(m, n).astype(np.float64) w = np.random.rand(n, n).astype(np.float64) z = -np.random.rand(n).astype(np.float64) def tf_function(z, G, w, ne=3): e, v = tf.linalg.eigh(tf.linalg.diag(2 * z - tf.reduce_sum(w, 1)) + w) ssev = tf.reduce_sum(tf.square(e[-ne:] * v[:, -ne:]), 1) return tf.reduce_sum(tf.matmul(G, tf.expand_dims(ssev, 1))) z, G, w = [tf.convert_to_tensor(_, dtype=tf.float64) for _ in (z, G, w)] z = tf.Variable(z) with tf.GradientTape() as g: g.watch(z) out = tf_function(z, G, w) print(g.gradient(out, z).numpy()) My tensorflow version is 2.0 and my PyTorch version is 1.14.0. I am using Python3.6.9. In my opinion, calculating the gradients when the output and the variables have different shapes is very reasonable and I don't think I made any mistake.Can anyone help me with this problem? I really appreciate it!
First of all you don't need to use numpy and then convert to Variable (which is deprecated by the way), you can just use G = torch.rand(m, n) etc. Second, when you write out.backward(z), you are passing z as the gradient of out, i.e. out.backward(gradient=z), probably due to the misconception that "out.backward(z) computes the gradient of z, i.e. dout/dz". Instead, this argument is meant to be gradient = d[f(out)]/dout for some function f (e.g. a loss function) and it's the tensor used to compute vector-Jacobian product dout/dz * df/dout. Therefore, the reason why you got the error is because your out (and its gradient df/dout) is a scalar (zero-dimensional tensor) and z is a tensor of size n, leading to a mismatch in shapes. To fix the problem, as you have already figured out by yourself, just replace out.backward(z) with out.backward(), which is equivalent to out.backward(gradient=torch.tensor(1.)), since in your case out is a scalar and f(out) = out, so d[f(out)]/dout = d(out)/d(out) = tensor(1.). If your out was a non-scalar tensor, then out.backward() would not work and instead you would have to use out.backward(torch.ones(out.shape)) (again assuming that f(out) = out). In any case, if you need to pass gradient to the out.backward(), make sure that it has the same shape as the out.
https://stackoverflow.com/questions/60700062/
PyTorch AutoEncoder - Decoded output dimension not the same as input
I am building a Custom Autoencoder to train on a dataset. My model is as follows class AutoEncoder(nn.Module): def __init__(self): super(AutoEncoder,self).__init__() self.encoder = nn.Sequential( nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size=3,stride=1), nn.ReLU(inplace=True), nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size=3,stride=1), nn.ReLU(inplace=True), nn.Conv2d(in_channels = 64, out_channels = 128, kernel_size=3,stride=1), nn.ReLU(inplace=True), nn.Conv2d(in_channels=128,out_channels=256,kernel_size=5,stride=2), nn.ReLU(inplace=True), nn.Conv2d(in_channels=256,out_channels=512,kernel_size=5,stride=2), nn.ReLU(inplace=True), nn.Conv2d(in_channels=512,out_channels=1024,kernel_size=5,stride=2), nn.ReLU(inplace=True) ) self.decoder = nn.Sequential( nn.ConvTranspose2d(in_channels=1024,out_channels=512,kernel_size=5,stride=2), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=512,out_channels=256,kernel_size=5,stride=2), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=256,out_channels=128,kernel_size=5,stride=2), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=128,out_channels=64,kernel_size=3,stride=1), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=64,out_channels=32,kernel_size=3,stride=1), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=32,out_channels=3,kernel_size=3,stride=1), nn.ReLU(inplace=True) ) def forward(self,x): x = self.encoder(x) print(x.shape) x = self.decoder(x) return x def unit_test(): num_minibatch = 16 img = torch.randn(num_minibatch, 3, 512, 640).cuda(0) model = AutoEncoder().cuda() model = nn.DataParallel(model) output = model(img) print(output.shape) if __name__ == '__main__': unit_test() As you can see, my input dimension is (3, 512, 640) but my output after passing it through the decoder is (3, 507, 635). Am I missing something while adding the Conv2D Transpose layers ? Any help would be appreciated. Thanks
The mismatch is caused by the different output shapes of ConvTranspose2d layer. You can add output_padding of 1 to first and third transpose convolution layer to solve this problem. i.e. nn.ConvTranspose2d(in_channels=1024,out_channels=512,kernel_size=5,stride=2, output_padding=1) and nn.ConvTranspose2d(in_channels=256,out_channels=128,kernel_size=5,stride=2, output_padding=1) As per the documentation: When stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Decoder layers' shapes before adding output_padding: ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ ConvTranspose2d-1 [-1, 512, 123, 155] 13,107,712 ReLU-2 [-1, 512, 123, 155] 0 ConvTranspose2d-3 [-1, 256, 249, 313] 3,277,056 ReLU-4 [-1, 256, 249, 313] 0 ConvTranspose2d-5 [-1, 128, 501, 629] 819,328 ReLU-6 [-1, 128, 501, 629] 0 ConvTranspose2d-7 [-1, 64, 503, 631] 73,792 ReLU-8 [-1, 64, 503, 631] 0 ConvTranspose2d-9 [-1, 32, 505, 633] 18,464 ReLU-10 [-1, 32, 505, 633] 0 ConvTranspose2d-11 [-1, 3, 507, 635] 867 ReLU-12 [-1, 3, 507, 635] 0 After adding padding: ================================================================ ConvTranspose2d-1 [-1, 512, 124, 156] 13,107,712 ReLU-2 [-1, 512, 124, 156] 0 ConvTranspose2d-3 [-1, 256, 251, 315] 3,277,056 ReLU-4 [-1, 256, 251, 315] 0 ConvTranspose2d-5 [-1, 128, 506, 634] 819,328 ReLU-6 [-1, 128, 506, 634] 0 ConvTranspose2d-7 [-1, 64, 508, 636] 73,792 ReLU-8 [-1, 64, 508, 636] 0 ConvTranspose2d-9 [-1, 32, 510, 638] 18,464 ReLU-10 [-1, 32, 510, 638] 0 ConvTranspose2d-11 [-1, 3, 512, 640] 867 ReLU-12 [-1, 3, 512, 640] 0
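The shapes follow from the transposed-convolution size formula (with dilation 1): H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding. A quick check on the first decoder layer (a sketch based on the shapes above; the 60 x 76 encoder output size is inferred from the tables, not stated in the original answer):
import torch
import torch.nn as nn

x = torch.randn(1, 1024, 60, 76)   # encoder output for a 512 x 640 input
layer = nn.ConvTranspose2d(1024, 512, kernel_size=5, stride=2, output_padding=1)
print(layer(x).shape)              # torch.Size([1, 512, 124, 156]); (60 - 1) * 2 + 5 + 1 = 124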
https://stackoverflow.com/questions/60700472/
How to combine datasets in PyTorch to return image and numpy file simultaneously
I am trying to build a dataloader that will take images and poses. The images are saved in the form of .jpg files, and the poses in the form of .npy files. The images and poses are in different folders but have the same sub-folder structure and name. The sub-folders are in the form of classes, i.e., each class has a corresponding folder. I want to apply image transformations and then return the images (for which I am using torchvision datasets.ImageFolder). For the poses, I am using torchvision datasets.DatasetFolder. How do I combine these two datasets so that I get both pose and image of the same name simultaneously? class ReIDFolder_images(datasets.ImageFolder): def __init__(self, root, transform): super().__init__(root, transform) targets = np.asarray([s[1] for s in self.samples]) self.targets = targets self.img_num = len(self.samples) print(self.img_num) def _get_cam_id(self, path): camera_id = [] filename = os.path.basename(path) camera_id = filename.split('c')[1][0] return int(camera_id)-1 def _get_pos_sample(self, target, index, path): pos_index = np.argwhere(self.targets == target) pos_index = pos_index.flatten() pos_index = np.setdiff1d(pos_index, index) if len(pos_index)==0: # in the query set, only one sample return path else: rand = random.randint(0,len(pos_index)-1) return self.samples[pos_index[rand]][0] def _get_neg_sample(self, target): neg_index = np.argwhere(self.targets != target) neg_index = neg_index.flatten() rand = random.randint(0,len(neg_index)-1) return self.samples[neg_index[rand]] def __getitem__(self, index): path, target = self.samples[index] sample = self.loader(path) pos_path = self._get_pos_sample(target, index, path) pos = self.loader(pos_path) if self.transform is not None: sample = self.transform(sample) pos = self.transform(pos) if self.target_transform is not None: target = self.target_transform(target) return sample, target, pos class ReIDFolder_poses(datasets.DatasetFolder): def __init__(self, root): super().__init__(root, loader=self.npy_loader, extensions='.npy') targets = np.asarray([s[1] for s in self.samples]) self.targets = targets self.img_num = len(self.samples) print(self.img_num) def npy_loader(self, path): sample = torch.Tensor(np.load(path)) return sample def _get_cam_id(self, path): camera_id = [] filename = os.path.basename(path) camera_id = filename.split('c')[1][0] return int(camera_id)-1 def _get_pos_sample(self, target, index, path): pos_index = np.argwhere(self.targets == target) pos_index = pos_index.flatten() pos_index = np.setdiff1d(pos_index, index) if len(pos_index)==0: # in the query set, only one sample return path else: rand = random.randint(0,len(pos_index)-1) return self.samples[pos_index[rand]][0] def _get_neg_sample(self, target): neg_index = np.argwhere(self.targets != target) neg_index = neg_index.flatten() rand = random.randint(0,len(neg_index)-1) return self.samples[neg_index[rand]] def __getitem__(self, index): path, target = self.samples[index] sample = self.loader(path) pos_path = self._get_pos_sample(target, index, path) pos = self.loader(pos_path) return sample, target, pos
I was able to solve this problem! It turns out I didn't have to inherit datasets.DatasetFolder. Since the labels were the same, I just created one class which inherits datasets.ImageFolder, and fed a modified path to the function npy_loader.
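One possible sketch of that idea (illustrative names, not the asker's exact code): inherit from ImageFolder, derive the pose path from the image path, and return the image, the pose and the label together in __getitem__.
import os
import numpy as np
import torch
from torchvision import datasets

class ImagePoseFolder(datasets.ImageFolder):
    def __init__(self, image_root, pose_root, transform=None):
        super().__init__(image_root, transform=transform)
        self.pose_root = pose_root   # same class sub-folders, .npy files with matching names

    def __getitem__(self, index):
        path, target = self.samples[index]
        image = self.loader(path)
        if self.transform is not None:
            image = self.transform(image)
        # swap the root and the extension to find the matching pose file
        rel = os.path.relpath(path, self.root)
        pose_path = os.path.join(self.pose_root, os.path.splitext(rel)[0] + ".npy")
        pose = torch.from_numpy(np.load(pose_path)).float()
        return image, pose, target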
https://stackoverflow.com/questions/60700820/
how to use torchaudio with torch xla on google colab tpu
I'm trying to run a pytorch script which is using torchaudio on a google TPU. To do this I'm using pytorch xla following this notebook, more specifically I'm using this code cell to load the xla: !pip install torchaudio import os assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator' VERSION = "20200220" #@param ["20200220","nightly", "xrt==1.15.0"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION import torch import torchaudio import torch_xla however this is incompatible with the version of torchaudio that I need as: ERROR: torchaudio 0.4.0 has requirement torch==1.4.0, but you'll have torch 1.5.0a0+e95282a which is incompatible. I couldn't find anywhere how to load torch 1.4.0 using pytorch xla. I tried to use the nightly version of torch audio but that gives the error as follows: !pip install torchaudio_nightly -f https://download.pytorch.org/whl/nightly/torch_nightly.html import os assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator' VERSION = "20200220" #@param ["20200220","nightly", "xrt==1.15.0"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION import torch import torchaudio import torch_xla --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-2-968e9d93c06f> in <module>() 9 10 import torch ---> 11 import torchaudio 12 13 import torch_xla /usr/local/lib/python3.6/dist-packages/torchaudio/__init__.py in <module>() 3 4 import torch ----> 5 import _torch_sox 6 7 from .version import __version__, git_version ImportError: /usr/local/lib/python3.6/dist-packages/_torch_sox.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_29E --------------------------------------------------------------------------- So how would I go to load the stable version or 1.4.0 version of pytorch using xla or is there any other workaround for this situation? Thanks a lot for your help!
I tested using the notebook below; Getting Started with PyTorch on Cloud TPUs After changing the cell containing; FROM: VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION TO: VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !pip install torchvision !pip install torch==1.4.0 !pip install torchaudio==0.4.0 %matplotlib inline !python pytorch-xla-env-setup.py --version $VERSION All cells ran successfully, and the import statements below threw no errors; # imports pytorch import torch # imports the torch_xla package import torch_xla import torch_xla.core.xla_model as xm
https://stackoverflow.com/questions/60718831/
Using fastai (pytorch) in c# code, how to normalize Bitmap with mean and std?
A few days ago i switched from tensorflow to fastai for my c# Project. But now i am facing a problem with my normalisation. For both i use an onnx pipeline to load the model and the data. var onnxPipeline = mLContext.Transforms.ResizeImages(resizing: ImageResizingEstimator.ResizingKind.Fill, outputColumnName: inputName, imageWidth: ImageSettings.imageWidth, imageHeight: ImageSettings.imageHeight, inputColumnName: nameof(ImageInputData.Image)) .Append(mLContext.Transforms.ExtractPixels(outputColumnName: inputName, interleavePixelColors: true, scaleImage: 1 / 255f)) .Append(mLContext.Transforms.ApplyOnnxModel(outputColumnName: outputName, inputColumnName: inputName, modelFile: onnxModelPath)); var emptyData = mLContext.Data.LoadFromEnumerable(new List<ImageInputData>()); var onnxModel = onnxPipeline.Fit(emptyData); with class ImageInputData { [ImageType(ImageSettings.imageHeight, ImageSettings.imageWidth)] public Bitmap Image { get; set; } public ImageInputData(byte[] image) { using (var ms = new MemoryStream(image)) { Image = new Bitmap(ms); } } public ImageInputData(Bitmap image) { Image = image; } } After using fastai i learned, that the models get better accuracy if the data is normalized with a specific mean and standard deviation (because i used the resnet34 model it should be means { 0.485, 0.456, 0.406 } stds = { 0.229, 0.224, 0.225 } respectively). So the pixelvalues (for each color ofc.) have to be transformed with those values to match the trainings images. But how can i achive this in C#? What i tried so far is: int imageSize = 256; double[] means = new double[] { 0.485, 0.456, 0.406 }; // used in fastai model double[] stds = new double[] { 0.229, 0.224, 0.225 }; Bitmap bitmapImage = inputBitmap; Image image = bitmapImage; Color[] pixels = new Color[imageSize * imageSize]; for (int x = 0; x < bitmapImage.Width; x++) { for (int y = 0; y < bitmapImage.Height; y++) { Color pixel = bitmapImage.GetPixel(x, y); pixels[x + y] = pixel; double red = (pixel.R - (means[0] * 255)) / (stds[0] * 255); // *255 to scale the mean and std values to the Bitmap double gre = (pixel.G - (means[1] * 255)) / (stds[1] * 255); double blu = (pixel.B - (means[2] * 255)) / (stds[2] * 255); Color pixel_n = Color.FromArgb(pixel.A, (int)red, (int)gre, (int)blu); bitmapImage.SetPixel(x, y, pixel_n); } } Ofcourse its not working, because the Colorvalues can`t be negative (which i realised only later). But how can i achive this normalisation between -1 and 1 for my model in C# with the onnx-model? Is there a different way to feed the model or to handle the normalisation? Any help would be appreciated!
One way to solve this problem is to switch from an onnx pipeline to an onnx Inferencesession, which is in my view simpler and better to understand: public List<double> UseOnnxSession(Bitmap image, string onnxModelPath) { double[] means = new double[] { 0.485, 0.456, 0.406 }; double[] stds = new double[] { 0.229, 0.224, 0.225 }; using (var session = new InferenceSession(onnxModelPath)) { List<double> scores = new List<double>(); Tensor<float> t1 = ConvertImageToFloatData(image, means, stds); List<float> fl = new List<float>(); var inputMeta = session.InputMetadata; var inputs = new List<NamedOnnxValue>() { NamedOnnxValue.CreateFromTensor<float>("input_1", t1) }; using (var results = session.Run(inputs)) { foreach (var r in results) { var x = r.AsTensor<float>().First(); var y = r.AsTensor<float>().Last(); var softmaxScore = Softmax(new double[] { x, y }); scores.Add(softmaxScore[0]); scores.Add(softmaxScore[1]); } } return scores; } } // Create your Tensor and add transformations as you need. public static Tensor<float> ConvertImageToFloatData(Bitmap image, double[] means, double[] std) { Tensor<float> data = new DenseTensor<float>(new[] { 1, 3, image.Width, image.Height }); for (int x = 0; x < image.Width; x++) { for (int y = 0; y < image.Height; y++) { Color color = image.GetPixel(x, y); var red = (color.R - (float)means[0] * 255) / ((float)std[0] * 255); var gre = (color.G - (float)means[1] * 255) / ((float)std[1] * 255); var blu = (color.B - (float)means[2] * 255) / ((float)std[2] * 255); data[0, 0, x, y] = red; data[0, 1, x, y] = gre; data[0, 2, x, y] = blu; } } return data; } Also i have to use my own Softmax method on these scores to get the real probabilities out of my model: public double[] Softmax(double[] values) { double[] ret = new double[values.Length]; double maxExp = values.Select(Math.Exp).Sum(); for (int i = 0; i < values.Length; i++) { ret[i] = Math.Round((Math.Exp(values[i]) / maxExp), 4); } return ret; } Hope this helps someone who has a similar Problem.
https://stackoverflow.com/questions/60722864/
Batches of points with the same label on Pytorch
I want to train a neural network using gradient descent on batches that contain N training points each. I would like these batches to only contain points with the same label, instead of being randomly sampled from the training set. For example, if I'm training using MNIST, I would like to have batches that look like the following: batch_1 = {0,0,0,0,0,0,0,0} batch_2 = {3,3,3,3,3,3,3,3} batch_3 = {7,7,7,7,7,7,7,7} ..... and so on. How can I do it using pytorch?
One way to do it is to create subsets and dataloaders for each class and then iterate by randomly switching between the dataloaders at each iteration: import torch from torch.utils.data import DataLoader, Subset from torchvision.datasets import MNIST from torchvision import transforms import numpy as np dataset = MNIST('path/to/mnist_root/', transform=transforms.ToTensor(), download=True) class_inds = [torch.where(dataset.targets == class_idx)[0] for class_idx in dataset.class_to_idx.values()] dataloaders = [ DataLoader( dataset=Subset(dataset, inds), batch_size=8, shuffle=True, drop_last=False) for inds in class_inds] epochs = 1 for epoch in range(epochs): iterators = list(map(iter, dataloaders)) while iterators: iterator = np.random.choice(iterators) try: images, labels = next(iterator) print(labels) # do_more_stuff() except StopIteration: iterators.remove(iterator) This will work with any dataset (not just the MNIST). Here's the result of printing the labels at each iteration: tensor([6, 6, 6, 6, 6, 6, 6, 6]) tensor([3, 3, 3, 3, 3, 3, 3, 3]) tensor([0, 0, 0, 0, 0, 0, 0, 0]) tensor([5, 5, 5, 5, 5, 5, 5, 5]) tensor([8, 8, 8, 8, 8, 8, 8, 8]) tensor([0, 0, 0, 0, 0, 0, 0, 0]) ... tensor([1, 1, 1, 1, 1, 1, 1, 1]) tensor([1, 1, 1, 1, 1, 1]) Note that by setting drop_last=False, there will be batches, here and there, with less than batch_size elements. By setting it to True, the batches will be all of equal size, but some data points will be dropped.
https://stackoverflow.com/questions/60725571/
Getting ModuleNotFoundError while the module is next to the calling file
I am in a weird situation where in order for my package to be called just fine, one single file (models.py) needs to be next to the calling script and also inside the package folder where it is used. To make it a bit more clear, this is how the package organization looks like : -FV_dir ---__init__.py ---F_V.py ---models.py ---utils.py ---service_utils.py ---subModule1_dir ----__init__.py ----detector.py ----utils.py ----subdir1 --- etc ----subdir2 --- etc ----subdir3 This whole package is placed in the site-packages, so it's usable system-wide. And there is a user script that uses this package like this: service_client.py: from FV.service_utils import ServiceCore from FV.utils import a_helper_function def run(): service = ServiceCore() service.run() if __name__ == "__main__": run() The ServiceCore itself uses F_V.py which is the main module here. The F_V module itself uses the models.py and utils.py next to it like this : F_V.py : from FV.utils import func1, func2 from FV.models import model1, model2, model3 ... Now the problem is, if the models.py is not next to the client code (service_client.py) it just complains that the module is not found : here is an example error I get when this is the case: └─19146 /home/user1/anaconda3/bin/python3 /home/user1/Documents/service_client.py Mar 17 19:22:54 ubuntu python3[19146]: self.fv = FaceVerification(**cfg['Face_Verification']['ARGS']) Mar 17 19:22:54 ubuntu python3[19146]: File "/home/user1/anaconda3/lib/python3.7/site-packages/FV/F_V.py", line 58, in __init__ Mar 17 19:22:54 ubuntu python3[19146]: self._init_model() Mar 17 19:22:54 ubuntu python3[19146]: File "/home/user1/anaconda3/lib/python3.7/site-packages/FV/F_V.py", line 80, in _init_model Mar 17 19:22:54 ubuntu python3[19146]: checkpoint = torch.load(self.model_checkpoint_path, map_location=torch.device('cpu')) Mar 17 19:22:54 ubuntu python3[19146]: File "/home/user1/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 529, in Mar 17 19:22:54 ubuntu python3[19146]: return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) Mar 17 19:22:54 ubuntu python3[19146]: File "/home/user1/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 702, in Mar 17 19:22:54 ubuntu python3[19146]: result = unpickler.load() Mar 17 19:22:54 ubuntu python3[19146]: ModuleNotFoundError: No module named 'models' If I remove the models.py next to the F_V.py obviously F_V.py would complain, as it's directly using it : └─19216 /home/user1/anaconda3/bin/python3 /home/user1/Documents/fv_service_linux.py Mar 17 19:27:33 ubuntu systemd[1532]: Started FV Service. Mar 17 19:27:34 ubuntu python3[19216]: Traceback (most recent call last): Mar 17 19:27:34 ubuntu python3[19216]: File "/home/user1/Documents/fv_service_linux.py", line 83, in <module> Mar 17 19:27:34 ubuntu python3[19216]: from FV.service_utils import ServiceCore Mar 17 19:27:34 ubuntu python3[19216]: File "/home/user1/anaconda3/lib/python3.7/site-packages/FV/service_utils.py", line 21, in <mod Mar 17 19:27:34 ubuntu python3[19216]: from FV.F_V import FaceVerification Mar 17 19:27:34 ubuntu python3[19216]: File "/home/user1/anaconda3/lib/python3.7/site-packages/FV/F_V.py", line 17, in <module> Mar 17 19:27:34 ubuntu python3[19216]: from FV.models import resnet18, resnet50, resnet101 Mar 17 19:27:34 ubuntu python3[19216]: ModuleNotFoundError: No module named 'FV.models' So the only way to get this to work is to have models.py next to the client code as well. 
I can't understand why this is happening, as the client code doesn't even directly interact with models.py. What am I missing here? Update: In F_V.py I use PyTorch to load a pretrained model. I thought this was unrelated and that I was doing something wrong with regards to packaging in Python; however, it turns out this was indeed the culprit. Read the answer for more information.
This actually happened to be a Pytorch related issue, in which this whole mess has been caused by this simple command when loading the model inside F_V.py : if self.device == 'cpu': checkpoint = torch.load(self.model_checkpoint_path, map_location=torch.device('cpu')) else: checkpoint = torch.load(self.model_checkpoint_path) # the culprit! self.model = checkpoint['model'].module This way of storing and loading the model is very bad, the pretrained models were initially wrapper around by a nn.DataParallel and the person who saved them, did it like this : def save_checkpoint(epoch, epochs_since_improvement, model, metric_fc, optimizer, acc, is_best): print('saving checkpoint ...') state = {'epoch': epoch, 'epochs_since_improvement': epochs_since_improvement, 'acc': acc, 'model': model, 'metric_fc': metric_fc, 'optimizer': optimizer} # filename = 'checkpoint_' + str(epoch) + '_' + str(loss) + '.tar' filename = 'checkpoint.tar' torch.save(state, filename) # If this checkpoint is the best so far, store a copy so it doesn't get overwritten by a worse checkpoint if is_best: torch.save(state, 'BEST_checkpoint.tar') As you can see, he used the whole model ('model': model) in the state_dict and save that away. He should have used the state_dict(). This is bad since the same files/dir structure hierarchy /everything must be retained all the time, everywhere that this model is going to be used. and we were affected by this as well as you can see. All the client services relied on models.py and needed it to be next to them while they didn't even use it. Initially I thought therefore in order to solve the issue, we have to instantiate the models ourselves and then load the weights manually. if self.model_name == 'r18': self.model = resnet18(pretrained=False, use_se=use_se) elif self.model_name == 'r50': self.model = resnet50(pretrained=False, use_se=use_se) elif self.model_name == 'r101': self.model = resnet101(pretrained=False, use_se=use_se) else: raise Exception(f"Model name: '{self.model_name}' is not recognized.") # load the model weights self.model.load_state_dict(checkpoint['model'].module.state_dict()) Note that since the model was initially a nn.DataParallel model, in order to access the model itself, we use the .module property and then use the models state_dict() to initialized the model and hopefully this solves the issue. However, it seems this is not the case, and since the model is saved that way, it seems there is no way to get rid of such dependencies this way. instead convert your model into torch script and then save the model. This way you can get rid of all the nuisance . Solution 1: Try converting your model into torch script and then use that instead: def convert_model(model, input=torch.tensor(torch.rand(size=(1,3,112,112)))): model = torch.jit.trace(self.model, input) torch.jit.save(model,'/home/Rika/Documents/models/model.tjm') and then loaded this version instead: # load the model self.model = torch.jit.load('/home/Rika/Documents/models/model.tjm') Solution 2: simply save the model's state_dict() again and use that instead : I myself ended up doing : self.model = checkpoint['model'].module # create the new checkpoint based on what you need torch.save({'state_dict' : self.model.state_dict(), 'use_se':True}, '/home/Rika/Documents/BEST_checkpoint_r18_2.tar') and started using the new checkpoint and so far everything has been good
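For reference, a minimal sketch of the state_dict-based save/load pattern recommended above — model, optimizer and epoch are placeholders standing in for whatever exists in your training loop:
import torch

# saving: store only tensors, not the pickled model object
torch.save({'state_dict': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'epoch': epoch}, 'checkpoint.tar')

# loading: rebuild the architecture yourself, then restore the weights
checkpoint = torch.load('checkpoint.tar', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])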
https://stackoverflow.com/questions/60726057/
L1 regulariser Pytorch acting opposite to what I expect
I'm trying to add an L1 penalty to a specific layer of a neural network, and I have the code below (in which I attempt to add l1 penalty to the first layer). If I run it for lambda = 0 (i.e. no penalty), the output gets very close to the expected weights those being [10, 12, 2, 11, -0.25]) and if I run for enough epochs or reduce batch size it will get it exactly, as in the output below: mlp.0.weight Parameter containing: tensor([[ 9.8657, -11.8305, 2.0242, 10.8913, -0.1978]], requires_grad=True) Then, when I run it for a large lambda, say 1000, I would expect these weights to shrink towards zero as there is a large penalty being added to the loss that we are trying to minimise. However, the opposite happens and the weights explode, as in the output below (for lam = 1000) mlp.0.weight Parameter containing: tensor([[-13.9368, 9.9072, 2.2447, -11.6870, 26.7293]], requires_grad=True) If anyone could help me, that'd be great. I'm new to pytorch (but not the idea of regularisation), so I'm guessing it's something in my code that is the problem. Thanks import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader import numpy as np from sklearn.linear_model import LinearRegression class TrainDataset(Dataset): def __init__(self, data): self.data = data def __len__(self): return self.data.shape[0] def __getitem__(self, ind): x = self.data[ind][1:] y = self.data[ind][0] return x, y class TestDataset(TrainDataset): def __getitem__(self, ind): x = self.data[ind] return x torch.manual_seed(94) x_train = np.random.rand(1000, 5) y_train = x_train[:, 0] * 10 - x_train[:, 1] * 12 + x_train[:, 2] * 2 + x_train[:, 3] * 11 - x_train[:, 4] * 0.25 y_train = y_train.reshape(1000, 1) x_train.shape y_train.shape train_data = np.concatenate((y_train, x_train), axis=1) train_set = TrainDataset(train_data) batch_size = 100 train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True) class MLP(nn.Module): def __init__(self): super(MLP, self).__init__() self.mlp = nn.Sequential(nn.Linear(5, 1, bias=False)) def forward(self, x_mlp): out = self.mlp(x_mlp) return out device = 'cpu' model = MLP() optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.82) criterion = nn.MSELoss() epochs = 5 lam = 0 model.train() for epoch in range(epochs): losses = [] for batch_num, input_data in enumerate(train_loader): optimizer.zero_grad() x, y = input_data x = x.to(device).float() y = y.reshape(batch_size, 1) y = y.to(device) output = model(x) for name, param in model.named_parameters(): if name == 'mlp.0.weight': l1_norm = torch.norm(param, 1) loss = criterion(output, y) + lam * l1_norm loss.backward() optimizer.step() print('\tEpoch %d | Batch %d | Loss %6.2f' % (epoch, batch_num, loss.item())) for name, param in model.named_parameters(): if param.requires_grad: print(name) print(param)
I found that if I use Adagrad as the optimiser instead of SGD, it acts as expected. I will need to look into the difference between the two now, but this can be considered answered.
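For anyone who lands here looking for the penalty itself, a minimal sketch of how the L1 term from the question is usually written (this is only an illustration, not an explanation of the Adagrad/SGD difference):
# inside the training loop, after computing `output`
l1_norm = model.mlp[0].weight.abs().sum()                    # first layer only, as in the question
# l1_norm = sum(p.abs().sum() for p in model.parameters())   # or penalise every parameter
loss = criterion(output, y) + lam * l1_norm
loss.backward()
optimizer.step()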
https://stackoverflow.com/questions/60742211/
Invalid device id when using pytorch dataparallel!
Environment: Win10 Pytorch 1.3.0 python3.7 Problem: I am using dataparallel in Pytorch to use the two 2080Ti GPUs. Code are like below: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = Darknet(opt.model_def) model.apply(weights_init_normal) model = nn.DataParallel(model, device_ids=[0, 1]).to(device) But when run this code, I encounter errors below: Traceback (most recent call last): File "C:/Users/Administrator/Desktop/PyTorch-YOLOv3-master/train.py", line 74, in <module> model = nn.DataParallel(model, device_ids=[0, 1]).to(device) File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 133, in __init__ _check_balance(self.device_ids) File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 19, in _check_balance dev_props = [torch.cuda.get_device_properties(i) for i in device_ids] File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 19, in <listcomp> dev_props = [torch.cuda.get_device_properties(i) for i in device_ids] File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\cuda\__init__.py", line 337, in get_device_properties raise AssertionError("Invalid device id") AssertionError: Invalid device id When I debug into it, I find the function device_count() in get_device_properties() returns 1 while I have 2 GPU on my machine. And torch._C._cuda_getDeviceCount() returns 2 in Anaconda Prompt. What is wrong? Qustion: How to solve this problem? How can I manage to use the two GPUs using dataparallel? Thank you guys!
Basically, as pointed out by @ToughMind, we need to specify os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1" It depends, though, on the CUDA devices available on one's machine, so if someone has one GPU it may be appropriate to put, for example, os.environ["CUDA_VISIBLE_DEVICES"] = "0"
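A small sketch of where this usually goes — the variable must be set before CUDA is initialised, so it is safest to set it before importing torch:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"   # expose both GPUs to this process

import torch
print(torch.cuda.device_count())              # should now report 2
# model = torch.nn.DataParallel(model, device_ids=[0, 1]).to("cuda")  # as in the question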
https://stackoverflow.com/questions/60750288/
What is the multi-label binary classification?
Reading the PyTorch docs for BCEWithLogitsLoss: where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of the sample in the batch and p_c is the weight of the positive answer for the class c. What is multi-label binary classification? Binary assumes only two labels AFAIK
Multi-label in terms of binary classification means that both classes can be the true class for a single example. For example, in the case of a dog-cat classifier, for an image containing both a dog and a cat, it'll predict both dog and cat. In the multi-label problem there is no constraint on how many of the classes the instance can be assigned to. Wiki
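To make this concrete, a minimal sketch of BCEWithLogitsLoss in the multi-label case, where a single sample may have several positive classes at once:
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()
logits = torch.randn(4, 3)               # 4 samples, 3 classes (raw scores)
targets = torch.tensor([[1., 0., 1.],    # this sample is both class 0 and class 2
                        [0., 1., 0.],
                        [1., 1., 1.],
                        [0., 0., 0.]])
loss = loss_fn(logits, targets)
print(loss)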
https://stackoverflow.com/questions/60756374/
SpaCy-transformers regression output
I would like to have a regression output instead of the classification. For instance: instead of n classes I want a floating point output value from 0 to 1. Here is the minimalistic example from the package github page: import spacy from spacy.util import minibatch import random import torch is_using_gpu = spacy.prefer_gpu() if is_using_gpu: torch.set_default_tensor_type("torch.cuda.FloatTensor") nlp = spacy.load("en_trf_bertbaseuncased_lg") print(nlp.pipe_names) # ["sentencizer", "trf_wordpiecer", "trf_tok2vec"] textcat = nlp.create_pipe("trf_textcat", config={"exclusive_classes": True}) for label in ("POSITIVE", "NEGATIVE"): textcat.add_label(label) nlp.add_pipe(textcat) optimizer = nlp.resume_training() for i in range(10): random.shuffle(TRAIN_DATA) losses = {} for batch in minibatch(TRAIN_DATA, size=8): texts, cats = zip(*batch) nlp.update(texts, cats, sgd=optimizer, losses=losses) print(i, losses) nlp.to_disk("/bert-textcat") Is there an easy way to make trf_textcat work as a regressor? Or would it mean extending the library?
I have figured out a workaround: extract vector representations from the nlp pipeline as: vector_repres = nlp('Test text').vector After doing so for all the text entries, you end up with a fixed-dimensional representation of the texts. Assuming you have the continuous output values, feel free to use any estimator, including a neural network with a linear output. Note that the vector representation is an average of the vector embeddings of all the words in the text - it might be a sub-optimal solution for your case.
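Putting the workaround together, a minimal sketch (the texts and targets below are placeholders, and any scikit-learn regressor could stand in for Ridge):
import numpy as np
import spacy
from sklearn.linear_model import Ridge

nlp = spacy.load("en_trf_bertbaseuncased_lg")
texts = ["first example text", "second example text"]   # placeholder data
y = [0.2, 0.9]                                           # continuous targets in [0, 1]

X = np.vstack([nlp(t).vector for t in texts])            # fixed-size document vectors
reg = Ridge().fit(X, y)
print(reg.predict(X))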
https://stackoverflow.com/questions/60759015/
How to interpret the P numbers that fairseq generate produces?
Using fairseq-generate.py, with the transformer architecture, each translation produces a section like this: Why is it rare to discover new marine mammal species? S-0 Why is it rare to discover new marine mam@@ mal species ? H-0 -0.0643349438905716 Pourquoi est-il rare de découvrir de nouvelles espèces de mammifères marins? P-0 -0.0763 -0.1849 -0.0956 -0.0946 -0.0735 -0.1150 -0.1301 -0.0042 -0.0321 -0.0171 -0.0052 -0.0062 -0.0015 With this explanation: H is the hypothesis along with an average log-likelihood; and P is the positional score per token position, including the end-of-sentence marker I'm wondering if it is reasonable to say a low (absolute) number in the P row means higher confidence in that particular word? E.g. does -0.07 for "Pourquoi" means it was happier about that than it was (-0.1849) for "est-il"? And the low -0.0015 at the end means it was really confident the sentence should end there. Background: What I'm trying to work out is if I can use either the H number, or somehow to use the individual P numbers, to get a confidence measure in its translation. I've been analyzing a handful of translations against the H number and didn't notice much correspondence between it and my subjective opinion of translation quality. But I've a couple where I thought it was particularly poor - it had missed a bit of key information - and the final P number was a relatively high -0.6099 and -0.3091 (The final P number is -0.11 or so on most of them.)
Q: I'm wondering if it is reasonable to say a low (absolute) number in the P row means higher confidence in that particular word? Yes. As the docs says, "P is the positional score per token position". The score is actually the log probability, therefore the higher (i.e., the lower absolute number) the more "confident". The source-code may not be that easy to follow, but the scores are generated by the SequenceScorer, and there you can see that scores are normalized (which includes a log either if when you're using a single model or an ensemble). Moreover, when printing the scores, they convert them from base e to 2: print('P-{}\t{}'.format( sample_id, ' '.join(map( lambda x: '{:.4f}'.format(x), # convert from base e to base 2 hypo['positional_scores'].div_(math.log(2)).tolist(), )) Q: What I'm trying to work out is if I can use either the H number, or somehow to use the individual P numbers, to get a confidence measure in its translation. It turns out that the H value is simply the average of the P values, as you can see here: score_i = avg_probs_i.sum() / tgt_len also converted to base 2. You can check that in your example: import numpy as np print(np.mean([-0.0763,-0.1849 ,-0.0956 ,-0.0946 ,-0.0735 ,-0.1150 ,-0.1301 ,-0.0042 ,-0.0321 ,-0.0171 ,-0.0052 ,-0.0062 ,-0.0015])) # >>> -0.06433076923076922 Another measurement that is often used to assess the performance of a language model is Perplexity. And a good thing is that perplexity can be easily computed based on the P values, as shown in the Language Model example of the fairseq repository: # Compute perplexity for a sequence en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp() # tensor(15.1474) I'm not an expert on NLP, so I can't really tell you which one you should use in your case.
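As a small follow-up: if, as derived above, the printed H is the average per-token log2-probability, it can be turned into a per-token probability or a perplexity directly (the number below reuses the example from the question):
H = -0.0643349438905716      # printed H: average per-token log2 probability
per_token_prob = 2 ** H      # ~0.956, geometric mean of the per-token probabilities
perplexity = 2 ** (-H)       # ~1.046
print(per_token_prob, perplexity)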
https://stackoverflow.com/questions/60765496/
when I run my network. I got an error,one of the variables needed for gradient computation has been modified by an inplace operation
This is a part of code in my train.py: output_c1[output_c1 > 0.5] = 1. output_c1[output_c1 < 0.5] = 0. dice = DiceLoss() loss = dice(output_c1,labels) optimizer.zero_grad() loss.backward() optimizer.step() The error is: Traceback (most recent call last): File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-78553e2886de>", line 1, in <module> runfile('F:/experiment_code/U-net/train.py', wdir='F:/experiment_code/U-net') File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "F:/experiment_code/U-net/train.py", line 99, in <module> loss.backward() File "D:\Anaconda3\lib\site-packages\torch\tensor.py", line 107, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "D:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 2, 224, 224]], which is output 0 of SigmoidBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). When I comment this two lines of code: output_c1[output_c1 > 0.5] = 1. output_c1[output_c1 < 0.5] = 0. it can run. I think the error comes from here but I don't know how to solve it.
output_c1[output_c1 > 0.5] = x is an in-place operation, meaning that it directly modifies the tensor instead of returning a modified copy. Hence it makes the computation of the gradient impossible. To solve your use case, you could use the function torch.round to binarize your output: binary_output_c1 = torch.round(output_c1) # and don't forget to change the computation of your loss function! dice = DiceLoss() loss = dice(binary_output_c1,labels)
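An out-of-place alternative to the boolean indexing, in case you prefer to keep the thresholding explicit (note that, like rounding, a hard threshold has zero gradient almost everywhere):
binary_output_c1 = torch.where(output_c1 > 0.5,
                               torch.ones_like(output_c1),
                               torch.zeros_like(output_c1))
loss = dice(binary_output_c1, labels)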
https://stackoverflow.com/questions/60771047/
RuntimeError in Pytorch when increasing batch size to more than 1
This code for my custom data loader runs smoothly with batch_size=1, but when I increase batch size I get the following Error: RuntimeError: Expected object of scalar type Double but got scalar type Long for sequence element 1 in sequence argument at position #1 'tensors' import numpy as np import matplotlib.pyplot as plt import matplotlib matplotlib.use("TkAgg") import os, h5py import PIL #------------------------------ import torch from torch.utils.data import Dataset, DataLoader from torchvision import transforms #------------------------------ from data_augmentation import * #------------------------------ dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor class NiftiDataset(Dataset): def __init__(self,transformation_params,data_path, mode='train',transforms=None ): """ Parameters: data_path (string): Root directory of the preprocessed dataset. mode (string, optional): Select the image_set to use, ``train``, ``valid`` transforms (callable, optional): Optional transform to be applied on a sample. """ self.data_path = data_path self.mode = mode self.images = [] self.labels = [] self.W_maps = [] self.centers = [] self.radiuss = [] self.pixel_spacings = [] self.transformation_params = transformation_params self.transforms = transforms #------------------------------------------------------------------------------------- if self.mode == 'train': self.data_path = os.path.join(self.data_path,'train_set') elif self.mode == 'valid': self.data_path = os.path.join(self.data_path,'validation_set') #------------------------------------------------------------------------------------- for _, _, f in os.walk(self.data_path): for file in f: hdf_file = os.path.join(self.data_path,file) data = h5py.File(hdf_file,'r') # Dictionary # Preprocessing of Input Image and Label patch_img, patch_gt, patch_wmap = PreProcessData(file, data, self.mode, self.transformation_params) #print(type(data)) self.images.append(patch_img) # 2D image #print('image shape is : ',patch_img.shape) self.labels.append(patch_gt) # 2D label #print('label shape is : ',patch_img.shape) self.W_maps.append(patch_wmap) # Weight_Map # self.centers.append(data['roi_center'][:]) # [x,y] # self.radiuss.append(data['roi_radii'][:]) # [R_min,R_max] # self.pixel_spacings.append(data['pixel_spacing'][:]) # [x , y , z] def __len__(self): return len(self.images) def __getitem__(self, index): image = self.images[index] label = self.labels[index] W_map = self.W_maps[index] if self.transforms is not None: image, label, W_maps = self.transforms(image, label, W_map) return image, label, W_map #================================================================================================= if __name__ == '__main__': # Test Routinue to check your threaded dataloader # ACDC dataset has 4 labels n_labels = 4 path = './hdf5_files' batch_size = 1 # Data Augmentation Parameters # Set patch extraction parameters size1 = (128, 128) patch_size = size1 mm_patch_size = size1 max_size = size1 train_transformation_params = { 'patch_size': patch_size, 'mm_patch_size': mm_patch_size, 'add_noise': ['gauss', 'none1', 'none2'], 'rotation_range': (-5, 5), 'translation_range_x': (-5, 5), 'translation_range_y': (-5, 5), 'zoom_range': (0.8, 1.2), 'do_flip': (False, False), } valid_transformation_params = { 'patch_size': patch_size, 'mm_patch_size': mm_patch_size} transformation_params = { 'train': train_transformation_params, 'valid': valid_transformation_params, 'n_labels': 4, 'data_augmentation': True, 'full_image': False, 'data_deformation': 
False, 'data_crop_pad': max_size} #==================================================================== dataset = NiftiDataset(transformation_params=transformation_params,data_path=path,mode='train') dataloader = DataLoader(dataset=dataset,batch_size=2,shuffle=True,num_workers=0) dataiter = iter(dataloader) data = dataiter.next() images, labels,W_map = data #=============================================================================== # Data Visualization #=============================================================================== print('image: ',images.shape,images.type(),'label: ',labels.shape,labels.type(), 'W_map: ',W_map.shape,W_map.type()) img = transforms.ToPILImage()(images[0,0,:,:,0].float()) lbl = transforms.ToPILImage()(labels[0,0,:,:].float()) W_mp = transforms.ToPILImage()(W_map [0,0,:,:].float()) plt.subplot(1,3,1) plt.imshow(img,cmap='gray',interpolation=None) plt.title('image') plt.subplot(1,3,2) plt.imshow(lbl,cmap='gray',interpolation=None) plt.title('label') plt.subplot(1,3,3) plt.imshow(W_mp,cmap='gray',interpolation=None) plt.title('Weight Map') plt.show() I have noticed some strange things such as Tensor types are different even though images and labels and weight maps are images with same type and size. The Error Traceback: Traceback (most recent call last): File "D:\Saudi_CV\Vibot\Smester_2\2_Medical Image analysis\Project_2020\OUR_Project\data_loader.py", line 118, in <module> data = dataiter.next() File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch return self.collate_fn(data) File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in default_collate return [default_collate(samples) for samples in transposed] File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in <listcomp> return [default_collate(samples) for samples in transposed] File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 64, in default_collate return default_collate([torch.as_tensor(b) for b in batch]) File "F:\Download_2019\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate return torch.stack(batch, 0, out=out) RuntimeError: Expected object of scalar type Double but got scalar type Long for sequence element 1 in sequence argument at position #1 'tensors' [Finished in 19.9s with exit code 1]
The problem was solved by the solution explained on this page (link): convert each item to a FloatTensor in __getitem__ so that all samples in a batch share the same dtype: image = torch.from_numpy(self.images[index]).type(torch.FloatTensor) label = torch.from_numpy(self.labels[index]).type(torch.FloatTensor) W_map = torch.from_numpy(self.W_maps[index]).type(torch.FloatTensor)
https://stackoverflow.com/questions/60772545/
How can I create a PyTorch tensor with all zeroes and a 1 in the middle of the third dimension?
I have a tensor, torch.Size([161, 161, 11]) and I want to set it all to zeroes, which I can do with: self.conv1.weight.data = torch.zeros(self.conv1.weight.data.size()) Except, I want column 6 (the middle) of the third dimension to be all ones. How do I do that?
You can assign it afterwards: self.conv1.weight.data[:, :, 6] = 1.0 Or in case this tensor is trainable: with torch.no_grad(): self.conv1.weight.data[:, :, 6] = 1.0
https://stackoverflow.com/questions/60775139/
Torch/numpy: groupby pandas alternative
Is there some efficient way to rewrite the following code to avoid installing & importing pandas and using torch/numpy instead? I am used to work with pandas, so I wrote it like this, but I am trying to learn numpy and torch, so I am looking for alternative solutions that do not use pandas. bins = torch.LongTensor(3072).random_(0, 35) weights = torch.rand((3072)) df = pd.DataFrame({'weights': weights.numpy(), 'bins': bins.numpy()}) bins_sum = df.groupby('bins').weights.sum().values So, basically: how, without using pandas, get a sum of weights grouped by bins?
You can compute unique elements of bins via torch.unique (the values to group by) and then use index masks for accessing the corresponding elements in weights: unique = torch.unique(bins) result = torch.zeros(unique.size(), dtype=weights.dtype) for i, val in enumerate(unique): result[i] += weights[bins == val].sum()
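If the bin ids are contiguous integers starting at 0 (as in the example, 0–34), a loop-free sketch using index_add_ does the same grouped sum; bins that never occur simply keep a 0, unlike the pandas groupby which omits them:
num_bins = int(bins.max()) + 1
result = torch.zeros(num_bins, dtype=weights.dtype)
result.index_add_(0, bins, weights)    # result[b] == weights[bins == b].sum()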
https://stackoverflow.com/questions/60775672/
Considering transforms in torchaudio, is window length etc. in audio frames or milliseconds
I am preprocessing audio recordings using torchaudio. The documentation lists win_length (int) – Window size. (Default: n_fft) Does this value, which defaults to 400, represent the window length in frames or milliseconds?
The n_fft variable is used in the Spectrogram class, and its forward function's documentation mentions that n_fft is the number of Fourier bins.
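In other words, win_length is given in audio samples rather than milliseconds; converting it only needs the sample rate. A small sketch, assuming a 16 kHz recording:
sample_rate = 16000                     # assumption: 16 kHz audio
win_length = 400                        # default window size, in samples
win_ms = win_length / sample_rate * 1000
print(win_ms)                           # 25.0 milliseconds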
https://stackoverflow.com/questions/60777112/
Access the output of several layers of pretrained DistilBERT model
I am trying to access the output embeddings from several different layers of the pretrained "DistilBERT" model. ("distilbert-base-uncased") bert_output = model(input_ids, attention_mask=attention_mask) The bert_output seems to return only the embedding values of the last layer for the input tokens.
If you want to get the output of all the hidden layers, you need to add the output_hidden_states=True kwarg to your config. Your code will look something like from transformers import DistilBertModel, DistilBertConfig config = DistilBertConfig.from_pretrained('distilbert-base-cased', output_hidden_states=True) model = DistilBertModel.from_pretrained('distilbert-base-cased', config=config) The hidden layers will be made available as bert_output[2]
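A short usage sketch (the exact position of the hidden states in the returned tuple can differ between transformers versions, so it is safer to take the last element; the uncased checkpoint from the question is assumed):
import torch
from transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizer

config = DistilBertConfig.from_pretrained('distilbert-base-uncased', output_hidden_states=True)
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased', config=config)

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
outputs = model(input_ids)
hidden_states = outputs[-1]        # tuple: embedding output plus one tensor per layer
print(len(hidden_states), hidden_states[0].shape)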
https://stackoverflow.com/questions/60780181/
Does the generator's optimizer also train the discriminator?
While learning about GANs, I noticed that the code examples exhibit this pattern: The discriminator is trained like this: d_optim.zero_grad() real_pred = d(real_batch) d_loss = d_loss_fn(real_pred, torch.ones(real_batch_size, 1)) d_loss.backward() fake_pred = d(g(noise_batch).detach()) d_loss = d_loss_fn(fake_pred, torch.zeros(noise_batch_size, 1)) d_loss.backward() d_optim.step() The generator is trained like this: g_optim.zero_grad() fake_pred = d(g(noise_batch)) g_loss = g_loss_fn(fake_pred, torch.ones(noise_batch_size, 1)) g_loss.backward() g_optim.step() It is mentioned that d(g(noise_batch).detach()) is written for the discriminator instead of d(g(noise_batch)) to prevent d_optim.step() from training g, but nothing is said about d(g(noise_batch)) for the generator; would g_optim.step() also train d? In fact, why do we d(g(noise_batch).detach()) if, for example, d_optim = torch.optim.SGD(d.parameters(), lr=0.001)? Does this not specify that d.parameters() and not also g.parameters() are to be updated?
TLDR: optimizer will update only the parameters specified to it, whereas backward() call computes the gradients for all variables in the computation graph. So, it is useful to detach() the variables for which gradient computation is not required at that instant. I believe the answer lies in the way things are implemented within PyTorch. tensor.detach() creates a tensor that shares storage with tensor that does not require grad. So, effectively, you cut off the computation graph. That is, doing fake_pred = d(g(noise_batch).detach()) will detach (cut off) the computation graph of the generator. When you call backward() on the loss, gradients are calculated for the entire computation graph (irrespective of whether optimizer uses it or not). Thus, cutting off the generator part will avoid the gradient computations for the generator weights (since they are not required). Also, only the parameters passed to particular optimizer are updated when optimizer.step() is called. So, the g_optim will only optimize the parameters passed to it (You don't explicitly mention which parameters are passed to g_optim). Similarly, d_optim will only update d.parameters() since you explicitly specify that.
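To make the last point explicit, a small sketch of how the two optimizers are typically constructed (d and g are the networks from the question) — each optimizer is handed only its own network's parameters, so g_optim.step() can never change d's weights even though backward() fills in d's .grad attributes:
d_optim = torch.optim.SGD(d.parameters(), lr=0.001)   # steps only the discriminator
g_optim = torch.optim.SGD(g.parameters(), lr=0.001)   # steps only the generator
# The stale gradients accumulated in d during the generator step are wiped
# by d_optim.zero_grad() at the start of the next discriminator update.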
https://stackoverflow.com/questions/60782353/
My PyTorch Conv1d with an identity kernel does not produce the same output as the input
I have: print('\ninp', inp.min(), inp.mean(), inp.max()) print(inp) out = self.conv1(inp) print('\nout1', out.min(), out.mean(), out.max()) print(out) quit() My min, mean and max for my inp is: inp tensor(9.0060e-05) tensor(0.1357) tensor(2.4454) For my output, I have: out1 tensor(4.8751, grad_fn=<MinBackward1>) tensor(21.8416, grad_fn=<MeanBackward0>) tensor(54.9332, grad_fn=<MaxBackward1>) My self.conv1 is: self.conv1 = torch.nn.Conv1d( in_channels=161, out_channels=161, kernel_size=11, stride=1, padding=5) self.conv1.weight.data = torch.zeros(self.conv1.weight.data.size()) self.conv1.weight.data[:, :, 5] = 1.0 self.conv1.bias.data = torch.zeros(self.conv1.bias.data.size()) So my weights look like: tensor([0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]) So if I understand how convolution works, this should produce the same output. But it doesn't. What am I doing wrong?
Always try to provide a Minimal, Reproducible Example. It shouldn't. You are probably forgetting the summation. As stated in the docs: In the simplest case, the output value of the layer with input size (N, C_in, L) and output (N, C_out, L_out) can be precisely described as: where ⋆ is the valid cross-correlation operator, N is a batch size, C denotes a number of channels, L is a length of signal sequence. Notice that, in your example, the mean after the conv (i.e., 21.8416) is approx. 161 times the mean before (i.e., 161 * 0.1357), and this is not a coincidence. Notice the same happening in the code below: import torch torch.manual_seed(0) # define the fake input data x = torch.rand(1, 3, 5) # >>> x # tensor([[[0.4963, 0.7682, 0.0885, 0.1320, 0.3074], # [0.6341, 0.4901, 0.8964, 0.4556, 0.6323], # [0.3489, 0.4017, 0.0223, 0.1689, 0.2939]]]) # define the conv conv1 = torch.nn.Conv1d(3, 3, kernel_size=5, stride=1, padding=2) conv1.weight.data = torch.zeros(conv1.weight.data.size()) conv1.weight.data[:, :, 2] = 1.0 conv1.bias.data = torch.zeros(conv1.bias.data.size()) # print mean before print(x.mean()) # tensor(0.4091) # print mean after print(conv1(x).mean()) # tensor(1.2273, grad_fn=<MeanBackward0>) See? After the conv, the mean is 3 times the original one. As @jodag said, if you want an identity, you can do like this: import torch torch.manual_seed(0) # define the fake input data x = torch.rand(1, 3, 5) # >>> x # tensor([[[0.4963, 0.7682, 0.0885, 0.1320, 0.3074], # [0.6341, 0.4901, 0.8964, 0.4556, 0.6323], # [0.3489, 0.4017, 0.0223, 0.1689, 0.2939]]]) # define the conv conv1 = torch.nn.Conv1d(3, 3, kernel_size=5, stride=1, padding=2) torch.nn.init.zeros_(conv1.weight) torch.nn.init.zeros_(conv1.bias) # set identity kernel conv1.weight.data[:, :, 2] = torch.eye(3, 3) # print mean before print(x.mean()) # tensor(0.4091) # print mean after print(conv1(x).mean()) # tensor(0.4091, grad_fn=<MeanBackward0>)
https://stackoverflow.com/questions/60782616/
Unable to load model weights while predicting (using pytorch)
I have trained a Mask RCNN network using PyTorch and am trying to use the obtained weights to predict the location of apples in an image.. I am using the dataset from this paper, and here is the github link to code being used I am simply following the instructions as provided in the ReadMe file.. Here is the command i wrote in prompt (removed personal info) python predict_rcnn.py --data_path "my_directory\datasets\apples-minneapple\detection" --output_file "my_directory\samples\apples\predictions" --weight_file "my_directory\samples\apples\weights\model_19.pth" --mrcnn model_19.pth is the file with all the weights generated after the 19th epoch Error is as follows: Loading model Traceback (most recent call last): File "predict_rcnn.py", line 122, in <module> main(args) File "predict_rcnn.py", line 77, in main model.load_state_dict(checkpoint['model'], strict=False) KeyError: 'model' I will paste predict_rcnn.py for convenience: import os import torch import torch.utils.data import torchvision import numpy as np from data.apple_dataset import AppleDataset from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor import utility.utils as utils import utility.transforms as T ###################################################### # Predict with either a Faster-RCNN or Mask-RCNN predictor # using the MinneApple dataset ###################################################### def get_transform(train): transforms = [] transforms.append(T.ToTensor()) if train: transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) def get_maskrcnn_model_instance(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return model def get_frcnn_model_instance(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model def main(args): num_classes = 2 device = args.device # Load the model from print("Loading model") # Create the correct model type if args.mrcnn: model = get_maskrcnn_model_instance(num_classes) else: model = get_frcnn_model_instance(num_classes) # Load model parameters and keep on CPU checkpoint = torch.load(args.weight_file, map_location=device) #checkpoint = torch.load(args.weight_file, map_location=lambda storage, loc: storage) model.load_state_dict(checkpoint['model'], strict=False) model.eval() print("Creating data loaders") dataset_test = AppleDataset(args.data_path, get_transform(train=False)) data_loader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1, shuffle=False, num_workers=1, 
collate_fn=utils.collate_fn) # Create output directory base_path = os.path.dirname(args.output_file) if not os.path.exists(base_path): os.makedirs(base_path) # Predict on bboxes on each image f = open(args.output_file, 'a') for image, targets in data_loader_test: image = list(img.to(device) for img in image) outputs = model(image) for ii, output in enumerate(outputs): img_id = targets[ii]['image_id'] img_name = data_loader_test.dataset.get_img_name(img_id) print("Predicting on image: {}".format(img_name)) boxes = output['boxes'].detach().numpy() scores = output['scores'].detach().numpy() im_names = np.repeat(img_name, len(boxes), axis=0) stacked = np.hstack((im_names.reshape(len(scores), 1), boxes.astype(int), scores.reshape(len(scores), 1))) # File to write predictions to np.savetxt(f, stacked, fmt='%s', delimiter=',', newline='\n') if __name__ == "__main__": import argparse parser = argparse.ArgumentParser(description='PyTorch Detection') parser.add_argument('--data_path', required=True, help='path to the data to predict on') parser.add_argument('--output_file', required=True, help='path where to write the prediction outputs') parser.add_argument('--weight_file', required=True, help='path to the weight file') parser.add_argument('--device', default='cuda', help='device to use. Either cpu or cuda') model = parser.add_mutually_exclusive_group(required=True) model.add_argument('--frcnn', action='store_true', help='use a Faster-RCNN model') model.add_argument('--mrcnn', action='store_true', help='use a Mask-RCNN model') args = parser.parse_args() main(args)
There is no 'model' parameter in the saved checkpoint. If you look in train_rcnn.py:106: torch.save(model.state_dict(), os.path.join(args.output_dir, 'model_{}.pth'.format(epoch))) you see that they save just the model parameters. It should've been something like: torch.save({ "model": model.state_dict(), "optimizer": optimizer.state_dict(), "lr_scheduler": lr_scheduler.state_dict() }, os.path.join(args.output_dir, 'model_{}.pth'.format(epoch))) so then after loading you get a dictionary with 'model', and the other parameters they appear to be wanting to keep. This seems to be a bug in their code.
https://stackoverflow.com/questions/60795375/
Mapping a list of label with one hot encoding
How can I do the same thing when 'label' is a list? For example: label = [2,4,6,1,7...,9] label = 3 NumClass = 10 NumRows = 100 mask = torch.zeros(100, 64) ones = torch.ones(1, 64) ElementsPerClass = NumRows//NumClass mask [ ElementsPerClass*label : ElementsPerClass*(label+1) ] = ones
You are looking for scatter: NumRows = len(label) mask = torch.zeros((NumRows, NumClass)).scatter_(dim=1, index=torch.tensor(label, dtype=torch.long)[:, None], src=torch.ones(NumRows, 1))
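If you only need a plain one-hot encoding of the label list, a minimal sketch using torch.nn.functional.one_hot (available in recent PyTorch versions):
import torch
import torch.nn.functional as F

label = [2, 4, 6, 1, 7, 9]      # example labels
NumClass = 10
mask = F.one_hot(torch.tensor(label), num_classes=NumClass).float()
print(mask.shape)               # torch.Size([6, 10])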
https://stackoverflow.com/questions/60795922/
How to convert a trained model to a function?
I have finished training a model, and what I want to do next is hand it over to my colleagues who know nothing about deep learning. I just want to give them a function that they can run without installing TensorFlow or Python on their machines, or maybe just Python (ideally I would love it to run on Matlab). Is this doable? How can I abstract everything away or package it up for them? I read about deployment of models but it's all about servers and stuff, which is not what I want. PS: assume a TF model for now.
Basically, there's ONNX, which aims at assisting in deploying trained models. I do not have much experience with it, but I know it's not always a straightforward procedure.
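As a rough illustration of what the ONNX route looks like on the PyTorch side (the question mentions TF, so treat this only as a sketch of the idea; the input shape and file name are assumptions): you export the trained model once, and your colleagues only need the onnxruntime package to run it:
import numpy as np
import torch
import onnxruntime as ort

# export (done once, by you) -- `model` is the trained torch.nn.Module
dummy_input = torch.randn(1, 3, 224, 224)            # assumed input shape
torch.onnx.export(model, dummy_input, "model.onnx")

# inference (done by your colleagues, no deep-learning framework needed)
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
out = sess.run(None, {input_name: np.random.randn(1, 3, 224, 224).astype(np.float32)})
print(out[0].shape)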
https://stackoverflow.com/questions/60795970/
Certain members of a torch module aren't moved to GPU even if model.to(device) is called
An mwe is as follows: import torch import torch.nn as nn class model(nn.Module): def __init__(self): super(model,self).__init__() self.mat = torch.randn(2,2) def forward(self,x): print('self.mat.device is',self.mat.device) x = torch.mv(self.mat,x) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') m = model() m.to(device) x = torch.tensor([2.,1.]) x = x.to(device) m(x) The output is self.mat.device is cpu and right after that comes Traceback (most recent call last): File "Z:\cudatest.py", line 21, in <module> print(m(x)) File "E:\Python37\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\cudatest.py", line 11, in forward x = torch.mv(self.mat,x) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_mv The code works fine if I set device = torch.device('cpu'). It seems that the problem is model.mat is not moved to GPU even after m.to(device) is called. Why doesn't whis work? How can I fix this? Please note the following: Even though this particular example can be fixed by using self.mat = nn.Linear(2,2) and x = self.mat(x) instead, in my original program, I need a temporary tensor to store some data in forward() that is also used in some arithmetics. How can I construct such a tensor and send it to GPU when calling m.to(device) It is not known in advance whether the computer has GPU or not. Therefore, writing self.mat = self.mat.cuda() is not a good solution for my case.
PyTorch applies Module methods such as .cpu(), .cuda() and .to() only to sub-modules, parameters and buffers, but NOT to regular class members. PyTorch has no way of knowing that self.mat, in your case, is an actual tensor that should be moved around. Once you decide whether your mat should be a parameter or a buffer, simply register it accordingly, e.g. class model(nn.Module): def __init__(self): super(model,self).__init__() self.register_buffer(name='mat', tensor=torch.randn(2,2))
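A quick check that registering the tensor as a buffer makes it follow .to() and appear in the state_dict:
m = model()     # the class defined just above, with register_buffer
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
m.to(device)
print(m.mat.device)               # now matches `device`
print('mat' in m.state_dict())    # True: buffers are saved along with parameters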
https://stackoverflow.com/questions/60796555/
Pytorch DataParallel doesn't work when the model contain tensor operation
If my model contains only nn.Module layers such as nn.Linear, nn.DataParallel works fine. x = torch.randn(100,10) class normal_model(torch.nn.Module): def __init__(self): super(normal_model, self).__init__() self.layer = torch.nn.Linear(10,1) def forward(self, x): return self.layer(x) model = normal_model() model = nn.DataParallel(model.to('cuda:0')) model(x) However, when my model contains a tensor operation such as the following class custom_model(torch.nn.Module): def __init__(self): super(custom_model, self).__init__() self.layer = torch.nn.Linear(10,5) self.weight = torch.ones(5,1, device='cuda:0') def forward(self, x): return self.layer(x) @ self.weight model = custom_model() model = torch.nn.DataParallel(model.to('cuda:0')) model(x) It gives me the following error RuntimeError: Caught RuntimeError in replica 1 on device 1. Original Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call result = self.forward(*input, **kwargs) File "", line 7, in forward return self.layer(x) @ self.weight RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:277 How to avoid this error when we have some tensor operations in our model?
I have no experience with DataParallel, but I think it might be because your tensor is not part of the model parameters. You can do this by writing: torch.nn.Parameter(torch.ones(5,1)) Note that you don't have to move it to the gpu when initializing, because now when you call model.to('cuda:0') this is done automatically. I can imagine that DataParallel uses the model parameters to move them to the appropriate gpu. See this answer for more on the difference between a torch tensor and torch.nn.Parameter. If you don't want the tensor values to be updated by backpropagation during training, you can add requires_grad=False. Another way that might work is to override the to method, and initialize the tensor in the forward pass: class custom_model(torch.nn.Module): def __init__(self): super(custom_model, self).__init__() self.layer = torch.nn.Linear(10,5) def forward(self, x): return self.layer(x) @ torch.ones(5,1, device=self.device) def to(self, device: str): new_self = super(custom_model, self).to(device) new_self.device = device return new_self or something like this: class custom_model(torch.nn.Module): def __init__(self, device:str): super(custom_model, self).__init__() self.layer = torch.nn.Linear(10,5) self.weight = torch.ones(5,1, device=device) def forward(self, x): return self.layer(x) @ self.weight def to(self, device: str): new_self = super(custom_model, self).to(device) new_self.device = device new_self.weight = torch.ones(5,1, device=device) return new_self
https://stackoverflow.com/questions/60799655/
how to get alignment or attention information for translations produced by a torch hub model?
The torch hub provides pretrained models, such as: https://pytorch.org/hub/pytorch_fairseq_translation/ These models can be used in python, or interactively with a CLI. With the CLI it is possible to get alignments, with the --print-alignment flag. The following code works in a terminal, after installing fairseq (and pytorch) curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - MODEL_DIR=wmt14.en-fr.fconv-py fairseq-interactive \ --path $MODEL_DIR/model.pt $MODEL_DIR \ --beam 5 --source-lang en --target-lang fr \ --tokenizer moses \ --bpe subword_nmt --bpe-codes $MODEL_DIR/bpecodes \ --print-alignment In python it is possible to specify the keyword args verbose and print_alignment: import torch en2fr = torch.hub.load('pytorch/fairseq', 'transformer.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt') fr = en2fr.translate('Hello world!', beam=5, verbose=True, print_alignment=True) However, this will only output the alignment as a logging message. And for fairseq 0.9 it seems to be broken and results in an error message (issue). Is there a way to access alignment information (or possibly even the full attention matrix) from python code?
I've browsed the fairseq codebase and found a hacky way to output alignment information. Since this requires editing the fairseq sourcecode itself, I don't think it's an acceptable solution. But maybe it helps someone (I'm still very interested in an answer on how to do this properly). Edit the sample() function and rewrite the return statement. Here is the whole function (to help you find it better, in the code), but only the last line should be changed: def sample(self, sentences: List[str], beam: int = 1, verbose: bool = False, **kwargs) -> List[str]: if isinstance(sentences, str): return self.sample([sentences], beam=beam, verbose=verbose, **kwargs)[0] tokenized_sentences = [self.encode(sentence) for sentence in sentences] batched_hypos = self.generate(tokenized_sentences, beam, verbose, **kwargs) return list(zip([self.decode(hypos[0]['tokens']) for hypos in batched_hypos], [hypos[0]['alignment'] for hypos in batched_hypos]))
https://stackoverflow.com/questions/60802614/
How to make a 3D tensor into one_hot encoding
For example, labels is a ground truth tensor of size [4, 224, 224], where 4 is the batch size and 224 is the height and width. Its dtype is torch.int64, and in my training code the label pixel values range from 0 to 255. I use my network for semantic segmentation.
From your question, it seems like you would like 256 integer labels, from 0 to 255. This can be done by the following: # let a be the (4, 224, 224) dim tensor labels = np.stack([(a == i).int() for i in range(256)]) print(labels.shape) #(256, 4, 224, 224) dimensional one-hot encoding Hope this helps!
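A pure-PyTorch alternative (it needs torch.nn.functional.one_hot, available in recent versions) that yields the usual [batch, classes, H, W] layout instead:
import torch
import torch.nn.functional as F

a = torch.randint(0, 256, (4, 224, 224))        # int64 label map with values 0-255
one_hot = F.one_hot(a, num_classes=256)          # (4, 224, 224, 256)
one_hot = one_hot.permute(0, 3, 1, 2).float()    # (4, 256, 224, 224)
print(one_hot.shape)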
https://stackoverflow.com/questions/60807705/
Using WeightedRandomSampler in PyTorch
I need to implement a multi-label image classification model in PyTorch. However my data is not balanced, so I used the WeightedRandomSampler in PyTorch to create a custom dataloader. But when I iterate through the custom dataloader, I get the error : IndexError: list index out of range Implemented the following code using this link :https://discuss.pytorch.org/t/balanced-sampling-between-classes-with-torchvision-dataloader/2703/3?u=surajsubramanian def make_weights_for_balanced_classes(images, nclasses): count = [0] * nclasses for item in images: count[item[1]] += 1 weight_per_class = [0.] * nclasses N = float(sum(count)) for i in range(nclasses): weight_per_class[i] = N/float(count[i]) weight = [0] * len(images) for idx, val in enumerate(images): weight[idx] = weight_per_class[val[1]] return weight weights = make_weights_for_balanced_classes(train_dataset.imgs, len(full_dataset.classes)) weights = torch.DoubleTensor(weights) sampler = WeightedRandomSampler(weights, len(weights)) train_loader = DataLoader(train_dataset, batch_size=4,sampler = sampler, pin_memory=True) Based on the answer in https://stackoverflow.com/a/60813495/10077354, the following is my updated code. But then too when I create a dataloader :loader = DataLoader(full_dataset, batch_size=4, sampler=sampler), len(loader) returns 1. class_counts = [1691, 743, 2278, 1271] num_samples = np.sum(class_counts) labels = [tag for _,tag in full_dataset.imgs] class_weights = [num_samples/class_counts[i] for i in range(len(class_counts)] weights = [class_weights[labels[i]] for i in range(num_samples)] sampler = WeightedRandomSampler(torch.DoubleTensor(weights), num_samples) Thanks a lot in advance ! I included an utility function based on the accepted answer below : def sampler_(dataset): dataset_counts = imageCount(dataset) num_samples = sum(dataset_counts) labels = [tag for _,tag in dataset] class_weights = [num_samples/dataset_counts[i] for i in range(n_classes)] weights = [class_weights[labels[i]] for i in range(num_samples)] sampler = WeightedRandomSampler(torch.DoubleTensor(weights), int(num_samples)) return sampler The imageCount function finds number of images of each class in the dataset. Each row in the dataset contains the image and the class, so we take the second element in the tuple into consideration. def imageCount(dataset): image_count = [0]*(n_classes) for img in dataset: image_count[img[1]] += 1 return image_count
That code looks a bit complex... You can try the following: #Let there be 9 samples and 1 sample in class 0 and 1 respectively class_counts = [9.0, 1.0] num_samples = sum(class_counts) labels = [0, 0,..., 0, 1] #corresponding labels of samples class_weights = [num_samples/class_counts[i] for i in range(len(class_counts))] weights = [class_weights[labels[i]] for i in range(int(num_samples))] sampler = WeightedRandomSampler(torch.DoubleTensor(weights), int(num_samples))
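To make the whole pipeline concrete, here is a self-contained sketch with made-up toy data (the dataset, counts, and batch size are placeholders, not taken from the question) showing the per-sample weights feeding into a DataLoader:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# toy imbalanced dataset: 90 samples of class 0, 10 samples of class 1
labels = torch.cat([torch.zeros(90, dtype=torch.long), torch.ones(10, dtype=torch.long)])
features = torch.randn(100, 8)
dataset = TensorDataset(features, labels)

class_counts = torch.bincount(labels).float()   # tensor([90., 10.])
class_weights = 1.0 / class_counts              # the rarer class gets a larger weight
sample_weights = class_weights[labels]          # one weight per sample

sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

for x_batch, y_batch in loader:
    print(y_batch.bincount(minlength=2))        # classes now appear roughly equally often
    break
```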
https://stackoverflow.com/questions/60812032/
How to load all 5 batches of CIFAR10 in a single data structure as on MNIST in PyTorch?
With MNIST I have a single file with the labels and a single file for the train set, so I simply do: self.data = datasets.MNIST(root='./data', train=True, download=True) Basically I create a set of labels (from 0-9) and save the i-th position of the image in the data structure, to create my custom tasks: def make_tasks(self): self.task_to_examples = {} #task 0-9 self.all_tasks = set(self.data.train_labels.numpy()) for i, digit in enumerate(self.data.train_labels.numpy()): if str(digit) not in self.task_to_examples: self.task_to_examples[str(digit)] = [] self.task_to_examples[str(digit)].append(i) I don't understand how to do the same thing using CIFAR10, because it is divided into 5 batches; I would like all the data in a single structure.
If your desired structure is {"class_id": [indices of the samples]}, then for CIFAR10 you can do something like this: import numpy as np import torchvision # set root accordingly cifar = torchvision.datasets.CIFAR10(root=".", train=True, download=True) task_to_examples = { str(task_id): np.where(cifar.targets == task_id)[0].tolist() for task_id in np.unique(cifar.targets) }
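If it helps, a small usage sketch reusing the cifar object created above: torchvision already exposes all five batches as one array, so both the index mapping and the raw data can be used directly (the class id 3 below is just an example):
```python
import numpy as np

targets = np.array(cifar.targets)     # all 50000 labels in a single array
images = cifar.data                   # all images in one (50000, 32, 32, 3) uint8 array

idx = np.where(targets == 3)[0]       # indices of class 3, equivalent to task_to_examples["3"]
print(len(idx), images[idx].shape)    # 5000 (5000, 32, 32, 3)
```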
https://stackoverflow.com/questions/60812518/
Scope of with torch.no_grad() in pytorch
with torch.no_grad(): input = Variable(input).cuda() target = Variable(target).cuda(non_blocking=True) y=model(input) # many things here Does the no_grad continue to have an effect outside the "with" scope?
The no_grad has no effect outside the "with" scope. According to this answer from a moderator on the pytorch blog: with torch.no_grad(): # No gradients in this block x = self.cnn(x) # Gradients as usual outside of it x = self.lstm(x) It is the purpose of the with statement in python. The variable used by the with (here torch.no_grad()) has only effect in the with context and not after. See the python doc for complete details.
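A tiny check you can run to convince yourself of the scope (this stand-alone snippet does not depend on the model above):
```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2
print(y.requires_grad)   # False -> no graph was built inside the block

z = x * 2
print(z.requires_grad)   # True -> gradient tracking is back outside the block
```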
https://stackoverflow.com/questions/60814026/
Find indices of elements equal to zero in a PyTorch Tensor
My question is nearly identical to this one, with the notable difference of being in PyTorch. I would prefer not to use the Numpy solution as this would involve moving data back to the CPU. I see that, as with Numpy, PyTorch has a nonzero function, however its where function (the solution in the Numpy thread I linked) has behavior different from Numpy's. The behavior I want is an is_zero() function as follows: >>> arr.nonzero() tensor([[0, 1], [1, 0]]) >>> arr.is_zero() tensor([[0, 0], [1, 1]])
You can make a boolean mask and then call nonzero(): (arr == 0).nonzero() For instance: arr = torch.randint(high=2, size=(3, 3)) tensor([[1, 1, 0], # (0, 2) [1, 1, 0], # (1, 2) [1, 0, 0]]) # (2, 1) and (2, 2) (arr == 0).nonzero() tensor([[0, 2], [1, 2], [2, 1], [2, 2]])
https://stackoverflow.com/questions/60817261/
Implementing a simple ResNet block with PyTorch
I'm trying to implement following ResNet block, which ResNet consists of blocks with two convolutional layers and a skip connection. For some reason it doesn't add the output of skip connection, if applied, or input to the output of convolution layers. The ResNet block has: Two convolutional layers with: 3x3 kernel no bias terms padding with one pixel on both sides 2d batch normalization after each convolutional layer The skip connection: simply copies the input if the resolution and the number of channels do not change. if either the resolution or the number of channels change, the skip connection should have one convolutional layer with: 1x1 convolution without bias change of the resolution with stride (optional) different number of input channels and output channels (optional) the 1x1 convolutional layer is followed by 2d batch normalization. The ReLU nonlinearity is applied after the first convolutional layer and at the end of the block. My code: class Block(nn.Module): def __init__(self, in_channels, out_channels, stride=1): """ Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. stride (int): Controls the stride. """ super(Block, self).__init__() self.skip = nn.Sequential() if stride != 1 or in_channels != out_channels: self.skip = nn.Sequential( nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(out_channels)) else: self.skip = None self.block = nn.Sequential( nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1, stride=1, bias=False), nn.BatchNorm2d(out_channels), nn.ReLU(), nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1, stride=1, bias=False), nn.BatchNorm2d(out_channels)) def forward(self, x): out = self.block(x) if self.skip is not None: out = self.skip(x) else: out = x out += x out = F.relu(out) return out
The problem is in the reuse of the out variable. Normally, you'd implement like this: def forward(self, x): identity = x out = self.block(x) if self.skip is not None: identity = self.skip(x) out += identity out = F.relu(out) return out If you like "one-liners": def forward(self, x): out = self.block(x) out += (x if self.skip is None else self.skip(x)) out = F.relu(out) return out If you really like one-liners (please, that is too much, do not choose this option :)) def forward(self, x): return F.relu(self.block(x) + (x if self.skip is None else self.skip(x)))
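Putting the fix into a runnable block. Two extra assumptions on my part beyond the accepted fix: the second convolution's in_channels is set to out_channels so the shapes line up (the original snippet presumably intended this), and the stride is passed to the first convolution of the main path so both branches downsample consistently when stride > 1:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.skip = None
        if stride != 1 or in_channels != out_channels:
            self.skip = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, stride=1, bias=False),
            nn.BatchNorm2d(out_channels))

    def forward(self, x):
        identity = x if self.skip is None else self.skip(x)   # skip branch
        out = self.block(x) + identity                        # add, do not overwrite
        return F.relu(out)

print(Block(16, 32, stride=2)(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 4, 4])
```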
https://stackoverflow.com/questions/60817390/
Efficiently saving and loading data using h5py (or other methods)
I am testing ways of efficient saving and retrieving data using h5py. But am having trouble with running time while not using up all my memory. In my first method I simply create a static h5py file with h5py.File(fileName, 'w') as f: f.create_dataset('data_X', data = X, dtype = 'float32') f.create_dataset('data_y', data = y, dtype = 'float32') In the second method, I set parameter maxshape in order to append more training data in the future. (see How to append data to one specific dataset in a hdf5 file with h5py) with h5py.File(fileName2, 'w') as f: f.create_dataset('data_X', data = X, dtype = 'float32',maxshape=(None,4919)) f.create_dataset('data_y', data = y, dtype = 'float32',maxshape=(None,6)) I am using PyTorch and am set up my data loader as such: class H5Dataset_all(torch.utils.data.Dataset): def __init__(self, h5_path): # super(dataset_h5, self).__init__() self.h5_path = h5_path self._h5_gen = None def __getitem__(self, index): if self._h5_gen is None: self._h5_gen = self._get_generator() next(self._h5_gen) return self._h5_gen.send(index) def _get_generator(self): with h5py.File( self.h5_path, 'r') as record: index = yield while True: X = record['data_X'][index] y = record['data_y'][index] index = yield X, y def __len__(self): with h5py.File(self.h5_path,'r') as record: length = record['data_X'].shape[0] return length loader = Data.DataLoader( dataset=H5Dataset_all(filename), batch_size=BATCH_SIZE, shuffle=True, num_workers=0) Having saved the same data for each of these methods I would expect them to be similar in running time, however that is not the case. The data I used has size X.shape=(200722,4919) and y.shape=(200772,6). The files are about 3.6 GB each. I test the running time using: import time t0 = time.time() for i, (X_batch, y_batch) in enumerate(loader): # assign a dummy value a = 0 t1 = time.time()-t0 print(f'time: {t1}') For the first method the running time is 83 s and for the second it is 1216 s, which In my mind doesn't make sense. Can anyone help me figure out why? Additionally I also tried saving/loading it as a torch file using torch.save and torch.load and passing the data to Data.TensorDataset before setting up the loader. This implementation runs significantly faster (about 3.7 s), but has the disadvantage of having to load the files before training, which could quickly be capped by my memory. Is there a better way in which I can train somewhat fast while not using having to load all of the data before training?
This looks like an I/O performance issue. To test, I created a very simple example to compare your 2 methods. (My code is at the end of the post.) I found the exact opposite behavior (assuming my code mimics your process). Writing the dataset is slower when I don't use the maxshape=() parameter: 62 sec to create w/out maxshape versus 16 sec to create with maxshape. To verify the operations aren't order dependent, I also ran creating _2 first, then created _1, and got very similar results. Here is the timing data: create data_X time: 62.60318350791931 create data_y time: 0.010000228881835 ** file 1 Done ** create data_X time: 16.416041135787964 create data_y time: 0.0199999809265136 ** file 2 Done ** Code to create the 2 files below: import h5py import numpy as np import time n_rows = 200722 X_cols = 4919 y_cols = 6 X = np.random.rand(n_rows,X_cols).astype('float32') y = np.random.rand(n_rows,y_cols).astype('float32') t0 = time.time() with h5py.File('SO_60818355_1.h5', 'w') as h5f: h5f.create_dataset('data_X', data = X) t1 = time.time() print(f'create data_X time: {t1-t0}') h5f.create_dataset('data_y', data = y) t2 = time.time() print(f'create data_y time: {t2-t1}') print ('** file 1 Done ** \n ') t0 = time.time() with h5py.File('SO_60818355_2.h5', 'w') as h5f: h5f.create_dataset('data_X', data = X, maxshape=(None,X_cols)) t1 = time.time() print(f'create data_X time: {t1-t0}') h5f.create_dataset('data_y', data = y, maxshape=(None,y_cols)) t2 = time.time() print(f'create data_y time: {t2-t1}') print ('** file 2 Done ** \n ')
https://stackoverflow.com/questions/60818355/
list of tensors into tensors
I have a list like xs = [tensor1, tensor2, tensor3....] I want to change it into a tensor so that I can feed xs and ys into the Pytorch DataLoader. I tried this, xs = torch.Tensor(xs) but it doesn't work as the individual elements are not items but tensors.
You may want torch.stack xs = torch.stack(xs)
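For instance, assuming every tensor in the list has the same shape (which torch.stack requires), the stacked result can go straight into a TensorDataset and a DataLoader; the sizes below are made up:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

xs = [torch.randn(5) for _ in range(100)]        # 100 feature vectors
ys = [torch.tensor(i % 2) for i in range(100)]   # 100 labels

x = torch.stack(xs)   # shape (100, 5)
y = torch.stack(ys)   # shape (100,)

loader = DataLoader(TensorDataset(x, y), batch_size=10, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)   # torch.Size([10, 5]) torch.Size([10])
    break
```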
https://stackoverflow.com/questions/60822587/
Splitting ImageFolder into train and validation datasets
I have loaded my dataset as follows : full_dataset = ImageFolder(root = os.path.join(root, 'train'), transform=train_transforms) Now to split my dataset into training and validation sets, I used the following code : train_size = int(0.8 * len(full_dataset)) validation_size = len(full_dataset) - train_size train_dataset, validation_dataset = random_split(full_dataset, [train_size, validation_size]) Both train_dataset and validation_dataset are of type : torch.utils.data.dataset.Subset. Is there any way to convert these datasets into torchvision.datasets.folder.ImageFolder. I need to do this as I am not able to iterate through datasets of type torch.utils.data.dataset.Subset
You should be able to iterate through a Subset just fine, since it has the __getitem__ method implemented as you can see from the source code : class Subset(Dataset): r""" Subset of a dataset at specified indices. Arguments: dataset (Dataset): The whole Dataset indices (sequence): Indices in the whole set selected for subset """ def __init__(self, dataset, indices): self.dataset = dataset self.indices = indices def __getitem__(self, idx): return self.dataset[self.indices[idx]] def __len__(self): return len(self.indices) So the following should work: for image, label in train_dataset: print(image, label) Or you can create a Dataloader from a Subset: train_dataloader = DataLoader(train_dataset, batch_size, shuffle) for images, labels in train_dataloader: print(images, labels) Same for validation_dataset.
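A quick self-contained demonstration that Subset objects index, iterate, and load fine; it uses a fake tensor dataset so it runs without an image folder:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

full_dataset = TensorDataset(torch.randn(100, 3, 8, 8), torch.randint(0, 4, (100,)))
train_dataset, validation_dataset = random_split(full_dataset, [80, 20])

print(type(train_dataset))            # <class 'torch.utils.data.dataset.Subset'>
image, label = train_dataset[0]       # direct indexing works

for images, labels in DataLoader(train_dataset, batch_size=16, shuffle=True):
    print(images.shape, labels.shape)  # torch.Size([16, 3, 8, 8]) torch.Size([16])
    break
```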
https://stackoverflow.com/questions/60825989/
CUDA kernel failed : no kernel image is available for execution on the device, Error when running PyTorch model inside Google Compute VM
I have a docker image of a PyTorch model that returns this error when run inside a google compute engine VM running on debian/Tesla P4 GPU/google deep learning image: CUDA kernel failed : no kernel image is available for execution on the device This occurs on the line where my model is called. The PyTorch model includes custom c++ extensions, I'm using this model https://github.com/daveredrum/Pointnet2.ScanNet My image installs these at runtime The image runs fine on my local system. Both VM and my system have these versions: Cuda compilation tools 10.1, V10.1.243 torch 1.4.0 torchvision 0.5.0 The main difference is the GPU as far as I'm aware Local: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 960M Off | 00000000:01:00.0 Off | N/A | | N/A 36C P8 N/A / N/A | 361MiB / 2004MiB | 0% Default | +-------------------------------+----------------------+----------------------+ VM: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla P4 Off | 00000000:00:04.0 Off | 0 | | N/A 42C P0 23W / 75W | 0MiB / 7611MiB | 3% Default | If I ssh into the VM torch.cuda.is_available() returns true Therefore I suspect it must be something to do with the compilation of the extensions This is the relevant part of my docker file: ENV CUDA_HOME "/usr/local/cuda-10.1" ENV PATH /usr/local/nvidia/bin:/usr/local/cuda-10.1/bin:${PATH} ENV NVIDIA_VISIBLE_DEVICES all ENV NVIDIA_DRIVER_CAPABILITIES compute,utility ENV NVIDIA_REQUIRE_CUDA "cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 brand=tesla,driver>=418,driver<419" ENV FORCE_CUDA=1 # CUDA 10.1-specific steps RUN conda install -c open3d-admin open3d RUN conda install -y -c pytorch \ cudatoolkit=10.1 \ "pytorch=1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0" \ "torchvision=0.5.0=py36_cu101" \ && conda clean -ya RUN pip install -r requirements.txt RUN pip install flask RUN pip install plyfile RUN pip install scipy # Install OpenCV3 Python bindings RUN sudo apt-get update && sudo apt-get install -y --no-install-recommends \ libgtk2.0-0 \ libcanberra-gtk-module \ libgl1-mesa-glx \ && sudo rm -rf /var/lib/apt/lists/* RUN dir RUN cd pointnet2 && python setup.py install RUN cd .. I have already re-running this line from ssh in the VM: TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0" python setup.py install Which I think targets the installation to the Tesla P4 compute capability? Is there some other setting or troubleshooting step I can try? I didn't know anything about docker/VMs/pytorch extensions until a couple of days ago, so somewhat shooting in the dark. Also this is my first stackoverflow post, apologies if I'm not following some etiquette, feel free to point out.
I resolved this in the end by manually deleting all the folders except for "src" in the folder containing setup.py, then rebuilding the docker image. When building the image I ran TORCH_CUDA_ARCH_LIST="6.1" python setup.py install to install the CUDA extensions targeting the correct compute capability for the GPU on the VM, and it worked! I guess just running setup.py without deleting the previously installed folders doesn't fully overwrite the extension.
https://stackoverflow.com/questions/60829433/
Why can't I append a PyTorch tensor with torch.cat?
I have: import torch input_sliced = torch.rand(180, 161) output_sliced = torch.rand(180,) batched_inputs = torch.Tensor() batched_outputs = torch.Tensor() print('input_sliced.size', input_sliced.size()) print('output_sliced.size', output_sliced.size()) batched_inputs = torch.cat((batched_inputs, input_sliced)) batched_outputs = torch.cat((batched_outputs, output_sliced)) print('batched_inputs.size', batched_inputs.size()) print('batched_outputs.size', batched_outputs.size()) This outputs: input_sliced.size torch.Size([180, 161]) output_sliced.size torch.Size([180]) batched_inputs.size torch.Size([180, 161]) batched_outputs.size torch.Size([180]) I need the samples to be appended to the batched tensors, but torch.cat isn't adding a batch dimension. What am I doing wrong?
Assuming you're doing it in a loop, I'd say it is better to do like this: import torch batch_input, batch_output = [], [] for i in range(10): # assuming batch_size=10 batch_input.append(torch.rand(180, 161)) batch_output.append(torch.rand(180,)) batch_input = torch.stack(batch_input) batch_output = torch.stack(batch_output) print(batch_input.shape) # output: torch.Size([10, 180, 161]) print(batch_output.shape) # output: torch.Size([10, 180]) If you know the resulting batch_* shape a priori, you can preallocate the final Tensor and simply assign each sample into their corresponding positions in the batch. It would be more memory efficient.
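The preallocation variant mentioned at the end could look like this; the batch size and shapes are taken from the question, the rest is a sketch:
```python
import torch

batch_size = 10
batched_inputs = torch.empty(batch_size, 180, 161)
batched_outputs = torch.empty(batch_size, 180)

for i in range(batch_size):
    batched_inputs[i] = torch.rand(180, 161)   # write each sample into its preallocated slot
    batched_outputs[i] = torch.rand(180)

print(batched_inputs.shape, batched_outputs.shape)  # torch.Size([10, 180, 161]) torch.Size([10, 180])
```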
https://stackoverflow.com/questions/60841161/
How to test one single image in pytorch
I created my model in pytorch and is working really good, but when i want to test just one image batch_size=1 always return the second class (in this case a dog). I tried to test with batch > 1 and in all cases this works! The architecture: model = models.densenet121(pretrained=True) for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(1024, 500)), ('relu', nn.ReLU()), ('fc2', nn.Linear(500, 2)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier so my tensors are [batch, 3, 224, 224] i have tried with: resize reshape unsqueeze(0) the response when is one image is always [[0.4741, 0.5259]] My Test Code from PIL import * msize = 256 loader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()]) def image_loader(image_name): """load image, returns cuda tensor""" image = Image.open(image_name) image = loader(image).float() image = image.unsqueeze(0) return image.cuda() image = image_loader('Cat_Dog_data/test/cat/cat.16.jpg') with torch.no_grad(): logits = model.forward(image) ps = torch.exp(logits) _, predTest = torch.max(ps,1) print(ps) ## same value in all cases imagen_mostrar = images[ii].to('cpu') helper.imshow(imagen_mostrar,title=clas_perro_gato(predTest), normalize=True) Second Test Code andrea_data = datasets.ImageFolder(data_dir + '/andrea', transform=test_transforms) andrealoader = torch.utils.data.DataLoader(andrea_data, batch_size=1, shuffle=True) dataiter = iter(andrealoader) images, labels = dataiter.next() images, labels = images.to(device), labels.to(device) ps = torch.exp(model.forward(images)) _, predTest = torch.max(ps,1) print(ps.float()) if i changed my batch_size to 1 always returned a tensor who say that is a dog [0.43,0.57] for example. Thanks!
I realized that my model wasn't in eval mode. So I just added model.eval(), and that was all; it now works for any batch size.
https://stackoverflow.com/questions/60841650/
How to define ration of summary with hugging face transformers pipeline?
I am using the following code to summarize an article from using huggingface-transformer's pipeline. Using this code: from transformers import pipeline summarizer = pipeline(task="summarization" ) summary = summarizer(text) print(summary[0]['summary_text']) How can I define a ratio between the summary and the original article? For example, 20% of the original article? EDIT 1: I implemented the solution you suggested, but got the following error. This is the code I used: summarizer(text, min_length = int(0.1 * len(text)), max_length = int(0.2 * len(text))) print(summary[0]['summary_text']) The error I got: RuntimeError Traceback (most recent call last) <ipython-input-9-bc11c5d8eb66> in <module>() ----> 1 summarizer(text, min_length = int(0.1 * len(text)), max_length = int(0.2 * len(text))) 2 print(summary[0]['summary_text']) 13 frames /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1482 # remove once script supports set_grad_enabled 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1485 1486 RuntimeError: index out of range: Tried to access index 1026 out of table with 1025 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
(Note that this answer is based on the documentation for version 2.6 of transformers) It seems that as of yet the documentation on the pipeline feature is still very shallow, which is why we have to dig a bit deeper. When calling a Python object, it internally references its own __call__ property, which we can find here for the summarization pipeline. Note that it allows us (similar to the underlying BartForConditionalGeneration model) to specify the min_length and max_length, which is why we can simply call with something like summarizer(text, min_length = 0.1 * len(text), max_length = 0.2 * len(text)) This would give you a summary of about 10-20% of the length of the original data, but of course you can change that to your liking. Note that the default value of max_length for BartForConditionalGeneration is 20 (as of now, min_length is undocumented, but defaults to 0), whereas the summarization pipeline has values min_length=21 and max_length=142.
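I can't verify this against the exact text from the edit, but a common cause of that index error is that min_length/max_length are counted in tokens while len(text) counts characters, so the requested lengths blow well past the model's roughly 1024-position limit. A hedged sketch that bases the ratio on an approximate token count instead, assuming the pipeline exposes its tokenizer as summarizer.tokenizer (it does in the versions I've seen):
```python
# reuse the summarizer and text from the question
n_tokens = len(summarizer.tokenizer.tokenize(text))

min_len = max(5, int(0.1 * n_tokens))
max_len = min(int(0.2 * n_tokens), 1024)   # stay under the model's positional limit

summary = summarizer(text, min_length=min_len, max_length=max_len)
print(summary[0]['summary_text'])
```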
https://stackoverflow.com/questions/60843698/
PyTorch - convert ProGAN agent from pth to onnx
I trained a ProGAN agent using this PyTorch reimplementation, and I saved the agent as a .pth. Now I need to convert the agent into the .onnx format, which I am doing using this scipt: from torch.autograd import Variable import torch.onnx import torchvision import torch device = torch.device("cuda") dummy_input = torch.randn(1, 3, 64, 64) state_dict = torch.load("GAN_agent.pth", map_location = device) torch.onnx.export(state_dict, dummy_input, "GAN_agent.onnx") Once I run it, I get the error AttributeError: 'collections.OrderedDict' object has no attribute 'state_dict' (full prompt below). As far as I understood, the problem is that converting the agent into .onnx requires more information. Am I missing something? --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-2-c64481d4eddd> in <module> 10 state_dict = torch.load("GAN_agent.pth", map_location = device) 11 ---> 12 torch.onnx.export(state_dict, dummy_input, "GAN_agent.onnx") ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\onnx\__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs) 146 operator_export_type, opset_version, _retain_param_name, 147 do_constant_folding, example_outputs, --> 148 strip_doc_string, dynamic_axes, keep_initializers_as_inputs) 149 150 ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\onnx\utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs) 64 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding, 65 example_outputs=example_outputs, strip_doc_string=strip_doc_string, ---> 66 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs) 67 68 ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\onnx\utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size) 414 example_outputs, propagate, 415 _retain_param_name, do_constant_folding, --> 416 fixed_batch_size=fixed_batch_size) 417 418 # TODO: Don't allocate a in-memory string for the protobuf ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\onnx\utils.py in _model_to_graph(model, args, verbose, training, input_names, output_names, operator_export_type, example_outputs, propagate, _retain_param_name, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size) 277 model.graph, tuple(in_vars), False, propagate) 278 else: --> 279 graph, torch_out = _trace_and_get_graph_from_model(model, args, training) 280 state_dict = _unique_state_dict(model) 281 params = list(state_dict.values()) ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\onnx\utils.py in _trace_and_get_graph_from_model(model, args, training) 226 # A basic sanity check: make sure the state_dict keys are the same 227 # before and after running the model. Fail fast! 
--> 228 orig_state_dict_keys = _unique_state_dict(model).keys() 229 230 # By default, training=False, which is good because running a model in ~\anaconda3\envs\Basemap_upres\lib\site-packages\torch\jit\__init__.py in _unique_state_dict(module, keep_vars) 283 # id(v) doesn't work with it. So we always get the Parameter or Buffer 284 # as values, and deduplicate the params using Parameters and Buffers --> 285 state_dict = module.state_dict(keep_vars=True) 286 filtered_dict = type(state_dict)() 287 seen_ids = set() AttributeError: 'collections.OrderedDict' object has no attribute 'state_dict'
Files you have there are state_dict, which are simply mappings of layer name to tensor weights biases and a-like (see here for more thorough introduction). What that means is that you need a model so those saved weights and biases can be mapped upon, but first things first: 1. Model preparation Clone the repository where model definitions are located and open file /pro_gan_pytorch/pro_gan_pytorch/PRO_GAN.py. We need some modifications in order for it to work with onnx. onnx exporter requires input to be passed as torch.tensor only (or list/dict of those), while Generator class needs int and float arguments). Simple solution it to slightly modify forward function (line 80 in the file, you can verify it on GitHub) to the following: def forward(self, x, depth, alpha): """ forward pass of the Generator :param x: input noise :param depth: current depth from where output is required :param alpha: value of alpha for fade-in effect :return: y => output """ # THOSE TWO LINES WERE ADDED # We will pas tensors but unpack them here to `int` and `float` depth = depth.item() alpha = alpha.item() # THOSE TWO LINES WERE ADDED assert depth < self.depth, "Requested output depth cannot be produced" y = self.initial_block(x) if depth > 0: for block in self.layers[: depth - 1]: y = block(y) residual = self.rgb_converters[depth - 1](self.temporaryUpsampler(y)) straight = self.rgb_converters[depth](self.layers[depth - 1](y)) out = (alpha * straight) + ((1 - alpha) * residual) else: out = self.rgb_converters[0](y) return out Only unpacking via item() was added here. Every input which is not of Tensor type should be packed as one in function definition and unpacked ASAP at the top of your function. It will not destroy your created checkpoint so no worries as it's just layer-weight mapping. 2. Model exporting Place this script in /pro_gan_pytorch (where README.md is located as well): import torch from pro_gan_pytorch import PRO_GAN as pg gen = torch.nn.DataParallel(pg.Generator(depth=9)) gen.load_state_dict(torch.load("GAN_GEN_SHADOW_8.pth")) module = gen.module.to("cpu") # Arguments like depth and alpha may need to be changed dummy_inputs = (torch.randn(1, 512), torch.tensor([5]), torch.tensor([0.1])) torch.onnx.export(module, dummy_inputs, "GAN_GEN8.onnx", verbose=True) Please notice a few things: We have to create model before loading weights as it's a state_dict only. torch.nn.DataParallel is needed as that's what the model was trained on (not sure about your case, please adjust accordingly). After loading we can get the module itself via module attribute. everything is casted to CPU, no need for GPU here I think. You could cast everything to GPU if you so insist though. dummy input to generator cannot be an image (I used files provided by repo authors on their Google Drive), it has to be noise with 512 elements. Run it and your .onnx file should be there. Oh, and as you are after different checkpoint you may want to follow similar procedure, though no guarantees everything will work fine (it does look like it though).
https://stackoverflow.com/questions/60845222/
Getting error when trying to torch stack a list of tensors
I am trying to get cuda to work but I need to change my training input into a tensor. When I tried to do that, I am getting an error when I tried to stack a list of tensor into one tensor. Code for epoch in: alst = [] for x, y in loader: x = torch.stack(x) #x = torch.Tensor(x) #x = torch.stack(x).to(device,dtype=float) Shape of x: List of tensors [tensor([[[0.325], [ 0.1257], [ 0.1149], ..., [-1.572], [-1.265], [-3.574]], ]), tensor([1,2,3,4,5]), tensor(6,5,4,3,2])] Error I got 22 alst = [] 23 for x, y in loader: ---> 24 x_list = torch.stack(x) 25 # x = torch.Tensor(x) 26 # x = torch.stack(x).to(device,dtype=float) RuntimeError: Expected object of scalar type Float but got scalar type Long for sequence element 1 in sequence argument at position #1 'tensors' Not sure what I am doing wrong. I tried x = torch.stack(x).to(device,dtype=float) as well but it still didn't work.
The first tensor in your output is of float type with the values to feed your network, while the second looks like labels (of type long). Furthermore, the first one is a tensor while the second and third elements are vectors (with 6 and 9 elements respectively). You cannot stack tensors of different shapes, hence this won't work no matter the types. Unpack your x via matrix, vector1, vector2 = x. To remove the type errors, cast vector1 and vector2 to float via vector1 = vector1.float(). Check their shapes via the .shape attribute and act accordingly. Probably, though, you already have batches of data, as you are using train_loader. See the DataLoader documentation for more information and check whether you are using one.
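In code, the unpacking and casting described above might look like the sketch below; the sample x is a stand-in mirroring the printed structure, not the real loader output:
```python
import torch

# stand-in for one element x yielded by the loader, mirroring the printout in the question
x = [torch.rand(4, 100, 3), torch.arange(1, 6), torch.arange(6, 1, -1)]

features, vec1, vec2 = x
vec1 = vec1.float()   # cast the integer tensors before stacking or feeding them to the network
vec2 = vec2.float()
print(features.dtype, vec1.dtype, vec2.dtype)   # torch.float32 torch.float32 torch.float32
```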
https://stackoverflow.com/questions/60846642/
AttributeError: 'torch.return_types.max' object has no attribute 'dim' - Maxpooling Channel
I'm trying to do max pooling over the channel dimension: class ChannelPool(nn.Module): def forward(self, input): return torch.max(input, dim=1) but I get the error AttributeError: 'torch.return_types.max' object has no attribute 'dim'
The torch.max function called with dim returns a tuple so: class ChannelPool(nn.Module): def forward(self, input): input_max, max_indices = torch.max(input, dim=1) return input_max From the documentation of torch.max: Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).
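A usage sketch; note that torch.max with dim=1 drops the channel axis, so if you want to keep a singleton channel dimension instead, keepdim=True (as in this variant) does that:
```python
import torch
import torch.nn as nn

class ChannelPool(nn.Module):
    def forward(self, input):
        return torch.max(input, dim=1, keepdim=True).values  # keepdim keeps a channel axis of size 1

x = torch.randn(8, 16, 32, 32)   # (batch, channels, height, width)
print(ChannelPool()(x).shape)    # torch.Size([8, 1, 32, 32]); without keepdim it would be [8, 32, 32]
```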
https://stackoverflow.com/questions/60847083/
Confusion in understanding the output of BERTforTokenClassification class from Transformers library
This is the example given in the documentation of the transformers PyTorch library: from transformers import BertTokenizer, BertForTokenClassification import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForTokenClassification.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attentions=True) input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=labels) loss, scores, hidden_states, attentions = outputs Here hidden_states is a tuple of length 13 that contains the hidden states of the model at the output of each layer, plus the initial embedding outputs. I would like to know whether hidden_states[0] or hidden_states[12] represents the final hidden state vectors.
If you check the source code, specifically BertEncoder, you can see that the returned states are initialized as an empty tuple and then simply appended per iteration of each layer. The final layer is appended as the last element after this loop, see here, so we can safely assume that hidden_states[12] is the final vectors.
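So, to pull out the final-layer vectors from the snippet in the question (reusing its hidden_states variable), something like this sketch works; the [CLS] vector sits at position 0 along the sequence dimension:
```python
final_hidden = hidden_states[-1]      # same object as hidden_states[12]: shape (batch, seq_len, 768)
cls_vector = final_hidden[:, 0, :]    # hidden state of the [CLS] token, shape (batch, 768)
embedding_output = hidden_states[0]   # the initial embedding-layer output, for comparison
print(cls_vector.shape)
```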
https://stackoverflow.com/questions/60847291/
Hyper-parameter tuning and Over-fitting with Feed-Forward Neural Network - Mini-Batch Epoch and Cross Validation
I am looking at implementing a hyper-parameter tuning method for a feed-forward neural network (FNN) implemented using PyTorch. My original FNN , the model is named net, has been implemented using a mini-batch learning approach with epochs: #Parameters batch_size = 50 #larger batch size leads to over fitting num_epochs = 1000 learning_rate = 0.01 #was .01-AKA step size - The amount that the weights are updated during training batch_no = len(x_train) // batch_size criterion = nn.CrossEntropyLoss() #performance of a classification model whose output is a probability value between 0 and 1 optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate) for epoch in range(num_epochs): if epoch % 20 == 0: print('Epoch {}'.format(epoch+1)) x_train, y_train = shuffle(x_train, y_train) # Mini batch learning - mini batch since batch size < n(batch gradient descent), but > 1 (stochastic gradient descent) for i in range(batch_no): start = i * batch_size end = start + batch_size x_var = Variable(torch.FloatTensor(x_train[start:end])) y_var = Variable(torch.LongTensor(y_train[start:end])) # Forward + Backward + Optimize optimizer.zero_grad() ypred_var = net(x_var) loss =criterion(ypred_var, y_var) loss.backward() optimizer.step() I lastly test my model on a separate test set. I came across an approach using randomised search to tune the hyper-parameters as well as implementing K-fold cross-validation (RandomizedSearchCV). My question is two-fold(no pun intended!) and firstly is theoretical: Is k-fold validation is necessary or could add any benefit to mini-batch feed-forward neural network? From what I can see, the mini-batch approach should do roughly the same job, stopping over-fitting. I also found a good answer here but I'm not sure this addresses a mini-batch approach approach specifically. Secondly, if k-fold is not necessary, is there another hyper-parameter tuning function for PyTorch to avoid manually creating one?
k-fold cross validation is generally useful when you have a very small dataset. Thus, if you are training on a dataset like CIFAR10 (which is large, 60000 images), then you don't require k-fold cross validation. The idea of k-fold cross validation is to see how model performance (generalization) varies as different subsets of data is used for training and testing. This becomes important when you have very less data. However, for large datasets, the metric results on the test dataset is enough to test the generalization of the model. Thus, whether you require k-fold cross validation depends on the size of your dataset. It does not depend on what model you use. If you look at this chapter of the Deep Learning book (this was first referenced in this link): Small batches can offer a regularizing effect (Wilson and Martinez, 2003), perhaps due to the noise they add to the learning process. Generalization error is often best for a batch size of 1. Training with such a small batch size might require a small learning rate to maintain stability because of the high variance in the estimate of the gradient. The total runtime can be very high as a result of the need to make more steps, both because of the reduced learning rate and because it takes more steps to observe the entire training set. So, yes, mini-batch training will have a regularizing effect (reduce overfitting) to some extent. There is no inbuilt hyperparameter tuning (at least at the time of writing this answer), but many developers have developed tools for this purpose (for example). You can find more such tools by searching for them. This question has answers which list a lot of such tools.
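If you do end up wanting k-fold cross validation around a PyTorch training loop, one common pattern combines sklearn's KFold with torch.utils.data.Subset. This is a sketch with placeholder data; swap in your own dataset and training loop per fold:
```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, Subset, DataLoader
from sklearn.model_selection import KFold

dataset = TensorDataset(torch.randn(200, 10), torch.randint(0, 2, (200,)))   # placeholder data

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(np.arange(len(dataset)))):
    train_loader = DataLoader(Subset(dataset, train_idx.tolist()), batch_size=50, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx.tolist()), batch_size=50)
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
    # a fresh model would be built here and trained with the usual mini-batch loop for each fold
```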
https://stackoverflow.com/questions/60854883/
Return all dimensions if there are 3 channels, else return 0
I want to plot some images that are sometimes grayscale, sometimes color. Since I'm using pytorch, the transformed images are either (50, 100, 1) or (50, 100, 3) depending on whether they're color or not. Since matplotlib cannot take the former for a picture, I need to return these shapes: (100, 100, 3) # if it's a color picture (100, 100) # if it's a grayscale picture Those are the shapes matplotlib can deal with. Here is what my workflow looks like: import numpy as np import matplotlib.pyplot as plt h, w = 50, 100 grey = np.random.randint(0, 256, (h, w, 1)) color = np.random.randint(0, 256, (h, w, 3)) With the same line, I need to be able to plot images that can either be color or greyscale. plt.imshow(grey[:, :, 0 if grey.shape[-1] == 1 else :]) plt.imshow(color[:, :, 0 if color.shape[-1] == 1 else :]) But that's not correct python syntax.
Can't you just write plt.imshow(grey[:,:,0] if grey.shape[-1] == 1 else grey) plt.imshow(color[:,:,0] if color.shape[-1] == 1 else color)
https://stackoverflow.com/questions/60855328/
Need pytorch help doing 2D convolutions of N images with N kernels all at once
I have a torch tensor which is a stack of images. Let's say for kicks it is im=th.arange(4*5*6,dtype=th.float32).view(4,5,6) which is a 4x5x6 tensor, meaning four 5x6 images stacked vertically. I want to convolve each layer with its own 2-D kernel so that I_{out,j} = k_j*I_{in,j}, j=(1...4) I can obviously do this with a for loop, but I'd like to take advantage of GPU acceleration and do all the convolutions at the same time. No matter what I try, I've only been able to use torch's conv2d or conv3d to produce a single output layer that is the sum of all the 2d convolutions. Or I can make 4 layers where each is the same sum of all the 2d convolutions. Here's a concrete example. Let's use im as defined above. Say that the kernel is defined by k=th.zeros((4,3,3),dtype=th.float32) n=-1 for i in range(2): for j in range(2): n+=1 k[n,i,j]=1 k[n,2,2]=1 print(k) tensor([[[1., 0., 0.], [0., 0., 0.], [0., 0., 1.]], [[0., 1., 0.], [0., 0., 0.], [0., 0., 1.]], [[0., 0., 0.], [1., 0., 0.], [0., 0., 1.]], [[0., 0., 0.], [0., 1., 0.], [0., 0., 1.]]]) and from above, im is tensor([[[ 0., 1., 2., 3., 4., 5.], [ 6., 7., 8., 9., 10., 11.], [ 12., 13., 14., 15., 16., 17.], [ 18., 19., 20., 21., 22., 23.], [ 24., 25., 26., 27., 28., 29.]], [[ 30., 31., 32., 33., 34., 35.], [ 36., 37., 38., 39., 40., 41.], [ 42., 43., 44., 45., 46., 47.], [ 48., 49., 50., 51., 52., 53.], [ 54., 55., 56., 57., 58., 59.]], [[ 60., 61., 62., 63., 64., 65.], [ 66., 67., 68., 69., 70., 71.], [ 72., 73., 74., 75., 76., 77.], [ 78., 79., 80., 81., 82., 83.], [ 84., 85., 86., 87., 88., 89.]], [[ 90., 91., 92., 93., 94., 95.], [ 96., 97., 98., 99., 100., 101.], [102., 103., 104., 105., 106., 107.], [108., 109., 110., 111., 112., 113.], [114., 115., 116., 117., 118., 119.]]]) The right answer is easy if I do the for loop: import torch.functional as F for i in range(4): print(F.conv2d(im[i].expand(1,1,5,6),k[i].expand(1,1,3,3))) tensor([[[[14., 16., 18., 20.], [26., 28., 30., 32.], [38., 40., 42., 44.]]]]) tensor([[[[ 75., 77., 79., 81.], [ 87., 89., 91., 93.], [ 99., 101., 103., 105.]]]]) tensor([[[[140., 142., 144., 146.], [152., 154., 156., 158.], [164., 166., 168., 170.]]]]) tensor([[[[201., 203., 205., 207.], [213., 215., 217., 219.], [225., 227., 229., 231.]]]]) As I noted earlier, the only thing I've been able to get is one sum of those four output images (or four copies of the same summed layer): F.conv2d(im.expand(1,4,5,6),k.expand(1,4,3,3)) tensor([[[[430., 438., 446., 454.], [478., 486., 494., 502.], [526., 534., 542., 550.]]]]) I'm certain that what I want to do is possible, I just haven't been able to wrap my head around it yet. Does anyone have a solution to offer?
This is pretty straight forward if you use a grouped convolution. From the nn.Conv2d documentation At groups=in_channels, each input channel is convolved with its own set of filters Which is exactly what we want. The shape of the weights argument to F.conv2d needs to be considered since it changes depending on the value of groups. The first dimension of weights should just be out_channels, which is 4 in this case. The second dimension according to F.conv2d docs should be in_channels / groups, which is 1. So we can perform the operation using F.conv2d(im.unsqueeze(0), k.unsqueeze(1), groups=4).squeeze(0) which produces a tensor of shape [4,3,4] with values tensor([[[ 14., 16., 18., 20.], [ 26., 28., 30., 32.], [ 38., 40., 42., 44.]], [[ 75., 77., 79., 81.], [ 87., 89., 91., 93.], [ 99., 101., 103., 105.]], [[140., 142., 144., 146.], [152., 154., 156., 158.], [164., 166., 168., 170.]], [[201., 203., 205., 207.], [213., 215., 217., 219.], [225., 227., 229., 231.]]])
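The same idea as a module, in case you want it inside a network rather than as a functional call. This sketch reuses the im and k tensors defined in the question and just copies k into the layer's weights to show the results match:
```python
import torch
import torch.nn as nn

depthwise = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=3, groups=4, bias=False)
with torch.no_grad():
    depthwise.weight.copy_(k.unsqueeze(1))   # weight shape (4, 1, 3, 3): one filter per channel

out = depthwise(im.unsqueeze(0)).squeeze(0)  # same result as the functional version above
print(out.shape)                             # torch.Size([4, 3, 4])
```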
https://stackoverflow.com/questions/60856343/
Python not finding torch
I want to use deepsaber. For that I have to Install Torch. I did this using pip install torch torchvision. After running cd scripts/generation and ./script_generate.sh [path to song] python tells me: Traceback (most recent call last): File "generate_stage1.py", line 16, in <module> from models import create_model File "/home/server/deepsaber/models/__init__.py", line 2, in <module> from .base_model import BaseModel File "/home/server/deepsaber/models/base_model.py", line 2, in <module> import torch ModuleNotFoundError: No module named 'torch' Traceback (most recent call last): File "generate_stage2.py", line 16, in <module> from models import create_model File "/home/server/deepsaber/models/__init__.py", line 2, in <module> from .base_model import BaseModel File "/home/server/deepsaber/models/base_model.py", line 2, in <module> import torch ModuleNotFoundError: No module named 'torch' using import torch works like a charm when using python any ideas of what I can do? The repos says: pytorch (installed as torch or via https://pytorch.org/get-started/locally/)
Most likely you installed torch using Python 2, while script_generate.sh uses Python 3 (see here): # [...] py=python3 # [...] Try to run pip3 install torch torchvision or python3 -m pip install torch torchvision. Also, check if import torch works when using python3.
https://stackoverflow.com/questions/60863206/
How to #include <torch/extension.h> in a .cpp file with Visual Studio?
I'm working on a project that requires the PyTorch C++ extension. I've installed PyTorch version 1.4.0 in a Python virtual environment: activate crfasrnn >>>import torch >>>print(torch.__version__) 1.4.0 I'm using Visual Studio as the C++ compiler. The code is like this: #include<iostream> #include<torch/extension.h> int main() { std::cout << "hello, world!" << std::endl; return 0; } However, the compiler tells me "can not open 'torch/extension.h'". How can I solve this problem?
You need to get the appropriate LibTorch binaries from https://pytorch.org/ and then include the required binaries in the visual studio. You can follow the detailed instructions given here.
https://stackoverflow.com/questions/60864423/
How can I interleave 5 PyTorch tensors?
I have 5 tensors of shape torch.Size([7, 20, 180]) I want to interleave them, one after the other along dim=1. So that my final shape will be torch.Size([7, 100, 180]). Basically, I want the first element from the first tensor, then the first element from the second tensor, and so on.
If I understood correctly, import torch stacked = torch.stack(list_of_tensors, dim=2) interleaved = torch.flatten(stacked, start_dim=1, end_dim=2) interleaved is what you need apparently (tested with pytorch 1.1.0)
https://stackoverflow.com/questions/60869537/
Pytorch: How to access tensor(values) by tensor(keys) in python dictionary
I have a dictionary with tensor keys and tensor values. I want to access the values by the keys. from torch import tensor x = {tensor(0): [tensor(1)], tensor(1): [tensor(0)]} for i in x.keys(): print(i, x[i]) Returns: tensor(0) [tensor(1)] tensor(1) [tensor(0)] But when i try to access the values without looping through the keys, try: print(x[tensor(0)]) except: print(Exception) print(x[0]) Throws Exception: KeyError Traceback (most recent call last) <ipython-input-34-746d28dcd450> in <module>() 6 try: ----> 7 print(x[tensor(0)]) 8 KeyError: tensor(0) During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) <ipython-input-34-746d28dcd450> in <module>() 9 except: 10 print(Exception) ---> 11 print(x[0]) 12 continue KeyError: 0
There are at least three issues here. If x is a dictionary of tensor keys, then of course x[0] will not work. 0 is not a key of it. Hence the inner KeyError that occurred during the other exception. Not actually relevant to your errors, but print(Exception) is almost surely not what you want. It prints the class object (if that is the right term) of the class Exception. You more likely meant except Exception as e: print(e) or more specifically, except KeyError (otherwise it will just catch every kind of exception). The real thing: you don't want to use a tensor as a key in the first place. It's a mutable type, compared by reference, not by value. Every tensor(something) call will create a new object, hashing to a different value than the tensor(something) you specified as key. Use the actual integers instead.
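A small sketch of the suggested fix: keep plain Python ints as keys and convert any tensor to an int with .item() at lookup time:
```python
import torch

x = {0: [torch.tensor(1)], 1: [torch.tensor(0)]}   # plain int keys

key = torch.tensor(0)
print(x[key.item()])   # [tensor(1)] -> works without looping over the keys
print(x[0])            # plain int lookup works too
```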
https://stackoverflow.com/questions/60870008/
How to use masked select for these kind of tensors?
Suppose, I got a tensor a and a tensor b import torch a = torch.tensor([[[ 0.8856, 0.1411, -0.1856, -0.1425], [-0.0971, 0.1251, 0.1608, -0.1302], [-0.0901, 0.3215, 0.1763, -0.0412]], [[ 0.8856, 0.1411, -0.1856, -0.1425], [-0.0971, 0.1251, 0.1608, -0.1302], [-0.0901, 0.3215, 0.1763, -0.0412]]]) b = torch.tensor([[0, 2, 1], [0, 2, 1]]) Now, I would like to select indices from tensor a, where the value of tensor b is not 0. pred_masks = ( b != 0 ) c = torch.masked_select( a, (pred_masks == 1)) And of course, I get an expected error. ----> 1 c = torch.masked_select( a, (pred_masks == 1)) RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 2 This is caused by the nested list containing 4 items. However, it is required to select all the values of the nested list at index x in tensor a, corresponding to the index x in tensor b. I will be grateful for any hint or answer.
I am not so sure what you want as the shape of the output c. Since your mask is of shape (2,3) and a is of shape (2,3,4), do you want as output a tensor of shape (n,4), where n is the number of elements that are true in the (2,3) mask? If yes, then I would suggest just using the mask as an index for the first two dimensions. c = a[pred_masks,:] Hope that helps a bit.
https://stackoverflow.com/questions/60870690/
IndexError: tensors used as indices must be long, byte or bool tensors
I am getting this error only during the testing phase, but I do not face any problem in the training and validation phase. IndexError: tensors used as indices must be long, byte or bool tensors I get this error for the last line in the given code snippet. The code snippet looks like the one below, NumClass = 10 mask = torch.zeros(batch_size, self.mem_dim, 4, 4) ones = torch.ones(1, 4, 4) NumRows = self.mem_dim Elements = NumRows//NumClass for i in range(batch_size): lab = torch.arange(Elements * label[i], Elements*(label[i]+1), 1) mask[i,lab] = ones The "lab" is a tensor value and prints out the range in such a way, tensor([6, 7, 8]) tensor([ 9, 10, 11]) tensor([21, 22, 23]) (Note*: the length of this lab tensor can be of length 'n' based on the value of ElementsPerClass)
Yes, it works when I provide dtype=long to the label tensor and leave the rest of the tensors with their default dtypes. Thanks!
https://stackoverflow.com/questions/60873477/
Does BertForSequenceClassification classify on the CLS vector?
I'm using the Huggingface Transformer package and BERT with PyTorch. I'm trying to do 4-way sentiment classification and am using BertForSequenceClassification to build a model that eventually leads to a 4-way softmax at the end. My understanding from reading the BERT paper is that the final dense vector for the input CLS token serves as a representation of the whole text string: The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks. So, does BertForSequenceClassification actually train and use this vector to perform the final classification? The reason I ask is because when I print(model), it is not obvious to me that the CLS vector is being used. model = BertForSequenceClassification.from_pretrained( model_config, num_labels=num_labels, output_attentions=False, output_hidden_states=False ) print(model) Here is the bottom of the output: (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=4, bias=True) I see that there is a pooling layer BertPooler leading to a Dropout leading to a Linear which presumably performs the final 4-way softmax. However, the use of the BertPooler is not clear to me. Is it operating on only the hidden state of CLS, or is it doing some kind of pooling over hidden states of all the input tokens? Thanks for any help.
The short answer: Yes, you are correct. Indeed, they use the CLS token (and only that) for BertForSequenceClassification. Looking at the implementation of the BertPooler reveals that it is using the first hidden state, which corresponds to the [CLS] token. I briefly checked one other model (RoBERTa) to see whether this is consistent across models. Here, too, classification only takes place based on the [CLS] token, albeit less obvious (check lines 539-542 here).
https://stackoverflow.com/questions/60876394/
Cuda version for cat in Pytorch?
I am trying to build a CNN architecture for Word Embedding. I am trying to concatenate two tensors using torch.cat but its throwing this error: 22 print(z1.size()) ---> 23 zcat = torch.cat(lz, dim = 2) 24 print("zcat",zcat.size()) 25 zcat2=zcat.reshape([batch_size, 1, 100, 3000]) RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 1 in sequence argument at position #1 'tensors' Attaching the architecture for reference as well: def __init__(self,vocab_size,embedding_dm,pad_idx): super().__init__() self.embedding = nn.Embedding(vocab_size,embedding_dim,padding_idx = pad_idx) self.convs = nn.ModuleList([nn.Conv2d(in_channels = 1,out_channels = 50,kernel_size = (1,fs)) for fs in (3,4,5)]) self.conv2 = nn.Conv2d(in_channels = 50,out_channels = 100,kernel_size = (1,2)) self.fc1 = nn.Linear(100000,150) #Change this self.fc2 = nn.Linear(150,1) self.dropout = nn.Dropout(0.5) def forward(self,text): print("text",text.size()) embedded = self.embedding(text.T) embedded = embedded.permute(0, 2,1) print("embedded",embedded.size()) x=embedded.size(2) y=3000-x print(y,"hello") batch_size=embedded.size(0) z=np.zeros((batch_size,100,y)) z1=torch.from_numpy(z).float() lz=[embedded,z1] print(z1.size()) zcat = torch.cat(lz, dim = 2) print("zcat",zcat.size()) zcat2=zcat.reshape([batch_size, 1, 100, 3000]) print("zcat2",zcat2.size()) # embedded = embedded.reshape([embedded.shape[0],1,]) print(embedded.size(),"embedding") conved = [F.relu(conv(embedded)) for conv in self.convs] pooled = [F.max_pool2d(conv,(1,2)) for conv in conved] print("Pool") for pl in pooled: print(pl.size()) cat = torch.cat(pooled,dim = 3) print("cat",cat.size()) conved2 = F.relu(self.conv2(cat)) print("conved2",conved2.size()) pooled2 = F.max_pool2d(conved2,(1,2)) print(pooled2.size(),"pooled2") return 0 # return pooled2 Am I doing something wrong? Help appreciated. Thanks!
Got it. Just create the tensor using: torch.zeros(batch_size, 100, y, dtype=embedded.dtype, device=embedded.device)
https://stackoverflow.com/questions/60882033/
How can I get the mean of 5 tensors along an axis?
I have 5 tensors of shape torch.Size([7, 20, 180]) I want to get the mean of each along dim=1 so that my final shape will be torch.Size([7, 20, 180]). Basically, I want the first element from the first tensor, then the first element from the second tensor, and so on to be averaged.
You did not mention how these 5 tensors are stored, but let's assume they are in a list. Here's a way to do it: import torch x = [torch.rand((7, 20, 180)) for _ in range(5)] y = torch.stack(x).mean(dim=0) print(y.shape) # >>> torch.Size([7, 20, 180]) I'm also assuming you said dim=1 as if PyTorch was 1-based indexing, which it is not. I see you are asking many questions recently, which is not a problem. I've said this once, but again: always try to provide a Minimal, Reproducible Example. It is always good to show some effort as well. Have you tried anything before asking?
https://stackoverflow.com/questions/60886033/
How can I remove elements across a dimension that are all zero with PyTorch?
I have a tensor: tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], device='cuda:0', grad_fn=<AsStridedBackward>) torch.Size([3, 176]) You see that the first and the third have all zeroes. I'd like to remove it and change this to have size of [1, 176]. If I do print((mask != 0).nonzero()), then I see: tensor([[ 0, 136], [ 0, 137], [ 0, 138], [ 0, 139], [ 0, 140], [ 0, 141], [ 0, 142], [ 0, 143], [ 0, 144], [ 0, 145], [ 0, 146], [ 0, 147], [ 0, 148], [ 0, 149], [ 0, 150], [ 0, 151], [ 0, 152], [ 0, 153], [ 0, 154], [ 0, 155], [ 0, 156], [ 0, 157], [ 0, 158], [ 0, 159]], device='cuda:0') So I feel like that's a part of it, but how do I further reduce the tensor?
import torch # let x be some tensor with a nonzero value somewhere x = torch.zeros((2, 5)) x[1, 2] = 1 # keep only the rows that contain at least one nonzero element nonZeroRows = torch.abs(x).sum(dim=1) > 0 x = x[nonZeroRows]
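Applied to the tensor from the question, the same pattern keeps only the row that contains ones. A sketch, assuming the variable is called mask and has shape [3, 176] as shown:
# rows whose absolute sum is positive contain at least one nonzero entry
keep = mask.abs().sum(dim=1) > 0
mask = mask[keep]  # shape becomes torch.Size([1, 176])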
https://stackoverflow.com/questions/60888546/
How can I trim a tensor based on a mask with PyTorch?
I have a tensor inp, which has a size of: torch.Size([4, 122, 161]). I also have a mask with a size of: torch.Size([4, 122]). Each element in my mask looks something like: tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0', grad_fn=<SelectBackward>) So I want to trim inp to be reduced along the dimension=1 to only exist where the mask has 1. In the case shown, there are 23 1s, so I want the size of inp to be: torch.Size([4, 23, 161])
I think advanced indexing would work. (I assume every row of the mask has exactly 23 ones; if the rows can have different counts, see the sketch below.) inp_trimmed = inp[mask.type(torch.bool)].reshape(4, 23, 161)
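If you cannot rely on every row of the mask having the same number of ones, the trimmed rows will have different lengths and cannot be packed into a single tensor. A sketch of that general case, keeping a Python list instead (assuming inp and mask have the shapes described in the question):
# one trimmed tensor per batch element; lengths may differ across rows
trimmed = [inp[i][mask[i].bool()] for i in range(inp.size(0))]
# each trimmed[i] has shape (number_of_ones_in_row_i, 161)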
https://stackoverflow.com/questions/60891052/
How can I element-wise multiply tensors with different dimensions?
I have a tensor expanded_mask, which has a size of torch.Size([1, 208]) and another one inputs which has a size of torch.Size([1, 208, 161]). I want to elementwise multiply expanded_mask and input such that all 161 elements of the third dimension are multiplied with the 208 elements of expanded_mask. As per jodag's answer, I tried: masked_inputs = expanded_mask.unsqueeze(2) * inputs inputs is: tensor([1.8851e-02, 4.4921e-02, 7.5260e-02, 3.8994e-02, 3.5651e-02, 3.0457e-02, 1.2933e-02, 2.5496e-02, 2.3260e-04, 2.4903e-03, 6.5678e-03, 1.0501e-02, 1.2387e-02, 1.9434e-03, 1.0831e-03, 6.5691e-03, 5.3792e-03, 9.1925e-03, 1.8146e-03, 4.9215e-03, 1.4623e-03, 9.4454e-03, 1.0504e-03, 3.3749e-03, 2.1361e-03, 8.0782e-03, 1.7916e-03, 1.1577e-03, 1.1246e-04, 2.2520e-03, 2.2255e-03, 2.1072e-03, 9.8782e-03, 2.2909e-03, 2.9957e-03, 5.8540e-03, 1.1067e-02, 9.0582e-03, 5.6360e-03, 6.3841e-03, 5.9298e-03, 1.9501e-04, 2.7967e-03, 3.5786e-03, 9.2363e-03, 8.3934e-03, 8.8185e-04, 5.4591e-03, 2.2451e-04, 2.2307e-03, 2.4871e-03, 3.6736e-03, 1.3842e-04, 2.7455e-03, 6.2199e-03, 1.1924e-02, 9.5953e-03, 1.6939e-03, 4.1919e-04, 9.3509e-05, 1.8351e-03, 6.3350e-04, 1.1076e-03, 1.5472e-03, 1.2104e-03, 3.1803e-04, 8.6507e-04, 3.0083e-03, 2.8435e-03, 1.6740e-03, 8.1023e-05, 7.5767e-04, 9.1442e-04, 2.0204e-03, 1.3987e-03, 3.7729e-03, 5.2012e-04, 2.0367e-03, 1.5177e-03, 1.6948e-03, 9.5833e-04, 1.2050e-03, 1.8356e-03, 9.4503e-04, 4.8612e-04, 1.6844e-04, 1.2222e-04, 1.7526e-03, 2.6397e-04, 1.3026e-03, 1.0704e-03, 3.6407e-04, 1.3135e-03, 2.6665e-03, 1.8639e-03, 3.0385e-05, 1.0212e-03, 7.6236e-04, 1.7878e-03, 2.4298e-03, 7.2158e-05, 1.2488e-03, 2.1347e-03, 3.9256e-03, 3.1436e-03, 3.1648e-03, 3.4657e-03, 1.3746e-03, 1.6927e-03, 1.0794e-03, 8.8152e-04, 1.1757e-04, 3.2254e-04, 4.1866e-04, 9.2787e-04, 2.0020e-03, 1.4813e-03, 1.1912e-03, 2.4577e-03, 2.2247e-03, 1.7862e-03, 1.7460e-03, 1.4388e-03, 4.3175e-04, 6.7808e-04, 2.6875e-04, 3.6475e-04, 8.7643e-04, 3.6790e-04, 2.1274e-04, 6.3725e-04, 2.0949e-03, 2.4069e-03, 1.7348e-03, 1.0026e-03, 1.2451e-03, 4.7888e-04, 5.9790e-04, 1.4343e-03, 4.0900e-03, 1.0176e-03, 5.5178e-04, 2.0624e-03, 1.2878e-03, 6.9607e-04, 4.3259e-04, 1.8573e-03, 7.5521e-04, 5.2949e-04, 3.4758e-04, 4.7898e-04, 7.5599e-04, 6.0631e-04, 1.7585e-03, 1.8156e-03, 3.2421e-04, 8.9446e-04, 7.2131e-04, 6.2817e-04, 1.0827e-03, 2.0211e-03], device='cuda:0') expanded_mask is: tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], device='cuda:0', grad_fn=<AsStridedBackward>) then masked_inputs is: tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0', grad_fn=<SelectBackward>) Looks like the 1's isn't being multiplied through.
Another way of using broadcasting: import torch mask = torch.tensor([[1, 0, 1]]) inputs = torch.randn(1, 3, 2) masked = inputs * mask[..., None] print(mask) print(inputs) print(masked) result: tensor([[1, 0, 1]]) tensor([[[ 2.2820, 2.7476], [-0.1738, -0.5703], [ 0.7077, -0.6384]]]) tensor([[[ 2.2820, 2.7476], [-0.0000, -0.0000], [ 0.7077, -0.6384]]]) The ellipsis (...) stands for all existing dimensions, and indexing with None adds a new dimension of size 1 at the end, so the mask broadcasts over the last dimension of inputs.
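Applied to the shapes in the question (assuming expanded_mask is [1, 208] and inputs is [1, 208, 161] as stated), the same broadcasting looks like this:
# [1, 208] -> [1, 208, 1], which broadcasts against the trailing 161
masked_inputs = inputs * expanded_mask.unsqueeze(-1)
# equivalently: inputs * expanded_mask[..., None]
print(masked_inputs.shape)  # torch.Size([1, 208, 161])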
https://stackoverflow.com/questions/60892687/
How to load BertForSequenceClassification model's weights into a BertForTokenClassification model?
Initially, I fine-tuned a BERT base uncased model on a text classification dataset, using the BertForSequenceClassification class for this. from transformers import BertForSequenceClassification, AdamW, BertConfig # Load BertForSequenceClassification, the pretrained BERT model with a single # linear classification layer on top. model = BertForSequenceClassification.from_pretrained( "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab. num_labels = 2, # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attention weights. output_hidden_states = False, # Whether the model returns all hidden-states. ) Now I want to use this fine-tuned BERT model's weights for Named Entity Recognition, for which I have to use the BertForTokenClassification class. I'm unable to figure out how to load the fine-tuned BERT model weights into the new model created with BertForTokenClassification. Thanks in advance.
You can take the weights of the bert submodule inside the first model and load them into the bert submodule inside the second: new_model = BertForTokenClassification(config=config) new_model.bert.load_state_dict(model.bert.state_dict())
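A slightly fuller sketch, assuming model is the fine-tuned BertForSequenceClassification from the question and num_ner_labels is a placeholder for your number of NER tags (the config variable above has to come from somewhere, e.g. BertConfig.from_pretrained):
from transformers import BertConfig, BertForTokenClassification

config = BertConfig.from_pretrained("bert-base-uncased", num_labels=num_ner_labels)  # num_ner_labels is hypothetical
new_model = BertForTokenClassification(config)
# copy only the shared encoder weights; the token-classification head stays freshly initialized
new_model.bert.load_state_dict(model.bert.state_dict())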
https://stackoverflow.com/questions/60897514/
Torchvision.transforms implementation of Flatten()
I have grayscale images, but I need to transform them into a dataset of 1D vectors. How can I do this? I could not find a suitable method in transforms: train_dataset = torchvision.datasets.ImageFolder(root='./data',train=True, transform=transforms.ToTensor()) test_dataset = torchvision.datasets.ImageFolder(root='./data',train=False, transform=transforms.ToTensor()) train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=4, shuffle=True) test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=4, shuffle=False)
Here's how you can do it using Lambda import torch from torchvision.datasets import MNIST import torchvision.transforms as T # without flatten dataset = MNIST(root='.', download=True, transform=T.ToTensor()) print(dataset[0][0].shape) # >>> torch.Size([1, 28, 28]) # with flatten (using Lambda, but you can do it in many other ways) dataset_flatten = MNIST(root='.', download=True, transform=T.Compose([T.ToTensor(), T.Lambda(lambda x: torch.flatten(x))])) print(dataset_flatten[0][0].shape) # >>> torch.Size([784])
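Plugged into an ImageFolder pipeline like the one in the question, it could look like the sketch below. Note that ImageFolder has no train argument, so train and test splits come from separate folders; the Grayscale transform is an assumption, added only because the question mentions grayscale images and ImageFolder's default loader converts images to RGB:
import torch
import torchvision
import torchvision.transforms as T

flatten_transform = T.Compose([
    T.Grayscale(num_output_channels=1),   # keep a single channel
    T.ToTensor(),
    T.Lambda(lambda x: torch.flatten(x)), # 1D vector per image
])
train_dataset = torchvision.datasets.ImageFolder(root='./data', transform=flatten_transform)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=4, shuffle=True)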
https://stackoverflow.com/questions/60900406/
How can I set precision when printing a PyTorch tensor with integers?
I have: mask = mask_model(input_spectrogram) mask = torch.round(mask).float() torch.set_printoptions(precision=4) print(mask.size(), input_spectrogram.size(), masked_input.size()) print(mask[0][-40:], mask[0].size()) This prints: tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<SliceBackward>) torch.Size([400]) But I want it to print 1.0000 instead of just 1. Why won't torch.set_printoptions(precision=4) do it, even when I converted the tensor to float()?
Unfortunately, this is simply how PyTorch displays tensor values whose fractional part is zero. Your code works fine: if you do print(mask * 1.1), you can see that PyTorch does indeed print 4 decimal places once the values can no longer be represented as integers.
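If you really need the trailing zeros in the printout, a workaround (a sketch, not a PyTorch printing option) is to format the values yourself, for example by going through NumPy:
import numpy as np

values = mask.detach().cpu().numpy()
print(np.array2string(values, formatter={'float_kind': lambda v: f'{v:.4f}'}))
# prints 1.0000 / 0.0000 instead of 1. / 0.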
https://stackoverflow.com/questions/60902576/
How to deploy Pytorch in Python via a REST API with Flask?
I am working on AWS Sagemaker and my goal is to follow this tutorial from Pytorch's official documentation. The original predict function from the tutorial above is the following: @app.route('/predict', methods=['POST']) def predict(): if request.method == 'POST': file = request.files['file'] img_bytes = file.read() class_id, class_name = get_prediction(image_bytes=img_bytes) return jsonify({'class_id': class_id, 'class_name': class_name}) I was getting this error, so I added 'GET' as a method as mentioned in here. I also simplified my example to its minimal expression: from flask import Flask, jsonify, request app = Flask(__name__) @app.route('/predict', methods=['GET','POST']) def predict(): if request.method == 'POST': return jsonify({'class_name': 'cat'}) return 'OK' if __name__ == '__main__': app.run() I perform requests with the following code: import requests resp = requests.post("https://catdogclassifier.notebook.eu-west-1.sagemaker.aws/proxy/5000/predict", files={"file": open('/home/ec2-user/SageMaker/cat.jpg', 'rb')}) resp is <Response [200]> but resp.json() returns JSONDecodeError: Expecting value: line 1 column 1 (char 0) Finally, resp.url points me to a page saying 'OK'. Moreover, this is the output of resp.content <!DOCTYPE HTML> <html> <head> <style type="text/css"> #loadingImage { margin: 10em auto; width: 234px; height: 238px; background-repeat: no-repeat; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOoAAADuBAMAAADFK4ZrAAAAMFBMVEUAAAAaGhoXFxfo6Og3NzdPT0/4+PgmJibFxcWhoaF8fHzY2NiysrJmZmaNjY1DQ0OLk+cnAAAAAXRSTlMAQObYZgAABlpJREFUeNrtnGnITFEYx5+a3Ox1TjRFPpxbGmSZsmfLliKy73RKZImaSVmy7xFJtmSLLFlHdrIklH3fhSyJLOEDQpak4865951znmfGUuf34f3w1u13/+c+d+a555w74HA4HA6Hw+FwOBwOh+P/Z8DBfec3be3WbdO2U3P2SPgTeA1WXmuT9n2fse9/Eq1vvUhCoYntWzfqu1DBfdZ+Yz0oKA0WpblgWYhE6+UFzBub+8ZnYfBE7z1QIIY9ZoJFICYuLUxZDbkhWBEkboVqPdrJNGwuWJGIz0nQiD3pPYMibcFy0j+jHTZN8LEZglTktvpfstOWFozxd4SkJvAOWdpj7DvVJE46uJNgRvh9ZVbU78RxQxzbwIzpo0Vl/AFG6l0Q5tbE2qyo3+mIsR7wmQWVJwWiYq3FOzEb+Jikioq2ekuYHfyOioq2NkkxS8qdVFGR1lJvmS18rFRRcVZVv+bEr6ioKOuQtwxB5STAAoa3LmEY+AcoIfDWEYKhqDVpAUNbvTUMSVuBsKqoaPDWh+zPW1VF6Pzsibmff6u3INI56su9Vz+4Pyqdb2vpFAuDi693Z9Ub9POJZ9+8rkzk1XokXFpuyyAJikHzbvh5tMbehg5u26laf3MiLfJmHSiYTuLuINDwDjQXOayUTwie2CghjP09clgptZTYLKNa1875sdYPk0Ikh9P5sHqX9fH9LCESb246D9bSQm8RkkWe5sIU3bpLs1aekaOXvEy26gMcfwQ5aJoiWvUB5h8l5GKVIFqbiezu4KjBqXYmWp9mR+0JBuwWJGusud71GVA8RbIOFVlRr4ORlTbCe1FRYTLBqt83vI9ZVFoNl0plt7eYqMqKu6yfUVdVWVGXNT4fFVVZUXdrBYmKqqyou/WTWVSitUTWAE8yi0q0DlfFZD43Nplq3cECdDeLSrR6CxADPJlsDRZTDYmMqqz23xx8NCIqwlpSWH/dFGdka51gCc83qT+aVR+tchLRBCCswRKuKk2uCdnqBZ4geUtEb4ewxoIlvNqkEliRXLSeA4lnEI+6CuMZ+DIi0DFJ2xPVyVh/9lcBA2Kdi5JWk7ZZeRcwYaYW1rIeoVQLphBHzVqmzpFav1IGDPAOtVL0lWBE4/utIug9FYzwaiuk8Qp/7QgkOBwOh8PhcDgcDofJ00M9MMSzP6bB9lYRtD9jKL2gjmk9w2zS/5JgEfDKZg+WJfzfjqluuxqpMw5MqCuYolYSsZ6OWdiZos2n0KYE+XyTE+9sPZ9SVhRp7W496VoVDCjGiqSl9XJQFbqVvzOpx8ARXXBW6wm+BdoRRGs8aTuDz+fTrSa3fGntPIlWPt96LrOyRFmt74Jj5qOjrOTPiEuBQ9qRrVURCzPrMVb7cmymrd3SrFURu7HKSaKVz0fsmagGRGtVzG6sikQrn4/Y9MDXE61VMbtE45MMraSocJwFKC8NraSoxd+wAKMBZ1VREXuNeEuatSpqn378KMnKj1q1pOqy0rJOQq1hjQaa9TMYMF3bjEW0xhcbNBEtsg/KEK2sZtJ+R/cEoFr5HchBU6GtC5KtLHEyxythb7SrMoluZWMzlq+EVZN5sPKPSbtXwlTXT+pgbkaf/IG0XvZHKVZFIkrrzU6HVL2kWZX2rjTeKB2/AjSrQvSqBxoDzimponKGalX4+jzMvqs+C2E0kK0KXm7b7++oewOWaa8YaLVEtzIuJt7atwd+MGjAvpfvuQi/uyXNquMnWt/e9urZpmtd0z4LJ74WyFbd6/uM//gTRaUkwkqFd4e/YK0sCdar+Kh4a/WGb3HW8hJv5UfVy5lWxK8D3lodoDgq7IQkyqp64LopRNTFgLKqdj92Q1iX0jiJtKonm4Zp61LKANKqnmy8I7bWzYC2qoe4UhvsxvezJFirIt8yrjwJMFZ96+Ihi0tbbr29Us3A1wx0nebWm4CiVErfpRlbxTRytcyYd5lrQoDiZhVF+LWOZiJ
kQ+pgkw8LPn4GYIltEL5aolJpU7mlkwDPsCe9kiH/fc1zNDUqKQpPhp9MukhpgX50J3a+jR/pjN9MQmHwGl/1RcQTwSMJBSN2+n1Ku7yCj9ySgYJSe25X5ovgr0bd2imh0AxbueJ96tcvZI2aeO9FPfgjeI32nX2+qVu/TZuWHtw5CBwOh8PhcDgcDofD4XA4HP8i3wDmy/sFKv4WfAAAAABJRU5ErkJggg==); -webkit-animation:spin 4s linear infinite; -moz-animation:spin 4s linear infinite; animation:spin 4s linear infinite; } @-moz-keyframes spin { 100% { -moz-transform: rotate(360deg); } } @-webkit-keyframes spin { 100% { -webkit-transform: rotate(360deg); } } @keyframes spin { 100% { -webkit-transform: rotate(360deg); transform:rotate(360deg); } } </style> </head> <body> <div id="loadingImage"></div> <script type="text/javascript"> var RegionFinder = (function() { function RegionFinder( location ) { this.location = location; } RegionFinder.prototype = { getURLWithRegion: function() { var isDynamicDefaultRegion = ifPathContains(this.location.pathname, "region/dynamic-default-region"); var queryArgs = removeURLParameter(this.location.search, "region"); var hashArgs = this.location.href.split("#")[1] || ""; if (hashArgs) { hashArgs = "#" + hashArgs; } var region = this._getCurrentRegion(); var newArgs = "region=" + region; if (_shouldAuth()) { newArgs = "needs_auth=true"; region = "nil"; } if (queryArgs && queryArgs != "?") { queryArgs += "&" + newArgs; } else { queryArgs = "?" + newArgs; } if (!region) { var contactUs = "https://portal.aws.amazon.com/gp/aws/html-forms-controller/contactus/aws-report-issue1"; alert("How embarrassing! There is something wrong with this URL, please contact AWS at " + contactUs); } var pathname = isDynamicDefaultRegion ? "/console/home" : this.location.pathname; return this.location.protocol + "//" + _getRedirectHostFromAttributes() + pathname + queryArgs + hashArgs; }, _getCurrentRegion: function() { return _getRegionFromHash( this.location ) || _getRegionFromAttributes(); } }; function ifPathContains(url, parameter) { return (url.indexOf(parameter) != -1); } function removeURLParameter(url, parameter) { var urlparts= url.split('?'); if (urlparts.length>=2) { var prefix= encodeURIComponent(parameter); var pars= urlparts[1].split(/[&;]/g); //reverse iteration as may be destructive for (var i= pars.length; i-- > 0;) { if (pars[i].lastIndexOf(prefix, 0) !== -1) { pars.splice(i, 1); } } url= urlparts[0]+'?'+pars.join('&'); return url; } else { return url; } } function _getRegionFromAttributes() { return "eu-west-1"; }; function _shouldAuth() { return ""; }; function _getRedirectHostFromAttributes() { return "eu-west-1.console.aws.amazon.com"; } function _getRegionFromHash( location ) { var hashArgs = "#" + (location.href.split("#")[1] || ""); var hashRegionArg = ""; var match = hashArgs.match("region=([a-zA-Z0-9-]+)"); if (match && match.length > 1 && match[1]) { hashRegionArg = match[1]; } return hashRegionArg; } return RegionFinder; })(); var regionFinder = new RegionFinder( window.location ); window.location.href = regionFinder.getURLWithRegion(); </script> </body> </html> What am I missing?
Looks like the content of your resp is HTML as opposed to JSON; this is likely a consequence of how the Jupyter server proxy endpoint you're attempting to POST to (https://catdogclassifier.notebook.eu-west-1.sagemaker.aws/proxy/5000/predict) is configured. It looks like you're using a SageMaker notebook instance, so you might not have much control over this configuration. A workaround could be to deploy your Flask server as a SageMaker endpoint running outside JupyterLab, instead of directly on a notebook instance. If you want to prototype using only a notebook instance, you can alternatively bypass the proxy entirely and call your Flask route relative to localhost from another notebook tab while the Flask server runs in your main notebook tab (note that the Flask development server speaks plain HTTP, not HTTPS): import requests resp = requests.post("http://localhost:5000/predict", files={"file": open('/home/ec2-user/SageMaker/cat.jpg', 'rb')})
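As a quick sanity check before calling resp.json(), you can inspect what the server or proxy actually returned. A sketch using the same requests response object:
print(resp.status_code)
print(resp.headers.get('Content-Type'))  # 'application/json' means jsonify reached you
print(resp.text[:200])                   # peek at the raw body (the HTML page in your case)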
https://stackoverflow.com/questions/60903977/