st84268
I’m using ResNet18 for a classification task. I tried using the pretrained model and training it from scratch: one thing that bothers me is that the time per epoch is more or less the same for both. On my GPU (GTX 1060), it takes roughly 19 minutes if I use the pretrained model, and around 23 minutes if I train it from scratch. For the pretrained model, I just re-initialize the first convolutional layer and the FC layer. Is this expected?
st84269
Solved by ptrblck in post #2.
st84270
If you train all layers, it’s obviously expected, since you won’t save any computations. However, if you freeze some layers, you might save the backward computation, since you don’t need the gradients before a certain layer. In your use case, it seems you are retraining the input layer, thus the backward pass has to go through the complete model up to the input layer, even if the intermediate layers do not require gradients.
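For illustration, here is a minimal sketch of the complementary case where freezing does help (assuming a stock torchvision ResNet18; the 10-class head is made up): everything except the new classifier is frozen, so the backward pass can stop early.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

# Freeze all pretrained parameters
for param in model.parameters():
    param.requires_grad = False

# Replace (and thereby unfreeze) only the classifier head
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the trainable parameters go to the optimizer
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)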
st84271
Well, yes that makes sense. Unfortunately my images have 6 channels, so the only way I found is to replace the input layer to accept 6 channels. I don’t think it would make sense to initialize it with the weights of ResNet since they are not RGB images. Thank you.
st84272
can someone please help me with this error? This is my CNN:

class DnCNN(nn.Module):
    def __init__(self, channels, num_of_layers=17):
        super(DnCNN, self).__init__()
        kernel_size = 3
        padding = 1
        features = 64
        layers = []
        layers.append(nn.Conv2d(in_channels=channels, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))
        layers.append(nn.ReLU(inplace=True))
        for _ in range(num_of_layers - 2):
            layers.append(nn.Conv2d(in_channels=features, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))
            layers.append(nn.BatchNorm2d(features))
            layers.append(nn.ReLU(inplace=True))
        layers.append(nn.Conv2d(in_channels=64, out_channels=3, kernel_size=kernel_size, padding=padding, bias=False))
        self.dncnn = nn.Sequential(*layers)

    def forward(self, x):
        out = self.dncnn(x)
        return out

This is a part of the training:

# Build model
net = DnCNN(channels=3, num_of_layers=opt.num_of_layers)
net.apply(weights_init_kaiming)
criterion = nn.MSELoss(size_average=False)

The training code runs normally and it outputs an error only at the end of the first epoch, saying

RuntimeError: Given groups=1, weight [64, 3, 3, 3], expected input[1, 500, 500, 3] to have 3 channels, but got 500 channels instead

As you can see, I don’t have any number of channels set to 500 and still it outputs this error…
st84273
Based on the error message it seems like you are passing an image tensor in channel-last ordering ([batch_size, w, h, c]), while PyTorch expects channel-first inputs as [batch_size, c, h, w]. Could you check, if all your inputs have this shape?
st84274
Yes that was the problem and I fixed it by resizing the input, thank you very much for your reply
st84275
Good to hear it’s working. However, I’m a bit afraid when you say “resizing the input”, as a view or reshape won’t yield the desired output. To swap the axes, use permute, if you haven’t already done so.
st84276
my input’s shape is [500,500,3], and since, as you said, pytorch expects [batch_size,c,h,w], I resized the image by using np.resize(img,(3,500,500)), then I converted it into a tensor by using torch.from_numpy(img) and then I applied torch.unsqueeze(img,0). Will this give the desired output, or is there a problem?
st84277
Yeah, that might be problematic. Have a look at this small example:

x = np.zeros((500, 500, 3))
x[200:300, 200:300, :] = 1.
plt.imshow(x)

x_reshaped = np.resize(x, (3, 500, 500))
print(x_reshaped.shape)
> (3, 500, 500)
print(x_reshaped.sum(1).sum(1))
> [ 0. 30000. 0.]

x_transposed = x.transpose(2, 0, 1)
plt.imshow(x_transposed[0])
print(x_transposed.sum(1).sum(1))
> [10000. 10000. 10000.]

As you can see, the reshape operation will put all ones into the second channel, which is wrong. Use transpose in numpy or permute in pytorch instead.
st84278
How torch.nn.Conv2d compute the convolution matrix using its default filter. I have a small problem to know how the calculation is performed and how to use my own filter (mask vector), and why we use unsqueeze from the model. My code: inputconv = torch.randn(3, 4, 4) inputconv.unsqueeze_(0) mm = torch.nn.Conv2d(3, 3, kernel_size=2, padding=0, stride=(2, 1)) print(inputconv) print('Conv2d: ') print(mm(inputconv)) output: tensor([[[[ 0.8845, 0.2022, -0.8536, -0.5750], [-0.6650, 0.1512, 1.4356, 0.4598], [ 0.1666, 1.8639, 0.2500, -0.3754], [-1.1593, 0.1265, 1.1665, -0.6877]], [[-0.3083, -0.7364, 1.6745, -1.6611], [ 0.7673, -0.9379, 0.0095, -0.4120], [ 0.8867, 0.0865, 0.1563, -0.0828], [ 0.8034, -0.6904, -0.4510, -0.6925]], [[-0.4779, -0.7025, -0.1098, 0.6809], [ 0.3011, 0.3318, -0.7675, -0.8880], [ 0.9923, -1.5812, -0.1318, 0.2460], [-0.0436, -0.1883, -0.1694, 0.3716]]]]) Conv2d: tensor([[[[-0.1268, 0.0031, -0.3460], [-0.0473, -0.3350, -0.1950]], [[ 0.1417, -0.8431, -0.4891], [ 0.9959, -0.2300, 0.1318]], [[ 0.0118, -0.4037, -0.5443], [ 0.3780, -0.5303, -0.3546]]]]) My question is how to custom filters using tensor variables: example tf = torch.Tensor([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) Thank you all in advance!
st84279
You need to use .unsqueeze_(0) in your example, since the batch dimension is missing for inputconv. nn.Conv2d expects an input of the shape [batch_size, channels, height, width]. .unsqueeze(0) adds an additional dimension at position 0, i.e. inputconv will have shape [1, 3, 4, 4]. You can create a custom filter kernel and apply it using the functional API. The shape of the filter kernel should be [number_of_filters, input_channels, height, width]. Here is a small example: conv_filter = torch.randn(1, 3, 5, 5) x = torch.randn(1, 3, 24, 24) output = F.conv2d(x, conv_filter, padding=2)
st84280
I tried with a small matrix and a custom conv_filter, it works fine.

conv_filter = torch.Tensor([[[[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]]])
x = torch.randn(1, 3, 5, 4)
output = F.conv2d(x, conv_filter, padding=0)

Thank you for your help
st84281
What if instead of a random initial weight, the filter is created using a custom function? Suppose there are two parameters (variables) that are responsible for producing an entry in the weight matrix (2 for each entry, for example), something like this. How should we go about such cases?
st84282
Hi, Our server has 56 cpu cores, but when I use the dataloader with num_workers=0, it took all the cpu cores. From htop, I see that all cpu cores works with workload of 100%. What is the cause of this, and how could I confine the cpu usage to a few cpu cores? Thanks, CoinCheung
st84283
github.com/pytorch/pytorch — Issue: “Number of CPU threads for the python process”, opened by mahmoodn on 2019-02-08: “For a test, I didn't use --cuda in order to run a cpu version. While the CPU has 8 physical cores...” This might be related to your problem.
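As a possible workaround (not a definitive fix for the issue above), you can usually cap the number of intra-op threads PyTorch spawns; OMP_NUM_THREADS and MKL_NUM_THREADS are the standard OpenMP/MKL variables and the core count of 4 is just an example:

import os
# Set the env vars before importing torch so OpenMP/MKL pick them up.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

import torch
torch.set_num_threads(4)  # cap the threads used for intra-op parallelism

On Linux you can additionally pin the process to specific cores from the shell, e.g. taskset -c 0-3 python train.py.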
st84284
Thanks, but there is one thing that I cannot explain. I have set my dataloader worker number to be 0, and other operations are done with gpu. Why will there still be many threads if no constraints are set? What are these threads doing?
st84285
Guys, I am making a Sentiment Analysis Classifier and now I am stuck with a problem called ValueError: setting an array element with a sequence. The dataset is twitter dataset, as it can contain mainly un-processed tweets, so I first processed the data, by cleaning the punctuations and un-used words or not-meaningful words. Then as we need to tokenized our data to feed it into the embedding and LSTM layer, so I did it before splitting the data into train set and test set. Size of train and test set - (31962,) (17197,) and train[‘label’](label part) is (31962,). This is my classifier, that I made using keras to do it with LSTM. max_len=50 max_features=20000 classifier= Sequential() classifier.add(Embedding(max_features, 100, mask_zero=True)) classifier.add(LSTM(200, dropout=0.3, recurrent_dropout=0.3, return_sequences=False)) classifier.add(Dense(1, activation='softmax')) classifier.compile(loss = 'sparse_categorical_crossentropy', optimizer='adam',metrics = ['accuracy']) classifier.summary() Now after making callback and using classifier.fir function, it should return a training loss value along with epoch count, but it is giving me an unexpected error shown below - callback = [EarlyStopping(monitor='val_loss', patience=2),ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)] classifier.fit(X_train, y_train,batch_size=100,epochs=5,callbacks=callback ,validation_data=(X_test, y_test)) ----> 2 classifier.fit(X_train, y_train,batch_size=100,epochs=5,callbacks=callback ,validation_data=(X_test, y_test)) 537 """ --> 538 return array(a, dtype, copy=False, order=order) 539 540 ValueError: setting an array element with a sequence. Can anyone of you help me to resolve this error, I am almost done with my project, but this is out of my knowledge. Please help and also please ask if anything is unclear. Thanks.
st84286
While we try to help with questions that use libraries and models built on top of PyTorch when people have errors that ultimately come down to how they use PyTorch, and Keras is bound to be a fine library, it would seem that most people here are not using it as much. I would suggest using Keras’ suggested support venues, because ultimately there you will find people using it every day. That said, do check out fast.ai if you want a PyTorch-based library that roughly has similar intentions as Keras. Best regards Thomas
st84287
Sir, I am sorry that I posted in the wrong group, but can you tell me or help me regarding this in PyTorch? Actually I am almost done with this project and this can be the end of it, I hope you are getting me. So, can I personally message you here regarding this? Because I am not getting how I can do this in PyTorch. Still I am very much interested to do it in PyTorch as I have done several projects in PyTorch. Thanks, and any lead will be helpful to me.
st84288
Hello fellow Pytorchers, I am trying to add normalization to the custom Dataset class Pytorch provides inside this 22 tutorial. The problem is that it gives always the same error: TypeError: tensor is not a torch image. As you can see inside ToTensor() method it returns: return {‘image’: torch.from_numpy(image),‘masks’: torch.from_numpy(landmarks)} so I think it returns a tensor already. I give you my code: class Rescale(object): """Rescale the image in a sample to a given size. Args: output_size (tuple or int): Desired output size. If tuple, output is matched to output_size. If int, smaller of image edges is matched to output_size keeping aspect ratio the same. """ def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) self.output_size = output_size def __call__(self, sample): image, landmarks = sample['image'], sample['masks'] h, w = image.shape[:2] if isinstance(self.output_size, int): if h > w: new_h, new_w = self.output_size * h / w, self.output_size else: new_h, new_w = self.output_size, self.output_size * w / h else: new_h, new_w = self.output_size new_h, new_w = int(new_h), int(new_w) img = transform.resize(image, (new_h, new_w)) # h and w are swapped for landmarks because for images, # x and y axes are axis 1 and 0 respectively #landmarks = landmarks return {'image': img, 'masks': landmarks} class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, landmarks = sample['image'], np.array(sample['masks']) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose(2,0,1) return {'image': torch.from_numpy(image),'masks': torch.from_numpy(landmarks)} transformed_train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/', transform=transforms.Compose([ Rescale(224), ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5]) ])) for i in range(len(transformed_train_dataset)): sample = transformed_train_dataset[i] print(i, sample['image'].size(), sample['masks']) if i == 3: break train_dataloader = DataLoader(transformed_train_dataset, batch_size=4, shuffle=True, num_workers=4) I am using grayscale images converted to RGB. Thank you in advance!
st84289
Solved by soloupis in post #10.
st84290
Hi, It is about the code you have implemented in the __getitem__() method in your MasksTrainDataset. Can you post how you return an item of your dataset using this method?
st84291
@Nikronic Check this out: class MasksTrainDataset(Dataset): """Masks Train dataset.""" def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.masks_frame = pd.read_csv(csv_file, skiprows=1500, nrows=10000) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.masks_frame) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.masks_frame.iloc[idx, 0]) image = io.imread(img_name + '.png') image = cv2.cvtColor(image,cv2.COLOR_GRAY2RGB) ## use strip to get exact result masks = self.masks_frame.iloc[idx, 1].strip() if masks == '-1': mark = 0 else: mark = 1 #masks = np.array([masks]) #masks = masks.astype('float').reshape(-1, 2) sample = {'image': image, 'masks': mark} if self.transform: sample = self.transform(sample) return sample train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/') print(len(train_dataset)) for i in range(len(train_dataset)): sample = train_dataset[i] print(i, sample['image'].shape, sample['masks']) print(type(sample['masks'])) if i == 4: plt.show() break What do you think?
st84292
OK, it seems your image and masks are CV2 objects. Pytorch’s image backend is Pillow if you want to do some transformation on it. And as you can see in the ToTensor class, it expects a numpy array or a PIL image. So you can solve this issue by converting your image and masks to numpy or Pillow images in __getitem__(). I have not tried it, but np.array(your image or mask) should do the job.
st84293
@Nikronic It seems that I cannot make it work. I have to use a method to turn a one-channel grayscale image into a 3-channel (RGB) one. I thought I had managed it with CV but I had problems with the normalize function. I used: image = Image.open(img_name + '.png').convert('RGB') but then it raises other shape related errors. Do you think there is an error in the above code, instead of using CV?
st84294
Actually, your problem should not be CV or PIL, because if you provide a numpy array, they will have the same result sometimes. Here your code to convert to RGB is correct, and PIL just duplicates the gray channel twice and concatenates the copies to make it a 3-channel image. Try this code and please print the errors (it is hard to track without having errors):

import numpy as np
print(np.array(Image.open(img_name + '.png')).shape)
st84295
@Nikronic I changed everything to below code: class MasksTrainDataset(Dataset): """Masks Train dataset.""" def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.masks_frame = pd.read_csv(csv_file, skiprows=1500, nrows=10000) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.masks_frame) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.masks_frame.iloc[idx, 0]) image = Image.open(img_name + '.png') ## use strip to get exact result masks = self.masks_frame.iloc[idx, 1].strip() if masks == '-1': mark = 0 else: mark = 1 #masks = np.array([masks]) #masks = masks.astype('float').reshape(-1, 2) sample = {'image': np.array(image), 'masks': np.array(mark)} if self.transform: sample = self.transform(sample) return sample train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/') print(len(train_dataset)) for i in range(len(train_dataset)): sample = train_dataset[i] print(i, sample['image'].shape, sample['masks']) print(type(sample['masks'])) if i == 4: plt.show() break and Transforms: class Rescale(object): """Rescale the image in a sample to a given size. Args: output_size (tuple or int): Desired output size. If tuple, output is matched to output_size. If int, smaller of image edges is matched to output_size keeping aspect ratio the same. """ def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) self.output_size = output_size def __call__(self, sample): image, landmarks = sample['image'], sample['masks'] h, w = image.shape[:2] if isinstance(self.output_size, int): if h > w: new_h, new_w = self.output_size * h / w, self.output_size else: new_h, new_w = self.output_size, self.output_size * w / h else: new_h, new_w = self.output_size new_h, new_w = int(new_h), int(new_w) img = transform.resize(image, (new_h, new_w)) # h and w are swapped for landmarks because for images, # x and y axes are axis 1 and 0 respectively #landmarks = landmarks return {'image': np.array(img), 'masks': np.array(landmarks)} class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, landmarks = np.array(sample['image']), np.array(sample['masks']) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose(2,0,1) #print(image.shape) #normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5]) #return {'image': torch.from_numpy(image).unsqueeze(0),'masks': torch.from_numpy(landmarks)} #print(torch.is_tensor(torch.from_numpy(image)) and torch.from_numpy(image).ndimension() == 3) return {'image': torch.from_numpy(image),'masks': torch.from_numpy(landmarks)} transformed_train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/', transform=transforms.Compose([ Rescale(224), ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5]) ])) for i in range(len(transformed_train_dataset)): sample = transformed_train_dataset[i] print(i,sample['image'].size(),sample['masks']) if i == 3: break train_dataloader = DataLoader(transformed_train_dataset, batch_size=4, shuffle=True, num_workers=4) Still the same error: ---------------------------------------------------------------------- TypeError Traceback (most 
recent call last) <ipython-input-98-710d244d9279> in <module>() 61 62 for i in range(len(transformed_train_dataset)): ---> 63 sample = transformed_train_dataset[i] 64 65 print(i,sample['image'].size(),sample['masks']) 3 frames <ipython-input-96-283923ce157c> in __getitem__(self, idx) 37 38 if self.transform: ---> 39 sample = self.transform(sample) 40 41 return sample /usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in __call__(self, img) 59 def __call__(self, img): 60 for t in self.transforms: ---> 61 img = t(img) 62 return img 63 /usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in __call__(self, tensor) 162 Tensor: Normalized Tensor image. 163 """ --> 164 return F.normalize(tensor, self.mean, self.std, self.inplace) 165 166 def __repr__(self): /usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in normalize(tensor, mean, std, inplace) 199 """ 200 if not _is_tensor_image(tensor): --> 201 raise TypeError('tensor is not a torch image.') 202 203 if not inplace: TypeError: tensor is not a torch image. What do you think?
st84296
@Nikronic I think the problem is because ToTensor custom method returns a dictionary. I changed code to below: transformed_train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/', transform=transforms.Compose([ Rescale(224), ToTensor(), transforms.Lambda(lambda x: x['image'].repeat(1, 1, 1)), transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5]), ])) and normalization works. BUT now with Lambda function I lose labels (x[‘masks’]). I found where is the problem though. So I have to normalize image before returning a dictionary at ToTensor custom method.
st84297
@Nikronic Final and working class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, landmarks = sample['image'], np.array(sample['masks']) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose(2,0,1) print(image.shape) image = torch.from_numpy(image).float() in_transform = transforms.Compose([transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5])]) ## discard the transparent, alpha channel (that's the :3) image = in_transform(image)[:3,:,:] return {'image': image,'masks': torch.from_numpy(landmarks)} #return torch.from_numpy(image).float() transformed_train_dataset = MasksTrainDataset(csv_file='pneumo_input/train/train-rle.csv', root_dir='pneumo_input/train/images/256/dicom/', transform=transforms.Compose([ Rescale(224), ToTensor(), ])) for i in range(len(transformed_train_dataset)): print(type(transformed_train_dataset)) print(len(transformed_train_dataset)) sample = transformed_train_dataset[i] print(i,sample['image'],sample['masks']) if i == 3: break train_dataloader = DataLoader(transformed_train_dataset, batch_size=4, shuffle=True, num_workers=4)
st84298
Yes you right, you should not return a dictionary in ToTensor or any of Transforms class. Sorry if I answered late (time zone differences!). But I have a suggestion here. It is better to build your classes modular so you can use them in other tasks with different datasets easily. For instance, maybe you need 3 or 4 images to be transformed or using different transforms on them. In this case you have to edit your ToTensor or Rescale class. So I think it is better to implement all transform classes for only a sample of input, actually, this is the approach has been chosen in PyTorch. If I want to explain scenario, I can say if want to do other transforms for example adding gaussian noise to your image not landmarks, you will be stuck again and you have change your ToTensor code because still you are returning dictionary or even you are using another transform inside another one. But if your classes only take one tensor as input and return the changed tensor, you can use all of your custom classes in any order or in any dataset you want. Custom Dataset with some preprocessing vision When should I do this based on Dataset class? Should I do my processing step in __getitem__() ? If yes, would it be parallel and fast? Yes. the __getitem__ calls are the ones run in parallel. Here’s some example: class CelebaDataset(Dataset): """Custom Dataset for loading CelebA face images""" def __init__(self, txt_path, img_dir, transform=None): df = pd.read_csv(txt_path, sep=" ", index_col=0) self.img_dir = img_dir self.txt_path = txt_path … By the way, I use same approach as pytorch so I really did not think about your ToTensor custom implementation. usage in preprocessing step github.com Nikronic/CoarseNet/blob/master/utils/preprocess.py#L98-L101 3 if self.transform is not None: X = self.transform(X) random.seed(seed) y_descreen = self.transform_gt(y_descreen) usage in DataLoaders github.com Nikronic/CoarseNet/blob/master/Train.py#L147-L153 7 custom_transforms = Compose([ RandomResizedCrop(size=224, scale=(0.8, 1.2)), RandomRotation(degrees=(-30, 30)), RandomHorizontalFlip(p=0.5), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), RandomNoise(p=0.5, mean=0, std=0.1)]) Custom Transform github.com Nikronic/CoarseNet/blob/master/utils/preprocess.py#L109-L119 2 class RandomNoise(object): def __init__(self, p, mean=0, std=0.1): self.p = p self.mean = mean self.std = std def __call__(self, img): if random.random() <= self.p: noise = torch.empty(*img.size(), dtype=torch.float, requires_grad=False) return img+noise.normal_(self.mean, self.std) return img Good luck
st84299
Hi, I have two models, one DenseNet and one EfficientNet. When I do sequential on both:

body = nn.Sequential(*list(model.children())[:-1])

in this one the dict keys are preserved, like conv0, relu0 etc., but in the one below the dict keys get lost and are converted into 0, 1, 2. Why is there a difference?

body1 = nn.Sequential(*list(md_ef.children())[:-1])

Sequential( (0): Conv2dStaticSamePadding( 3, 48, kernel_size=(3, 3), stride=(2, 2), bias=False (static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0) ) (1): BatchNorm2d(48, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True) (2): ModuleList(
st84300
I don’t think that that is actually the case, could it be that the outermost sequential (which is the only bit you are changing) is just short for the first? You would need to show more of what you are doing to get help, preferably enclosing the code in triple backticks (```). Best regards Thomas
st84301
I’m trying to continue training after saving my models and optimizers. However, it seems some part of the optimizer (Adam) is not being saved, because when I restart training from a checkpoint, the values move rapidly from the old training path, but then stabilize again. For example, the following three plots show this, with each line being a single trial, where the second line is the loaded version of the trial. Note, this is a GAN, so these values are not all expected to nicely descend, and you can ignore what each of these values refer to. Just that they’re not continuing when loaded as they were when not loaded. I would expect the loaded versions to roughly follow what the non-loaded versions had done, but they clearly deviate very quickly at the beginning. I’m guessing I’m just doing something incorrectly during my model/optimizer loading/saving, but I’m not sure what it is. Below is (approximately) what I’m using to load and save the training states. generator_optimizer = Adam(generator.parameters()) discriminator_optimizer = Adam(discriminator.parameters()) # Load from files. d_model_state_dict = torch.load(d_model_path) d_optimizer_state_dict = torch.load(d_optimizer_path) g_model_state_dict = torch.load(g_model_path) g_optimizer_state_dict = torch.load(g_optimizer_path) with open(meta_path, 'rb') as pickle_file: metadata = pickle.load(pickle_file) step = metadata['step'] epoch = metadata['epoch'] # Restore discriminator. discriminator.load_state_dict(d_model_state_dict) discriminator_optimizer.load_state_dict(d_optimizer_state_dict) discriminator_optimizer.param_groups[0].update({'lr': initial_learning_rate, 'weight_decay': weight_decay}) discriminator_scheduler = lr_scheduler.LambdaLR(discriminator_optimizer, lr_lambda=learning_rate_multiplier_function) discriminator_scheduler.step(epoch) # Restore generator. generator.load_state_dict(g_model_state_dict) generator_optimizer.load_state_dict(g_optimizer_state_dict) generator_optimizer.param_groups[0].update({'lr': initial_learning_rate}) generator_scheduler = lr_scheduler.LambdaLR(generator_optimizer, lr_lambda=learning_rate_multiplier_function) generator_scheduler.step(epoch) ... # Save. torch.save(discriminator.state_dict(), d_model_path) torch.save(discriminator_optimizer.state_dict(), d_optimizer_path) torch.save(generator.state_dict(), g_model_path) torch.save(generator_optimizer.state_dict(), g_optimizer_path) with open(meta_path, 'wb') as pickle_file: pickle.dump({'epoch': epoch, 'step': step}, pickle_file) Does anyone know if there’s anything here I’m obviously doing wrong that I’m missing? I should also note, I have setup the code so that I can start a new trial of training (reseting the learning rate and whatnot) so that I don’t have to train from scratch. Perhaps there’s something I did wrong there causing the continuation training to have problems? Or did I make some other mistake in loading? Or is this due to some limitation with PyTorch’s current loading apparatus? Thank you for your time!
st84302
I think this is related to this discussion: Saving and loading a model in Pytorch?
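In case it helps, a common pattern (just a sketch, the checkpoint_path and dict keys are made up) is to keep everything needed to resume (model, optimizer states and the epoch/step counters) in one checkpoint dict, and to avoid touching the optimizer hyperparameters after load_state_dict unless you really intend to override them:

# Save
torch.save({
    'epoch': epoch,
    'step': step,
    'generator': generator.state_dict(),
    'generator_optimizer': generator_optimizer.state_dict(),
    'discriminator': discriminator.state_dict(),
    'discriminator_optimizer': discriminator_optimizer.state_dict(),
}, checkpoint_path)

# Resume
checkpoint = torch.load(checkpoint_path)
generator.load_state_dict(checkpoint['generator'])
generator_optimizer.load_state_dict(checkpoint['generator_optimizer'])
discriminator.load_state_dict(checkpoint['discriminator'])
discriminator_optimizer.load_state_dict(checkpoint['discriminator_optimizer'])
epoch, step = checkpoint['epoch'], checkpoint['step']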
st84303
Hi there, I got the same problem. Have you fixed it? Can you provide any suggestions please?
st84304
Hello, I am new to PyTorch. I’m struggling data loader part in PyTorch. Objective I want to bring a pair of images and a binary value written on test.npy file. It will look like this. To do that, I prepared the location of the image pair and a binary value written on loc.npy file. It will look like this. (My account allowed to upload only one image. sorry) I saw a similar question in PyTorch forum. [1] 1 Based on that I wrote a code like this. class MyDataset(Dataset): def __init__(self, data_paths_1,data_paths_2, label_list, transform=None, target_transform=None): # length of data_paths_1 and data_paths_2 is equal self.data_paths_1 = data_paths_1 # source1 image dataset links ex) ["./AA/a.jpg","./BB/b.jpg","./CC/c.jpg",...] self.data_paths_2 = data_paths_2 # source2 image dataset links ex) ["./BB/b.jpg","./CC/c.jpg","./DD/d.jpg",...] self.label_lists = label_list # label_list ex) [0,1,1,...] self.transform = transforms def __getitem__(self, index): x1 = Image.open(self.data_paths_1[index]) x2 = Image.open(self.data_paths_2[index]) if self.transform: x1 = self.transform(x1) x2 = self.transform(x2) y = self.label_lists[index] return x1,x2, ys def __len__(self): return len(self.data_paths_1) My hardware environment is not really good so I can’t run it right now. Can you check if I missed something on that code line? What I want is that check the overall architecture of the code. you can ignore minor problems. I have no one to check my code so I uploaded my code in here. Sorry for just uploading code line in here.
st84305
Hi Yupjun, I understand that you don’t have the computing resources right now to test your code. Fortunately, there are online services that seek to help people without access. One such website is Google Colab. This should enable you to carry out tests on your code in a GPU environment. There is also support for PyTorch; a tutorial for the same can be found here. Hope this helps!
st84306
class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 128, 3, 1, 1) self.relu1 = nn.ReLU() self.pool1 = nn.MaxPool2d(2) self.conv2 = nn.Conv2d(128, 256, 3, 1, 1) self.relu2 = nn.ReLU() self.pool2 = nn.MaxPool2d(2) self.fc1 = nn.Linear(256 * 56 * 56, 20) self.fc2 = nn.Linear(20, 1) def forward(self, x): x = self.conv1(x) x = self.relu1(x) x = self.pool1(x) x = self.conv2(x) x = self.relu2(x) x = self.pool2(x) x = x.view(x.size(0), -1) x = self.fc1(x) x = self.fc2(x) return x model = Net() criterion = nn.CrossEntropyLoss() learning_rate = 0.1 optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate) train_loader = torch.utils.data.DataLoader(dataset=load_dataset(), batch_size=200, shuffle=True) validation_loader = torch.utils.data.DataLoader(dataset=load_validate(), batch_size=5000, shuffle=True) Try to fixed follow this link but nothing changed. https://discuss.pytorch.org/t/always-output-of-0/21784/4
st84307
If your problem is a binary classification, use BCELoss or BCEWithLogitsLoss.
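For completeness, a minimal sketch of that change for a single-output model like the one above (the batch size of 8 and the random tensors are just placeholders): the last layer keeps out_features=1, no final softmax is used, and the targets are passed as floats.

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # applies the sigmoid internally, so the model outputs raw logits

logits = torch.randn(8, 1, requires_grad=True)   # stand-in for the model output, shape [batch_size, 1]
targets = torch.randint(0, 2, (8, 1)).float()    # binary labels as floats

loss = criterion(logits, targets)
loss.backward()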
st84308
I already found the reason. When I use a single GPU to train my code, the BN weight is normal. But when I use torch.multiprocessing and torch.nn.parallel.DistributedDataParallel for 4 GPUs, it’s weird that all BN weights in a layer have the same value. I use pytorch 1.2.0, ubuntu16, cuda10. I use multiprocessing and DistributedDataParallel in my code like https://github.com/pytorch/examples/blob/master/imagenet/main.py. Can someone help me?
st84309
Hello all, I’ve been working with Pytorch for a quite a time now and I’m interested in contributing to its open source. I was wondering if anyone had any tips for really getting involved or areas where development is pretty hot? thanks!
st84310
Hi Victor! @tom published a blog post about fixing your first bug in PyTorch recently (with a video showing him fixing one bug), which might be a good starter. I would also suggest having a look at the usability / simple-fixes category in the GitHub issues. Let us know if you cannot decide where to start.
st84311
I’m now dealing with images with 0 and 3 pixel values. Values of 0 and 1 are unbalance, and often have only a value of 0 in one image. I’m trying 2label(background+desired) segmentation with UNet. Data info as follow - X : numpy_images (dtype:uint8) Y : numpy_masks (dtype:uint8) X_max --> 255, Y_max --> 3 X_shape --> (480, 512, 512), Y_shape --> (480, 512, 512) I converted X,Y to PIL using x = x.fromarray(x)for applying augmentation and also X,Y were converted to tensor x = TF.to_tensor(x).float(), y = TF.to_tensor(y).long() X_tesnsor_shape --> torch.Size([1, 1, 512, 512]) Y_tesnsor_shape --> torch.Size([1, 512, 512]) output_tensor_shape --> torch.Size([1, 2, 512, 512]) My training loop as follow: criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.99) exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) def fit(epoch,model,data_loader,phase='train',volatile=False): if phase == 'train': exp_lr_scheduler.step() model.train() if phase == 'valid': model.eval() running_loss = 0.0 for batch_idx , (data,target) in enumerate(data_loader): inputs,target = data.cpu(),target.cpu() if is_cuda: inputs,target = data.cuda(),target.cuda() inputs , target = Variable(inputs),Variable(target) if phase == 'train': optimizer.zero_grad() output = model(inputs) loss = criterion(output,target) running_loss += loss.data.item() if phase == 'train': loss.backward() optimizer.step() loss = running_loss/len(data_loader.dataset) print('{} Loss: {:.4f}'.format( phase, loss)) return loss but, I got `Epoch 0/4 train Loss: 0.1523 valid Loss: 0.1041 Epoch 1/4 train Loss: 0.0815 valid Loss: 0.0792 Epoch 2/4 train Loss: 0.0791 valid Loss: 0.0792 Epoch 3/4 train Loss: 0.0791 valid Loss: 0.0792 Epoch 4/4 train Loss: 0.0791 valid Loss: 0.0792 ` The loss value is no longer decreased even if epoch is increased. Is the loss no longer decreased because the data is unbalanced?.
st84312
Your loss could have stagnated because your learning rate is too high, I suggest reducing your learning rate (use an LR scheduler to do it) and then continuing training.
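For example, a sketch only: it reuses the optimizer and the fit function from the post above (with the StepLR call inside fit removed), the scheduler settings are arbitrary, and num_epochs and the loader names are placeholders. ReduceLROnPlateau lowers the learning rate once the validation loss stops improving:

from torch.optim import lr_scheduler

scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=3)

for epoch in range(num_epochs):
    train_loss = fit(epoch, model, train_dataloader, phase='train')
    val_loss = fit(epoch, model, valid_dataloader, phase='valid')
    scheduler.step(val_loss)  # reduces the LR when val_loss plateaus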
st84313
I wanted to use the dataloader class for the ChatBot tutorial and use the best practices for padding, packing, variable size sequences with a real model (ideally using GPU as much as possible). Is there an example of how to use the dataloader class like that with the chatbot tutorial that is fully functional? cross posted: https://www.quora.com/unanswered/Is-there-a-version-of-the-ChatBot-tutorial-that-uses-dataloader-class 1
st84314
I was trying to use the built in padding function but it wasn’t padding things for me for some reason. This is my reproducible code: import torch def padding_batched_embedding_seq(): ## 3 sequences with embedding of size 300 a = torch.ones(1, 4, 5) # 25 seq len (so 25 tokens) b = torch.ones(1, 3, 5) # 22 seq len (so 22 tokens) c = torch.ones(1, 2, 5) # 15 seq len (so 15 tokens) ## sequences = [a, b, c] batch = torch.nn.utils.rnn.pad_sequence(sequences) if __name__ == '__main__': padding_batched_embedding_seq() error message: Traceback (most recent call last): File "padding.py", line 51, in <module> padding_batched_embedding_seq() File "padding.py", line 40, in padding_batched_embedding_seq batch = torch.nn.utils.rnn.pad_sequence(sequences) File "/Users/rene/miniconda3/envs/automl/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 376, in pad_sequence out_tensor[:length, i, ...] = tensor RuntimeError: The expanded size of the tensor (4) must match the existing size (3) at non-singleton dimension 1. Target sizes: [1, 4, 5]. Tensor sizes: [3, 5] any idea? stackoverflow.com How does one padd a tensor of 3 dimensions in Pytorch? 9 python, machine-learning asked by Pinocchio on 05:06PM - 19 Jul 19 UTC
st84315
I’m trying to set up torch 1.1 with CUDA, but I’m encountering random errors that are difficult to debug. After installation, when I try to run the minimal code to test CUDA functionality:

import torch
torch.zeros(5).cuda()

I get random cryptic errors, one example being

fatal : Memory allocation failure
fatal : Memory allocation failure
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: unknown error

or a CUDA out of memory error (even though I’m only trying to copy a small array), or even a segmentation fault, causing Python to crash. To give some context, I’m using a conda environment with Python 3.7, and I have a CUDA 9.0 installation with cuDNN 7.5.1. I install PyTorch with the command from the website: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch I have also tried using Pytorch 0.4 and CUDA 10.0/10.1, but the error persists. The installed driver version is 418.74. Finally, I do not have root access since I’m computing on a shared server and due to this reason CUDA is installed in a custom directory.
st84316
The PyTorch binaries ship with the CUDA and cudnn runtimes, so you would just need to install the driver on your machine. Which GPU(s) are you using? Does nvidia-smi run successfully? Did you update the driver without restarting?
st84317
Thank you for the reply, The GPUs I tested were GTX TITAN X, TITAN Xp and Tesla K40, and all three reported the same errors, though for the Xp Python hangs indefinitely rather than an explicit message. In all cases nvidia-smi runs without issues. The drivers were not installed by me (as I do not have root access), but I think they work fine since other users of the server have not reported a problem. However the drivers were updated by the admin some time ago, just before the problems started. I clean installed each component I could, however could not make any progress. Apparently cleaning the cache at ~/.nv might be relevant in some cases after a driver change, but it did not help for me. I wonder if PyTorch keeps other driver-related cache files somewhere that might be causing a mismatch. The servers run Debian and I connect via ssh, if that might be relevant.
st84318
That’s really weird. Does your system have docker and nvidia-docker on it? If so, could you try to run the PyTorch Docker container as explained here? I would like to try to create a clean environment and see, if that’s working.
st84319
Unfortunately the system does not have docker installed, not sure if there is another way to test this without root access. I also tried clean installing Python using pyenv and installing torch via pip to see if it was a conda related issue, this did not work either.
st84320
Could you also try to create a clean conda environment and reinstall PyTorch there? If that doesn’t help and CUDA is installed on the machine, could you try to build PyTorch from source? Are you successfully running any CUDA code in this machine?
st84321
I tried deleting miniconda entirely, reinstalling it, creating a new empty environment (just in case) and installing PyTorch with the command from the website. Unfortunately I ran into the exact same issue, once again. On the other hand I can successfully compile and run the CUDA samples distributed with version 9.0.176, so CUDA and the nvcc tool seem to be working and configured properly, even though I understand it’s irrelevant for PyTorch. I will shortly try to compile PyTorch from scratch and report the results.
st84322
So I tried to compile PyTorch from scratch with CUDA support. I installed CUDA toolkit 9.2 locally, configured the environment variables and compile-installed PyTorch to a clean conda environment (as described in the PyTorch repo). Unfortunately, this seemed to not work either. CMake successfully detects CUDA and cudnn during compilation. Weirdly enough, after installation running import torch print(torch.cuda.is_available()) print(torch.version.cuda) print(torch.backends.cudnn.enabled) print() torch.zeros(5).cuda() prints False 9.2.148 True THCudaCheck FAIL file=../aten/src/THC/THCGeneral.cpp line=51 error=2 : out of memory Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yardima/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 163, in _lazy_init torch._C._cuda_init() RuntimeError: cuda runtime error (2) : out of memory at ../aten/src/THC/THCGeneral.cpp:51 What could be wrong here? The CUDA 9.2 samples still compile and run cleanly, but PyTorch does not work. As a side note, I tried compiling with CUDA 9.0, but CMake complained GCC version >6 is incompatible with CUDA <9.2. The server I use has GCC 6.3.0 but not GCC 5. Is there a way to circumvent this without going through the hassle of compiling GCC-5, as I do not have root priveleges?
st84323
Could you check the memory usage in nvidia-smi and see, if the GPU is used by other processes or if some dead processes are filling it up? I’m not sure, if there is a simple way of changing gcc in your current setup. The cleanest way would probably still be to use docker, since you can just wipe it in the worst case.
st84324
Checking nvidia-smi shows that the the server is almost entirely free. Among 8 GPUs available, one is assigned to me (set by CUDA_VISIBLE_DEVICES), which is always unused. But the problem persists even in cases where the server usage is zero. I also realised that although docker is not supported, the server supports Singularity containers, maybe that could help?
st84325
Could you check, what torch.cuda.device_count() returns? Are you seeing more than a single GPU and might accidentally use the wrong one? I’m not familiar with Singularity containers, but you could give it a try.
st84326
Interestingly, even though CUDA_VISIBLE_DEVICES is set correctly, torch.cuda.device_count() returns 0.
st84327
Does torch.cuda.is_available() also return False? Could you check, if the flag is set in your .bashrc or somewhere else by using echo $CUDA_VISIBLE_DEVICES before running your script?
st84328
As I mentioned, running the minimal script import torch print(torch.cuda.is_available()) print(torch.version.cuda) print(torch.backends.cudnn.enabled) prints False 9.2.148 True I double checked the CUDA_VISIBLE_DEVICES flag, but it is not set explicitly prior to the PyTorch code.
st84329
I’m not sure what’s going on, and would personally try the brute force approach: create a new conda env for each run and try to install all current binaries:
- start from the nightly build with CUDA10, then go down to CUDA9
- end with the current stable version
- try the pip versions next
Also, could you post the build log here which you have gotten while building from source?
st84330
Hello there. I would like to ask if it is possible to change the convolution behaviour in pytorch with the following goal: In convolutional operations, for specific numbers occured in multiplication in conv, replace their result with specific number, for example: if 5 * 3 is found, replace its result with 12 instead of 15. Basically I want to check each operation in convolution and change their result according to their value. I first tried to use nn.functional.unfold and define a custom dot product function to replace conv2d but it is extremely slow (2000X slow-down for my platform). I think I need to define a custom Function or Module by imitating the soure code of conv2d, but no luck on figuring out how conv2d works in the bottom layer (operation level). Is it possible to do this in python, or I have to dive into the C++ codes? Thanks,
st84331
I’m looking @ https://pytorch.org/docs/stable/_modules/torch/nn/functional.html to see torch.nn.kl_div, but it is just a wrapper around a function called torch.kl_div and I can’t find that original function. I basically made my own function and it spits out different results from what the Pytorch built-in is spitting, so I’m wondering what it looks like. For completeness, here’s my code:

def my_kl(predicted, target):
    return -(target * t.log(predicted.clamp_min(1e-7))).sum(dim=1).mean() - \
        -1*(target.clamp(min=1e-7) * t.log(target.clamp(min=1e-7))).sum(dim=1).mean()

Which I believe is spitting out correct results LOL
st84332
Solved by tom in post #2.
st84333
kl_div and the CPU backward are in aten/src/ATen/native/Loss.cpp, the cuda backward is in aten/src/ATen/native/cuda/Loss.cu. I once tried to write this up more generally: https://lernapparat.de/selective-excursion-into-pytorch-internals/ Best regards Thomas
st84334
Hi! I want to pass in a sequence of images and translate them into another sequence of images using an LSTM. Each image is passed into a CNN layer which decreases the height and width to 1 and the channel to 128, we reshape that to a 1d 128 and pass the 128 1d vector into the lstm cell. More concretely, I can pass in b x c x h x w to the convolutional layers/network, which then outputs (after squeezing) b x 128 x 1 x 1 which is effectively b x 128. 128 will be the lstm’s input_size. The shape that the lstm expects is (seq_len, batch, input_size), but from the CNN output we only have batch and input_size, not seq_len. This is because the CNN doesn’t take multiple images at a time. Is the solution to write a for loop in the training code for the CNN seperately on each image, and then pass that input to the lstm model? I was hoping there would be a way to do it faster/better. Thanks!
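For reference, one common pattern to avoid a Python loop over the sequence (a rough sketch, not necessarily the best option; images, cnn and lstm are placeholder names) is to fold the time dimension into the batch dimension for the CNN and unfold it again before the LSTM:

b, seq, c, h, w = images.shape              # images: [batch, seq_len, channels, height, width]

cnn_in = images.view(b * seq, c, h, w)      # fold seq_len into the batch dimension
features = cnn(cnn_in)                      # -> [b * seq, 128, 1, 1]
features = features.view(b, seq, -1)        # unfold: [b, seq_len, 128]

lstm_in = features.permute(1, 0, 2).contiguous()  # LSTM default layout: [seq_len, batch, input_size]
output, (h_n, c_n) = lstm(lstm_in)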
st84335
My understanding is that your batch_size_CNN for the CNN is not the same thing as the batch_size_LSTM for the LSTM. I think, and I can be completely wrong, that in the context of LSTMs your batch_size_LSTM is 1 while seq_len == batch_size_CNN. Hope this helps!
st84336
I saved a resnet model using an old pytorch version and I load it using Pytorch 1.1, but I get the error: AttributeError: ‘Conv2d’ object has no attribute ‘padding_mode’
st84337
How did you store and try to load the model? padding_mode was introduced in PyTorch 1.1.0 (docs), so this argument shouldn’t throw an error while trying to load the model in 1.1.0.
st84338
How did you store the resnet in the older PyTorch version? Did you store the state_dict or the complete model? In the latter case, you might encounter some issues, if the source code of the used classes has changed, which is the case in nn.Conv2d. Could you save and load the state_dict instead as described in the Serialization Docs?
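For reference, the recommended pattern roughly looks like this (the file name is a placeholder and resnet18 stands in for whatever model class was used):

import torch
from torchvision import models

# Saving: store only the parameters and buffers, not the pickled class
model = models.resnet18()
torch.save(model.state_dict(), 'resnet18_state.pth')

# Loading: re-create the model from code, then restore the weights
model = models.resnet18()
model.load_state_dict(torch.load('resnet18_state.pth'))
model.eval()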
st84339
I’m here because of the same “error”. I think that there is something fundamentally incomplete in the serialization strategy in pytorch. You could call it user error, but it pops up more often than not when loading saved models from githubs. I don’t do this often enough to have a full view on the matter, but I’ve had several times to load a pth, wipe it clean and save it again, or save its parts, or… I figure that writing a fault tolerant torch load ought to be a quite simple task. And while the software is “maturing”, deep learning is far from ripe yet. It’d be a useful tool. Thanks.
st84340
xvdp: but Ive had several times to load a pth, wipe it clean and save it again, or save its parts, or… I haven’t seen these issues yet, so could you point me to some examples? If I’m understanding it correctly, it seems the writing/saving process was somehow interrupted, which resulted in an invalid file? xvdp: Im here because of the same “error”. Storing the state_dict is backward compatible and thus the recommended way. If you store the model completely, you might need to use dump_patches=True to fix the code.
st84341
Sure, I was looking at super resolution githubs with pytorch. One of the ones I grabbed was https://github.com/BUPTLdy/Pytorch-LapSRN. Download it and run python test.py on torch '1.0.1.post2' (the version I'm currently working with) and this error will occur. Clearly, the author should have saved a state_dict, but they saved the entire model instead. The fix is quite simple for this one: in a separate script, load the entire model as in test.py, then save the state dict as per the serialization docs, then change test.py to instantiate the model class and load the state dict you just saved. So, if one follows the recommended serialization procedure then everything works, but in many cases this isn't done, because it's in human nature to take the perceived simpler path. All I'm arguing is that pytorch could have a more comprehensive, error tolerant serialization. btw, not related: a much better written super resolution github is ESRGAN; curiously it isn't in the model zoo.
st84342
z = torch.zeros([1,3,224,224])
fft = torch.rfft(z, 2, onesided=True)
out = torch.irfft(fft, 2, onesided=True)
print(z.shape, out.shape)
# torch.Size([1, 3, 224, 224]) torch.Size([1, 3, 224, 225])

I understand that rfft duplicates the centerline and irfft should remove it, I suppose. I'm using onesided=False instead, but unless I'm mistaken there are a few code lines that must be missing. torch.__version__ is ‘1.1.0’.
st84343
https://pytorch.org/docs/stable/torch.html#torch.irfft: “Due to the conjugate symmetry, input do not need to contain the full complex frequency values. Roughly half of the values will be sufficient, as is the case when input is given by rfft() with rfft(signal, onesided=True). In such case, set the onesided argument of this method to True. Moreover, the original signal shape information can sometimes be lost, optionally set signal_sizes to be the size of the original signal (without the batch dimensions if in batched mode) to recover it with correct shape.”
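Applied to the snippet above, a small sketch (using the old torch.rfft/irfft API from 1.1) would be:

import torch

z = torch.zeros([1, 3, 224, 224])
fft = torch.rfft(z, 2, onesided=True)
out = torch.irfft(fft, 2, onesided=True, signal_sizes=z.shape[-2:])
print(z.shape, out.shape)  # both torch.Size([1, 3, 224, 224])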
st84344
@SimonW Thank you, I had missed the ‘signal_sizes’ argument. Much appreciated.
st84345
Hi, I have used Pytorch for some time. But when I read the example code for the language model, I’m quite confused about how to get the gradient of the hidden state.

hidden = model.init_hidden(args.batch_size)
for batch, i in enumerate(range(0, train_data.size(0) - 1, args.bptt)):
    data, targets = get_batch(train_data, i)
    # Starting each batch, we detach the hidden state from how it was previously produced.
    # If we didn't, the model would try backpropagating all the way to start of the dataset.
    hidden = repackage_hidden(hidden)
    model.zero_grad()
    output, hidden = model(data, hidden)
    loss = criterion(output.view(-1, ntokens), targets)
    loss.backward()

But when I print hidden[0].grad or hidden[1].grad after loss.backward(), I got None. I have tried two approaches to get the gradient:

hidden[0].register_hook(get_gradient)
hidden[1].register_hook(get_gradient)

and

hidden[0].retain_grad()
hidden[1].retain_grad()

However, neither way works. So how can I get the gradient of the hidden state and cell state?
st84346
There are two things: The variable hidden is overwritten by output, hidden = model(data, hidden), so if you want the initial hidden state, you would have to instead do output, hidden_new = .... The initial hidden state likely does not require grad, so you need hidden[0].requires_grad_() for the initial hidden state (for the final hidden state, hidden_new[0].retain_grad() is good). All this assumes you are indeed using the LSTM and hidden and hidden_new are tuples. Best regards Thomas
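Putting both points together, a small sketch (variable names follow the language-model example above and assume an LSTM, i.e. (h, c) tuples):

hidden = model.init_hidden(args.batch_size)
hidden = tuple(h.requires_grad_() for h in hidden)   # leaf tensors, so .grad will be populated

output, hidden_new = model(data, hidden)             # don't overwrite `hidden`
hidden_new[0].retain_grad()                          # keep grads of the non-leaf final state
hidden_new[1].retain_grad()

loss = criterion(output.view(-1, ntokens), targets)
loss.backward()

print(hidden[0].grad.shape)   # gradient w.r.t. the initial hidden state
print(hidden[1].grad.shape)   # gradient w.r.t. the initial cell state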
st84347
Hi there, I am training a hierarchical model using Pytorch. This proved to be considerably difficult as the masking needs a sorted list of lengths. In a hierarchical model, I can sort the samples based on sentences and get away with padding sentences. But I will definitely have to mask the words, and Pytorch only supports them in a sorted order. Is there a reason for enforcing the sorted order, some sort of optimization? Can I make it take inputs without the sorting? Regards Sandeep
st84348
Yeah, this puts a lot of constraints on any complicated networks that I want to try. I can use GRUCell, but it does not have support for bidirectional or masking!
st84349
I have also been working with padded sequences and the need to order them. I am currently running into a problem where I have 2 different inputs and 2 RNNs. I can sort both inputs according to their length, but then the respective dimensions don’t match each other anymore, so I have to re-instantiate the original order after running through the RNN (basically as shown here: RNNs Sorting operations autograd safe?). I’m just worried that this operation is not autograd-safe, i.e. that the gradients get lost or are associated with the wrong matrix entries after resorting. Do you know about similar problems or can help?
st84350
Just out of curiosity: perhaps after 2 years it’s been solved, so I wanted to bump this to document whether pack_padded_sequence still requires sorted sequences. Thanks for the help!
st84351
Hey @smth, we can give an unsorted list to pack_padded_sequence and it is going to work fine?
st84352
Brando_Miranda: pack_padded_sequence The documentation still talks about sorting, so the issue seems unresolved; maybe the docs should be updated. https://pytorch.org/docs/stable/nn.html#pack-padded-sequence For unsorted sequences, use enforce_sorted = False. If enforce_sorted is True, the sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted = True is only necessary for ONNX export. Git issue: https://github.com/pytorch/pytorch/issues/23079
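A small usage sketch with unsorted sequences (needs a PyTorch version that has the enforce_sorted argument, i.e. 1.1+; the sizes here are arbitrary):

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

seqs = [torch.randn(5, 10), torch.randn(3, 10), torch.randn(7, 10)]  # unsorted lengths
lengths = torch.tensor([s.size(0) for s in seqs])

padded = pad_sequence(seqs)                                   # [max_len, batch, 10]
packed = pack_padded_sequence(padded, lengths, enforce_sorted=False)

rnn = torch.nn.LSTM(input_size=10, hidden_size=20)
out, (h, c) = rnn(packed)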
st84353
In the forward function, sometimes one may want to perform an action only at the test phase, say. Is there any built-in variable that allows to distinguish whether forward function was called during test or train phase?
st84354
Hi! Recently I've started to play around with federated learning. I tried to use ResNet from torchvision.models for image classification on the FashionMNIST dataset, but I get an error during training:

for epoch in range(1, args.epochs + 1):
    train(args, model, device, federated_train_loader, optimizer, epoch)
    test(args, model, device, test_loader)

TypeError: object of type 'NoneType' has no len()

My model:

class resnet101(models.resnet.ResNet):
    def __init__(self, block, layers):
        super(resnet101, self).__init__(block, layers)
        self.inplanes = 64
        self.conv1 = nn.Conv2d(1, self.inplanes, kernel_size=2, stride=1, padding=1, bias=False)

model = resnet101(models.resnet.Bottleneck, [3, 4, 23, 3], **kwargs)

As a trial code example, I fully use the code from the blog post -> https://blog.openmined.org/upgrade-to-federated-learning-in-10-lines/
I'll be really grateful for any advice or help. The full traceback of my error is:

---------------------------------------------------------------------------
PureTorchTensorFoundError                 Traceback (most recent call last)
~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in handle_func_command(cls, command)
    259             new_args, new_kwargs, new_type, args_type = syft.frameworks.torch.hook_args.hook_function_args(
--> 260                 cmd, args, kwargs, return_args_type=True
    261             )

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in hook_function_args(attr, args, kwargs, return_args_type)
    156         # Try running it
--> 157         new_args = hook_args(args)
    158

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in <lambda>(x)
    350
--> 351     return lambda x: f(lambdas, x)
    352

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in seven_fold(lambdas, args, **kwargs)
    558     return (
--> 559         lambdas[0](args[0], **kwargs),
    560         lambdas[1](args[1], **kwargs),

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in <lambda>(i)
    328         # Last if not, rule is probably == 1 so use type to return the right transformation.
--> 329         else lambda i: forward_func[type(i)](i)
    330         for a, r in zip(args, rules)  # And do this for all the args / rules provided

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in <lambda>(i)
     55     if hasattr(i, "child")
---> 56     else (_ for _ in ()).throw(PureTorchTensorFoundError),
     57     torch.nn.Parameter: lambda i: i.child

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook_args.py in <genexpr>(.0)
     55     if hasattr(i, "child")
---> 56     else (_ for _ in ()).throw(PureTorchTensorFoundError),
     57     torch.nn.Parameter: lambda i: i.child

PureTorchTensorFoundError:

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-15-7c854db39ed0> in <module>
      1 for epoch in range(1, args.epochs + 1):
----> 2     train(args, model, device, federated_train_loader, optimizer, epoch)
      3     test(args, model, device, test_loader)

<ipython-input-5-9b8111af22ce> in train(args, model, device, train_loader, optimizer, epoch)
      5         data, target = data.to(device), target.to(device)
      6         optimizer.zero_grad()
----> 7         output = model(data)
      8         loss = F.nll_loss(output, target)
      9         loss.backward()

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/torchvision/models/resnet.py in forward(self, x)
    190
    191     def forward(self, x):
--> 192         x = self.conv1(x)
    193         x = self.bn1(x)
    194         x = self.relu(x)

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    336                         _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339
    340

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook.py in overloaded_func(*args, **kwargs)
    715             cmd_name = f"{attr.__module__}.{attr.__name__}"
    716             command = (cmd_name, None, args, kwargs)
--> 717             response = TorchTensor.handle_func_command(command)
    718             return response
    719

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in handle_func_command(cls, command)
    268             new_command = (cmd, None, new_args, new_kwargs)
    269             # Send it to the appropriate class and get the response
--> 270             response = new_type.handle_func_command(new_command)
    271             # Put back the wrappers where needed
    272             response = syft.frameworks.torch.hook_args.hook_response(

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/pointers/object_pointer.py in handle_func_command(cls, command)
     86
     87         # Send the command
---> 88         response = owner.send_command(location, command)
     89
     90         return response

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/base.py in send_command(self, recipient, message, return_ids)
    425
    426         try:
--> 427             ret_val = self.send_msg(codes.MSGTYPE.CMD, message, location=recipient)
    428         except ResponseSignatureError as e:
    429             ret_val = None

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/base.py in send_msg(self, msg_type, message, location)
    221
    222         # Step 2: send the message and wait for a response
--> 223         bin_response = self._send_msg(bin_message, location)
    224
    225         # Step 3: deserialize the response

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/virtual.py in _send_msg(self, message, location)
      8 class VirtualWorker(BaseWorker, FederatedClient):
      9     def _send_msg(self, message: bin, location: BaseWorker) -> bin:
---> 10         return location._recv_msg(message)
     11
     12     def _recv_msg(self, message: bin) -> bin:

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/virtual.py in _recv_msg(self, message)
     11
     12     def _recv_msg(self, message: bin) -> bin:
---> 13         return self.recv_msg(message)
     14
     15     @staticmethod

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/base.py in recv_msg(self, bin_message)
    252             print(f"worker {self} received {sy.codes.code2MSGTYPE[msg_type]} {contents}")
    253         # Step 1: route message to appropriate function
--> 254         response = self._message_router[msg_type](contents)
    255
    256         # Step 2: Serialize the message to simple python objects

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/workers/base.py in execute_command(self, message)
    383                 command = getattr(command, path)
    384
--> 385             response = command(*args, **kwargs)
    386
    387         # some functions don't return anything (such as .backward())

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/hook/hook.py in overloaded_func(*args, **kwargs)
    715             cmd_name = f"{attr.__module__}.{attr.__name__}"
    716             command = (cmd_name, None, args, kwargs)
--> 717             response = TorchTensor.handle_func_command(command)
    718             return response
    719

~/miniconda3/envs/pysyft/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in handle_func_command(cls, command)
    285             # in the execute_command function
    286             if isinstance(args, tuple):
--> 287                 response = eval(cmd)(*args, **kwargs)
    288             else:
    289                 response = eval(cmd)(args, **kwargs)

RuntimeError: weight should have at least three dimensions
st84355
Could you post the shape of your input? The model should generally work for [batch_size, 1, 224, 224]-shaped inputs (at least if you remove the kwargs argument, as I'm not sure if you are passing something in it).
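For what it's worth, here is a minimal, non-federated sanity check of the kind of setup I had in mind: a plain torchvision ResNet with the first conv swapped for a single-channel version. The kernel size, the number of classes and the 224x224 input size are my assumptions, not taken from the blog post:

import torch
import torch.nn as nn
from torchvision import models

# Build the same ResNet-101 configuration without any extra kwargs
model = models.resnet.ResNet(models.resnet.Bottleneck, [3, 4, 23, 3], num_classes=10)
# Replace the stem so it accepts 1-channel (FashionMNIST) images
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Dummy batch, assuming the images are resized to 224x224
x = torch.randn(4, 1, 224, 224)
out = model(x)
print(out.shape)  # expected: torch.Size([4, 10])

If this plain forward pass works, the remaining issue is likely on the PySyft side rather than in the model definition.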
st84356
I am trying to load my custom dataset in PyTorch, but every time I get a 'NotImplementedError'. I could not make sense of where I am wrong. Here is my code:

from __future__ import print_function, division
import os
from torch import nn
import torch, torchvision
import pandas as pd
import pandas
from torchvision.transforms import transforms
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from PIL import Image

root_dirA = '/home/mchow/Downloads/homework/datasetA/'
root_dirB = '/home/mchow/Downloads/homework/datasetB/'
fname = '/home/mchow/Downloads/homework/kxr_sq_bu00.sas7bdat'

class OIDataset(Dataset):
    def __init__(self, fname, root_dir, transform=None):
        self.fp = pandas.read_sas(fname, format='sas7bdat', encoding='iso-8859-1')
        # get column names and find entries that end with 'KL'
        self.lfp = list(self.fp)
        self.matches = [x for x in self.lfp if 'KL' == x[-2:]]
        self.fp['ID'] = self.fp['ID'].astype('int')
        self.fp['SIDE'] = self.fp['SIDE'].astype('int')
        # add 'ID' index
        self.fp_kl = self.fp[['ID'] + ['SIDE'] + self.matches]
        self.fp_kl.drop_duplicates(subset=['ID', 'SIDE'])
        # change index
        self.kl_table = self.fp_kl.set_index('ID')
        self.root_dir = root_dir
        self.transform = transform

def __len__(self):
    return len(self.fname)

def __getitem__(self, idx):
    img_name = os.path.join(self.root_dir, self.ID[idx])
    # Read each 784 pixels and reshape the 1D array ([784]) to 2D array ([28,28])
    patches, p_id = np.load(img_name)
    # get right image
    right_image = Image.fromarray(patches['R'].astype('uint8'), 'L')
    # left image
    left_image = Image.fromarray(patches['L'].astype('uint8'), 'L')
    print(patches, p_id.shape)
    img_class = self.fp_kl[idx]
    img_ID = self.fp[idx]
    img_side = self.fp[idx]
    sample = {'subject id': (patches, p_id), 'side': right_image, 'side2': left_image}
    # Return image and the label
    if self.transform:
        sample = self.transform(sample)
    return sample

trans = transforms.Compose([transforms.ToTensor(),
                            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

if __name__ == '__main__':
    dset_train = OIDataset(fname, root_dirA, transform=trans)

Here is the error:

len(dset_train)
Traceback (most recent call last):
  File "", line 1, in
  File "/home/mchow/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 20, in __len__
    raise NotImplementedError
NotImplementedError

I will appreciate any kind of help regarding my question.
st84357
Solved by ptrblck in post #2 __len__ and __getitem__ are at the wrong indentation, thus not part of the Dataset class. Move them an indentation to the right and run your code again.
st84358
__len__ and __getitem__ are at the wrong indentation, thus not part of the Dataset class. Move them an indentation to the right and run your code again.
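The skeleton should look roughly like this (a trimmed-down sketch, with the method bodies reduced to placeholders):

from torch.utils.data import Dataset

class OIDataset(Dataset):
    def __init__(self, fname, root_dir, transform=None):
        self.fname = fname
        self.root_dir = root_dir
        self.transform = transform

    # Both methods are indented under the class, so they override
    # Dataset.__len__ / Dataset.__getitem__ instead of sitting at module level.
    def __len__(self):
        return len(self.fname)

    def __getitem__(self, idx):
        sample = self.fname[idx]  # placeholder: load and preprocess the real item here
        if self.transform:
            sample = self.transform(sample)
        return sample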
st84359
Applying dropout to a LongTensor currently raises "fused_dropout" not implemented for 'Long', whereas this is not a problem in Theano. Is this a feature we should wait a while for, or is it a bug?
st84360
Neither. The problem is that modern (inverted) dropout scales the kept outputs by 1/p, where p is the keep probability. For integer tensors that would only work for p = 1/n with n an integer. Also, you don't get autograd with long tensors. As such, the use case seems too limited to include it in PyTorch, given that torch.empty_like(a).bernoulli_(keep_prob) * a will give you an (unscaled) dropout for Long tensors with not much code. Best regards Thomas
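For example, something along these lines should do the unscaled masking on a LongTensor (keep_prob here is just an example value):

import torch

a = torch.randint(0, 10, (2, 5), dtype=torch.long)
keep_prob = 0.8

# Bernoulli mask with the same shape and dtype as `a`; entries are 0 or 1
mask = torch.empty_like(a).bernoulli_(keep_prob)
dropped = mask * a  # roughly 20% of the entries become 0, no rescaling
print(dropped)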
st84361
There's not much sense in doing mandatory scaling in Dropout; this feature should be optional, controlled by an input argument. Anyway, thanks for replying.
st84362
david-leon: There's not much sense in doing mandatory scaling in Dropout; this feature should be optional, controlled by an input argument. I'm not so sure about this claim, as without the scaling the expected values during training would differ from those at evaluation time, which will most likely result in bad validation and test performance, as described in the Dropout paper.
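A quick numerical sketch of why the scaling matters (the drop probability of 0.25 is an arbitrary choice):

import torch

torch.manual_seed(0)
x = torch.ones(100000)
p = 0.25  # drop probability

# Inverted dropout: zero out elements with probability p, scale survivors by 1/(1-p)
mask = torch.empty_like(x).bernoulli_(1 - p)
train_out = x * mask / (1 - p)

print(train_out.mean())   # ~1.0, matches the un-dropped activation used at eval time
print((x * mask).mean())  # ~0.75 without scaling -> train/eval mismatch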
st84363
Is there any example of how to deploy a PyTorch model in a production environment (C++) for PyTorch 1.0?
st84364
The C++ export tutorial shows you how to export your model and then load and call the model from C++. How you write the surrounding application would likely depend on your use case. Best regards Thomas
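For reference, the Python half of that workflow looks roughly like this sketch; the model and the file name are just examples:

import torch
import torchvision

# Trace the model with an example input to produce a TorchScript module
model = torchvision.models.resnet18(pretrained=True).eval()
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Save it to disk; the C++ application can then load this file with torch::jit::load
traced.save("traced_resnet18.pt")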
st84365
I'm new to PyTorch, and I'm having trouble interpreting entropy. Suppose we have a probability distribution [0.1, 0.2, 0.4, 0.3].

First, let's calculate entropy using numpy.

import numpy as np

p = np.array([0.1, 0.2, 0.4, 0.3])
logp = np.log2(p)
entropy1 = np.sum(-p*logp)
print(entropy1)

Output: 1.846439

Next, let's use entropy() from torch.distributions.Categorical

import torch
from torch.distributions import Categorical

p_tensor = torch.Tensor([0.1, 0.2, 0.4, 0.3])
entropy2 = Categorical(probs = p_tensor).entropy()
print(entropy2)

Output: tensor(1.2799)

Why is entropy1 not equal to entropy2?
st84366
Solved by Mazhar_Shaikh in post #2 Hi kabron_wade, The entropy is calculated using the natural logarithm. In your numpy example code, you use np.log2(). Using np.log() would give you the same result as the pytorch entropy().
st84367
Hi kabron_wade, The entropy is calculated using the natural logarithm. In your numpy example code, you use np.log2(). Using np.log() would give you the same result as the pytorch entropy().
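A quick check that the two results are the same quantity in different units (bits vs. nats), since entropy in nats = entropy in bits * ln(2):

import numpy as np
import torch
from torch.distributions import Categorical

p = np.array([0.1, 0.2, 0.4, 0.3])
entropy_bits = np.sum(-p * np.log2(p))   # log base 2 -> bits, ~1.8464
entropy_nats = np.sum(-p * np.log(p))    # natural log -> nats, ~1.2799

print(entropy_bits * np.log(2))                        # ~1.2799, bits converted to nats
print(entropy_nats)                                    # ~1.2799
print(Categorical(probs=torch.tensor(p)).entropy())    # tensor(1.2799, ...)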