st30568
I think I was able to solve it. I first restarted the kernel and then I edited the class:

class data_gen(torch.utils.data.Dataset):
    def __init__(self, files):
        self.files = files
        my_data = np.genfromtxt('/data/'+files, delimiter=',')
        self.dim = my_data.shape[1]
        self.data = []

    def __getitem__(self, i):
        file1 = self.files
        my_data = np.genfromtxt('/data/'+file1, delimiter=',')
        self.dim = my_data.shape[1]
        for j in range(my_data.shape[1]):
            tmp = np.reshape(my_data[:,j], (1, my_data.shape[0]))
            tmp = torch.from_numpy(tmp).float()
            self.data.append(tmp)
        return self.data[i]

    def __len__(self):
        return self.dim

And now it's working when I call:

train_loader = torch.utils.data.DataLoader(
    train_dl_spec, batch_size=128, shuffle=True, num_workers=8, pin_memory=True)
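For reference, a minimal alternative sketch (not from the thread; the class name is made up) that reads the CSV only once in __init__, assuming the same layout where each column is one sample:

import numpy as np
import torch

class ColumnDataset(torch.utils.data.Dataset):
    # hypothetical name: loads the whole CSV once and serves one column per index
    def __init__(self, path):
        my_data = np.genfromtxt(path, delimiter=',')      # shape (rows, n_samples)
        self.data = torch.from_numpy(my_data.T).float()   # shape (n_samples, rows)

    def __getitem__(self, i):
        return self.data[i].unsqueeze(0)                  # (1, rows), same shape as above

    def __len__(self):
        return self.data.shape[0]

This avoids re-reading the file and re-appending to self.data on every __getitem__ call.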
st30569
I have some question.When I training model,I use two method to evaluate model.One method is I evaluate model in the end of every epoch,like this: def train_one_epoch(args, model, optimizer, scheduler, train_dataloader): """ Train the model """ # Train! logger.info("***** Running training *****") logger.info(" Num examples = %d", len(train_dataloader)*args.train_batch_size) epoch_step = 0 epoch_loss = 0.0 model.zero_grad() # 下面这里读取batch数据需要根据自己的数据脚本进行修改 epoch_iterator = tqdm(train_dataloader, desc="Training") # model.train() scaler = GradScaler() # 增加对抗训练代码 # fgm = FGM(model, epsilon=1, emb_name='word_embeddings.weight') # pgd = PGD(model, emb_name='word_embeddings.weight', epsilon=1.0, alpha=0.3) # k=3 for step, batch in enumerate(epoch_iterator): model.train() batch = tuple(t.to(args.device) for t in batch) inputs = {'input_ids':batch[0], 'attention_mask':batch[1], 'token_type_ids':batch[2], 'start_positions':batch[3], 'end_positions':batch[4], 'answerable_label':batch[5]} if args.model_type in ["xlm", "roberta", "distilbert", "camembert", "bart", "longformer"]: del inputs["token_type_ids"] if args.model_type in ['xlnet', 'xlm']: inputs.update({'cls_index': batch[6], 'p_mask': batch[9]}) with autocast(): outputs = model(**inputs) loss = outputs[0] # if args.n_gpu > 1: # loss = loss.mean() # mean() to average on multi-gpu parallel training epoch_loss += loss.item() scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() scheduler.step() # Update learning rate schedule optimizer.zero_grad() epoch_step += 1 return epoch_loss / epoch_step for epoch in range(int(args.num_train_epochs)): logger.info('***** Epoch {} Running Start! *****'.format(epoch+1)) train_epoch_loss = train_one_epoch(args,model, optimizer, scheduler, train_dataloader) **val_results = val_one_epoch(args, model, tokenizer,val_dataloader)** The another method is like this: def train_and_evaluate(args, model, tokenizer, optimizer, scheduler, train_dataloader, val_loader, epoch, max_f1): """ Train the model """ # Train! 
logger.info("***** Running training *****") logger.info(" Num examples = %d", len(train_dataloader)*args.train_batch_size) epoch_step = 0 epoch_loss = 0.0 model.zero_grad() epoch_iterator = tqdm(train_dataloader, desc="Training") # model.train() scaler = GradScaler() for step, batch in enumerate(epoch_iterator): model.train() batch = tuple(t.to(args.device) for t in batch) inputs = {'input_ids':batch[0], 'attention_mask':batch[1], 'token_type_ids':batch[2], 'start_positions':batch[3], 'end_positions':batch[4], 'answerable_label':batch[5]} if args.model_type in ["xlm", "roberta", "distilbert", "camembert", "bart", "longformer"]: del inputs["token_type_ids"] if args.model_type in ['xlnet', 'xlm']: inputs.update({'cls_index': batch[6], 'p_mask': batch[9]}) with autocast(): outputs = model(**inputs) loss = outputs[0] # if args.n_gpu > 1: # loss = loss.mean() # mean() to average on multi-gpu parallel training epoch_loss += loss.item() scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() scheduler.step() # Update learning rate schedule optimizer.zero_grad() epoch_step += 1 # evaluate model in some steps if (epoch_step % args.evaluate_steps == 0) : if max_f1 < val_results.get('f1'): max_f1 = val_results.get('f1') # logger.info('Epoch {} Training loss is {:.4f}'.format(epoch+1, epoch_loss/epoch_step)) logger.info("***** Eval results %s *****", "") info = "-".join([f' {key}: {value:.4f} ' for key, value in val_results.items()]) logger.info(info) # Save best model checkpoint output_dir = os.path.join(args.output_dir, args.model_type) if not os.path.exists(output_dir): os.makedirs(output_dir) # Save weights of the network model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training # model_checkpoint = {'epoch': epoch + 1, # 'state_dict': model_to_save.state_dict(), # 'optim_state_dict': optimizer.state_dict(), # 'scheduler_dict': scheduler.state_dict(), # } # model_to_save.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) model_file_path = os.path.join(output_dir, 'qa-best.bin') torch.save(model_to_save.state_dict(), model_file_path) logger.info("Saving best model checkpoint to %s", output_dir) # if 'cuda' in str(args.device): # torch.cuda.empty_cache() return max_f1 for epoch in range(int(args.num_train_epochs)): # seed_everything(args.seed) logger.info('******************** Epoch {} Running Start! ********************'.format(epoch+1)) max_f1 = train_and_evaluate(args,model, tokenizer, optimizer, scheduler, train_dataloader, val_dataloader, epoch, max_f1) **last_evaluate_results = evaluate(args, model, tokenizer, val_dataloader)** so, I find this method has not same evaluate in the end of every epoch. If I use evaulate every some steps in training, the evaluate result in the end of epoch is not same as only use evaluate in the end of every epoch. Anyone can help me?Thanks!
st30570
As long as you are properly switching between model.train() (during training) and model.eval() (during evaluation) the validation loop should not have major influence on the training. Note that you won’t be able to expect bitwise accurate results between both approaches (even if you are using only deterministic algorithms), since the validation loop could potentially call into the random number generator. This is usually uninteresting, but just for the sake of completeness I’m mentioning it here.
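To make the RNG point concrete, one way to keep a mid-epoch validation pass from consuming the training RNG stream is to fork and restore the RNG state around it. A rough sketch, assuming val_one_epoch is the evaluation function from the post above:

import torch

def validate_isolated(args, model, tokenizer, val_dataloader):
    # fork_rng snapshots the CPU/CUDA RNG state and restores it on exit, so any
    # randomness inside validation does not shift the following training batches
    with torch.random.fork_rng():
        model.eval()
        with torch.no_grad():
            results = val_one_epoch(args, model, tokenizer, val_dataloader)
    model.train()
    return results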
st30571
Hi, thanks for your reply. But I found an interesting situation: the results in the first epoch are the same with both methods, but from the second epoch onward they are not. It is interesting.
st30572
@ptrblck Hi, thanks for your reply. But I found an interesting situation: the results in the first epoch are the same with both methods, but from the second epoch onward they are not. It is interesting.
st30573
Sorry for the irrelevant issue. Is there anyone who knows how to change my username? I want to change my username to “sh0416”. Thanks,
st30574
Solved by albanD in post #2 Sure, it should be good now.
st30575
I would like to change my username to manix or manix29. Could you guide me on how to change it?
st30576
Could you check your “Preferences” (click on your avatar and it should be the right tab) and then at “Account” you should be able to change the user name. Let me know, if it works.
st30577
But I am unable to edit it. Under “Account” the user name is there, but it doesn’t seem to be editable.
st30578
Could you send me a PM with the preferred user name and we can continue the discussion there.
st30579
Hello, I am wondering what this part of the code does in a training loop; I see it used frequently, and I am especially unsure what scale and update do:

optimizer.zero_grad()
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

Also, in what situations do we need to use torch.amp.autocast()? Thank you
st30580
Automatic Mixed Precision training (via torch.cuda.amp) can be used to speed up your training, and I think the best place to get started is to check out the docs, the Automatic Mixed Precision recipe, as well as the examples.
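A minimal self-contained sketch of what those calls do together (toy model and data, requires a CUDA device; this is just an illustration, not code from this thread):

import torch
from torch.cuda.amp import autocast, GradScaler

device = torch.device('cuda')
model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
scaler = GradScaler()   # keeps a loss-scaling factor to avoid fp16 gradient underflow

for _ in range(3):
    data = torch.randn(8, 10, device=device)
    target = torch.randint(0, 2, (8,), device=device)
    optimizer.zero_grad()
    with autocast():                  # forward pass runs in mixed precision where safe
        loss = criterion(model(data), target)
    scaler.scale(loss).backward()     # backward on the scaled loss -> scaled gradients
    scaler.step(optimizer)            # unscales the gradients, skips the step if they overflowed
    scaler.update()                   # adjusts the scale factor for the next iteration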
st30581
Hello there, I have programmed a multiclass segmentation model and everything works fine. My output for the tests is a tensor shaped 668x388x388, so the output contains 668 different classes. My goal is to get a tensor (or numpy array) out of this shaped like 1x388x388 to give to matplotlib.imshow, and to that end I applied a softmax, so every value is between 0 and 1. I need to pick, for each pixel in the resulting array, the highest one out of the 668 channels… Is there a method for this?
st30582
Solved by ptrblck in post #2 To create the predictions containing the class index associated with the highest probability (or logit) you could use: preds = torch.argmax(output, dim=0) # assuming output has the mentioned shape [nb_classes, height, weight]
st30583
To create the predictions containing the class index associated with the highest probability (or logit) you could use: preds = torch.argmax(output, dim=0) # assuming output has the mentioned shape [nb_classes, height, weight]
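A small usage sketch along those lines (using a random stand-in for the real 668x388x388 output):

import torch
import matplotlib.pyplot as plt

output = torch.randn(668, 388, 388)    # stand-in for the model output [nb_classes, height, width]
preds = torch.argmax(output, dim=0)    # [388, 388]; each pixel holds the index of the winning class
plt.imshow(preds.cpu().numpy())
plt.show()

Note that the softmax is not required for this step: it does not change which channel is largest, so argmax over the raw logits gives the same prediction.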
st30584
Hello! I have problem with my code: import numpy as np import os import torch from torch.utils.data import Dataset, DataLoader import torchvision.models as models import torchvision as trv import torch.nn as nn from sklearn.model_selection import train_test_split import torch.optim as optim import argparse class CustomData(Dataset): def __init__(self, path, transform=None): try: self.path = path self.data = np.load(path).astype('float32') self.data = self.data.astype(np.int_) self.transform = transform except: print("error") def __len__(self): return len(self.data) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() features = torch.tensor(self.data[idx, 2:]) label = self.data[idx, 1] if self.transform: features = self.transform(features) return features, label class CNN(nn.Module): def __init__(self, input_size, num_classes): super(CNN, self).__init__() self.conv1 = nn.Conv1d(1, 10, 1) self.conv2 = nn.Conv2d(1, 20, 45) self.fc1 = nn.Linear(45 * 20, 50) self.fc2 = nn.Linear(50, num_classes) self.bn1 = nn.BatchNorm1d(10) self.bn2 = nn.BatchNorm1d(20) self.bn3 = nn.BatchNorm1d(50) self.activation = nn.Sigmoid() def forward(self, x): x = x.unsqueeze(1) x = x.unsqueeze(1) x = self.activation(self.bn1(self.conv1(x))) x = self.activation(self.bn2(self.conv2(x))) x = x.view(-1, x.shape[1] * x.shape[2]) x = self.activation(self.bn3(self.fc1(x))) x = self.fc2(x) return x device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") parser = argparse.ArgumentParser(description='PyTorch Feature Extraction') parser.add_argument('--path', metavar='DIR', help='path to dataset') parser.add_argument('--j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)') parser.add_argument('--b', '--batch-size', default=10, type=int, metavar='N', help='mini-batch size (default: 256)') args = parser.parse_args() path = "final_relebeled_dataset.npy" dataset = CustomData(path, transform=None) dataloader = DataLoader(dataset, batch_size=args.b, shuffle=True, num_workers=args.j, drop_last=True) # num_classes = 41 num_classes = 24 input_size = 53 # model = Simple_FC(input_size = input_size, num_classes = num_classes).to(device) model = CNN(input_size=input_size, num_classes=num_classes).to(device) lr = 0.01 momentum = 0.9 criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum) def train(epoch): model.train() for (features, labels) in enumerate(dataloader): features = torch.tensor([features]) labels = labels[0] print(type(features)) features = features.type(torch.LongTensor) print(type(features)) features, labels = features.to(device), labels.to(device) optimizer.zero_grad() print(features) output = model(features) loss = criterion(output, labels) loss.backward() optimizer.step() print(loss) if __name__ == '__main__': for epoch in range(0, 10): train(epoch) After I have started my code I see error :" RuntimeError: expected scalar type Long but found Float ". Could you help me solve my problem ?? Thank you !!!
st30585
Solved by ptrblck in post #18 I’m unsure how the kernel size is related to the input shape of the batchnorm layer. However, if the input contains only a single value for each channel (as is the case here), you won’t be able to use batchnorm layers in training mode, since they need to calculate the stats from the input. Since i…
st30586
Most likely this error is raised in:

loss = criterion(output, labels)

if you try to pass labels as a FloatTensor, while a LongTensor is expected. Try to use:

loss = criterion(output, labels.long())

instead.
st30587
Thanks for your answer, but I have started studying the traceback and I see that the problem is in:

output = model(features)

I have tried to use:

output = model(features.long())

but I get the same error again.
st30588
Can you post the full traceback if the above fix didn't work? features.long() seems unusual, especially for a CNN model.
st30589
Here is the traceback:

Traceback (most recent call last):
  File “”, line 124, in <module>
    train(epoch)
  File “”, line 105, in train
    output = model(features.long())
  File “”, line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File “”, line 56, in forward
    x = self.activation(self.bn1(self.conv1(x)))
  File “”, line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File “”, line 263, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File “”, line 259, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: expected scalar type Long but found Float
st30590
What is the purpose of

self.data = np.load(path).astype('float32')
self.data = self.data.astype(np.int_)

in the Dataset? Can you try removing the second line? You should also change the model invocation back to output = model(features). The error message is confusing because the operator expected the weights to be Long for a Long input. To fix this you want to change your input to Float to match the weights, which are already Float by default.
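A tiny self-contained sketch of the dtype rule being described (made-up shapes, not the exact model from this thread): conv layers want float inputs, while nn.CrossEntropyLoss wants long class indices.

import torch
import torch.nn as nn

conv = nn.Conv1d(1, 10, 1)
criterion = nn.CrossEntropyLoss()

features = torch.randint(0, 5, (4, 1, 53)).float()   # cast the input to float for the conv
labels = torch.randint(0, 10, (4,))                  # class indices stay as long

out = conv(features)                        # (4, 10, 53), float
loss = criterion(out.mean(dim=2), labels)   # (4, 10) float logits vs. long targets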
st30591
Thanks for your answer, but I tried what you have written and it is the same error.
st30592
Can you double check the type of x in the model before the conv layer? e.g., print(x.dtype) should give float.
st30593
Thank you! x is float now. I have another question. Now I get this error:

Traceback (most recent call last):
  File "", line 121, in <module>
    train(epoch)
  File "", line 102, in train
    output = model(features)
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "y", line 56, in forward
    x = self.activation(self.bn1(self.conv1(x)))
  File "", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "", line 135, in forward
    return F.batch_norm(
  File "", line 2147, in batch_norm
    _verify_batch_size(input.size())
  File "", line 2114, in _verify_batch_size
    raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 10, 1])

What does it mean? Should I change something in:

self.conv1 = nn.Conv1d(1, 10, 1)
st30594
Something seems strange about the output shape after a conv layer then. Can you separate

x = self.activation(self.bn1(self.conv1(x)))

into

x = self.conv1(x)
x = self.bn1(x)
...

and print the shape of x in between to debug? It looks like the shape of x is smaller than expected for the batchnorm.
st30595
I have done it and the problem is in:

x = self.bn1(x)

So should I change the value in bn1 in __init__?
st30596
If you know what the expected shape and what change should be made to bn1, then sure.
st30597
I know that x.shape is torch.Size([1, 10, 1]), so the value in BatchNorm1d should be 10, but I still get the same error.
st30598
The batchnorm layer would need more than a single value to calculate the stats (mean and var) from the input batch, so you would either need to use more than a single sample (increase the batch size) during training or increase the temporal dimension:

bn = nn.BatchNorm1d(10)
x = torch.randn(1, 10, 1)
out = bn(x)
> ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 10, 1])

x = torch.randn(2, 10, 1)
out = bn(x) # works

x = torch.randn(1, 10, 2)
out = bn(x) # works
st30599
What if my input size is 1 and the kernel size must be 1 too? Can I do something about this without changing my data?
st30600
I’m unsure how the kernel size is related to the input shape of the batchnorm layer. However, if the input contains only a single value for each channel (as is the case here), you won’t be able to use batchnorm layers in training mode, since they need to calculate the stats from the input. Since it’s impossible to calculate the var from a single sample (it would create NaNs) and also subtracting the mean from a single value would create a zero output, you could remove these layers from the model.
st30601
I’m trying to annotate subclasses of nn.Module inline, but for now I am unable to get it to work. For my project I create an abstract subclass of nn.Module:

from typing import Any
from torch import nn

class Foo(nn.Module):
    def forward(self, *input: Any, **kwargs: Any) -> Any:
        pass

Running mypy on this succeeds. In a second step I add a more concrete class:

import torch

class Bar(Foo):
    def forward(self, input: torch.Tensor) -> torch.Tensor:
        pass

Now mypy errors with

error: Signature of "forward" incompatible with supertype "Foo"
error: Signature of "forward" incompatible with supertype "Module"

while pointing to forward() of Bar both times. I’m aware that this is a valid error, since Bar violates the Liskov Substitution Principle. Looking at the stub of nn.Module I think the marked paragraph is related, but for now I was unable to comprehend it. But since I think it is not intended for every subclass of nn.Module to have that exact signature, I’m puzzled how to get around this. Can someone help me out here?
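The excerpt does not include a resolution for this, but one common workaround (my suggestion, not something stated in the thread) is to suppress the override check on the narrowed forward, since nn.Module subclasses routinely specialize its signature:

import torch
from torch import nn

class Bar(nn.Module):
    # mypy reports the narrowed signature as an LSP violation ([override]);
    # explicitly ignoring that error code is one pragmatic way around it
    def forward(self, input: torch.Tensor) -> torch.Tensor:  # type: ignore[override]
        return input * 2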
st30602
I’m seeing a similar problem. I have code that looks like:

import torch.nn as nn

class Foo(nn.Module):
    ...

but mypy errors out with:

error: Name "nn.Module" is not defined
error: Class cannot subclass "Module" (has type "Any")

No idea what this means or how to solve it.
st30603
I’m getting very weird behavior when training a basic CNN text classification model with Cuda v11.1 on an AWS machine. I have tried this on both p2 and g3 instance types. When training on the CPU, the model does very well and achieves about 85% validation accuracy after ~15 epochs. However, using the exact same code with CUDA gives strange results. I get random segmentation faults during the training process. In addition, the validation accuracy barely changes between batches even though the loss does change, and it hovers around 65%. Here is my code import numpy as np import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from sklearn.utils import shuffle from cnn import BasicConvModel from data import ( load_data, preprocess_text, create_vocabulary, get_valid_accuracy ) torch.manual_seed(12) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # device = torch.device('cpu') print(device) output_size = 1 embed_dim = 600 num_filters = 200 kernel_sizes = [2, 3, 4, 5] batch_size = 16 X, y = load_data('../data/classification_sample.csv') processed_text = preprocess_text(X) vocab = create_vocabulary(processed_text) model = BasicConvModel(vocab, embed_dim, num_filters, kernel_sizes, output_size) if torch.cuda.is_available(): model.cuda() loss_fn = nn.BCELoss() optimizer = optim.Adam(model.parameters(), lr=1e-3) processed_text, y = shuffle(processed_text, y, random_state=132) max_idx = max([len(text) for text in processed_text]) text_idxs, pad_idx = [], vocab['<PAD>'] for text in processed_text: to_idx = [vocab[tok] for tok in text] for i in range(max_idx - len(text)): to_idx.append(pad_idx) text_idxs.append(to_idx) full_tensor = torch.tensor(text_idxs, device=device) y = torch.tensor(y, device=device) num_epochs = 20 valid_idx = int(0.2 * full_tensor.size(0)) X, valid_X = full_tensor[valid_idx:, :], full_tensor[:valid_idx, :] y, valid_y = y[valid_idx:], y[:valid_idx] num_batches = X.size(0) // batch_size print_every = 10 model.train() for i in range(num_epochs): for j in range(num_batches + 1): start_idx, end_idx = j * batch_size, (j + 1) * batch_size batch_X, batch_y = X[start_idx:end_idx, :], y[start_idx:end_idx] preds = model(batch_X) loss = loss_fn(preds.squeeze(), batch_y.float()) loss.backward() optimizer.step() model.zero_grad() if j % print_every == 0: acc = get_valid_accuracy(valid_X, valid_y, model) print(f"Epoch: {i}, Loss: {loss}, Validation Accuracy: {acc}") And the CNN model import torch.nn as nn import torch.optim as optim import torch.nn.functional as F class BasicConvModel(nn.Module): def __init__(self, vocab, embed_dim, num_filters, kernel_sizes, output_size, use_dropout=True, dropout_prob=0.2): super().__init__() self.vocab = vocab self.embed_dim = embed_dim self.num_filters = num_filters self.kernel_sizes = kernel_sizes self.output_size = output_size self.use_dropout = use_dropout self.dropout_prob = dropout_prob # embedding layer self.embed = nn.Embedding(len(vocab), embed_dim) # a series of 1d convs self.convs_1d = nn.ModuleList([ nn.Conv2d(1, num_filters, (k, embed_dim), padding=(k-2,0)) for k in kernel_sizes]) #dropout layer self.dropout = nn.Dropout(p=dropout_prob) self.dense = nn.Linear(len(kernel_sizes) * num_filters, output_size) self.sigmoid = nn.Sigmoid() def conv_and_pool(self, x, conv): x = F.relu(conv(x)).squeeze(3) x_max = F.max_pool1d(x, x.size(2)).squeeze(2) return x_max def forward(self, x): embeds = self.embed(x) embeds = embeds.unsqueeze(1) convs = [self.conv_and_pool(embeds, conv) for conv in 
self.convs_1d] x = torch.cat(convs, 1) x = self.dropout(x) logit = self.dense(x) return self.sigmoid(logit) The output from the CPU Epoch: 0, Loss: 0.5930776596069336, Validation Accuracy: 0.6538461538461539 Epoch: 0, Loss: 0.5337628722190857, Validation Accuracy: 0.7062937062937062 Epoch: 0, Loss: 0.8541656732559204, Validation Accuracy: 0.6783216783216783 Epoch: 0, Loss: 1.5076597929000854, Validation Accuracy: 0.7412587412587412 Epoch: 0, Loss: 1.055909514427185, Validation Accuracy: 0.7727272727272727 Epoch: 0, Loss: 0.4178897440433502, Validation Accuracy: 0.7797202797202797 Epoch: 0, Loss: 0.4650037884712219, Validation Accuracy: 0.7867132867132867 Epoch: 0, Loss: 0.3287246823310852, Validation Accuracy: 0.7867132867132867 Epoch: 1, Loss: 0.09397178143262863, Validation Accuracy: 0.7972027972027972 Epoch: 1, Loss: 0.355461061000824, Validation Accuracy: 0.7657342657342657 Epoch: 1, Loss: 0.05974849686026573, Validation Accuracy: 0.7517482517482518 Epoch: 1, Loss: 0.34798571467399597, Validation Accuracy: 0.7867132867132867 Epoch: 1, Loss: 0.21483170986175537, Validation Accuracy: 0.7797202797202797 Epoch: 1, Loss: 0.0788840726017952, Validation Accuracy: 0.7307692307692307 Epoch: 1, Loss: 0.07168418169021606, Validation Accuracy: 0.8006993006993007 Epoch: 1, Loss: 0.1160748153924942, Validation Accuracy: 0.8041958041958042 Epoch: 2, Loss: 0.05112272500991821, Validation Accuracy: 0.7937062937062938 Epoch: 2, Loss: 0.20592114329338074, Validation Accuracy: 0.7937062937062938 Epoch: 2, Loss: 0.05088387429714203, Validation Accuracy: 0.7937062937062938 Epoch: 2, Loss: 0.0787544697523117, Validation Accuracy: 0.8111888111888111 Epoch: 2, Loss: 0.5769421458244324, Validation Accuracy: 0.7587412587412588 Epoch: 2, Loss: 0.0004902268410660326, Validation Accuracy: 0.7972027972027972 Epoch: 2, Loss: 0.4308287501335144, Validation Accuracy: 0.7867132867132867 Epoch: 2, Loss: 0.007755571510642767, Validation Accuracy: 0.7902097902097902 Epoch: 3, Loss: 0.0071701593697071075, Validation Accuracy: 0.7622377622377622 Epoch: 3, Loss: 0.013051643036305904, Validation Accuracy: 0.7727272727272727 Epoch: 3, Loss: 0.007425243500620127, Validation Accuracy: 0.7412587412587412 and an example CUDA run’s output Epoch: 0, Loss: 0.6711463928222656, Validation Accuracy: 0.6538461538461539 Epoch: 0, Loss: 0.7122050523757935, Validation Accuracy: 0.458041958041958 Epoch: 0, Loss: 0.724312961101532, Validation Accuracy: 0.6188811188811189 Epoch: 0, Loss: 0.7947384715080261, Validation Accuracy: 0.6258741258741258 Epoch: 0, Loss: 0.688117265701294, Validation Accuracy: 0.6188811188811189 Epoch: 0, Loss: 0.5511382222175598, Validation Accuracy: 0.6503496503496503 Epoch: 0, Loss: 0.6020643711090088, Validation Accuracy: 0.6538461538461539 Epoch: 0, Loss: 0.6449810266494751, Validation Accuracy: 0.6643356643356644 Epoch: 1, Loss: 0.645209550857544, Validation Accuracy: 0.6713286713286714 Epoch: 1, Loss: 0.6702262163162231, Validation Accuracy: 0.6643356643356644 Epoch: 1, Loss: 0.7097079753875732, Validation Accuracy: 0.6538461538461539 Epoch: 1, Loss: 0.7172590494155884, Validation Accuracy: 0.6538461538461539 Epoch: 1, Loss: 0.6835118532180786, Validation Accuracy: 0.6363636363636364 Epoch: 1, Loss: 0.5769379734992981, Validation Accuracy: 0.6538461538461539 Epoch: 1, Loss: 0.6344826817512512, Validation Accuracy: 0.6468531468531469 Epoch: 1, Loss: 0.6470801830291748, Validation Accuracy: 0.6433566433566433 Epoch: 2, Loss: 0.6567261815071106, Validation Accuracy: 0.6363636363636364 Epoch: 2, Loss: 
0.6605111360549927, Validation Accuracy: 0.6433566433566433 Epoch: 2, Loss: 0.716945469379425, Validation Accuracy: 0.6433566433566433 Epoch: 2, Loss: 0.7049784660339355, Validation Accuracy: 0.6503496503496503 Epoch: 2, Loss: 0.6796182990074158, Validation Accuracy: 0.6538461538461539 Epoch: 2, Loss: 0.571618914604187, Validation Accuracy: 0.6538461538461539 Epoch: 2, Loss: 0.6348196268081665, Validation Accuracy: 0.6538461538461539 Epoch: 2, Loss: 0.6413698196411133, Validation Accuracy: 0.6468531468531469 Epoch: 3, Loss: 0.6526831388473511, Validation Accuracy: 0.6468531468531469 Epoch: 3, Loss: 0.6591930389404297, Validation Accuracy: 0.6538461538461539 Epoch: 3, Loss: 0.7166335582733154, Validation Accuracy: 0.6643356643356644 Epoch: 3, Loss: 0.7043871879577637, Validation Accuracy: 0.6643356643356644 Epoch: 3, Loss: 0.6688210964202881, Validation Accuracy: 0.6573426573426573 Epoch: 3, Loss: 0.5581091642379761, Validation Accuracy: 0.6643356643356644 Epoch: 3, Loss: 0.6201637387275696, Validation Accuracy: 0.6433566433566433 Epoch: 3, Loss: 0.6303338408470154, Validation Accuracy: 0.6433566433566433 Epoch: 4, Loss: 0.6430555582046509, Validation Accuracy: 0.6433566433566433 Epoch: 4, Loss: 0.6545212864875793, Validation Accuracy: 0.6468531468531469 Epoch: 4, Loss: 0.710310161113739, Validation Accuracy: 0.6503496503496503 Epoch: 4, Loss: 0.7034661173820496, Validation Accuracy: 0.6538461538461539 Epoch: 4, Loss: 0.6596505641937256, Validation Accuracy: 0.6573426573426573 Epoch: 4, Loss: 0.5458865165710449, Validation Accuracy: 0.6538461538461539 Epoch: 4, Loss: 0.6087003350257874, Validation Accuracy: 0.6468531468531469 Epoch: 4, Loss: 0.617493748664856, Validation Accuracy: 0.6363636363636364 Epoch: 5, Loss: 0.6346374750137329, Validation Accuracy: 0.6433566433566433 Epoch: 5, Loss: 0.6527444124221802, Validation Accuracy: 0.6433566433566433 Epoch: 5, Loss: 0.7063688039779663, Validation Accuracy: 0.6538461538461539 Epoch: 5, Loss: 0.7048066854476929, Validation Accuracy: 0.6538461538461539 Epoch: 5, Loss: 0.6533452272415161, Validation Accuracy: 0.6538461538461539 Epoch: 5, Loss: 0.5325800180435181, Validation Accuracy: 0.6573426573426573 Epoch: 5, Loss: 0.5976376533508301, Validation Accuracy: 0.6503496503496503 Epoch: 5, Loss: 0.6109954118728638, Validation Accuracy: 0.6503496503496503 Epoch: 6, Loss: 0.626984715461731, Validation Accuracy: 0.6433566433566433 Epoch: 6, Loss: 0.6500746011734009, Validation Accuracy: 0.6468531468531469 Epoch: 6, Loss: 0.7023633718490601, Validation Accuracy: 0.6503496503496503 Epoch: 6, Loss: 0.7072952389717102, Validation Accuracy: 0.6573426573426573 Epoch: 6, Loss: 0.6468454003334045, Validation Accuracy: 0.6573426573426573 Epoch: 6, Loss: 0.5176084637641907, Validation Accuracy: 0.6573426573426573 Epoch: 6, Loss: 0.5881103277206421, Validation Accuracy: 0.6573426573426573 Epoch: 6, Loss: 0.602046549320221, Validation Accuracy: 0.6468531468531469 Epoch: 7, Loss: 0.6179436445236206, Validation Accuracy: 0.6433566433566433 Epoch: 7, Loss: 0.6477866768836975, Validation Accuracy: 0.6468531468531469 Epoch: 7, Loss: 0.7008408904075623, Validation Accuracy: 0.6608391608391608 Epoch: 7, Loss: 0.7075467109680176, Validation Accuracy: 0.6573426573426573 Epoch: 7, Loss: 0.6404005289077759, Validation Accuracy: 0.6573426573426573 Epoch: 7, Loss: 0.5072473287582397, Validation Accuracy: 0.6608391608391608 Epoch: 7, Loss: 0.5786412358283997, Validation Accuracy: 0.6573426573426573 Epoch: 7, Loss: 0.5931527614593506, Validation Accuracy: 0.6433566433566433 
Epoch: 8, Loss: 0.6106604337692261, Validation Accuracy: 0.6433566433566433 Epoch: 8, Loss: 0.6467059850692749, Validation Accuracy: 0.6433566433566433 Epoch: 8, Loss: 0.6986489295959473, Validation Accuracy: 0.6573426573426573 Epoch: 8, Loss: 0.7110292911529541, Validation Accuracy: 0.6538461538461539 Epoch: 8, Loss: 0.6344634294509888, Validation Accuracy: 0.6538461538461539 Epoch: 8, Loss: 0.4974040985107422, Validation Accuracy: 0.6538461538461539 Epoch: 8, Loss: 0.5710303783416748, Validation Accuracy: 0.6573426573426573 Epoch: 8, Loss: 0.5836442708969116, Validation Accuracy: 0.6468531468531469 Epoch: 9, Loss: 0.6042808890342712, Validation Accuracy: 0.6468531468531469 Epoch: 9, Loss: 0.6457432508468628, Validation Accuracy: 0.6468531468531469 Epoch: 9, Loss: 0.6968148946762085, Validation Accuracy: 0.6573426573426573 Epoch: 9, Loss: 0.7122361660003662, Validation Accuracy: 0.6573426573426573 Epoch: 9, Loss: 0.6280542016029358, Validation Accuracy: 0.6573426573426573 Epoch: 9, Loss: 0.48882660269737244, Validation Accuracy: 0.6573426573426573 Epoch: 9, Loss: 0.5643866062164307, Validation Accuracy: 0.6608391608391608 Epoch: 9, Loss: 0.5753633379936218, Validation Accuracy: 0.6538461538461539 Epoch: 10, Loss: 0.5978206396102905, Validation Accuracy: 0.6538461538461539 Epoch: 10, Loss: 0.644151508808136, Validation Accuracy: 0.6468531468531469 Epoch: 10, Loss: 0.695034921169281, Validation Accuracy: 0.6573426573426573 Epoch: 10, Loss: 0.7126811742782593, Validation Accuracy: 0.6573426573426573 Epoch: 10, Loss: 0.6227348446846008, Validation Accuracy: 0.6573426573426573 Epoch: 10, Loss: 0.4803176522254944, Validation Accuracy: 0.6573426573426573 Epoch: 10, Loss: 0.5574368238449097, Validation Accuracy: 0.6573426573426573 Epoch: 10, Loss: 0.567111611366272, Validation Accuracy: 0.6503496503496503 Epoch: 11, Loss: 0.5913469791412354, Validation Accuracy: 0.6503496503496503 Epoch: 11, Loss: 0.6416419744491577, Validation Accuracy: 0.6538461538461539 Epoch: 11, Loss: 0.6932559013366699, Validation Accuracy: 0.6573426573426573 Epoch: 11, Loss: 0.7137739658355713, Validation Accuracy: 0.6573426573426573 Epoch: 11, Loss: 0.617378830909729, Validation Accuracy: 0.6538461538461539 Epoch: 11, Loss: 0.4722314774990082, Validation Accuracy: 0.6503496503496503 Epoch: 11, Loss: 0.5506305694580078, Validation Accuracy: 0.6503496503496503 Epoch: 11, Loss: 0.5594512224197388, Validation Accuracy: 0.6538461538461539 Epoch: 12, Loss: 0.5861461758613586, Validation Accuracy: 0.6503496503496503 Epoch: 12, Loss: 0.6391829252243042, Validation Accuracy: 0.6503496503496503 Epoch: 12, Loss: 0.690549373626709, Validation Accuracy: 0.6538461538461539 Epoch: 12, Loss: 0.7145882248878479, Validation Accuracy: 0.6503496503496503 Epoch: 12, Loss: 0.6126421689987183, Validation Accuracy: 0.6573426573426573 Epoch: 12, Loss: 0.4649009108543396, Validation Accuracy: 0.6573426573426573 Epoch: 12, Loss: 0.5442847013473511, Validation Accuracy: 0.6538461538461539 Epoch: 12, Loss: 0.5534595251083374, Validation Accuracy: 0.6468531468531469 Epoch: 13, Loss: 0.5814194679260254, Validation Accuracy: 0.6468531468531469 Epoch: 13, Loss: 0.6364545226097107, Validation Accuracy: 0.6433566433566433 Epoch: 13, Loss: 0.6902283430099487, Validation Accuracy: 0.6538461538461539 Epoch: 13, Loss: 0.7163193821907043, Validation Accuracy: 0.6503496503496503 Epoch: 13, Loss: 0.6077008247375488, Validation Accuracy: 0.6538461538461539 Epoch: 13, Loss: 0.4577617347240448, Validation Accuracy: 0.6573426573426573 Epoch: 13, Loss: 
0.5393982529640198, Validation Accuracy: 0.6573426573426573 Epoch: 13, Loss: 0.5457847118377686, Validation Accuracy: 0.6433566433566433 Epoch: 14, Loss: 0.5763950347900391, Validation Accuracy: 0.6433566433566433 Epoch: 14, Loss: 0.6351915597915649, Validation Accuracy: 0.6398601398601399 Epoch: 14, Loss: 0.6895017623901367, Validation Accuracy: 0.6573426573426573 Epoch: 14, Loss: 0.7182782888412476, Validation Accuracy: 0.6538461538461539 Segmentation fault (core dumped) Any thoughts on what could be causing this?
st30604
Probably unrelated, but could you post the code for get_valid_accuracy? I’m wondering if model.eval() isn’t called or if model.train() isn’t called after the validation step finishes.
st30605
Thanks, here it is:

def get_valid_accuracy(valid_X, valid_y, model, batch_size=32):
    """ Computes validation accuracy given a validation set. """
    model.eval()
    num_batches = valid_X.size(0) // batch_size
    all_preds = []
    for k in range(num_batches + 1):
        start_idx, end_idx = k * batch_size, (k + 1) * batch_size
        batch_X = valid_X[start_idx:end_idx, :]
        probs = model(batch_X).flatten()
        preds = torch.where(probs >= 0.5, 1, 0)
        all_preds += preds.tolist()
    all_preds = np.array(all_preds)
    valid_y = np.array(valid_y.cpu())
    acc = np.mean(all_preds == valid_y)
    return acc

I just tried adding a call to model.train() every time after get_valid_accuracy runs in the loop but still getting the same issue.
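As a side note (separate from the segfault question): as posted, nothing in this function runs under torch.no_grad(), so the validation forward passes still build autograd graphs. A rough sketch of the usual pattern, keeping the same inputs and thresholding:

import numpy as np
import torch

def get_valid_accuracy(valid_X, valid_y, model, batch_size=32):
    model.eval()
    all_preds = []
    with torch.no_grad():                              # no autograd graph during validation
        for start in range(0, valid_X.size(0), batch_size):
            probs = model(valid_X[start:start + batch_size]).flatten()
            all_preds += torch.where(probs >= 0.5, 1, 0).tolist()
    model.train()                                      # hand the model back in training mode
    return np.mean(np.array(all_preds) == np.array(valid_y.cpu()))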
st30606
OK, it might be more worthwhile to go after the segmentation fault first. Is it possible to get a stack trace with something like $ gdb --args python my_script.py ... Reading symbols from python...done. (gdb) run ... (gdb) backtrace ... ? Additionally, what is the output of nvidia-smi?
st30607
eqy: gdb --args python my_script.py Here is the backtrace output Thread 19 "python" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7ffeb98e6700 (LWP 94700)] 0x00007ffebb50ff05 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 (gdb) backtrace #0 0x00007ffebb50ff05 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #1 0x00007ffebb448c07 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #2 0x00007ffebb573f7d in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #3 0x00007ffebb420e6d in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #4 0x00007ffebb4213ff in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #5 0x00007ffebb327f05 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #6 0x00007ffebb328052 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #7 0x00007ffebb4ee87d in cuMemsetD8Async () from /usr/lib/x86_64-linux-gnu/libcuda.so.1 #8 0x00007ffefde3bd2e in ?? () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libcudart-6d56b25a.so.11.0 #9 0x00007ffefde1989b in ?? () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libcudart-6d56b25a.so.11.0 #10 0x00007ffefde57311 in cudaMemsetAsync () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libcudart-6d56b25a.so.11.0 #11 0x00007fff257dd155 in scaleFilter4d(cudnnContext*, cudnnFilter4dStruct*, void*, void const*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #12 0x00007fff25abbc8b in cudnn::cnn::Wgrad2dAlgo0Engine<float, float, float>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #13 0x00007fff250386f3 in cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #14 0x00007fff25735750 in cudnn::cnn::EngineContainer<(cudnnBackendEngineName_t)2020, 4096ul>::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so ---Type <return> to continue, or q <return> to quit--- #15 0x00007fff250386f3 in cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #16 0x00007fff254aeebc in cudnn::cnn::AutoTransformationExecutor::execute_pipeline(cudnn::cnn::ConvolutionEngine&, cudnn::backend::VariantPack const&, CUstream_st*) const () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #17 0x00007fff2575c4f1 in cudnn::cnn::GeneralizedConvolutionEngine<cudnn::cnn::EngineContainer<(cudnnBackendEngineName_t)2020, 4096ul> >::execute_internal_impl(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #18 0x00007fff250386f3 in cudnn::cnn::EngineInterface::execute(cudnn::backend::VariantPack const&, CUstream_st*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #19 0x00007fff254330db in cudnn::backend::execute(cudnnContext*, cudnn::backend::ExecutionPlan&, 
cudnn::backend::VariantPack&) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #20 0x00007fff2572719d in cudnn::backend::EnginesAlgoMap<cudnnConvolutionBwdFilterAlgo_t, 7>::execute_wrapper(cudnnContext*, cudnnConvolutionBwdFilterAlgo_t, cudnn::backend::ExecutionPlan&, cudnn::backend::VariantPack&) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #21 0x00007fff25726d31 in cudnn::backend::cudnnConvolutionBackwardFilter(cudnnContext*, void const*, cudnnTensorStruct const*, void const*, cudnnTensorStruct const*, void const*, cudnnConvolutionStruct const*, cudnnConvolutionBwdFilterAlgo_t, void*, unsigned long, void const*, cudnnFilterStruct const*, void*) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #22 0x00007fff2504091a in cudnnConvolutionBackwardFilter () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #23 0x00007fff2403bfbd in at::native::raw_cudnn_convolution_backward_weight_out_32bit(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef---Type <return> to continue, or q <return> to quit--- <long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool)::{lambda(cudnnConvolutionBwdFilterAlgoPerf_t const&)#1}::operator()(cudnnConvolutionBwdFilterAlgoPerf_t const&) const () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #24 0x00007fff2403f01a in at::native::raw_cudnn_convolution_backward_weight_out_32bit(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #25 0x00007fff2403fcaf in at::native::raw_cudnn_convolution_backward_weight_out(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #26 0x00007fff240390fa in at::native::cudnn_convolution_backward_weight(char const*, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #27 0x00007fff240397db in at::native::cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #28 0x00007fff8637f6e2 in at::(anonymous namespace)::(anonymous namespace)::wrapper_cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so #29 0x00007fff8637f771 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, 
c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool), &at::(anonymous namespace)::(anonymous namespace)::wrapper_cudnn_convolution_backward_weight>, at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<long>, at::Tenso---Type <return> to continue, or q <return> to quit--- r const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool> >, at::Tensor (c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool)>::call(c10::OperatorKernel*, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so #30 0x00007fff749d0485 in at::Tensor c10::Dispatcher::call<at::Tensor, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool>(c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool)> const&, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) const () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #31 0x00007fff74865ea1 in at::cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #32 0x00007fff240335c9 in at::native::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so #33 0x00007fff8637f4f7 in at::(anonymous namespace)::(anonymous namespace)::wrapper_cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so #34 0x00007fff8637f58a in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bo---Type <return> to continue, or q <return> to quit--- ol, bool, std::array<bool, 2ul>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_cudnn_convolution_backward>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul> > >, std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 
2ul>)>::call(c10::OperatorKernel*, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so #35 0x00007fff74866537 in at::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #36 0x00007fff760e379f in torch::autograd::VariableType::(anonymous namespace)::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #37 0x00007fff760e3eba in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>), &torch::autograd::VariableType::(anonymous namespace)::cudnn_convolution_backward>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul> > >, std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>)>::call(c10::OperatorKernel*, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so ---Type <return> to continue, or q <return> to quit--- #38 0x00007fff74866537 in at::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool, std::array<bool, 2ul>) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #39 0x00007fff75f4420c in torch::autograd::generated::CudnnConvolutionBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #40 0x00007fff765be771 in torch::autograd::Node::operator()(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #41 0x00007fff765ba57b in torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #42 0x00007fff765bb19f in torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) () from 
/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #43 0x00007fff765b2979 in torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so #44 0x00007fffe6d3c163 in torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) () from /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/lib/libtorch_python.so #45 0x00007fffe7d486df in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #46 0x00007ffff7bbb6db in start_thread (arg=0x7ffeb98e6700) at pthread_create.c:463 #47 0x00007ffff6f3771f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Output of nvidia-smi is ri Jun 11 19:14:32 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.119.03 Driver Version: 450.119.03 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla M60 On | 00000000:00:1E.0 Off | 0 | | N/A 37C P0 38W / 150W | 0MiB / 7618MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+
st30608
Interesting, could you try and isolate this to a self-contained minimal training/val loop that still causes the segfault (it can have random training data if that works)? I will see if I can reproduce it on similar hardware.
st30609
I use AdamW as the optimizer, and after the training ran for a day I got this problem:

[epoch][s/s_per_e/gs]: [99][304/319/31899], lr: 0.000001000796, loss: 0.251922130585
[epoch][s/s_per_e/gs]: [99][305/319/31900], lr: 0.000001000000, loss: 0.198185890913
Traceback (most recent call last):
  File "train_main.py", line 745, in <module>
    main()
  File "train_main.py", line 740, in main
    main_worker(0, ngpus_per_node, args)
  File "train_main.py", line 591, in main_worker
    optimizer.step()
  File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/optimizer.py", line 89, in wrapper
    return func(*args, **kwargs)
  File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/adamw.py", line 121, in step
    group['eps'])
  File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/_functional.py", line 122, in adamw
    param.mul_(1 - lr * weight_decay)
RuntimeError: result type ComplexFloat can't be cast to the desired output type Float

Please help.
st30610
It turns out that the lr had become very small (< 1e-6). After fixing this, the problem was solved.
st30611
That sounds quite weird. Do you have a minimal code snippet to reproduce the error message by changing the learning rate?
st30612
Hello,
The parameter shuffle in the DataLoader class seems to affect the model in some way. I have a saved model for a binary classification task (cats vs dogs), and changing the shuffle parameter in DataLoader affects my model heavily. I used the torch.save method to save my trained model, and I used the torch.load method to load it in another Python file. When shuffle is set to False in the DataLoader, the model gives around 52% accuracy, but the saved model had about 98% accuracy during validation tests. The model only performs how it is supposed to when shuffle is set to True in the prediction Python file I use. The validation set used for testing the accuracy of the model while changing the shuffle parameter is the same. Below is a snippet of code which calculates the accuracy of the model. I would also like to note that the model is a pretrained model called resnet18; I only trained the fc layer in order to fit it to my task.

test_loader = DataLoader(test_set, batch_size=64, shuffle=False)
total_correct = 0
total_seen = 0
for xb, yb in test_loader:
    xb = xb.cuda()
    yb = yb.cuda()
    preds = model(xb.float())
    total_correct += ((torch.sum(preds.round() == yb.reshape(-1,1))).item())
    total_seen += yb.numel()
print(total_correct / total_seen)

When shuffle is set to False, like in the above snippet, the accuracy I get is 0.5284231339594662. When shuffle is set to True, the accuracy I get is 0.957983193277311. This change comes only from changing shuffle to True. I don't understand why or how the DataLoader affects my pretrained model's accuracy, please help me out. Also, somehow using model.eval() solves the above problem.
st30613
Solved by ptrblck in post #2 This would be expected, since calling model.eval() would disable dropout layers (shouldn’t make a difference regarding shuffling the dataset) and would use the running stats of all batchnorm layers. If you leave the model in training mode, the batchnorm layers would use the current batch stats to …
st30614
auzuha: Also , somehow using model.eval() solves the above problem. This would be expected, since calling model.eval() would disable dropout layers (shouldn’t make a difference regarding shuffling the dataset) and would use the running stats of all batchnorm layers. If you leave the model in training mode, the batchnorm layers would use the current batch stats to normalize the inputs and would also update the running stats. Shuffling the dataset is thus making a difference, which is why you should call model.eval() during validation and testing.
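A minimal sketch of that recommendation applied to the accuracy snippet from the question (same names as in the question above):

model.eval()                          # batchnorm uses running stats, dropout is disabled
with torch.no_grad():                 # gradients are not needed for evaluation
    total_correct, total_seen = 0, 0
    for xb, yb in test_loader:
        xb, yb = xb.cuda(), yb.cuda()
        preds = model(xb.float()).round()
        total_correct += (preds == yb.reshape(-1, 1)).sum().item()
        total_seen += yb.numel()
print(total_correct / total_seen)
model.train()                         # only needed if training continues afterwards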
st30615
Not sure if this is the best place to post! I've worked with this tutorial for a couple of hours and I found the following bugs:

- The tutorial below contains a broken link under "View on GitHub".
- The collate_fn=generate_batch is broken, as torch.nn.utils.rnn.pad_sequence swaps the dimensions of the data, yielding batches where the 0th element is full of <sos> tokens etc. The correct way would be to use batch_first=True in pad_sequence.

Hope it helps!
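A small standalone illustration of the reported behaviour (toy tensors, not the tutorial's generate_batch):

import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]

default = pad_sequence(seqs)                     # shape (max_len, batch) = (3, 2); default[0] holds the first token of every sequence
batched = pad_sequence(seqs, batch_first=True)   # shape (batch, max_len) = (2, 3); batched[0] is the first sequence

print(default.shape, batched.shape)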
st30616
Thanks for reporting this! I assume you mean this tutorial. CC @vincentqb for visibility
st30617
I am trying to train a neural network where I have a custom activation function. However, the loss function remains constant through all epochs. When I printed the weight.grad for the linear layer, it returns None, which means that somewhere, a gradient is not being computed. Here is my NN class code: class OptNet(nn.Module): def __init__(self): super(OptNet,self).__init__() m = 2 self.fc1 = nn.Linear(m*m, 2*m*m, bias=False) with torch.no_grad(): self.fc1.weight.data = torch.tensor([[1.1,0,0,0],[0,1.3,0,0],[0,0,1.2,0],[0,0,0,1.3],[-1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]]) self.myact = MyActivationFunction(m) def forward(self, x): m = 2 x = self.fc1(x) return self.myact(x) And here is my activation function code (it is meant to solve a quadratic program with one of the parameters being reshaped x): class MyActivationFunction(nn.Module): def __init__(self, m=2): super(MyActivationFunction, self).__init__() self.m = m def forward(self, x): Q = torch.tensor([[1.3, 0.3], [0.3, 1.7]], dtype=torch.float32) q = torch.tensor([1,-1], dtype=torch.float32) A = torch.reshape(x, (4,2)) b = torch.ones(2*self.m, dtype=torch.float32) e = torch.tensor([], dtype=torch.float32) return QPFunction(verbose=-1)(Q, q, A, b, e, e) And finally, here is the code used to actually train the network. The dataMatrix and solutions tensors are the training data set, which I have used for other purposes before so I am fairly confident these are generated correctly. net = OptNet() optimizer = optim.SGD(net.parameters(), lr=0.5) loss_func = nn.MSELoss() maxepochs = 100 lossarray = numpy.zeros(maxepochs) for epoch in range(maxepochs): optimizer.zero_grad() predictions = torch.zeros(trainPoints,m) for eachpoint in range(trainPoints): predictions[eachpoint] = net(Variable(dataMatrix[eachpoint])) loss = Variable(loss_func(predictions, solutions), requires_grad=True) loss.backward() print(net.fc1.weight.grad) #this prints None optimizer.step() lossarray[epoch] = loss if epoch%5 == 1: print(loss) #this is always constant I figure it must be something to do with the ActivationFunction which uses QPFunction from the qpth library, but this library has differentiation implemented. Any help would be greatly appreciated, thanks!
st30618
Hi Jason! I haven’t read your code, nor can I speak about QPFunction. However:

Jason_Hu1: loss = Variable(loss_func(predictions, solutions), requires_grad=True)

Here you create a new Variable that is no longer connected to the computation graph. (Note that Variable is deprecated, and in current versions of pytorch is just a Tensor.) Creating it with requires_grad = True doesn’t fix this problem. Consider:

>>> torch.__version__
'1.7.1'
>>> loss_func = torch.nn.MSELoss()
>>> pred = torch.randn (3, requires_grad = True)
>>> targ = torch.randn (3)
>>> loss1 = torch.autograd.Variable (loss_func (pred, targ), requires_grad = True)
>>> loss1.backward()
>>> pred.grad
>>> loss2 = loss_func (pred, targ)
>>> loss2.backward()
>>> pred.grad
tensor([-0.4108, 0.5605, -0.4539])

Best.
K. Frank
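Applied to the training loop from the question, the fix is to use the loss tensor returned by the criterion directly. A self-contained sketch (the linear layer below just stands in for the OptNet from the question):

import torch

net = torch.nn.Linear(4, 2)                    # stand-in for OptNet
loss_func = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)

data = torch.ones(8, 4)
solutions = torch.zeros(8, 2)

optimizer.zero_grad()
predictions = net(data)
loss = loss_func(predictions, solutions)       # no Variable wrapper, so the graph stays intact
loss.backward()
print(net.weight.grad)                         # real gradients instead of None
optimizer.step()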
st30619
Hello. I’d like to know an efficient way to implement the situation below. The model consists of independent sub-modules, and there is a forwarding order between the modules, so the previous module’s output is sent as the next module’s input (this behaviour is well defined in the model’s forward function). I’d like to add the following functionality to the model: designate specific modules (via a configuration file), and have each designated module generate (huge) data that can be shared by all modules after it in the forwarding order. For example, assume the model has [m1, m2, m3, m4, m5] in sequence. If I designate m2 and m4, and they generate data2 and data4 respectively, then data2 can be shared by [m2, m3, m4, m5] and data4 can be shared by [m4, m5]. As mentioned above, each generated piece of data is quite large and shareable, so I don’t want to forward it just as part of the output (also, modifying the well-defined original forward function is an annoying job). Currently, I’ve implemented it like this: assign an empty buffer to each module in the model, then register a forward hook at each designated module which sends the generated data to the following modules (see the sketch below). But the performance seems bad: it takes about 2x more time, measured under the same conditions, compared to the original model. The problem is that each module does not know about the other modules’ existence. I’ve thought about the idea that ‘melting’ all modules into one ‘unified’ model would make the problem easier, but the model is complicated and has so many advantages that I cannot give up the module-based architecture. Any ideas are welcome. Thanks for reading!
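A rough, self-contained sketch of the hook-based sharing described above (the module layout, buffer names, and the dict-based buffer are assumptions for illustration, not my actual code):

import torch
import torch.nn as nn

class SubModule(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.shared = {}                       # buffer filled by earlier modules via hooks

    def forward(self, x):
        # downstream modules can read e.g. self.shared.get('data2') here
        return self.fc(x)

modules = nn.ModuleList([SubModule(8) for _ in range(5)])   # m1 .. m5

def make_hook(index, key):
    def hook(module, inputs, output):
        payload = output.detach()              # stands in for the huge generated data
        for later in list(modules)[index + 1:]:
            later.shared[key] = payload        # store a reference, not a copy
    return hook

modules[1].register_forward_hook(make_hook(1, 'data2'))     # m2 generates data2
modules[3].register_forward_hook(make_hook(3, 'data4'))     # m4 generates data4

x = torch.randn(2, 8)
for m in modules:                              # forward in order m1 -> m5
    x = m(x)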
st30620
Hello everyone. I am witnessing strange and, in my opinion, wrong behavior. Can somebody please explain why this is happening? I loaded my pretrained model via

checkpoint = torch.load(params.pretrained_model, map_location=torch.device('cuda:0'))
model.load_state_dict(checkpoint['model_state_dict'])

And while all tensors in checkpoint['model_state_dict'] have device type cuda:0, all parameters in the model have device type cpu. Why is it that when I load a specific tensor as a parameter, its value is loaded while its device type is not? Does it mean that an extra copy of each parameter is being created on a different device?
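A minimal way to reproduce what I am seeing, with a toy module standing in for my real model (this assumes a machine with a GPU):

import torch

model = torch.nn.Linear(4, 4)                                   # freshly built model lives on the CPU
state = {k: v.cuda() for k, v in model.state_dict().items()}    # simulate a checkpoint whose tensors are on cuda:0

model.load_state_dict(state)
print(next(model.parameters()).device)                          # prints "cpu": the values were copied, the device stayed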
st30621
x = torch.tensor([0, 0, 0, 0])
k = torch.tensor([1, 2, 3, 4])
x.data.copy_(k)  # 1
# or
x.copy_(k)  # 2
x
st30622
st30623
Hi, The first one should never be used as .data should not be used. The new api for #1 is: with torch.no_grad(): x.copy_(k) And the difference is clearer: use #1 (EDIT: the version of #1 given just above in my comment, not your original one) if the copy should not be tracked by the autograd (like initializing some weights) and use #2 if you want gradients.
st30624
I may be tired, but I do not understand. You said: “The first one should never be used as .data should not be used.” And later: “use #1 if the copy should not be tracked by the autograd”. These statements seem contradictory.
st30625
Sorry, I wasn’t very clear. Use the other equivalent version of #1 that I gave you (the one with torch.no_grad()).
st30626
Hi, I have a question regarding this reply. Why shouldn’t .data.copy_ be used? Is p.copy_ always preferred over p.data.copy_?
st30627
.data should not be used anymore. It is unsafe and might lead to silently wrong gradients.
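As a small illustration of the failure mode (this is the well-known sigmoid example, not code from this thread):

import torch

x = torch.ones(3, requires_grad=True)
out = x.sigmoid()        # autograd saves the output of sigmoid for the backward pass

out.data.mul_(2)         # bypasses autograd and the version counter: the saved output is silently corrupted
out.sum().backward()
print(x.grad)            # wrong gradients, and no error is raised

# out.mul_(2)            # the tracked in-place version would instead make backward() raise
#                        # "a variable needed for gradient computation has been modified by an inplace operation"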
st30628
Regardless of the PyTorch version we are using? Can I get a better understanding of why it is unsafe?
st30629
Hello together, I wanted to ask if there is an option to save images with torchvision.utils.save_image using a vector-graphic file type that is accepted by Word and PowerPoint. I know that I can save the images as .eps, which is a vector graphic, but that isn’t compatible with Word and PowerPoint. On the other hand, .svg and .emf work with Word and PowerPoint but aren’t supported by torchvision.utils.save_image. Is there any vector-graphic type I missed that is suitable for both sides? Many thanks in advance! Best regards, Max
st30630
torchvision.utils.save_image uses PIL.Image.save to save the image in these lines of code. Based on this it seems you can pass a format argument, which should be one of the supported PIL formats as seen here.
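For example (a hedged sketch; whether a particular format actually works depends on which save handlers the local PIL build has installed):

import torch
from torchvision.utils import save_image

img = torch.rand(3, 64, 64)                   # dummy RGB image in [0, 1]
save_image(img, 'sample.png', format='PNG')   # the format string is forwarded to PIL.Image.save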
st30631
I’ve seen this also. But using .wmf doesn’t work for me, although it should be supported. I get the following error:
raise OSError("WMF save handler not installed")
st30632
This error seems to indicate that the WMF library is missing or that PIL wasn’t installed with it, so you might need to either install dependencies or update PIL etc.
st30633
Hi all! I have a server with 2 x RTX 3090 GPUs. I installed NVIDIA driver 460, CUDA 11.1, and PyTorch nightly (1.8) on Ubuntu 20 and tried running deep learning benchmarks. The problem is everything runs fine if I use a single GPU, but the moment I run both of them, the PC just shuts off. I tried using a stress test that loaded both GPUs at 100% utilization and it worked fine without crashing. I tried limiting the power of the GPUs to 200 W (using the 'sudo nvidia-smi -pl 200' command), started the PyTorch training script and it crashed again, so I guess it isn't a power-supply issue (it's a SilverStone 1500 W power supply). Here are the code lines I use for the 2 GPUs:

model = models.resnet152(pretrained=False)
model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
model.fc = nn.Linear(2048, 2)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

for input_images, labels in dataloaders['train']:
    # Enable CUDA: use GPUs for model computation
    input_images, labels = input_images.to(device), labels.to(device)

I don't know how to proceed with this… help needed. Thanks!
st30634
danohev: Installed Nvidia driver 460, CUDA 11.1
No, just CUDA 11.1. I was told this might still be a PSU issue: even though the stress test with 100% utilization on both GPUs passes correctly, when running the PyTorch training the GPUs + CPU might have big power spikes which the PSU can't handle (1500 W, rated '80 PLUS Silver'). I got the recommendation to get a PSU with an '80 PLUS Platinum' or '80 PLUS Titanium' rating. What do you think?
st30635
Any PSU above 1200W usually requires that the incoming voltage be of a higher rating to achieve the max stated watt rating of the PSU. What is the AC input voltage rating coming into the PSU? (This can vary by country.)
st30636
By way of example: CORSAIR AXi Series AX1500i Digital 1500W 80 PLUS TITANIUM Haswell Ready Full Modular ATX12V & EPS12V SLI and Crossfire Ready Power Supply with C-Link Monitoring and Control (Newegg.com). This PSU is rated 1500W, but if the incoming power supply is 100-115V, it will only provide 1300W.
st30637
Any follow up on this? I got a similar situation with pretty much the same rig. The PSU power draw is about 600W when the rig crashes.
st30638
I have some data which is three times as large as my system's RAM. I need to run some deep learning models using PyTorch. Could you please advise how I can use torch data loaders (or an alternative) in this scenario? Assume my data is stored as subfolders inside a parent directory as below.

Transaction_Data/
---Customer1/
    --- day1
    --- day2
    ...
    --- dayN
---Customer2/
    --- day1
    --- day2
    ...
    --- dayN
---CustomerN/
    --- day1
    --- day2
    ...
    --- dayN

Let's assume these are clean data and each customer is like an individual data frame (days represent rows). I want to load these data in batches (probably 5 customers at once - can fit into memory) and train a DL model using torch. What is the efficient way to load these? I need to iterate over these data and run DL models. Should I be using a custom dataset? Any pointers would be really appreciated. Thank you very much in advance!
st30639
st30640
I don’t think you need to do anything special for this scenario as usually the amount of memory required is some small multiple of the batch_size (due to prefetching). For example ImageNet spans hundreds of gigabytes (even more when converted to uncompressed RGB format) yet only a small fraction of this (host and device) memory is required for training.
st30641
@eqy Thank you for your response! I understand that we will need only small memory to fit those batches, but how will I load the data in the first place given the above folder structure?
st30642
It is difficult to understand your data organization without knowing what data1 and data2 mean. Are they different classes? Different datasets? If they are different classes, then a DatasetFolder or ImageFolder is exactly the abstraction for this scenario: torchvision.datasets — Torchvision 0.8.1 documentation (pytorch.org)
st30643
No, they are not different classes. Hope the sample structure below explains it better.

Transaction_Data/
---Customer1/
    --- day1
    --- day2
    ...
    --- dayN
---Customer2/
    --- day1
    --- day2
    ...
    --- dayN
---CustomerN/
    --- day1
    --- day2
    ...
    --- dayN

Let's assume these are clean data and each customer is like an individual data frame (days represent rows). I want to load these data in batches (probably 5 customers at once - can fit into memory) and train a DL model using torch. What is the efficient way to load these? Thank you very much in advance!
st30644
You can start by taking a look at the default dataset classes: torch.utils.data — PyTorch 1.8.1 documentation and seeing if your data fits the map-style or iterable-style abstraction. The map style is usually a straightforward abstraction for many datasets, as you only need to define an __getitem__ and a __len__ function. Once you have a usable dataset, a DataLoader (torch.utils.data.dataloader — PyTorch 1.8.1 documentation) will handle the parallelization and loading into memory for you.
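A rough map-style sketch for the folder layout above (the root path, the per-day CSV format, and the assumption that every customer has the same number of days are all placeholders):

import os
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class CustomerDataset(Dataset):
    def __init__(self, root):
        # keep only the paths in memory, one sample per customer
        self.customer_dirs = sorted(
            os.path.join(root, d) for d in os.listdir(root)
            if os.path.isdir(os.path.join(root, d)))

    def __len__(self):
        return len(self.customer_dirs)

    def __getitem__(self, idx):
        cdir = self.customer_dirs[idx]
        # lazily load the day files for this customer and stack the rows
        days = sorted(os.listdir(cdir))
        rows = [np.loadtxt(os.path.join(cdir, day), delimiter=',') for day in days]
        return torch.from_numpy(np.stack(rows)).float()

dataset = CustomerDataset('Transaction_Data')
loader = DataLoader(dataset, batch_size=5, shuffle=True, num_workers=4)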
st30645
Hi, I am looking at using a generator to generate time series data. Example code I am looking at is the CausalConvGenerator in pytorch-GAN-timeseries/convolutional_models.py at 8e7d62fed6f4061d13ec9dfd84e07520d4257ed2 · proceduralia/pytorch-GAN-timeseries · GitHub.
The generator currently takes:
Input: (batch_size, seq_len, input_size)
Output: (batch_size, seq_len, output_size)
I am looking to modify this to instead take:
Input: (batch_size, noise_dim)
Output: (batch_size, seq_len, output_size)
Would someone mind pointing me in the right direction on how to modify the architecture?
st30646
The linked model uses nn.Conv1d layers, which is why the 3-dimensional input is expected (it should have the shape [batch_size, channels, seq_len]). If you want to use only two dimensions, you could change the layer type to e.g. linear layers and would need to experiment with the architecture (i.e. number of features etc.). Alternatively, you could also add an additional dimension (e.g. the channel or temporal dimension) and run some experiments with the current architecture.
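A hedged sketch of the second option: project the flat noise to a (batch, channels, seq_len) tensor and run it through a small conv stack (all layer sizes below are placeholders):

import torch
import torch.nn as nn

class NoiseToSequence(nn.Module):
    def __init__(self, noise_dim=32, seq_len=64, channels=16, output_size=1):
        super().__init__()
        self.seq_len = seq_len
        self.channels = channels
        self.project = nn.Linear(noise_dim, channels * seq_len)      # adds the temporal dimension
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, output_size, kernel_size=3, padding=1))

    def forward(self, z):                        # z: (batch_size, noise_dim)
        x = self.project(z).view(-1, self.channels, self.seq_len)
        x = self.conv(x)                         # (batch_size, output_size, seq_len)
        return x.transpose(1, 2)                 # (batch_size, seq_len, output_size)

gen = NoiseToSequence()
print(gen(torch.randn(8, 32)).shape)             # torch.Size([8, 64, 1])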
st30647
Hi all, how do I set NaN values in a tensor to 0? Currently I have an extremely inefficient method:

my_tensor_np = my_tensor.cpu().numpy()
my_tensor_np[np.isnan(my_tensor_np)] = 0
my_tensor.copy_(torch.from_numpy(my_tensor_np).cuda())

But copying the tensor between GPU and CPU takes a lot of time, so I need a more efficient way. Can anyone help me? Thanks a lot!
st30648
NaN means a value which is undefined or unrepresentable. In most cases it makes no sense to simply set NaNs to zero.
st30649
Thank you, iamalbert. A paper I recently read uses this trick, but implemented it in Theano. I want to re-implement their algorithm in PyTorch. I think np.isnan is a useful function, but torch doesn't implement it; is there an efficient solution? Thank you!
st30650
It’s simple, a != a will give you a ByteTensor, indicating the positions of NaNs:

>>> b

    nan     nan -0.8395
    nan     nan     nan
-1.7921     nan  0.1864
[torch.FloatTensor of size 3x3]

>>> b != b

 1  1  0
 1  1  1
 0  1  0
[torch.ByteTensor of size 3x3]

you can use b[b != b] = 0 to set all NaNs to zero.
st30651
It doesn’t seem to work now. I tried the code below in v0.2.0:

a = torch.Tensor([1,2,3,4,5])
b = 0.0
c = a / b
c != c

and got all 0s. Is there any function like np.nan_to_num?
st30652
@Ben, that’s because c in your example is all Inf, not NaN. Albert’s suggestion works:

a = torch.Tensor([float('NaN'), 1, float('NaN'), 2, 3])
print(a)
a[a != a] = 0
print(a)
st30653
Hi @Chen-Wei_Xie, torch.isnan() was implemented earlier this year. So now you can set the NaN’s to 0’s using my_tensor[torch.isnan(my_tensor)] = 0 Cheers
st30654
See the discussion in ReLU turns nan into zeros. As of PyTorch 0.4.1, ReLU can't be used for this anymore, though. I've asked about it there.
st30655
For completeness I'll copy my answer from ReLU turns nan into zeros: since NaN != NaN, you could do my_tensor[my_tensor != my_tensor] = 0
st30656
Thank you, I’ll continue the discussion here. This works both forwards and backwards on CPU:

import torch
model = torch.nn.Linear(10,10)
x = torch.ones(10,10).detach()
x[0,0] = x[0,0]+float('NaN')
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model.zero_grad()
y = model(x)
y[y!=y] = 0
loss = y.sum()
loss.backward()
optimizer.step()

Could someone verify if it works on GPU? Also, does anyone have insights on the computational cost vs. the ol’ ReLU hack?
st30657
Modifying tensors in-place can cause issues with backprop. Here is my solution, since this is still ranked highly on google: safe_tensor = torch.where(torch.isnan(my_tensor), torch.zeros_like(my_tensor), my_tensor)
st30658
Hi, from version 1.8.1, torch.nan_to_num — PyTorch 1.8.1 documentation is now available. It replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively.
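A quick usage sketch:

import torch

x = torch.tensor([float('nan'), float('inf'), -float('inf'), 1.5])
print(torch.nan_to_num(x, nan=0.0, posinf=1e6, neginf=-1e6))
# -> [0.0, 1e6, -1e6, 1.5]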
st30659
In PyTorch, when training is over, we can save the model like this:

torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
    …
}, PATH)

and load the model and parameters like this:

model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']

But in libtorch I don't know how to achieve this. Please help me!
st30660
from efficientnet_pytorch import EfficientNet

class modelController:
    def __init__(self):
        self.model = self.get_model()
        self.data_transforms = transforms.Compose([
            transforms.Resize((224,112)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])

    def get_model(self):
        model = EfficientNet.from_pretrained('efficientnet-b0', num_classes = 2)
        model.cuda()
        model.eval()
        model.half()
        return model

    def get_model_inference(self, img):
        with torch.no_grad():
            result = self.model(img)
            result = self.get_inference(result)
        return result

import cv2
import numpy as np
from classifier import modelController
import torch
import time
import copy

mc = modelController()

tensor = torch.rand((32,3,224,112))
tensor = tensor.cuda()
tensor = tensor.half()
print(tensor.size())
# print(final_img.size())

total_time = 0
for i in range(50):
    t1 = time.time()
    get_person_cat(tensor)
    t2 = time.time() - t1
    print("time taken: ", t2)
    total_time += t2
print("avg totatl time: ", total_time/50)

I have an efficientnet-b0 model. I am getting 0.0173 ms average inference time with model.half() and tensor.half(), but when I comment out model.half() and tensor.half(), I get 0.0172 ms average inference time. Why is the fp16 model not taking less time to infer?
st30661
Hi, there are two factors that I think are contributing to your results:
1. You initialise a random tensor at the start of your program, and this tensor is used to calculate the timing for each sample. I would recommend using real-world data if possible for better results, or passing a new tensor for each inference step (see the example below).
2. The performance difference between torch.float32 and torch.float16 is so small that you are not able to measure it at the current precision level. You could try making the task more difficult for your model; a quick method could be to pass more data for the model to compute.
I would be interested to see if any PyTorch developers have anything to add to this.

import cv2
import numpy as np
from classifier import modelController
import torch
import time
import copy

mc = modelController()

tensor = torch.rand((50,3,224,112))
tensor = tensor.cuda()
tensor = tensor.half()
print(tensor.size())
# print(final_img.size())

total_time = 0
for i in range(50):
    t1 = time.time()
    get_person_cat(tensor[i].unsqueeze(0))
    t2 = time.time() - t1
    print("time taken: ", t2)
    total_time += t2
print("avg totatl time: ", total_time/50)
st30662
knoriy: I would recommend using real-world data if possible for better results or passing a new tensor for each inference step (see the example below)
I loaded a raw RGB image in the for loop first, but no difference in inference time was seen.
st30663
With your code, on fp32 I am getting 0.016 ms time and on fp16 I am getting 0.017 ms
st30664
@ptrblck, can you tell me what I can do to get a smaller inference time using an fp16 model compared with a similar fp32 model?
st30665
For this kind of timing you should add a torch.cuda.synchronize() before stopping the time to ensure that no kernels are still in-flight on the GPU when timing stops. You might also want to add a warmup run before timing to make sure the noise from the first run is not included.
st30666
eqy: torch.cuda.synchronize()
Can you give example code? I am confused about where to add torch.cuda.synchronize(): between iterations or after the for loop?
st30667
Sure, the general pattern is like:

torch.cuda.synchronize()  # clear out everything that might have been going on before
t1 = time.time()
for i in range(iterations):
    do_stuff_on_gpu()
torch.cuda.synchronize()  # make sure everything finished
t2 = time.time()
diff = t2 - t1
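An alternative is to time with CUDA events (a sketch reusing the same placeholder workload and iteration count as above):

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

for _ in range(5):              # warmup so one-time initialisation is not measured
    do_stuff_on_gpu()

torch.cuda.synchronize()
start.record()
for i in range(iterations):
    do_stuff_on_gpu()
end.record()
torch.cuda.synchronize()        # wait for the recorded events to complete
print(start.elapsed_time(end))  # elapsed time in milliseconds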