st180200 | When I worked with Keras sometime back, it used to increase the training data. I don’t know if things have changed.
my3bikaht:
but also what kind of augmentation
I don’t know if this will help me increase the training data. It would only return the transformed image and not multiple images. I have written some code and I was wondering if you can tell me if this is what you meant by running it in a loop here:
my3bikaht:
run multiple training loops
for epoch in range(num_epochs):
    running_loss_train = 0.0
    running_corrects_train, running_corrects_test = 0, 0
    for _ in tqdm(range(10)):
        # print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        # print('-' * 10)
        # Each epoch has a training and validation phase
        model.train()  # Set model to training mode
        # Iterate over data.
        running_loss_aug = 0.0
        running_corrects_train_aug, running_corrects_test_aug = 0, 0
        for inputs, labels in (trainloader):
            inputs = inputs.cuda(0)
            labels = labels.cuda(1)
            # print(inputs, labels)
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward
            # track history if only in train
            with torch.set_grad_enabled(True):
                outputs = model(inputs)
                # print(outputs)
                loss = criterion(outputs, labels)
                # backward + optimize only if in training phase
                loss.backward()
                optimizer.step()
            # statistics
            running_loss_train += loss.item() * inputs.size(0)
            running_loss_aug += loss.item() * inputs.size(0)
            count = 0
            for i in range(inputs.size(0)):
                if (torch.argmax(outputs[i]) == torch.argmax(labels.data[i])):
                    count += 1
            running_corrects_train += count
            running_corrects_train_aug += count
        running_loss_aug = running_loss_aug / len(trainloader.dataset)
        running_corrects_train_aug = running_corrects_train_aug / len(trainloader.dataset)
        with torch.no_grad():
            model.eval()
            for inputs, labels in (testloader):
                # Generate outputs
                outputs = model(inputs)
                count = 0
                for i in range(inputs.size(0)):
                    if (torch.argmax(outputs[i]) == torch.argmax(labels.data[i])):
                        count += 1
                running_corrects_test_aug += count
                running_corrects_test += count
        epoch_acc_test = running_corrects_test_aug / len(testloader.dataset)
        acc_data_test.append(epoch_acc_test)
        # print('After each train: Train_Loss: {:.4f} Train_Acc: {:.4f} Test_Acc: {:.4f}'.format(
        #     running_loss_aug, running_corrects_train_aug, epoch_acc_test))
    scheduler.step()
    epoch_loss = running_loss_train / 20100.
    epoch_acc_train = running_corrects_train / 20100.
    epoch_acc_test = running_corrects_test / 20100
    loss_data_train.append(epoch_loss)
    acc_data_train.append(epoch_acc_train) |
st180201 | Let me show you what I mean:
class dataload(Dataset):
    def __init__(self, x, transform=None):
        self.data = [(el, 'none') for el in x]
        self.data.extend([(el, 'augm1') for el in x])
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        img = Image.open(self.data[i][0])
        # img = img.transpose((2, 0, 1))
        # img = torch.from_numpy(img).float()
        tmp = np.int32(filenames[i].split('/')[-1].split('_')[0][1])
        label = np.zeros(67)
        label[tmp] = 1
        label = torch.from_numpy(label).float()
        if self.data[i][1] == 'augm1' and self.transform:
            img = self.transform(img)
        return img, label
This way self.data stores twice as many records: one copy carries the augm1 flag and will trigger augmentation, the other won’t. |
st180202 | This did double the training data. Thanks. However, I have a question. Since I am using these transforms:
transform_img = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
So shouldn’t the increase be 4x?
Also, I made a change to the code because it was returning PIL Image objects (rather than tensors) for the non-transformed images:
class dataload(Dataset):
    def __init__(self, x, transform=None):
        self.data = [(el, 'none') for el in x]
        self.data.extend([(el, 'augm1') for el in x])
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        img = mpimg.imread(self.data[i][0]) - [103.939, 116.779, 123.68]
        img = img.transpose((2, 0, 1))
        img = torch.from_numpy(img).float()
        tmp = np.int32(filenames[i].split('/')[-1].split('_')[0][1])
        label = np.zeros(67)
        label[tmp] = 1
        label = torch.from_numpy(label).float()
        if self.data[i][1] == 'augm1' and self.transform:
            img = Image.open(self.data[i][0])
            img = self.transform(img)
        return img, label
Let me know what you think. |
st180203 | Flock1:
So shouldn’t the increase be 4x?
Nope, these are not separate transformations but a single sequence. |
st180204 | Flock1:
transform_img = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
You can create multiple different transform sequences, then extend self.data a few more times with different augmentation flags and run a different transform depending on the value of the flag. |
st180205 | Something like this?
def __init__(self, x, transform=None):
    self.data = [(el, 'none') for el in x]
    self.data.extend([(el, 'augm1') for el in x])
    self.data.extend([(el, 'augm2') for el in x])
    self.data.extend([(el, 'augm3') for el in x])
    self.transform = transform
and then
if self.data[i][1] == 'augm1' and self.transform:
    img = Image.open(self.data[i][0])
    img = self.transform(img)
if self.data[i][1] == 'augm2' and self.transform:
    img = Image.open(self.data[i][0])
    img = self.transform(img)
if self.data[i][1] == 'augm3' and self.transform:
    img = Image.open(self.data[i][0])
    img = self.transform(img) |
st180206 | Yup, just one thing - you are applying the same transform every time. You need to define additional transform sequences, for example one with a horizontal flip and another with resize and crop.
transform_crop = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
also I would still advise you to look into timm’s library of augmentations. Cutmix and Mixup helped me a lot lately, accuracy went up from 94.5% to 99.37% |
st180207 | my3bikaht:
You need to define additional transform sequences
Can you elaborate on that? Because I have created something like that, as mentioned here
Flock1:
transforms
I’ll definitely check out the library. |
st180208 | def __init__(self, x):
    self.data = [(el, 'none') for el in x]
    self.data.extend([(el, 'augm1') for el in x])
    self.data.extend([(el, 'augm2') for el in x])
    self.data.extend([(el, 'augm3') for el in x])
    self.transform1 = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    self.transform2 = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    self.transform3 = transforms.Compose([
        transforms.RandomErasing(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
and
if self.data[i][1] == 'augm1':
    # do self.transform1
if self.data[i][1] == 'augm2':
    # do self.transform2
if self.data[i][1] == 'augm3':
    # do self.transform3 |
st180209 | Hi
I have a neural net model with optimizer state data saved on a pickled file (excuse if my terminology is imprecise) at a checkpoint. The tensors for the model and the optimizer were all saved from the GPU, and when the checkpoint is loaded using torch.load, they are loaded on the GPU again. However, when I load from the checkpoint, I would like some specific optimizer state tensors (e.g., the exponential moving average momentum tensor of the Adam optimizer, for example) to be loaded into CPU instead of GPU. I have found very little documentation on the map_location option in torch.load to explain how to write a function that does this. Can you please provide an example for how to do this for one or more tensors? You can assume that the optimizer tensors to be loaded into CPU are state[‘exp_avg’] and state[‘exp_avg_sq’] for definiteness.
Thanks! |
st180210 | You can pass map_location='cpu' to torch.load to load the object to the CPU.
Here is a small example showing the usage:
# save optimizer.state_dict() with CUDATensors
model = models.resnet18().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1.)
out = model(torch.randn(1, 3, 224, 224).cuda())
out.mean().backward()
optimizer.step()
torch.save(optimizer.state_dict(), 'opt.pth')
optimizer.state_dict()
# in a new script make sure the GPU is not visible and load the state_dict to the CPU
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""
import torch
import torchvision.models as models
torch.cuda.get_device_name()
# > RuntimeError: No CUDA GPUs are available
model = models.resnet18()
optimizer = torch.optim.Adam(model.parameters(), lr=1.)
optimizer.load_state_dict(torch.load('opt.pth'))
# > RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
optimizer.load_state_dict(torch.load('opt.pth', map_location='cpu')) # works |
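For the original question of keeping only specific entries (state['exp_avg'] and state['exp_avg_sq']) on the CPU: map_location applies to the whole checkpoint rather than to individual tensors, so one hedged sketch (my own illustration, not part of the answer above) is to load the state_dict and then move the named buffers afterwards. The key names follow Adam's state_dict layout; whether the optimizer can actually step with mixed-device state is a separate question.
import torch

sd = torch.load('opt.pth')                      # tensors come back on their saved (CUDA) device
for param_state in sd['state'].values():
    for name in ('exp_avg', 'exp_avg_sq'):      # the buffers the question asks about
        if name in param_state:
            param_state[name] = param_state[name].cpu()
optimizer.load_state_dict(sd)
Note that Optimizer.load_state_dict typically casts state tensors back to the device of the matching parameters, so if the goal is only to inspect these buffers on the CPU, it may be simpler to keep working with the loaded dict directly.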
st180211 | New to python. How can I modify the class to filter files in the folder with a string. Right now it returns all files in folder_containing_the_content_folder which could be millions of items. I would like to isolate files that contain a specific string, for example, isolate all files that contain ‘v_1234_frame’:
# Image loader
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.mul(255))
])
image_dataset = utils.ImageFolderWithPaths(folder_containing_the_content_folder, transform=transform)
image_loader = torch.utils.data.DataLoader(image_dataset, batch_size=batch_size)
The class that works requires a modification to filter file names that contain ‘v_1234_frame’:
class ImageFolderWithPaths(datasets.ImageFolder):
    """Custom dataset that includes image file paths.
    Extends torchvision.datasets.ImageFolder()
    Reference: https://discuss.pytorch.org/t/dataloader-filenames-in-each-batch/4212/2
    """
    # override the __getitem__ method. this is the method dataloader calls
    def __getitem__(self, index):
        # this is what ImageFolder normally returns
        original_tuple = super(ImageFolderWithPaths, self).__getitem__(index)
        # the image file path
        path = self.imgs[index][0]
        # make a new tuple that includes original and the path
        tuple_with_path = (*original_tuple, path)
        return tuple_with_path
I am learning python and just can’t seem to come up with the solution. Hope you can help/suggest a change to the class or calling method. |
st180212 | You could e.g. create a custom find_classes method and pass it to DatasetFolder as described here 1. |
st180213 | Very new to python and pytorch. In the FindClasses example how do you pass that result of that to my ImageFolderWithPaths class? Basically how can I pass anything to the ImageFolderWithPaths class in addition to the folder? |
st180214 | One approach would be to write a custom Dataset by deriving from DatasetFolder and override the find_classes method. You could also alternatively create an ImageFolder dataset and try to reassign the new find_classes method to the internal method (haven’t checked this approach). |
st180215 | I built my own data loader to isolate files via a wildcard pattern in glob, then loop through those to create a tensor for each image and pass it to my model (which required it to be converted to a float). I extract the base name (image name) from the path (e.g. img_frame1.jpg) and save the result to my style folder. This method gives me total control over the files via the wildcard. I have included the other functions used in the solution. Note: I am not using a GPU to process these so I can run it on a standard Python web server. Hopefully this helps someone in the future. Simple is sometimes better.
# Load image file
# def load_image(path):
# # Images loaded as BGR
# img = cv2.imread(path)
# return img
# def itot(img, max_size=None):
# # Rescale the image
# if (max_size == None):
# itot_t = transforms.Compose([
# # transforms.ToPILImage(),
# transforms.ToTensor(),
# transforms.Lambda(lambda x: x.mul(255))
# ])
# else:
# H, W, C = img.shape
# image_size = tuple([int((float(max_size) / max([H, W])) * x) for x in [H, W]])
# itot_t = transforms.Compose([
# transforms.ToPILImage(),
# transforms.Resize(image_size),
# transforms.ToTensor(),
# transforms.Lambda(lambda x: x.mul(255))
# ])
#
# # Convert image to tensor
# tensor = itot_t(img)
#
# # Add the batch_size dimension
# tensor = tensor.unsqueeze(dim=0)
# return tensor
folder_data = glob.glob(folder_containing_the_content_folder + "content_folder/" + video_token + "_frame*.jpg")
# image_dataset = utils.ImageFolderWithPaths(folder_containing_the_content_folder, transform)
# image_loader = torch.utils.data.DataLoader(image_dataset, batch_size=batch_size)
# Load Transformer Network
net = transformer.TransformerNetwork()
net.load_state_dict(torch.load(style_path))
net = net.to(device)
torch.cuda.empty_cache()
with torch.no_grad():
    for image_name in folder_data:
        img = utils.load_image(image_name)
        img = img / 255.0  # standardize the data/transform
        img_tensor = utils.itot(img)
        # style image tensor
        generated_tensor = net(img_tensor.float())
        # convert the model-modified tensor back to an image
        generated_image = utils.ttoi(generated_tensor)
        image_name = os.path.basename(image_name)
        # save generated image to folder
        utils.saveimg(generated_image, save_folder + image_name) |
st180216 | Half of my project is written in C++ with a pybind nogil wrapper and an unpickleable C++ class, so I can only use a multi-threaded loader instead of a multi-process one.
Does pytorch implement multi-thread for loader? If not, why? |
st180217 | Solved by ejguan in post #5
General reason is multithreading is not fast in python. For the use case of DataLoader, most of users create Dataset using pure python code. Then, there won’t be any performance gain if we provide that.
But, we are working on new DataLoader-DataPipe, and hopefully we can provide multithreading Data… |
st180218 | No, because of the Python GIL which would block the threads and thus wouldn’t yield any speedup. |
st180219 | A C++ function with pybind11::gil_scoped_release can avoid the Python GIL problem and give an ultra-fast multi-threaded dataloader without pickle/IPC overhead, if all bottleneck functions in the dataloader code are written in C++. I think we should add an option to use thread workers. |
st180220 | FindDefinition:
if all bottleneck functions in dataloader code are written in c++
Would this mean that the standard approach of using e.g. ImageFolder or a custom Dataset wouldn’t be suitable for this approach?
If so, I think it might be a great extension (so feel free to create a feature request on GitHub for it and explain your interest in implementing it). |
st180221 | The general reason is that multithreading is not fast in Python. For the use case of DataLoader, most users create a Dataset using pure Python code, so there wouldn’t be any performance gain if we provided that.
But we are working on the new DataLoader-DataPipe, and hopefully we can provide a multithreading DataLoader for you in the near future. You can use that to call a C++ function without the Python GIL. |
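A minimal sketch of what a thread-based approach can look like today, outside the DataLoader (my own illustration; cpp_dataset and its get method are hypothetical pybind11-bound objects whose C++ implementation releases the GIL internally):
from concurrent.futures import ThreadPoolExecutor
import torch

def load_sample(i: int):
    return cpp_dataset.get(i)          # hypothetical C++ call, GIL released inside

with ThreadPoolExecutor(max_workers=8) as pool:
    indices = torch.randperm(len(cpp_dataset))
    for batch_idx in indices.split(32):
        # threads overlap only while the GIL is released in the C++ code
        samples = list(pool.map(load_sample, batch_idx.tolist()))
        batch = torch.stack(samples)
        # feed `batch` to the model here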
st180222 | Hello,
I am working on a CNN based classification.
I am using torchvision.ImageFolder to set up my dataset then pass to the DataLoader and feed it to
pretrained resnet34 model from torchvision.
I have a highly imbalanced dataset which hinders model performance.
Say ‘0’: 1000 images, ‘1’:300 images.
I know I have two broad strategies: work on resampling (data level) or on loss function(algorithm level).
I first tried to change the cross entropy loss to custom FocalLoss. But somehow I am getting even worse performance like below:
my training function looks like this can anybody tell me what I am missing out or doing wrong?
def train_model(model, data_loaders, dataset_sizes, device, n_epochs=20):
    optimizer = optim.Adam(model.parameters(), lr=0.0001)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
    loss_fn = FocalLoss().to(device)
    history = defaultdict(list)
    best_accuracy = 0
    for epoch in range(n_epochs):
        print(f'Epoch {epoch + 1}/{n_epochs}')
        print('-' * 10)
        train_acc, train_loss = train_epoch(
            model,
            data_loaders['train'],
            loss_fn,
            optimizer,
            device,
            scheduler,
            dataset_sizes['train']
        )
        print(f'Train loss {train_loss} accuracy {train_acc}')
        val_acc, val_loss = eval_model(
            model,
            data_loaders['val'],
            loss_fn,
            device,
            dataset_sizes['val']
        )
        print(f'Val loss {val_loss} accuracy {val_acc}')
        print()
        history['train_acc'].append(train_acc)
        history['train_loss'].append(train_loss)
        history['val_acc'].append(val_acc)
        history['val_loss'].append(val_loss)
        if val_acc > best_accuracy:
            torch.save(model.state_dict(), 'best_model_state.bin')
            best_accuracy = val_acc
    print(f'Best val accuracy: {best_accuracy}')
    model.load_state_dict(torch.load('best_model_state.bin'))
    return model, history
The custom FocalLoss function from the web looks like below (sorry, I forgot the reference):
class FocalLoss(nn.Module):
    # WC: alpha is weighting factor. gamma is focusing parameter
    def __init__(self, gamma=0, alpha=None, size_average=True):
    # def __init__(self, gamma=2, alpha=0.25, size_average=False):
        super(FocalLoss, self).__init__()
        self.gamma = gamma
        self.alpha = alpha
        if isinstance(alpha, (float, int)): self.alpha = torch.Tensor([alpha, 1 - alpha])
        if isinstance(alpha, list): self.alpha = torch.Tensor(alpha)
        self.size_average = size_average

    def forward(self, input, target):
        if input.dim() > 2:
            input = input.view(input.size(0), input.size(1), -1)  # N,C,H,W => N,C,H*W
            input = input.transpose(1, 2)                          # N,C,H*W => N,H*W,C
            input = input.contiguous().view(-1, input.size(2))     # N,H*W,C => N*H*W,C
        target = target.view(-1, 1)
        logpt = F.log_softmax(input, dim=1)
        logpt = logpt.gather(1, target)
        logpt = logpt.view(-1)
        pt = logpt.exp()
        if self.alpha is not None:
            if self.alpha.type() != input.data.type():
                self.alpha = self.alpha.type_as(input.data)
            at = self.alpha.gather(0, target.data.view(-1))
            logpt = logpt * at
        loss = -1 * (1 - pt)**self.gamma * logpt
        if self.size_average: return loss.mean()
        else: return loss.sum() |
st180223 | Solved by mMagmer in post #2
Take a look at the original paper for more insight.
For focal loss:
" In practice α may be set by inverse class frequency or treated as a hyperparameter to set by cross validation"
Also, your problem is not highly imbalance. You can use weighted cross entropy and get a good performance. |
st180224 | Take a look at the original paper 1 for more insight.
For focal loss:
" In practice α may be set by inverse class frequency or treated as a hyperparameter to set by cross validation"
Also, your problem is not highly imbalance. You can use weighted cross entropy 1 and get a good performance. |
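A small sketch of the weighted-cross-entropy suggestion, using the class counts from the question ('0': 1000 images, '1': 300 images); the inverse-frequency weighting shown here is one common choice, and device is the same device used in the training function above:
import torch
import torch.nn as nn

counts = torch.tensor([1000., 300.])
weights = counts.sum() / (len(counts) * counts)   # tensor([0.6500, 2.1667]) - rarer class weighted higher
loss_fn = nn.CrossEntropyLoss(weight=weights.to(device))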
st180225 | As far as I know, the random transformations (e.g. random crop, random resized crop, etc.) from torchvision.transforms module apply the same transformations to all the images of a given batch.
Is there any efficient way to apply different random transformations for each image in a given mini-batch?
Thanks in advance. |
st180226 | Solved by ptrblck in post #2
I think you would need to apply the random transformations on each sample and could use e.g. transforms.Lambda for it:
# setup
x = torch.zeros(3, 10, 10)
x[:, 3:7, 3:7] = 1.
# same transformation on each sample
transform = transforms.RandomCrop(size=5)
y1 = transform(x)
print(y1)
# apply transfor… |
st180227 | I think you would need to apply the random transformations on each sample and could use e.g. transforms.Lambda for it:
# setup
x = torch.zeros(3, 10, 10)
x[:, 3:7, 3:7] = 1.
# same transformation on each sample
transform = transforms.RandomCrop(size=5)
y1 = transform(x)
print(y1)
# apply transformation per sample
y2 = transforms.Lambda(lambda x: torch.stack([transform(x_) for x_ in x]))(x)
print(y2) |
st180228 | Hello, I am trying to define a Dataset class which is supposed to handle datasets with and without labels. But I am getting an error:
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} AttributeError: 'list' object has no attribute 'items'
code
# Create torch dataset
class Dataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels=None):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        if self.labels:
            item["labels"] = torch.tensor(self.labels[idx])
        # print(item)
        return item

    def __len__(self):
        print(len(self.encodings["input_ids"]))
        return len(self.encodings["input_ids"])
# prepare data for classification
tokenizer = FlaubertTokenizer.from_pretrained(model_name)
print("Transform xml file to pandas series core...")
text, file_name = transform_xml_to_pd(file)  # transform xml file to pd
# Xtest_emb, s = get_flaubert_layer(Xtest['sent'], path_to_model_lge)  # index 2 correspond to sentences
# print(text)
print("Preprocess text with spacy model...")
clean_text = make_new_traindata(text['sent'])
# print(clean_text[1])  # clean text ; 0 = raw text ; and etc...
X = list(clean_text)
X_text_tokenized = []
for x in X:
    # print(type(x))
    x_encoded = tokenizer(str(x), padding="max_length", truncation=True, max_length=512)
    # print(type(x_encoded))
    # print(x_encoded)
    X_text_tokenized.append(x_encoded)
# print(type(X_text_tokenized))
X_data = Dataset(X_text_tokenized)
print(type(X_data))
print(X_data['input_ids'])
Error:
File "/scriptTraitements/classifying.py", line 153, in __getitem__
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
AttributeError: 'list' object has no attribute 'items'
Any idea?
Furthermore, how can I access an element of the corpus passed to the Dataset class? When printing it, I only get this: <__main__.Dataset object at 0x7fc93bec1df0> |
st180229 | self.encodings is a list (of dictionaries) and thus it does not have a .items() function. Taking a guess that X_text_tokenized contains all data and we need to index it to get one item, I think you need to update
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
to
item = {key: torch.tensor(val) for key, val in self.encodings[idx].items()} |
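Putting the fix into the full class from the question, a sketch of the corrected version (note the dataset is then indexed with an integer, e.g. X_data[0], rather than with a string key):
import torch

class Dataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels=None):
        self.encodings = encodings          # list of dicts returned by the tokenizer
        self.labels = labels

    def __getitem__(self, idx):
        # index the list first, then build tensors from that single encoding
        item = {key: torch.tensor(val) for key, val in self.encodings[idx].items()}
        if self.labels:
            item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.encodings)

X_data = Dataset(X_text_tokenized)
print(X_data[0]["input_ids"])               # access a single encoded sample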
st180230 | I create a neural network, which should search the image for objects (faces away from the camera). In fact I need 4 coordinates, for the example I attach the image in which these coordinates are marked. There is a good set of transformers in pytorch, but the problem is that if the transformation is related to image distortion, you need to change the target respectively, otherwise the coordinates will not coincide with the desired points. What is the best way to do this? Maybe PyTorch already has tools implemented for such purposes?
This is what the target looks like for the image below - it is a set of coordinates
[[1253 368]
[1251 589]
[1386 579]
[1368 800]]
Translated with www.DeepL.com/Translator (free version)
[Screenshot 2021-12-09: the example image with the four marked coordinates, 720×408] |
st180231 | you can use :
albumentations.ai
Albumentations Documentation - Using Albumentations to augment keypoints
Albumentations: fast and flexible image augmentations |
st180232 | Thank you. That’s almost what I need, except for one thing. It doesn’t have elastic transformations, which are usually the most needed. More precisely, they are present as an option, but do not support points. Perhaps you know a package which implements this feature with point support? |
st180233 | I don’t know.
Maybe you can find a solution here or in this code.
Also, elastic transform doesn’t change pixel locality by much, so maybe you can use the original key points. |
st180234 | Hi guys,
I am absolutely new here and therefore have little or no experience with PyTorch.
I would like someone to be able to tell me the best way to use a data array from Matlab as a data set, preferably with a normalization of the data, e.g. in the range [0, 1].
Here a little more detail about my data:
I have a two-dimensional array, with the rows corresponding to frames from several spectrograms. The size of the array is 60000 x 1025, i.e. 60000 frames with 1025 frequency bins. Since I use an auto encoder, my input data is also my target data.
I already know that I can use mat73.loadmat (path) to read my Matlab data as an array in Python. Now the only question is how can I create a normalized dataset from this?
I hope you guys can help me.
Thanks in advance |
st180235 | Solved by mMagmer in post #2
if you are loading your data to a numpy array, do as follow:
dataset = torch.from_numpy(data)
and if your data is loading to python list:
dataset = torch.tensor(data)
for data normalization:
dMin = (dataset.min(0,keepdim=True))[0]
dMax = (dataset.max(0,keepdim=True))[0]
NDataset = (dataset - d… |
st180236 | if you are loading your data to a numpy array, do as follow:
dataset = torch.from_numpy(data)
and if your data is loading to python list:
dataset = torch.tensor(data)
for data normalization:
dMin = (dataset.min(0,keepdim=True))[0]
dMax = (dataset.max(0,keepdim=True))[0]
NDataset = (dataset - dMin)/(dMax - dMin)
then you need to define data loader:
dl = torch.utils.data.DataLoader(NDataset,batch_size=100,shuffle=True)
for i in range(epoch):
    for x in dl:
        # do something with it. |
st180237 | Solved by ptrblck in post #4
I’m not sure if this issue might be Windows-specific, but could you update PyTorch to the latest nightly release and check, if this might have been an already known and fixed issue, please? |
st180238 | I cannot reproduce the issue in a recent master build and am able to save a tensor of e.g. 16GB using:
>>> import torch
>>> x = torch.randn(int(4*1024**3), device='cuda')
>>> print(torch.cuda.memory_allocated()/1024**3)
16.0
>>> torch.save(x, 'tmp.pt')
>>> y = torch.load('tmp.pt')
>>> print(y.shape)
torch.Size([4294967296]) |
st180239 | I used the same operation as you, but an error occurred when I load the built tensor.
[screenshot of the error message] |
st180240 | I’m not sure if this issue might be Windows-specific, but could you update PyTorch to the latest nightly release and check, if this might have been an already known and fixed issue, please? |
st180241 | Is there a way to get std::shared_ptr<torch::jit::Graph>& graph
from torch::jit::script::Module
I exported a module in python using:
torchscript_trace = torch.jit.trace(my_module, example_inputs)
torchscript_trace = torch.jit.freeze(torchscript_trace.eval())
torchscript_trace.save('model.pt')
and wanted to get the graph object in C++ using:
torch::jit::script::Module module;
module = torch::jit::load("./model.pt"); |
st180242 | Hello,
I’m trying to export a model in onnx and to run it with TensorRT.
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath
import torch


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.x2 = torch.zeros((2048, 1)).cuda()

    def forward(self, x1):
        x2 = self.x2
        idx = x2 < x1
        x1[idx] = x2[idx]
        return x1


if __name__ == '__main__':
    onnx_file = 'test.onnx'
    model = Model()
    x = torch.zeros((2048, 1)).cuda()
    torch.onnx.export(model, x, onnx_file, input_names=['input'], output_names=['output'], opset_version=11)
    build_engine = EngineFromNetwork(NetworkFromOnnxPath(onnx_file))
    engine = build_engine()
While the above model is correctly exported as an onnx, TensorRT has trouble parsing it:
[02/02/2022-17:38:36] [TRT] [E] ModelImporter.cpp:773: While parsing node number 3 [NonZero -> "4"]:
[02/02/2022-17:38:36] [TRT] [E] ModelImporter.cpp:774: --- Begin node ---
[02/02/2022-17:38:36] [TRT] [E] ModelImporter.cpp:775: input: "3"
output: "4"
name: "NonZero_3"
op_type: "NonZero"
[02/02/2022-17:38:36] [TRT] [E] ModelImporter.cpp:776: --- End node ---
[02/02/2022-17:38:36] [TRT] [E] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[E] In node 3 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
I’d like to rewrite the code in such a way that this NonZero operation is not present in the onnx graph, is that possible?
This is the onnx graph:
graph(%input : Float(2048, 1, strides=[1, 1], requires_grad=0, device=cuda:0),
%23 : Long(1, strides=[1], requires_grad=0, device=cpu),
%24 : Long(1, strides=[1], requires_grad=0, device=cpu)):
%1 : Float(2048, 1, strides=[1, 1], requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]() # C:/Users/arosasco/PycharmProjects/pcr/delete2.py:13:0
%2 : Float(2048, 1, strides=[1, 1], requires_grad=0, device=cuda:0) = onnx::Constant[value=<Tensor>]()
%3 : Bool(2048, 1, strides=[1, 1], requires_grad=0, device=cuda:0) = onnx::Less(%2, %input) # C:/Users/arosasco/PycharmProjects/pcr/delete2.py:13:0
%4 : Long(2, *, device=cpu) = onnx::NonZero(%3)
%5 : Long(*, 2, device=cpu) = onnx::Transpose[perm=[1, 0]](%4)
%6 : Float(*, strides=[1], requires_grad=0, device=cuda:0) = onnx::GatherND(%1, %5) # C:/Users/arosasco/PycharmProjects/pcr/delete2.py:14:0
%7 : Long(2, strides=[1], device=cpu) = onnx::Shape(%input)
%8 : Bool(2048, 1, device=cpu) = onnx::Expand(%3, %7)
%9 : Long(2, *, device=cpu) = onnx::NonZero(%8)
%10 : Long(*, 2, device=cpu) = onnx::Transpose[perm=[1, 0]](%9)
%11 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={-1}]()
%12 : Float(*, device=cpu) = onnx::Reshape(%6, %11)
%13 : Long(2, strides=[1], device=cpu) = onnx::Shape(%10)
%14 : Long(device=cpu) = onnx::Constant[value={0}]()
%15 : Long(device=cpu) = onnx::Gather[axis=0](%13, %14)
%18 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0]](%15)
%21 : Float(*, device=cpu) = onnx::Slice(%12, %23, %18, %24)
%output : Float(2048, 1, strides=[1, 1], requires_grad=0, device=cuda:0) = onnx::ScatterND(%input, %10, %21) # C:/Users/arosasco/PycharmProjects/pcr/delete2.py:14:0
return (%output)
It looks like the problem is around lines 13 and 14 of the above scripts:
idx = x2 < x1
x1[idx] = x2[idx]
I’ve tried to change the first line with torch.zeros_like(x1).to(torch.bool) but the problem persists so I’m thinking the issue is with the second one.
I have no clue on how to solve it, can anyone help? |
st180243 | Solved by Andrea_Rosasco in post #2
Apparently changing it with torch.where(x2 < x1, x2, x1) solved the problem. |
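For reference, a sketch of the rewritten forward based on that fix (same Model as above, with the boolean-mask indexing replaced by torch.where so the exported graph no longer needs NonZero/GatherND/ScatterND):
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.x2 = torch.zeros((2048, 1)).cuda()

    def forward(self, x1):
        # elementwise: take x2 where x2 < x1, otherwise keep x1
        return torch.where(self.x2 < x1, self.x2, x1)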
st180244 | I’m trying to save a serialized TensorRT-optimized model using torch_tensorrt from one environment and then load it in another environment (different GPUs: one has a Quadro M1000M, and another has a Tesla P100).
In both environments I don’t have full sudo control where I can install whatever I want (i.e. can’t change nvidia driver), but I am able to install different cuda toolkits locally, same with pip installs with wheels.
I have tried:
env #1 =
Tesla P100,
Nvidia driver 460,
CUDA 11.3 (checked via torch.version.cuda). nvidia-smi shows 11.2. has many cuda versions installed from 10.2 to 11.4
CuDNN 8.2.1.32
TensorRT 8.2.1.8
Torch_TensorRT 1.0.0
Pytorch 1.10.1+cu113 (conda installed)
env #2 =
Quadro M1000M
Nvidia driver 455
CUDA 11.3(checked via torch.version.cuda, backwards compatibilty mode I believe, but technically 11.3 requires 460+ nvidia driver according to the compatibility table). nvidia-smi shows 11.1. has 10.2 version available aside from 11.3 I installed.
CuDNN 8.2.1.32
TensorRT 8.2.1.8
Torch_TensorRT 1.0.0
Pytorch 1.10.1+cu113 (pip installed)
So as you can see the only difference is really the GPU and the NVIDIA driver (455 vs 460).
Is this supposed to work?
On env#1, I can torch_tensorrt compile any models
On env#2, I run into issues if I try to compile any slightly complex models (i.e. resnet34) where it says:
WARNING: [Torch-TensorRT] - Dilation not used in Max pooling converter
WARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuBLAS/cuBLAS LT 11.6.3 but loaded cuBLAS/cuBLAS LT 11.5.1
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 1: [wrapper.cpp::plainGemm::197] Error Code 1: Cublas (CUBLAS_STATUS_NOT_SUPPORTED)
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
If I try to “torch.jit.load” any model made in env #1 (even the simplest ones like a model with 1 conv2d layer) on env #2, I get the following error msg:
~/.local/lib/python3.6/site-packages/torch/jit/_serialization.py in load(f, map_location, _extra_files)
159 cu = torch._C.CompilationUnit()
160 if isinstance(f, str) or isinstance(f, pathlib.Path):
→ 161 cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
162 else:
163 cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: [Error thrown at core/runtime/TRTEngine.cpp:44] Expected most_compatible_device to be true but got false
No compatible device was found for instantiating TensorRT engine
Environment
Explained above |
st180245 | Based on the raised warnings and errors, I would recommend to create an issue in the Torch-TensorRT GitHub repository so that the devs could try to debug the issues. |
st180246 | Thanks, I created it here in case anyone wants to follow the issue:
github.com/NVIDIA/Torch-TensorRT
❓ [Question] Trying to find compatible versions between two different environments (opened Feb 1, 2022 by hanbrianlee, labelled "question") — the issue body repeats the environment details and error messages quoted above. |
st180247 | Hi all,
I guess I could have titled my topic: “Can torch.jit be used for the same purpose as numba’s njit or is it something strictly used to optimize models for inference?”, but this sounded a bit too long.
I am trying to understand why this torch jitted function is so very slow. I think this code snippet is better than any attempt to explain it:
import timeit
import torch


def if_max_torch_base(tensor: torch.Tensor, threshold=torch.Tensor) -> torch.Tensor:
    "Just the base pytorch way"
    return torch.any(tensor > threshold)


@torch.jit.script
def if_max_torch_jit(tensor: torch.Tensor, threshold: torch.Tensor) -> torch.Tensor:
    "Attempt at exploiting torch.jit"
    for x in tensor:
        if x > threshold:
            return torch.tensor(True)  # no need to test further, let's just return early
    return torch.tensor(False)


if_max_torch_jit = torch.jit.trace(if_max_torch_jit, (torch.rand(5), torch.tensor(0.5)))


def main():
    tensor = torch.linspace(0, 1, 1_000_000, device="cuda")
    for name, func in (
        ["base", if_max_torch_base],
        ["jit", if_max_torch_jit],
    ):
        for threshold in (0.5, 1.0):
            print(name, threshold)
            t = torch.tensor(threshold, device="cuda")
            timer = timeit.Timer(lambda: func(tensor, t))
            print(timer.repeat(repeat=3, number=100))


main()
And this is the output:
base 0.5
[0.0002641710452735424, 0.00016550999134778976]
base 1.0
[0.00017875991761684418, 0.00016467086970806122]
jit 0.5
[70.17099338211119, 71.27563373814337]
jit 1.0
[139.18801530217752, 139.25591901200823]
Why does this jitted function have abysmal performance? Am I just completely misusing torch.jit? Can torch.jit even be used for this purpose?
This is what if_max_torch_jit.code returns:
def if_max_torch_jit(tensor: Tensor,
    threshold: Tensor) -> Tensor:
  _0 = uninitialized(Tensor)
  _1 = torch.len(tensor)
  _2 = False
  _3 = _0
  _4 = 0
  _5 = torch.gt(_1, 0)
  while _5:
    x = torch.select(tensor, 0, _4)
    if bool(torch.gt(x, threshold)):
      _6, _7, _8 = False, True, torch.tensor(True)
    else:
      _6, _7, _8 = True, False, _0
    _9 = torch.add(_4, 1)
    _5, _2, _3, _4 = torch.__and__(torch.lt(_9, _1), _6), _7, _8, _9
  if _2:
    _10 = _3
  else:
    _10 = torch.tensor(False)
  return _10
This seems overly convoluted, but I am not familiar with low-level programming at all…
Thanks for reading me and thanks in advance for educating me. |
st180248 | Hello, good morning. I am relatively new to Pytorch but liking it a lot! I have a BERT classification model that I am able to save, load, and run inference on locally, additionally the model is trained using nvidia CUDA GPU. Torch version is 1.10.0+cu102 on Windows 10 64 bit .Up to this point the model loads fine
model.to(device)
model.load_state_dict(torch.load('MyModelPath/MyModel15.model', map_location=torch.device('cpu')))
Here is where the problem starts for me
scripted_model = torch.jit.script(model)
throws this error message
Traceback (most recent call last):
File “C:\Users\javedh\lib\site-packages\IPython\core\interactiveshell.py”, line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File “”, line 1, in
scripted_model = torch.jit.script(model)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit_script.py”, line 1257, in script
return torch.jit._recursive.create_script_module(
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit_recursive.py”, line 451, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit_recursive.py”, line 464, in create_script_module_impl
property_stubs = get_property_stubs(nn_module)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit_recursive.py”, line 778, in get_property_stubs
properties_asts = get_class_properties(module_ty, self_name=“RecursiveScriptModule”)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 161, in get_class_properties
getter = get_jit_def(prop[1].fget, f"__{prop[0]}_getter", self_name=self_name)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 264, in get_jit_def
return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 315, in build_def
build_stmts(ctx, body))
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 137, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 137, in
stmts = [build_stmt(ctx, s) for s in stmts]
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 287, in call
return method(ctx, node)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 550, in build_Return
return Return(r, None if stmt.value is None else build_expr(ctx, stmt.value))
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 287, in call
return method(ctx, node)
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 702, in build_Call
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 702, in
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File “C:\Users\javedh\AppData\Roaming\Python\Python39\site-packages\torch\jit\frontend.py”, line 286, in call
raise UnsupportedNodeError(ctx, node)
torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren’t supported:
File “C:\Users\javedh\lib\site-packages\transformers\modeling_utils.py”, line 987
activations".
“”"
return any(hasattr(m, “gradient_checkpointing”) and m.gradient_checkpointing for m in self.modules())
I am not sure how to make sense of it in a meaningful way, it seems like a package support issue, but I could be wrong. I would appreciate any assistance in understanding this issue.
Thank you very much! |
st180249 | The error points to:
torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren’t supported
which is raised due to the used generator expression as seen in this minimal code snippet:
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        a = any(i for i in range(10))
        return x

model = MyModel()
x = torch.randn(1, 1)
out = model(x)
scripted = torch.jit.script(model)
# > UnsupportedNodeError: GeneratorExp aren't supported |
st180250 | Thank you so much ptrblck, any recommendations on how I can address this issue?
thanks again
Haris |
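One possible direction, sketched from the repro above (my own suggestion, not verified against the full BERT model): TorchScript rejects generator expressions, so in your own code they can be rewritten as explicit loops; since the offending expression here lives inside the transformers library rather than in user code, tracing with torch.jit.trace instead of scripting is another commonly used route for Hugging Face models.
import torch
from torch import nn

class MyModel(nn.Module):
    def forward(self, x):
        # explicit loop instead of any(i for i in range(10))
        found = False
        for i in range(10):
            if i > 0:
                found = True
        return x

scripted = torch.jit.script(MyModel())   # scripts without the GeneratorExp error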
st180251 | I tried spatial pyramid pooling (SPP) in one of my research work, which required to save the model in lite version so that it can be used by android app.
Jit Script:
from torch.utils.mobile_optimizer import optimize_for_mobile
traced_script_module = torch.jit.script(model)
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
traced_script_module_optimized._save_for_lite_interpreter ("Model_PyTorchCNN_forMobileV10_Spatial.ptl")
Ref of SPP: https://github.com/revidee/pytorch-pyramid-pooling
But I am getting the following error:
RuntimeError:
_pad(Tensor input, int[] pad, str mode="constant", float value=0.) -> (Tensor):
Expected a value of type 'float' for argument 'value' but instead found type 'int'.
:
File "<ipython-input-9-030eeb1a6a5a>", line 47
assert w_pad1 + w_pad2 == (w_kernel * levels[i] - previous_conv_size[1]) and h_pad1 + h_pad2 == (h_kernel * levels[i] - previous_conv_size[0])
padded_input = F.pad(input=previous_conv, pad=[w_pad1, w_pad2, h_pad1, h_pad2],
~~~~~ <--- HERE
mode='constant', value=0)
if mode == "max":
'spatial_pyramid_pool' is being compiled since it was called from 'SpatialPyramidPooling.forward'
File "<ipython-input-10-52240195b8a3>", line 18
def forward(self, x):
return self.spatial_pyramid_pool(x, self.levels, self.mode)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
On the following block of code:
def spatial_pyramid_pool(previous_conv, levels, mode):
    """
    Static Spatial Pyramid Pooling method, which divides the input Tensor vertically and horizontally
    (last 2 dimensions) according to each level in the given levels and pools its value according to the given mode.
    :param previous_conv input tensor of the previous convolutional layer
    :param levels defines the different divisions to be made in the width and height dimension
    :param mode defines the underlying pooling mode to be used, can either be "max" or "avg"
    :returns a tensor vector with shape [batch x 1 x n],
        where n: sum(filter_amount*level*level) for each level in levels
        which is the concentration of multi-level pooling
    """
    num_sample = previous_conv.size(0)
    previous_conv_size = [int(previous_conv.size(2)), int(previous_conv.size(3))]
    for i in range(len(levels)):
        h_kernel = int(math.ceil(previous_conv_size[0] / levels[i]))
        w_kernel = int(math.ceil(previous_conv_size[1] / levels[i]))
        w_pad1 = int(math.floor((w_kernel * levels[i] - previous_conv_size[1]) / 2))
        w_pad2 = int(math.ceil((w_kernel * levels[i] - previous_conv_size[1]) / 2))
        h_pad1 = int(math.floor((h_kernel * levels[i] - previous_conv_size[0]) / 2))
        h_pad2 = int(math.ceil((h_kernel * levels[i] - previous_conv_size[0]) / 2))
        assert w_pad1 + w_pad2 == (w_kernel * levels[i] - previous_conv_size[1]) and h_pad1 + h_pad2 == (h_kernel * levels[i] - previous_conv_size[0])
        padded_input = F.pad(input=previous_conv, pad=[w_pad1, w_pad2, h_pad1, h_pad2],
                             mode='constant', value=0.0)
        if mode == "max":
            pool = nn.MaxPool2d((h_kernel, w_kernel), stride=(h_kernel, w_kernel), padding=(0, 0))
        elif mode == "avg":
            pool = nn.AvgPool2d((h_kernel, w_kernel), stride=(h_kernel, w_kernel), padding=(0, 0))
        else:
            raise RuntimeError("Unknown pooling type: %s, please use \"max\" or \"avg\".")
        x = pool(padded_input)
        if i == 0:
            spp = x.view(num_sample, -1)
        else:
            spp = torch.cat((spp, x.view(num_sample, -1)), 1)
    return spp
If I change the value of “value = 0” to “value = 0.0”, I am getting the following error:
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'other' but instead found type 'str'. |
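A hedged guess at the remaining error (my own note, not a reply from the thread): TorchScript treats unannotated function arguments as Tensors, so in spatial_pyramid_pool the comparison mode == "max" compiles to a Tensor-vs-string aten::eq, which matches the second error message. Annotating the signature (together with the value=0.0 change already made) should let both checks pass:
from typing import List
import torch

def spatial_pyramid_pool(previous_conv: torch.Tensor, levels: List[int], mode: str) -> torch.Tensor:
    # body unchanged from the function above
    ...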
st180252 | Hi,
I have some recommendation models trained and torchscripted, which I’d like to use for model serving.
I’m using DJL which is a java library that loads torchscript models through JNI. Given I’m not doing much on the java side, I suppose the performance characteristics is largely dependently on the torchscript jit runtime.
Now my questions are:
How should one typically tune inter-op and intra-op threads to get best performance in terms of throughput and tail latency?
When I was working with tensorflow I used to set inter-op threads to the number of available cpus and intra-op threads to 1, which gives enough parallelism to handle a large number of requests. When I tried the same with torchscript model, unfortunately requests start queuing before even able to hit 50% cpu utilization. Is this expected?
Are there any steps that should be taken to process the torchscript model before using it for inference?
For example, I noticed torch.jit.freeze, but haven’t really tried it. How much performance gain can I typically expect by applying it?
Are there any other knobs to tune for cpu inference performance?
Apart from what’s mentioned above, what other knobs should one typically try to adjust for optimal performance?
Thanks! |
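For the knobs mentioned above, a short sketch of where they are set on the Python side before exporting or benchmarking (illustrative values, not a tuned configuration; thread counts must be set before the first parallel work, and torch.jit.optimize_for_inference is available in recent releases):
import torch

torch.set_num_interop_threads(4)      # inter-op pool, set once before any parallel work
torch.set_num_threads(1)              # intra-op threads per op

model = torch.jit.load("model.pt").eval()
model = torch.jit.freeze(model)                    # folds weights/attributes into the graph
model = torch.jit.optimize_for_inference(model)    # applies inference-only graph rewrites
How much freezing and these graph rewrites help is very workload-dependent, so measuring throughput and tail latency on the actual model is the only reliable answer.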
st180253 | I am trying to run the following code
import torch

@torch.jit.script
def foo(x: torch.Tensor, y: bool):
    return x & y

print(foo.graph)
But ran into an error
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::__and__.Scalar(Tensor self, Scalar other) -> (Tensor):
Expected a value of type 'number' for argument 'other' but instead found type 'bool'.
aten::__and__.Tensor(Tensor self, Tensor other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'other' but instead found type 'bool'.
aten::__and__.bool(bool a, bool b) -> (bool):
Expected a value of type 'bool' for argument 'a' but instead found type 'Tensor'.
aten::__and__.int(int a, int b) -> (int):
Expected a value of type 'int' for argument 'b' but instead found type 'bool'.
The original call is:
File "bool_torchscript.py", line 5
@torch.jit.script
def foo(x: torch.Tensor, y: bool):
return x & y
~~~~~ <--- HERE
Does this mean that scalar cannot be of bool type and has to be int/float only? |
st180254 | Hi,
I have a torch.jit.script module saved in Python and I load it in C++ for training. There are certain parameters which are passed to the init() of this module class, which I understand, are a part of the torch.jit.script module. Is there any way of changing these module class parameters after loading the module in CPP?
Note: If I create an api for modifying a class parameter, and if that parameter is being used by the forward api, then I get a runtime error while creating a torch.jit.script module of that class. |
st180255 | I saw some transfer-learning Libtorch codes using pretrained torchscript models, and all of them only train additional layers added in the Libtorch code.
(e.g) Transfer-Learning-Dogs-Cats-Libtorch/main.cpp at master · krshrimali/Transfer-Learning-Dogs-Cats-Libtorch · GitHub)
My question is: can we train the torchscript model without defining the network model in Libtorch?
I think the torchscript model already contains the model’s weights, and its structure is similar to torch::nn::Module (it is actually an instance of a torch::jit::Module object), so I assume it’s not impossible… but I still can’t find the way.
Best Regards |
st180256 | Solved by hshwang in post #2
Solved from this issue.
How to train a torch::jit::script::Module? · Issue #28478 · pytorch/pytorch (github.com) |
st180257 | Solved from this issue.
How to train a torch::jit::script::Module? · Issue #28478 · pytorch/pytorch (github.com) |
st180258 | Hello,
We have a customized model trained by YoloV5, and the default extension save format is .pt. I wonder if there is an appropriate method to convert this model into .ptl model file so that we can deploy it on mobile.
We tried the tutorial (Prototype) Introduce lite interpreter workflow in Android and iOS — PyTorch Tutorials 1.9.0+cu102 documentation, but it didn’t work. The code we used is listed below
import torch
from yolov5.models.experimental import attempt_load
path_to_weight_file = '/path/to/our_own_model.pt'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# setup model
model = attempt_load(path_to_weight_file, map_location=device)
model.eval()
scripted_module = torch.jit.script(model)
scripted_module._save_for_lite_interpreter("scripted.ptl")
We got the error:
Traceback (most recent call last):
File "convertToPTL.py", line 24, in <module>
scripted_module = torch.jit.script(model)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_script.py", line 942, in script
return torch.jit._recursive.create_script_module(
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 391, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_script.py", line 1106, in _recursive_compile_class
_compile_and_register_class(obj, rcb, _qual_name)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/_script.py", line 65, in _compile_and_register_class
ast = get_jit_class_def(obj, obj.__name__)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/frontend.py", line 173, in get_jit_class_def
methods = [get_jit_def(method[1],
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/frontend.py", line 173, in <listcomp>
methods = [get_jit_def(method[1],
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/frontend.py", line 271, in get_jit_def
return build_def(ctx, fn_def, type_line, def_name, self_name=self_name)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/frontend.py", line 293, in build_def
param_list = build_param_list(ctx, py_def.args, self_name)
File "/Users/yin/Library/Python/3.8/lib/python/site-packages/torch/jit/frontend.py", line 320, in build_param_list
raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/warnings.py", line 477
def __exit__(self, *exc_info):
~~~~~~~~~ <--- HERE
if not self._entered:
raise RuntimeError("Cannot exit %r without entering first" % self)
'__torch__.warnings.catch_warnings' is being compiled since it was called from 'SPPF.forward'
File "/Users/yin/Desktop/real-time parking sign detection/yolov5/models/common.py", line 191
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
y1 = self.m(x) |
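A hedged alternative sketch (not verified against this particular checkpoint): the failure above comes from scripting the warnings.catch_warnings block inside the model code, and tracing avoids compiling that Python control flow entirely. The input shape is an assumption and should match the size the model was trained for.
from torch.utils.mobile_optimizer import optimize_for_mobile

example = torch.rand(1, 3, 640, 640).to(device)
traced = torch.jit.trace(model, example)
traced_opt = optimize_for_mobile(traced)
traced_opt._save_for_lite_interpreter("scripted.ptl")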
st180259 | I have a module that is running fine on one computer, but on another computer it throws this error
File "/home/guillefix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: __nv_nvrtc_builtin_header.h(78048): error: function "operator delete(void *, size_t)" has already been defined
__nv_nvrtc_builtin_header.h(78049): error: function "operator delete[](void *, size_t)" has already been defined
2 errors detected in the compilation of "default_program".
nvrtc compilation failed:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}
template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}
extern "C" __global__
void fused_tanh_mul_add__14203776399538843293(float* tv_, float* tb0_2, float* tv__, float* tv___, float* tv____, float* aten_log, float* aten_cat) {
{
if ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)<8ll ? 1 : 0) {
aten_cat[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = (((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)<4ll ? 1 : 0) ? (__ldg(tv__ + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))) / (1.f / (1.f + (expf(0.f - ((__ldg(tv___ + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))) * (tanhf(__ldg(tv____ + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)))) + 2.f)))) + 0.1192029193043709f) - (__ldg(tb0_2 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x))) : __ldg(tv_ + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) - 4ll));
}if ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)<4ll ? 1 : 0) {
float v = __ldg(tv___ + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x));
float v_1 = __ldg(tv____ + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x));
aten_log[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = logf(1.f / (1.f + (expf(0.f - (v * (tanhf(v_1)) + 2.f)))) + 0.1192029193043709f);
}}
}
The computer on which it works has CUDA 11.4 and torch 1.10.0a0+git36449ea, and the computer on which it doesn't work has CUDA 11.5 and torch 1.10.1+cu102 (though I also tried 1.10.0+cu102). So it seems like quite a similar setup. Any idea why it's failing? |
st180260 | Solved by guillefix in post #2
It worked after updating the NVIDIA CUDA Toolkit to version 11.6! |
st180261 | I use gdb to debug PyTorch, but I can't step into functions or see their detailed bodies, and the backtrace is discontinuous.
[screenshot of the gdb session showing the truncated backtrace] |
st180262 | To get more information in gdb rebuild PyTorch with debug symbols via DEBUG=1 python setup.py install. |
st180263 | As shown above, debug symbols seem to be missing. When I set a breakpoint inside a function body such as runProfilingOptimizations (which is called from ProfilingGraphExecutorImpl::getOptimizedPlanFor), the debugger hits the breakpoint. When I type si in gdb it also prints ProfilingGraphExecutorImpl::runProfilingOptimizations, but when I type s it jumps out of the function and prints ProfilingGraphExecutorImpl::getOptimizedPlanFor. @ptrblck |
st180264 | I don’t know why a debug build is not working as I haven’t seen this behavior before (i.e. they work in my setup, but I also haven’t stepped into the mentioned methods). |
st180265 | Hello folks -
I want some advice on error handling with TorchScript code.
Here is a sample code snippet that validates the user input in forward() and raises ValueError for invalid inputs; I serialize this code using TorchScript:
import torch
from torch import nn
class Foo(nn.Module):
    def forward(self, user_input_length: int):
        if user_input_length > 512:
            raise ValueError(f" Invalid user input. Max length allowed 512")
foo_ts = torch.jit.script(Foo())
torch.jit.save(foo_ts, "foo_ts.pt")
Sharing the torchscript file with the client who loads it in their environment.
foo_ts = torch.jit.load("foo_ts.pt")
foo_ts(198) # Valid Input
foo_ts(513) # Invalid Input - Let the user know
RuntimeError seen by the client when they pass an invalid user input
Traceback (most recent call last):
File "ts_exception.py", line 11, in <module>
foo_ts(513)
File "/home/anjch/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
torch.jit.Error: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "ts_exception.py", line 8, in forward
def forward(self, user_input_length: int):
if user_input_length > 512:
raise ValueError(f" Invalid user input. Max length allowed 512")
~~~~~~~~~~~~~~~~~~ <--- HERE
RuntimeError: Invalid user input. Max length allowed 512
With the current behavior, even though the original code raises a ValueError, the client side always receives a RuntimeError. Because of this, the client cannot programmatically distinguish an actual invalid-input error from other runtime errors (unless they parse the error message string), e.g. something like:
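(Just a sketch of that string-matching fallback; the substring check mirrors the message raised in forward and is not an official API.)
try:
    foo_ts(513)
except torch.jit.Error as e:  # torch.jit.Error is what the client actually sees
    if "Invalid user input" in str(e):
        raise ValueError(str(e)) from e
    raise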
Are there any suggestions on how to handle it gracefully where the client can distinguish between input errors vs other errors? |
st180266 | The following code works fine in regular Pytorch
def pad_audio(self, tensor, n_max=50000):
    diff_pad = n_max - len(tensor)
    tensor = F.pad(tensor, (int(diff_pad/2), diff_pad - int(diff_pad/2)), 'constant', 0)
But when using JIT script, I get the following error:
_pad(Tensor input, int[] pad, str mode="constant", float value=0.) -> (Tensor):
Expected a value of type 'List[int]' for argument 'pad' but instead found type 'Tuple[int, Tensor]'.
:
diff_pad = n_max - len(tensor)
tensor = F.pad(tensor,(int(diff_pad/2),diff_pad - int(diff_pad/2)),'constant',0)
~~~~~ <--- HERE
What is the correct way of using pad in TorchScript? |
st180267 | Make sure to use List[int] as described in the error message. Afterwards, you would run into another type mismatch, since the value is expected as a float and lastly you would have to annotate the inputs since n_max would be expected as a Tensor otherwise.
This should work:
@torch.jit.script
def pad_audio(tensor, n_max=50000):
    # type: (Tensor, int) -> Tensor
    diff_pad = int(n_max - len(tensor))
    tensor = F.pad(tensor, [int(diff_pad/2), diff_pad - int(diff_pad/2)], 'constant', 0.)
    return tensor |
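For example, a quick sanity check of the scripted function might look like this (the input length is arbitrary):
audio = torch.randn(1234)
padded = pad_audio(audio)   # pads symmetrically up to the default n_max=50000
print(padded.shape)         # torch.Size([50000])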
st180268 | Hi,
I originally made a thread here but it seems like this category is more appropriate.
The torchvision docs in this link give an example of using torch.jit.script with transformations.
While this works well in the single process setting, for some reason it does not seem to work in multi-process settings, such as Distributed Data Parallel.
I am only trying to JIT the data transformation module, not the network itself.
The last line of the error message I get is:
File "/home/user/miniconda3/envs/pytorch/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
RuntimeError: Tried to serialize object __torch__.torch.nn.modules.container.Sequential which does not have a __getstate__ method defined!
Is there any workaround? Or any reference that mentions that JIT is not compatible with multi-processing?
Below I’ve attached a minimal code sample that can reproduce the error:
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torchvision.io as io
import torchvision.transforms as transforms
from torch.utils.data import DistributedSampler, DataLoader, Dataset
from tqdm import tqdm
def torch_image_loader(path):
    mode = io.ImageReadMode.RGB
    return io.read_image(path, mode)

class RGBDataset(Dataset):
    def __init__(self, root="./", num_images=100, split='train', img_transforms=None):
        self.root = root
        self.num_images = num_images
        self.split = split
        self.img_transforms = img_transforms
        self.file_names = ['red.png', 'green.png', 'blue.png']
        self.image_loader = torch_image_loader

    def __len__(self):
        return self.num_images

    def __getitem__(self, idx):
        color_index = idx % 3
        filename = self.file_names[color_index]
        img_path = os.path.join(self.root, filename)
        img = self.image_loader(img_path)
        if self.img_transforms is not None:
            img = self.img_transforms(img)
        return img, color_index

def create_dataset():
    train_augmentations_list = [transforms.ConvertImageDtype(torch.float),
                                transforms.Resize([224, ])]
    train_augmentations = torch.jit.script(torch.nn.Sequential(*train_augmentations_list))
    # train_augmentations = torch.nn.Sequential(*train_augmentations_list)
    dataset = RGBDataset(img_transforms=train_augmentations)
    return dataset

def create_train_dataloader(dataset):
    sampler = DistributedSampler(dataset, shuffle=True)
    return DataLoader(dataset, batch_size=8, num_workers=4, shuffle=False, pin_memory=True, drop_last=True,
                      sampler=sampler)

def main_process(gpu, world_size):
    rank = gpu
    dist.init_process_group(backend="nccl", init_method="env://", world_size=world_size, rank=rank)
    torch.cuda.set_device(gpu)
    train_dataset = create_dataset()
    train_dataloader = create_train_dataloader(train_dataset)
    for img, label in tqdm(train_dataloader):
        pass

if __name__ == "__main__":
    num_available_gpus = torch.cuda.device_count()
    world_size = num_available_gpus
    os.environ['MASTER_ADDR'] = "localhost"
    os.environ['MASTER_PORT'] = str(12345)
    mp.spawn(main_process, nprocs=world_size, args=(world_size,))
This assumes there are 3 images named red/green/blue.png in the same path.
The error seems to occur when the torch.jit.script is called (the error is raised during the create_dataset call).
The error does not occur when JIT is not used.
By the way, the environment I am using is:
Conda Python == 3.9.7
PyTorch == 1.10.1
torchvision==0.11.2 |
st180269 | Hello,
I’ve saved a model using torch.jit.trace and then loaded it using torch.jit.load.
The first time I run the model it outputs correctly, but the second and subsequent times, it throws this error
RuntimeError Traceback (most recent call last)
<ipython-input-25-946d12987c4d> in <module>
----> 1 out = model(inputs)
~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
RuntimeError: _Map_base::at
Any idea what could be causing it?
Checked with torch 1.8.0 and torch 1.10.0. Happens both on CPU and GPU |
st180270 | As the second pass apparently does the optimization, is there a way to disable this optimization, as that’s where it seems to be failing? |
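(One thing that might be worth trying — just a sketch, not yet verified against this model: the optimizing pass that kicks in on the second call can be disabled for a scope with torch.jit.optimized_execution.)
model = torch.jit.load("model.pt")  # placeholder path
with torch.jit.optimized_execution(False):
    out1 = model(inputs)
    out2 = model(inputs)  # this second call would normally trigger the optimization pass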
st180271 | I have found an obvious memory leak when I try to export a TorchScript model multiple times.
Here is the piece of code to reproduce this fault. The process may occupy over 20 GB RAM after exporting resnet50 40 times.
import torch
import torchvision.models as vision_models
from torchvision.models import resnet
resnet50 = vision_models.resnet50()
for i in range(1000):
    jit_resnet50 = torch.jit.trace(resnet50, torch.randn(16, 3, 448, 448))
    print('{} times of jit trace done!'.format(i + 1))
    jit_resnet50 = None
Pytorch version: 1.10.0
Platform: Ubuntu 18.04, with or without CUDA |
st180272 | Oh oh. I think this is half-expected because of how TorchScript’s caching works. It isn’t a good thing, though, and at least there should be a way to reset things.
Best regards
Thomas |
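One workaround sketch in the meantime (it sidesteps rather than fixes the caching): run each trace in a short-lived child process so whatever TorchScript caches is reclaimed when the process exits.
import torch
import torchvision.models as vision_models
from multiprocessing import get_context

def trace_once(i: int) -> None:
    model = vision_models.resnet50()
    traced = torch.jit.trace(model, torch.randn(16, 3, 448, 448))
    traced.save('resnet50_trace_{}.pt'.format(i))

if __name__ == '__main__':
    ctx = get_context('spawn')
    for i in range(10):
        p = ctx.Process(target=trace_once, args=(i,))
        p.start()
        p.join()  # memory held by the child process (including JIT caches) is released here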
st180273 | PyTorch version: 1.6.0.dev20200407+cu101
When using torch.jit._overload_method to overload functions, it seems that there is no graph attribute on the resulting scripted module.
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()

    @torch.jit._overload_method  # noqa: F811
    def forward(self, x: int) -> int:  # noqa: F811
        pass

    @torch.jit._overload_method  # noqa: F811
    def forward(self, x: torch.Tensor) -> torch.Tensor:  # noqa: F811
        pass

    def forward(self, x):  # noqa: F811
        original = x
        if isinstance(original, int):
            return original + 2
        else:
            return original.sum()

model = MyModule()
s = torch.jit.script(model)
torch._C._jit_pass_inline(s.graph)  # error happens here
error message:
torch.nn.modules.module.ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'graph' |
st180274 | Hi, this is because there is no single "graph"; there are two separate graphs, one for each overload. I could maybe add an api for something like model.forwards.graphs, or model.forward.graph_for_types(1) to get the graph for a specific overload.
Generally, overloads should be usable and work as you expect when they are contained in other modules but you may run into a little bit of difficulty if you are interacting with them as a top level module. They’re still an internal feature but will be cleaned up and released probably in the next release or the one after. |
st180275 | eellison:
Generally, overloads should be usable and work as you expect when they are contained in other modules but you may run into a little bit of difficulty if you are interacting with them as a top level module. They’re still an internal feature but will be cleaned up and released probably in the next release or the one after.
Thank you very much:)
I put the overloads into submodules and found there is a single graph for one kind of overload, as you said:
class MySubModule(nn.Module):
    def __init__(self):
        super().__init__()

    @torch.jit._overload_method  # noqa: F811
    def forward(self, x: int) -> int:  # noqa: F811
        pass

    @torch.jit._overload_method  # noqa: F811
    def forward(self, x: torch.Tensor) -> torch.Tensor:  # noqa: F811
        pass

    def forward(self, x):  # noqa: F811
        original = x
        if isinstance(original, int):
            return original + 2
        else:
            return original.sum()

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub_module = MySubModule()

    def forward(self, x):
        return self.sub_module(x)

model = MyModule()
s = torch.jit.script(model)
torch._C._jit_pass_inline(s.graph)
print(s.graph)
output:
graph(%self : __torch__.MyModule,
%x.1 : Tensor):
%2 : __torch__.MySubModule = prim::GetAttr[name="sub_module"](%self)
%14 : None = prim::Constant()
%15 : Tensor = aten::sum(%x.1, %14)
return (%15)
Since the default type is torch.Tensor, the corresponding graph is obtained. So is there any way to obtain a single TorchScript graph covering all overloads for now? |
st180276 | The MyModule you’re using only takes a Tensor. If you want to access the int graph you could try using a different module with the submodule that takes in an int. |
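For example, a minimal sketch reusing the MySubModule from above (the wrapper module name is made up):
class MyIntModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub_module = MySubModule()

    def forward(self, x: int):
        return self.sub_module(x)

s_int = torch.jit.script(MyIntModule())
print(s_int.graph)  # this graph routes through the int overload of the submodule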
st180277 | Sorry to dredge this one up but I had exactly the same problem, I was hoping I could ask @eellison how this works exactly?
Generally, overloads should be usable and work as you expect when they are contained in other modules
I get the idea: Once you call forward, then the torchscript can inspect types and choose the right graph? But is it possible to get a reference for some documentation to fully understand how this works (or to the code that does this)? I don’t quite understand why forward is treated differently - when this error occurs I can still call the other functions which have been overloaded. |
st180278 | Hi @Padarn_Wilson, the reason there aren't good docs for this is that it wasn't fully finished. There are still a few rough edges, as you are encountering.
I don’t quite understand why forward is treated differently - when this error occurs I can still call the other functions which have been overloaded.
Hmm, what do you mean exactly ? Not sure I follow. |
st180279 | Hey @Elias_Ellison, thanks for the clarification.
Elias_Ellison:
Hmm, what do you mean exactly ? Not sure I follow.
From the discussion above I understood that when .forward is invoked it will also do the job of inferring which overloaded functions to call based on the types present… but rereading it I realise I had my understanding a bit confused. I think it is clear now. |
st180280 | Hi,
I'm trying to develop some torch.jit modules in C++ and then use them in Python with pybind.
I found that the original method torch._C.ScriptModule._create_method_from_trace
was created with the argument type torch::jit::Module in C++, but the corresponding method's argument type is torch._C.ScriptModule in Python.
How does this conversion happen?
I tried to define a method in C++ with argument type torch::jit::Module. When exported to Python, its argument type is still torch::jit::Module.
What is missing when building the program?
st180281 | Solved by EFans in post #2 (answer below). |
st180282 | Building with these CXX flags fixes it:
from typing import List

import torch

cmake_cxx_flags: List[str] = []
for name in ["COMPILER_TYPE", "STDLIB", "BUILD_ABI"]:
    val = getattr(torch._C, f"_PYBIND11_{name}")
    if val is not None:
        cmake_cxx_flags += [f'-DPYBIND11_{name}=\\"{val}\\"']
    print("val: ", val)
print("cmake_cxx_flags: ", cmake_cxx_flags) |
st180283 | Hi guys, I wonder how to call scripts written in Cython when creating JIT scripts?
Traditionally, I can call a Cython class directly in a Python file after compiling those Cython modules, and part of my model's inference relies on those Cython scripts for acceleration. I wonder if it is possible to call the functions and classes written in Cython when creating a jit.script for my model?
I tried calling them directly (just as I did in the original Python script), but it complains about a missing .py file (I think this is expected because all Cython modules end in xxx.pyx and xxx.pyd, and they are compiled to xxx.cytree.cpython-38-x86_64-linux-gnu.so on my machine):
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/_utils_internal.py", line 49, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/lib/python3.8/inspect.py", line 967, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/lib/python3.8/inspect.py", line 790, in findsource
raise OSError('source code not available')
OSError: source code not available
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "new_convert.py", line 410, in <module>
tmp=torch.jit.script(LSTMNet(hanabi_config,inverse_transform))
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 897, in script
return torch.jit._recursive.create_script_module(
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py", line 352, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py", line 410, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py", line 304, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/annotations.py", line 76, in get_signature
source = dedent(''.join(get_source_lines_and_file(fn)[0]))
File "/opt/conda/lib/python3.8/site-packages/torch/_utils_internal.py", line 56, in get_source_lines_and_file
raise OSError(msg) from e
OSError: Can't get source for <class 'cytree.Roots'>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available.
I wonder how you guys integrate Cython modules into a JIT script? Is rewriting the Cython module as a traditional Python script the only solution?
Thanks! |
st180284 | You can add @jit.ignore decorated forwarders.
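For example, a minimal sketch of such a forwarder (the constructor and method names on the Cython side are placeholders for the actual cytree API):
import torch
from torch import nn

class SearchWrapper(nn.Module):
    @torch.jit.ignore
    def run_search(self, feats: torch.Tensor) -> torch.Tensor:
        # This body runs in the Python interpreter, so it can call into Cython freely,
        # but it is not compiled and cannot be executed from a saved module in C++.
        import cytree  # the compiled Cython extension
        roots = cytree.Roots(int(feats.size(0)))                    # placeholder constructor
        return torch.as_tensor(roots.search(feats.cpu().numpy()))   # placeholder method

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.run_search(feats)

scripted = torch.jit.script(SearchWrapper())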
Alternatively, C++ extension ops work with JIT, even supporting gradients, but I don’t know how exactly to register them from python. In C++ it is like:
static auto registry =
torch::RegisterOperators()
.op("namespace::func", &func)
with that you can call torch.ops.namespace.func(…) |
st180285 | Hi Alex. Thanks for the reply. jit.ignore seems to disable the function being decorated. Are there any alternatives if I want to keep the decorated function as part of inference?
In my case, there are two codebases: (1) Python & Cython, (2) C++ & TorchScript.
On the one hand, the forward pass of the model contains two steps: first decoding the input to features, and then running search algorithms (e.g., MCTS) to produce the output. To accelerate the search, that part is written in Cython. The code logic here is quite complex and may not be easy to encapsulate in a single operator.
On the other hand, another codebase loads the above model and carries out the inference process as follows.
torch::jit::script::Module model_;
model_ = torch::jit::load(path, torch::Device(device));
std::vector<torch::jit::IValue> jitInput;
// ... push inputs into the vector
auto jitOutput = model_.forward(jitInput); |
st180286 | Check Extending TorchScript with Custom C++ Classes — PyTorch Tutorials 1.10.1+cu102 documentation 2. If I recall correctly, cython can integrate c++ modules, so perhaps a hybrid module (adapter / facade to cython parts) would do what you want. |
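Rough sketch of what the TorchScript side could look like once such a custom C++ class is built and registered per that tutorial (library path, namespace, class, and method names below are all placeholders):
import torch
torch.classes.load_library("build/libmcts_adapter.so")  # placeholder path

class SearchModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.adapter = torch.classes.my_ns.MCTSAdapter(50)  # placeholder class/ctor

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.adapter.search(feats)  # placeholder method

scripted = torch.jit.script(SearchModule())  # a registered custom class can be scripted and saved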
st180287 | googlebot:
cython can integrate c++ modules, so perhaps a hybrid module (adapter / facade to cython parts) would do what you want.
Thanks, Alex! I will check that and post my updates here. |
st180288 | I am using d2go.
When I convert a model trained with mask_rcnn_fbnetv3g_fpn, I get the following error:
CPUAllocator.cpp:76] data. DefaultCPUAllocator: not enough memory: you tried to allocate xxxx
My conversion code is:
patch_d2_meta_arch()
@contextlib.contextmanager
def create_fake_detection_data_loader(height, width, is_train):
    with make_temp_directory("detectron2go_tmp_dataset") as dataset_dir:  # create a temporary directory: dataset_dir = detectron2go_tmp_dataset
        runner = create_runner("d2go.runner.GeneralizedRCNNRunner")
        cfg = runner.get_default_cfg()
        cfg.DATASETS.TRAIN = ["default_dataset_train"]
        cfg.DATASETS.TEST = ["default_dataset_test"]
        with make_temp_directory("detectron2go_tmp_dataset") as dataset_dir:  # create the same directory again?
            image_dir = os.path.join(dataset_dir, "images")
            os.makedirs(image_dir)
            image_generator = LocalImageGenerator(image_dir, width=width, height=height)
            if is_train:
                with _register_toy_dataset(
                    "default_dataset_train", image_generator, num_images=3
                ):
                    train_loader = runner.build_detection_train_loader(cfg)
                    yield train_loader
            else:
                with _register_toy_dataset(
                    "default_dataset_test", image_generator, num_images=3
                ):
                    test_loader = runner.build_detection_test_loader(
                        cfg, dataset_name="default_dataset_test"
                    )
                    yield test_loader
def test_export_torchvision_format():
    runner = GeneralizedRCNNRunner()
    cfg = runner.get_default_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("mask_rcnn_fbnetv3a_dsmask_C4.yaml"))
    cfg.MODEL.WEIGHTS = os.path.join("./output_mask_rcnn_fbnetv3a_dsmask_C4_20211225", "model_0009999.pth")
    cfg.MODEL_EMA.ENABLED = False
    cfg.MODEL.DEVICE = "cpu"
    cfg.DATASETS.TRAIN = ("infusion_train",)
    cfg.DATASETS.TEST = ("infusion_val",)
    cfg.DATALOADER.NUM_WORKERS = 1
    # cfg.INPUT.MAX_SIZE_TEST = 1920
    # cfg.INPUT.MAX_SIZE_TRAIN = 1920
    # cfg.INPUT.MIN_SIZE_TEST = 1920
    # cfg.INPUT.MIN_SIZE_TRAIN = (1920,)
    cfg.SOLVER.IMS_PER_BATCH = 1
    cfg.SOLVER.STEPS = []  # do not decay learning rate
    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 1
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 25
    pytorch_model = runner.build_model(cfg, eval_only=True)
    pytorch_model.cpu()
    # pytorch_model.eval()

    from typing import List, Dict

    class Wrapper(torch.nn.Module):
        def __init__(self, model):
            super().__init__()
            self.model = model
            coco_idx_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]
            self.coco_idx = torch.tensor(coco_idx_list)

        def forward(self, inputs: List[torch.Tensor]):
            x = inputs[0].unsqueeze(0) * 255
            scale = 320.0 / min(x.shape[-2], x.shape[-1])
            x = torch.nn.functional.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=True, recompute_scale_factor=True)
            out = self.model(x[0])
            res: Dict[str, torch.Tensor] = {}
            res["boxes"] = out[0] / scale
            res["labels"] = torch.index_select(self.coco_idx, 0, out[1])
            res["scores"] = out[2]
            # print("return", inputs, [res])
            return inputs, [res]

    size_divisibility = max(pytorch_model.backbone.size_divisibility, 10)
    h, w = size_divisibility, size_divisibility * 2
    with create_fake_detection_data_loader(h, w, is_train=False) as data_loader:
        predictor_path = convert_and_export_predictor(
            copy.deepcopy(cfg),
            copy.deepcopy(pytorch_model),
            "torchscript_int8@tracing",
            './',
            data_loader,
        )

    orig_model = torch.jit.load(os.path.join(predictor_path, "model.jit"))
    wrapped_model = Wrapper(orig_model)
    # optionally do a forward
    wrapped_model([torch.rand(3, 1920, 1080)])
    scripted_model = torch.jit.script(wrapped_model)
    scripted_model.save("d2go.pt")

if __name__ == '__main__':
    test_export_torchvision_format() |
st180289 | The error message points out that you are running out of RAM. How large is the reported allocation? Do the values fit your expectation? If so, you would need to reduce the memory requirements by e.g. lowering the batch size if possible. |
st180290 | Hi, I am working on a tool that makes a model load parameters only when needed, in order to reduce peak memory (link 1). I would like to take advantage of the torch.jit optimization, but it failed. Here is reproducible code:
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F # for torch.jit.script
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32, 32)
        self.fc2 = nn.Linear(32, 32)

    def forward(self, input):
        out = self.fc1(input)
        out = self.fc2(out)
        return out

def release_weights(module, _input=None, _output=None):
    module.to('cpu')

def load_weights(module, _input=None):
    module.to('cuda')

def lazy_loading(func, module):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        load_weights(module)
        res = func(*args, **kwargs)
        release_weights(module)
        return res
    return wrapper

def main():
    model = Net().eval()
    data = torch.rand(1, 32)

    for module in [model.fc1, model.fc2]:
        # method 1: decorator
        module.forward = lazy_loading(module.forward, module)
        # method 2: hook
        # module.register_forward_hook(release_weights)
        # module.register_forward_pre_hook(load_weights)

    data = data.cuda()
    with torch.no_grad():
        model = torch.jit.script(model)
        output = model(data)

main()
I’ve tried:
Adding decorators (before or after torch.jit.script): the decorator seems to be ignored.
Adding hooks before torch.jit.script: it raises a type mismatch error for the hooks.
Adding hooks after torch.jit.script: it raises RuntimeError: register_forward_hook is not supported on ScriptModules.
I wonder whether it is possible to script the model and lazily load parameters at the same time (without directly modifying the model's internal code)?
Any feedback would be greatly appreciated. Thanks!
torch version: 1.10.1 |
st180291 | With method 2 + type hints (link 1),
...
def release_weights(module, _input: Tuple[torch.Tensor], _output):
    module.weight.data = module.weight.to('cpu')
...
it raises:
RuntimeError:
Tried to set an attribute: data on a non-class: Tensor:
File "/home/siahuat0727/test.py", line 22
def release_weights(module, _input: Tuple[torch.Tensor], _output):
module.weight.data = module.weight.to('cpu')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
So the main issue would be: Is it possible to move ScriptModule parameters across devices?
Here’s a similar issue Problem with jit TorchScript while copying data between GRUs · Issue #28267 · pytorch/pytorch (github.com). |
st180292 | Hi,
I’m trying to register hooks in order to get the layers’ activation values in my model.
It does work with normal python runtime (like in this example 1).
However I cannot make it work in JIT:
As discussed here 3, the type of "input" in the hook function is a tuple, and the JIT compiler does not like it:
Traceback (most recent call last):
File "main.py", line 22, in <module>
script_net = torch.jit.script(net)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_script.py", line 1258, in script
obj, torch.jit._recursive.infer_methods_to_compile
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_recursive.py", line 451, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_recursive.py", line 513, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_script.py", line 587, in _construct
init_fn(script_module)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_recursive.py", line 491, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_recursive.py", line 520, in create_script_module_impl
create_hooks_from_stubs(concrete_type, hook_stubs, pre_hook_stubs)
File "/home/zodiac/.venv/ia/lib64/python3.7/site-packages/torch/jit/_recursive.py", line 377, in create_hooks_from_stubs
concrete_type._create_hooks(hook_defs, hook_rcbs, pre_hook_defs, pre_hook_rcbs)
RuntimeError: Hook 'hook' on module 'Linear' expected the input argument to be typed as a Tuple but found type: 'Tensor' instead.
This error occured while scripting the forward hook 'hook' on module Linear. If you did not want to script this hook remove it from the original NN module before scripting.
This hook was expected to have the following signature: hook(self, input: Tuple[Tensor], output: Tensor).
The type of the output arg is the returned type from either the forward method or the previous hook if it exists.
Note that hooks can return anything, but if the hook is on a submodule the outer module is expecting the same return type as the submodule's forward.
Here’s the minimum code needed to reproduce this issue:
import torch
class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(1, 12)

    def forward(self, x):
        return self.l1(x)

def hook(model, input, output):
    pass

net = NN()
net.l1.register_forward_hook(hook)
script_net = torch.jit.script(net)
Any ideas?
I’m on fedora 33, using Python 3.7.12 and Torch 1.10.0
Have a good day! |
st180293 | Hey @Zodiac @caillonantoine, we need type hints to make it work properly.
from typing import Tuple
...
def hook(model, input: Tuple[torch.Tensor], output):
    pass
... |
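Applied to the repro above, the whole thing would look roughly like this (only the hook signature changes):
from typing import Tuple
import torch

class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(1, 12)

    def forward(self, x):
        return self.l1(x)

def hook(model, input: Tuple[torch.Tensor], output):
    pass

net = NN()
net.l1.register_forward_hook(hook)
script_net = torch.jit.script(net)  # now compiles, including the hook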
st180294 | Hi, I have a model with two submodules A and B.
A should not be traced (control flow depends on the input tensor) and B contains “unsupported” input types.
Tracing is ideal for code that operates only on Tensor s and lists, dictionaries, and tuples of Tensor s. torch.jit.script — PyTorch 1.10.1 documentation
How can I trace only B (or trace the whole model but skip submodule A)? torch.jit.ignore does not work for this case.
I think it’s technically possible since torch.jit.trace can correctly trace submodules with “unsupported” input types. (see comment1 below)
import torch
import torch.nn as nn
class Foo(nn.Module):
    def __init__(self):
        super().__init__()
        self.identity = nn.Identity()

    def forward(self, x1, x2):
        out = self.identity(x1)
        if x2 is not None:
            out = out + self.identity(x2)
        return out

class Bar(nn.Module):
    def __init__(self):
        super().__init__()
        self.foo1 = Foo()
        self.foo2 = Foo()

    def forward(self, x):
        out = self.foo1(x, x)
        out = out + self.foo2(x, None)
        return out

bar = Bar()
data = torch.ones(42)

# comment1: This is ok
# bar = torch.jit.trace(bar, data)

# comment2: This is ok too
# bar.foo1 = torch.jit.trace(bar.foo1, (data, data))

# comment3: Fails
bar.foo2 = torch.jit.trace(bar.foo2, (data, None))
RuntimeError: Type 'Tuple[Tensor, NoneType]' cannot be traced. Only Tensors and (possibly nested) Lists, Dicts, and Tuples of Tensors can be traced
Any feedback would be greatly appreciated!
torch version: 1.10.1 |
st180295 | Well, I found a solution that worked for me.
We can first torch.jit.script submodule A (maybe combine with torch.jit.ignore to make it scriptable, just to ensure that it will be skipped by torch.jit.trace), and then torch.jit.trace the whole model.
Still looking for a “solution” for the title. |
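A minimal sketch of that workaround applied to the Foo/Bar example above (note the Optional annotation that scripting needs for the None argument; class names are just renamed to avoid clashing with the repro):
from typing import Optional
import torch
import torch.nn as nn

class Foo2(nn.Module):
    def __init__(self):
        super().__init__()
        self.identity = nn.Identity()

    def forward(self, x1: torch.Tensor, x2: Optional[torch.Tensor] = None):
        out = self.identity(x1)
        if x2 is not None:
            out = out + self.identity(x2)
        return out

class Bar2(nn.Module):
    def __init__(self):
        super().__init__()
        self.foo1 = Foo2()
        self.foo2 = torch.jit.script(Foo2())  # script the part that trace can't handle

    def forward(self, x):
        out = self.foo1(x, x)
        out = out + self.foo2(x, None)
        return out

bar = Bar2()
data = torch.ones(42)
traced_bar = torch.jit.trace(bar, data)  # the already-scripted foo2 is kept as-is by tracing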
st180296 | Hi all,
I’m seeing use of torch.jit._overload_method in a library I am investigating, but I cannot see documentation on what this does at all?
I can guess the effect from the source code, but it would be nice not to be guessing. |
st180297 | I want to export an LSTM-based PyTorch model to TorchScript, but it fails with the following error:
RuntimeError:
Arguments for call are not valid.
The following variants are available:
forward__0(__torch__.torch.nn.modules.rnn.LSTM self, Tensor input, (Tensor, Tensor)? hx=None) -> ((Tensor, (Tensor, Tensor))):
Expected a value of type 'Tensor' for argument 'input' but instead found type '__torch__.torch.nn.utils.rnn.PackedSequence'.
forward__1(__torch__.torch.nn.modules.rnn.LSTM self, __torch__.torch.nn.utils.rnn.PackedSequence input, (Tensor, Tensor)? hx=None) -> ((__torch__.torch.nn.utils.rnn.PackedSequence, (Tensor, Tensor))):
Expected a value of type 'Optional[Tuple[Tensor, Tensor]]' for argument 'hx' but instead found type 'Tensor (inferred)'.
Inferred the value for argument 'hx' to be of type 'Tensor' because it was not annotated with an explicit type.
The original call is:
File "", line 24
enforce_sorted=False)
output, feat = self.lstm(x, feat)
~~~~~~~~~ <--- HERE
output = pad_packed_sequence(output,
batch_first=True,
Script to reproduce:
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class LSTMBlock(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size,
                            hidden_size,
                            batch_first=True,
                            bidirectional=False)

    def forward(self, x, input_lengths, feat=None):
        total_length = x.shape[1]
        x = pack_padded_sequence(x,
                                 input_lengths,
                                 batch_first=True,
                                 enforce_sorted=False)
        output, feat = self.lstm(x, feat)
        output = pad_packed_sequence(output,
                                     batch_first=True,
                                     total_length=total_length)[0]
        return output, input_lengths, feat

model = LSTMBlock(100, 100)
script = torch.jit.script(model)
Is there a way to handle models trained with packed sequences? |
st180298 | Did you solve this issue? I ran into a very similar one when trying to convert an LSTM PyTorch model to TorchScript. |
st180299 | Hi @Bartlomiej_Roszak, I'm also facing the same issue.
Can you please share exactly what changes you made? |