id | text
---|---
st180300 | Hello Team, here is my code.
I'm using a JIT-traced model with LibTorch in C/C++ on Ubuntu 18.04, with CUDA 10.2.
I get a correct result from my first call, result = mE.forward(x);
torch::Tensor tensor_image = ...
torch::IValue result; // it is standard return of forward() in JIT traced model
std::vector<torch::jit::IValue> x;
x.push_back(tensor_image);
result = mE.forward(x);
std::cout << sp_and_gl_output_CUDA << std::endl;
std::vector<torch::IValue> in_to_G;
in_to_G.push_back(result);
img_tensor = mG.forward(in_to_G);
The image below shows the print of the result variable returned by the forward() method:
std::cout << sp_and_gl_output_CUDA << std::endl;
The result contains two Tensors, but I can't pass it to the next forward() in the right form.
[image: 1757×1174, 164 KB]
The result of execution:
mG.forward load spatial and global:
forward() Expected a value of type 'Tensor' for argument 'v' but instead found type 'Tuple[Tensor, Tensor]'.
Position: 1
Declaration: forward(__torch__.models.networks.generator.StyleGAN2ResnetGenerator self, Tensor v, Tensor v0) -> (Tensor)
Exception raised from checkArg at ../aten/src/ATen/core/function_schema_inl.h:184 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f1c5d0290db in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f1c5d024d2e in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libc10.so)
frame #2: <unknown function> + 0x128d542 (0x7f1c473af542 in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x12918c1 (0x7f1c473b38c1 in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libtorch_cpu.so)
frame #4: torch::jit::GraphFunction::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) + 0x2d (0x7f1c49c97b2d in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libtorch_cpu.so)
frame #5: torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) const + 0x175 (0x7f1c49ca7235 in /home/interceptor/Документы/Git_Medium_repo/Binary_search_engine_CUDA/torch_to_cpp/libtorch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x6266 (0x564143038266 in ./particle_cuda_test)
frame #7: <unknown function> + 0x47af (0x5641430367af in ./particle_cuda_test)
frame #8: __libc_start_main + 0xe7 (0x7f1bf9682bf7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #9: <unknown function> + 0x4daa (0x564143036daa in ./particle_cuda_test)
Main module: torch_and_cuda_call.cpp completed.
ok! |
st180301 | Solved by AlexT in post #3
Seems it WORKED
auto output1_sp = sp_and_gl_output_CUDA.toTuple()->elements()[0].toTensor();
auto output2_gl = sp_and_gl_output_CUDA.toTuple()->elements()[1].toTensor();
//-------------------------------------------------------------------------------------------------------------------… |
st180302 | This placeholder for the second call:
in_to_G.push_back(torch::ones({1, 8,8,8}).to(at::kCUDA));
in_to_G.push_back(torch::ones({1, 2048}).to(at::kCUDA));
img_tensor = mG.forward(in_to_G);
worked!
How can I get the two Tensors from it to create the new input vector?
I need to parse this result to get the Tensors inside it:
torch::IValue result;
There are similar issues on Stack Overflow and GitHub:
c++ - How to get values return by Tuple Object in Maskcrnn libtorch - Stack Overflow 2
C++ Convert c10::IValue to tensor or tuple or to TensorList failed · Issue #48691 · pytorch/pytorch · GitHub 1 |
st180303 | Seems it WORKED
auto output1_sp = sp_and_gl_output_CUDA.toTuple()->elements()[0].toTensor();
auto output2_gl = sp_and_gl_output_CUDA.toTuple()->elements()[1].toTensor();
//-------------------------------------------------------------------------------------------------------------------
in_to_G.push_back(output1_sp.to(at::kCUDA));
in_to_G.push_back(output2_gl.to(at::kCUDA));
//-------------------------------------------------------------------------------------------------------------------
img_tensor = mG.forward(in_to_G); |
st180304 | Hello,
While trying to export a ppm_deepsup model, I got this error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [512, 2048, 1, 1], but got 3-dimensional input of size [2048, 1, 1] instead.
The PPM_DEEPSUP is defined as follows:
class PPMDeepsup(nn.Module):
    def __init__(self, num_class=150, fc_dim=4096,
                 use_softmax=False, pool_scales=(1, 2, 3, 6)):
        super(PPMDeepsup, self).__init__()
        self.use_softmax = use_softmax
        self.ppm = []
        for scale in pool_scales:
            self.ppm.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(scale),
                nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
                BatchNorm2d(512),
                nn.ReLU(inplace=True)
            ))
        self.ppm = nn.ModuleList(self.ppm)
        self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1)
        self.conv_last = nn.Sequential(
            nn.Conv2d(fc_dim + len(pool_scales) * 512, 512,
                      kernel_size=3, padding=1, bias=False),
            BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Dropout2d(0.1),
            nn.Conv2d(512, num_class, kernel_size=1)
        )
        self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
        self.dropout_deepsup = nn.Dropout2d(0.1)

    def forward(self, conv_out, segSize=None):
        conv5 = conv_out[-1]
        input_size = conv5.size()
        ppm_out = [conv5]
        for pool_scale in self.ppm:
            ppm_out.append(nn.functional.interpolate(
                pool_scale(conv5),
                (input_size[2], input_size[3]),
                mode='bilinear', align_corners=False))
        ppm_out = torch.cat(ppm_out, 1)
        x = self.conv_last(ppm_out)
        if self.use_softmax:  # is True during inference
            x = nn.functional.interpolate(
                x, size=segSize, mode='bilinear', align_corners=False)
            x = nn.functional.softmax(x, dim=1)
            return x
        # deep sup
        conv4 = conv_out[-2]
        _ = self.cbr_deepsup(conv4)
        _ = self.dropout_deepsup(_)
        _ = self.conv_last_deepsup(_)
        x = nn.functional.log_softmax(x, dim=1)
        _ = nn.functional.log_softmax(_, dim=1)
        return (x, _)
and the export script is as follows:
model_decoder = ModelBuilder.build_decoder("ppm_deepsup", weights=checkpoint_decoder, use_softmax=True, num_class=150, fc_dim=2048)
model_decoder.load_state_dict(model_decoder_checkpoints, strict=False)
model_decoder.to(device)
model_decoder.eval()
example_enc = torch.zeros(1, 3, 450, 450).to(device)# torch.zeros(1, 3, 2048, 224,224).to(device)
example_dec = torch.zeros(1, 3, 224,224).to(device)
traced_encoder = torch.jit.trace(model_encoder, example_enc, strict=False)
traced_decoder_script = torch.jit.trace(model_decoder, traced_encoder(example_enc), example_dec, strict=True)#, example
Any help to solve this issue, please?
Thanks |
st180305 | Your example tensors are 4D; that is inconsistent with the 3D input reaching the model (no batch dim, perhaps?). For example:
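A minimal sketch of adding the missing batch dimension (the tensor here is a stand-in for illustration, not the actual model input):
import torch

feat = torch.zeros(2048, 1, 1)   # 3-dimensional, as in the error message
batched = feat.unsqueeze(0)      # add a batch dim -> shape [1, 2048, 1, 1]
print(batched.shape)             # 4-dimensional, matching the 4-d weight [512, 2048, 1, 1] |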
st180306 | Thank you for your reply. I pass torch.zeros(1, 3, …, …) as I mentioned. Any other suggestion, please? |
st180307 | I’m attempting to JIT trace a semantic segmentation model based on a Resnet50 dilated encoder and an accompanying PPMDeepsup decoder to use in iOS/torch mobile.
When running the following to JIT trace my segmentation module with a random image input of 224x224:
input_tensor = torch.rand(1, 3, 224, 224)
model.eval()
script_model = torch.jit.trace(model, input_tensor)
I get the following trace. Apparently, when my forward function calls nn.functional.interpolate(), it claims that the size variable (the final segmentation size of 224x224) is not provided (found None), even though, as seen in the trace below, I've even tried hard-coding the necessary tuple of (224, 224). Why am I still receiving this exception?
Trace
~/thd-visual-ai/segmentation/semantic-segmentation-pytorch/models/models.py in forward(self, conv_out, segSize)
481 print(segSize)
482 if self.use_softmax: # is True during inference
--> 483 x = nn.functional.interpolate(x, size=(224, 224), mode='bilinear', align_corners=False)
484 x = nn.functional.softmax(x, dim=1)
485 return x
~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/torch/nn/functional.py in interpolate(input, size, scale_factor, mode, align_corners)
2516 raise NotImplementedError("Got 4D input, but linear mode needs 3D input")
2517 elif input.dim() == 4 and mode == 'bilinear':
-> 2518 return torch._C._nn.upsample_bilinear2d(input, _output_size(2), align_corners)
2519 elif input.dim() == 4 and mode == 'trilinear':
2520 raise NotImplementedError("Got 4D input, but trilinear mode needs 5D input")
~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/torch/nn/functional.py in _output_size(dim)
2470
2471 def _output_size(dim):
-> 2472 _check_size_scale_factor(dim)
2473 if size is not None:
2474 return size
~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/torch/nn/functional.py in _check_size_scale_factor(dim)
2461 def _check_size_scale_factor(dim):
2462 if size is None and scale_factor is None:
-> 2463 raise ValueError('either size or scale_factor should be defined')
2464 if size is not None and scale_factor is not None:
2465 raise ValueError('only one of size or scale_factor should be defined')
ValueError: either size or scale_factor should be defined
I've hardcoded size at line 483 to try to fix this, and it still reports None. I'll attach the model.summary() of my semantic segmentation model below to help.
Sequential(
(0): ResnetDilated(
(conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu1): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu2): ReLU(inplace=True)
(conv3): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu3): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(64, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(128, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(256, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(1024, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(2048, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): SynchronizedBatchNorm2d(2048, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
(bn2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(2048, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
(bn2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): SynchronizedBatchNorm2d(2048, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
)
(1): PPMDeepsup(
(ppm): ModuleList(
(0): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(3): ReLU(inplace=True)
)
(1): Sequential(
(0): AdaptiveAvgPool2d(output_size=2)
(1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(3): ReLU(inplace=True)
)
(2): Sequential(
(0): AdaptiveAvgPool2d(output_size=3)
(1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(3): ReLU(inplace=True)
)
(3): Sequential(
(0): AdaptiveAvgPool2d(output_size=6)
(1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(3): ReLU(inplace=True)
)
)
(cbr_deepsup): Sequential(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv_last): Sequential(
(0): Conv2d(4096, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): SynchronizedBatchNorm2d(512, eps=1e-05, momentum=0.001, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Dropout2d(p=0.1, inplace=False)
(4): Conv2d(512, 150, kernel_size=(1, 1), stride=(1, 1))
)
(conv_last_deepsup): Conv2d(512, 150, kernel_size=(1, 1), stride=(1, 1))
(dropout_deepsup): Dropout2d(p=0.1, inplace=False)
)
(2): NLLLoss()
)
I appreciate any help! |
st180308 | Can you share the model itself? A simple test around interpolate (using the nightly version of PyTorch) seems to indicate that it works
def test_interpolate(x):
    return nn.functional.interpolate(x, size=(224, 224), mode='bilinear', align_corners=False)

# this is successful
x = torch.jit.trace(test_interpolate, (torch.randn(1, 3, 224, 224)))
print(x.graph)
If the bug persists, you might have better luck using torch.jit.script around the area that calls interpolate, for example:
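A minimal sketch of that suggestion (the function name and sizes are made up for illustration):
import torch
import torch.nn as nn

@torch.jit.script
def resize_bilinear(x: torch.Tensor, height: int, width: int) -> torch.Tensor:
    # scripted rather than traced, so the size argument stays a real value
    return nn.functional.interpolate(x, size=[height, width], mode='bilinear', align_corners=False)

out = resize_bilinear(torch.randn(1, 150, 56, 56), 224, 224)  # -> [1, 150, 224, 224] |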
st180309 | Hi,
currently I have to convert a model and am struggling to script a class that contains a parent member of the same class type.
As
@torch.jit.script
class Tree(object):
    def __init__(self):
        self.parent: List[Tree] = []
results in
RuntimeError:
Assignment to attribute 'parent' cannot be of a type that contains class '__torch__.Tree '.
Classes that recursively contain instances of themselves are not yet supported:
I don't think this issue has been tackled in TorchScript yet, so
can someone please give me a hint on how to work around this problem?
So far, I have tried:
class Model(Module):
    def __init__(self, parent: Any) -> None:
        super().__init__()
        self.parent = [parent]
    def do_something(self, x: Tensor) -> Tensor: return x
    def forward(self, x: Tensor) -> Tensor:
        src = self
        if torch.jit.isinstance(self.parent[0], Model):
            src = self.parent[0]
            print('using parent')
        return src.do_something(x)

p = Model(None)
m = Model(p)
m(torch.rand(1, 1))
xxx = torch.jit.script(m)
but not really successful… 'Model' has no attribute 'parent' |
st180310 | Hi, in my torch.jit.script code I would like to convert a list to a tuple, but it does not seem to be easy.
The size of the list is fixed, although not in a way that the JIT can see, which seems to be the problem.
On Python 3.8 with PyTorch 1.9.1
result = []
val = torch.full((1,1),1.)
result.append(val) # size() = 1
return tuple(result)
… fails with RuntimeError: cannot statically infer the expected size of a list in this context:
same as
val = torch.full((1,1),1.)
result:List[Tensor] = val * 1
return tuple(result)
…fails with RuntimeError: cannot statically infer the expected size of a list in this context:
Can someone please tell me how to do this in torch.jit.script (not trace!)
Thx |
st180311 | as error suggests, tuple sizes are static(hardcoded) when “compiled”, so you have to do something like
return (result[0],result[1])
and there may be a need to annotate this as Tuple[Tensor, Tensor] in some contexts, for example:
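A minimal sketch under that constraint (assuming the list is known to hold exactly two tensors):
import torch
from torch import Tensor
from typing import Tuple

@torch.jit.script
def make_pair() -> Tuple[Tensor, Tensor]:
    result = []
    val = torch.full((1, 1), 1.)
    result.append(val)
    result.append(val * 2.)
    # index explicitly instead of calling tuple(result), so the size is static
    return (result[0], result[1]) |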
st180312 | Ok, so that's what is meant by "statically inferred". Coming from the C++ side, I would have expected some counterpart of the C++
result[4] = {val}
retval = tuple(result)
like
result:List[Tensor,4] = val * 4
Thank you for your help |
st180313 | If you're comparing to C++, a List is like a std::vector<Tensor> of dynamic size, hence you cannot initialize specialized N-tuples from it without runtime checks and metaprogramming anyway.
Though I agree that TorchScript is lacking in some aspects, in this case the workaround is to return a list. |
st180314 | I am using a library that calls torch.utils.cpp_extension.load() and I believe I am having trouble specifying the correct compiler. I receive a compiler warning:
!! WARNING !!
!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See pytorch/CONTRIBUTING.md at master · pytorch/pytorch · GitHub 1 for help
with compiling PyTorch from source.
!!!
!! WARNING !!
I used conda to build PyTorch, so I believe the incorrect system default compiler is being called. So I have tried assigning the CXX environment variable per the documentation:
To compile the sources, the default system compiler (c++ ) is used, which can be overridden by setting the CXX environment variable.
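A minimal sketch of setting it from Python before the extension is compiled (assuming g++ is the compiler PyTorch was built with; the extension name and sources are hypothetical):
import os
os.environ["CXX"] = "g++"  # must be set before the extension is built

from torch.utils.cpp_extension import load
# my_ext = load(name="my_ext", sources=["my_ext.cpp"])  # hypothetical extension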
However I do not think I am specifying the correct path to the compiler that was used to build pytorch with conda on my linux system. Is this diagnosis on the correct path? If so, how do I determine the path to the g++ compiler used to build pytorch? |
st180315 | I have a model that takes a Python class as a parameter to forward():
class Inputs:
    def __init__(self, a: Tensor, b: Tensor, c: Tensor):
        self.a = a
        self.b = b
        self.c = c

class Model(nn.Module):
    def forward(self, inputs: Inputs):
        ...
After converting it to a ScriptModule and saving it to a file, I would like to load and use this model in a C++ program.
However, I am not sure what the equivalent of the Inputs class would be on the C++ side… Any pointers to docs/examples that show how to do this? |
st180316 | Hey there!
I am trying to implement a model where each 'pixel' of the input should run through its own GRU cell (instead of using the feature input of the GRU by flattening the image).
This forces me to loop over a ModuleList of GRUs to forward-pass each pixel.
I am trying to speed it up using TorchScript, but without success. : (
I want to use this custom module during training.
This is my module:
class ParallelGruList(nn.Module):
    def __init__(self, input_size, hidden_size, num_grus, device):
        super().__init__()
        self.num_grus = num_grus
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.device = device  # passing the device for torch script
        self.grus = nn.ModuleList(self.get_gru() for _ in range(self.num_grus)).to(device)

    def get_gru(self):
        return nn.GRU(input_size=self.input_size,
                      hidden_size=self.hidden_size,
                      num_layers=1,
                      batch_first=True,
                      bias=False)

    def forward(self, x):
        batch_size = x.shape[0]
        sequence_length = x.shape[1]
        output = torch.zeros((batch_size, sequence_length, 10, self.num_grus), dtype=torch.float32, device=self.device)
        for i, gru in enumerate(self.grus):
            output[:, :, :, i], _ = gru(x[:, :, :, i], None)
        return output
And this is how I test it:
def jit_gru(state, grus):
    start = time.time()
    r = grus(state)
    return time.time() - start

data = torch.rand(1, 100, 3, 14 * 14, dtype=torch.float32, device=device)
print('init gru')

if __name__ == '__main__':
    print('\n')
    data = torch.rand(1, 100, 3, 14 * 14, dtype=torch.float32, device=device)
    print('init gru')
    grus = ParallelGruList(input_size=3, hidden_size=10, num_grus=14 * 14, device=device).to(device)
    print('run test')
    h = []
    for i in range(10):
        h.append(jit_gru(data, grus))
    print('exe time smart jit', np.mean(h))
Running it like this will generate the following output:
init gru
run test
exe time smart jit 0.2266002893447876
To speed things up I tried to initialize the module by:
grus = torch.jit.script(ParallelGruList(input_size=3, hidden_size=10, num_grus=14 * 14, device=device).to(device))
But this makes it a little slower:
exe time smart jit 0.296830940246582
I also tried to make the ModuleList a constant but this leads to the error:
UserWarning: 'grus' was found in ScriptModule constants, but it is a non-constant submodule. Consider removing it.
warnings.warn("'{}' was found in ScriptModule constants, "
I also tried to decorate the get_gru() method with @torch.jit.export, but that caused the error:
Class GRU does not have an __init__ function defined:
I also tried a couple of other things that usually made the whole thing even slower. I've read the JIT reference guide and the documentation of torch.jit.script, but I could not find anything that helped.
At this point I am thoroughly clueless.
Could someone point me in the right direction or let me know what I am missing?
Best,
Constantin |
st180317 | Questions and Help
I am writing an AlbertTokenizer-based preprocessor which can be serialized using TorchScript.
One of the preprocessor steps is Unicode normalization (reference implementation in HF's AlbertTokenizer):
outputs = unicodedata.normalize("NFKD", outputs)
However, when I try to serialize the preprocessor with this code, I get the following error:
Python builtin <built-in function normalize> is currently not supported in Torchscript:
File "<ipython-input-9-dcefac57d541>", line 53
outputs = text
if not self.keep_accents:
outputs = unicodedata.normalize("NFKD", outputs)
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
# outputs = "".join([c for c in outputs if not unicodedata.combining(c)])
return outputs
What is the recommended way to perform Unicode Normalization while making sure the code is serializable? |
st180318 | Solved by Michael_Suo in post #4
This tutorial is the best place to start: https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html. Of course it is not about unicode normalization; you may have to find a cpp library that implements it (similar to what is done with opencv in the tutorial) |
st180319 | The best way to do this is to use the custom operator API to bind in a unicode normalization op. |
st180320 | Michael_Suo:
custom operator API
Thank you @Michael_Suo for your response. Is there any reference example that you can share for writing such custom operator? |
st180321 | This tutorial is the best place to start: https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html. Of course it is not about unicode normalization; you may have to find a cpp library that implements it (similar to what is done with opencv in the tutorial) |
st180322 | Hi All,
I've been trying to call register_full_backward_hook on a jitted function, yet it's never called at all. I did notice there's an issue open about this (Support backward hooks in JIT · Issue #51969 · pytorch/pytorch · GitHub).
Are there any work-around solution to this?
Any help would be greatly appreciated! |
st180323 | Hi All,
I was just wondering how torch.jit.script can be used on functions that take nn.Module as an argument as well as Tensor?
For example, I have an example script below which takes the Laplacian of a given function. The laplacian_jit function takes 2 arguments: the function, net, and the input x (of which we are taking the Laplacian). However, when running this it fails with the following error,
RuntimeError:
Unknown type name 'nn.Module':
File "test_jit_func_with_module.py", line 31
@torch.jit.script
def laplacian_jit(net: nn.Module, xs: Tensor):
~~~~~~~~~ <--- HERE
xis = [xi.requires_grad_() for xi in xs.flatten(start_dim=1).t()]
xs_flat = torch.stack(xis, dim=1)
It seems that JIT doesn’t support passing nn.Module as an argument type? Is there a way to define the type such that I can pass an nn.Module type object into the jitted-function?
The example script is below,
Any help will be greatly appreciated!
Thank you!
import torch
import torch.nn as nn
from typing import List, Optional
from torch import Tensor

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, x):
        return x.pow(2).sum(dim=-1)

net = Net()

@torch.jit.script
def sumit(inp: List[Optional[torch.Tensor]]):
    elt = inp[0]
    if elt is None:
        raise RuntimeError("blah")
    base = elt
    for i in range(1, len(inp)):
        next_elt = inp[i]
        if next_elt is None:
            raise RuntimeError("blah")
        base = base + next_elt
    return base

@torch.jit.script
def laplacian_jit(net: nn.Module, xs: Tensor):
    xis = [xi.requires_grad_() for xi in xs.flatten(start_dim=1).t()]
    xs_flat = torch.stack(xis, dim=1)
    ys = net(xs_flat.view_as(xs))
    ones = torch.ones_like(ys)
    grad_outputs = torch.jit.annotate(List[Optional[Tensor]], [])
    grad_outputs.append(ones)
    result = torch.autograd.grad([ys], [xs_flat], grad_outputs, retain_graph=True, create_graph=True)
    dy_dxs = result[0]
    if dy_dxs is None:
        raise RuntimeError("blah")
    generator_as_list = [dy_dxs[..., i] for i in range(len(xis))]
    lap_ys_components = [torch.autograd.grad([dy_dxi], [xi], grad_outputs, retain_graph=True, create_graph=False)[0]
                         for xi, dy_dxi in zip(xis, generator_as_list)]
    lap_ys = sumit(lap_ys_components)
    return lap_ys

x = torch.randn(4096, 2)
laplacian_jit(x) |
st180324 | Hi All,
I just wanted to ask a brief follow-up question on torch.jit.script being applied to custom torch.autograd Functions. This has been asked before, and the answer has been that it's not currently supported.
Is there any update on this? I did see this issue here, https://github.com/pytorch/pytorch/issues/22329 3.
I did find this article here as a potential solution. Is this the only way to script a custom function within PyTorch as of 1.10?
Thank you!
import torch
import torch.nn as nn
from torch.autograd import Function

class Square(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_y):
        x, = ctx.saved_tensors
        return SquareBackward.apply(x, grad_y)

class SquareBackward(Function):
    @staticmethod
    def forward(ctx, x, grad_y):
        ctx.save_for_backward(x, grad_y)
        return grad_y * 2 * x

    @staticmethod
    def backward(ctx, grad_grad_x):
        x, grad_y = ctx.saved_tensors
        return 2 * grad_y * grad_grad_x, 2 * x * grad_grad_x

class CustomFunc(nn.Module):
    def __init__(self):
        super(CustomFunc, self).__init__()

    def forward(self, x):
        return Square.apply(x)

func = CustomFunc()
x = torch.randn(10)
y = func(x)
print("x: ", x)
print("y: ", y)
print("x^2: ", x.pow(2))  # check it all works...

jit_func = torch.jit.script(func)  # jit function, fails here |
st180325 | I find that the memory usage is really high when I load a CUDA model. I did some experiments and found that if I load a traced model with torch::jit::load, it cannot really be released if it is a CUDA model.
Here is my testing code:
class model
{
public:
    model();
    torch::jit::script::Module module;
    void load()
    {
        module = torch::jit::load("/home/gino/textDetectorCuda.pt"); // a cuda model, I also prepared a cpu one.
    }
};

int main()
{
    {
        //stage.1 initialization ( not yet load model )
        cout << "initialization ..." << endl;
        unique_ptr<model> myModel;
        myModel = make_unique<model>();
        cin.get();

        //stage.2 load a model
        cout << "load ..." << endl;
        myModel->load();
        cin.get();

        //stage.3 release the unique_ptr
        cout << "try reset ... " << endl;
        myModel.reset();
        cin.get();
    }
    //stage.4 outside the lifecycle
    cout << "try outside ... " << endl;
    cin.get();
    cout << "bye~" << endl;
    return 0;
}
The testing code contains 4 steps: 1. run the program and do nothing, 2. load a model via JIT, 3. reset the unique_ptr which contains the JIT module, 4. go outside the lifecycle (I assume the ptr would automatically be destroyed there, so even if I handled the ptr incorrectly, it would still be released here).
I use the following Linux command to check the memory usage:
free -m
and I watch the "available" value to check whether the memory is freed or not.
I used JIT to trace EasyOCR's text detection model, then saved a CPU and a CUDA version. However, the model itself doesn't matter for this test.
Here is the testing result
available memory of CPU model
stage.0 (before running the program ): 6650
stage.1 (run the program and do nothing) : 6593
stage.2 (load the model) : 6515
stage.3 (release the outer class): 6590
stage.4 (before end the program): 6594
As you can see, the memory usage is released successfully when I use a CPU model. However, if I load a CUDA model the usage is really huge and behaves weirdly.
available memory of CUDA model
stage.0 (before running the program ): 6545
stage.1 (run the program and do nothing) : 6459
stage.2 (load the model) : 5344
stage.3 (release the outer class): 5342
stage.4 (before end the program): 5340
Now you see that even though I reset the outer class, the memory usage is not released. The CUDA model is too huge to ignore, so I'm looking for ways to release it once the model has finished its job. I found some discussion saying that "cudaDeviceReset()" can reset everything in CUDA and then free the memory usage. However, I'm curious whether the usage in RAM can also be released or not.
So, how do I release the jit::Module correctly? |
st180326 | Are we talking about CPU RAM or GPU RAM?
Upon first use of the GPU, PyTorch will initialize CUDA. This part uses (quite a bit of) memory (both GPU and CPU) that won't be released until PyTorch exits. You could separate this in your analysis by creating a small cuda tensor before loading any model.
For the “user allocated CUDA ram”, using c10::emptyCache() probably is a good idea, also using the caching allocator’s statistics might give you more insight.
Best regards
Thomas |
st180327 | I don’t use JIT for the training of my model, but only during exporting. I have shape asserts and random sampling within my code so JIT complains that:
assert, boolean conversion is not supported
the model outputs do not match across multiple runs - because of randomness
I would like to change the behavior of my forwards depending on whether they are exported or not. Is there a method or global variable that allows me to do this kind of check? I imagine something like:
if not torch.jit.is_jit_active():
    assert x.shape == (2, 2)
I would use something similar for sampling since I’m only interested in modes of the distribution during exporting. |
st180328 | Solved by pkubik in post #2
Just accidentally stumbled on the right function torch.jit.is_tracing().
My linter says it’s not exported, so I’m not sure whether it’s safe to use but it says the same for nn.Parameter so there is probably nothing to worry about. Linters never liked Pytorch anyway. |
st180329 | Just accidentally stumbled on the right function torch.jit.is_tracing().
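For example, the guard from the question becomes something like this (a minimal sketch; the module here is made up):
import torch
import torch.nn as nn

class Sampler(nn.Module):
    def forward(self, x):
        if not torch.jit.is_tracing():
            # only runs in eager mode; skipped while tracing for export
            assert x.shape[-1] == 2
        return x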
My linter says it’s not exported, so I’m not sure whether it’s safe to use but it says the same for nn.Parameter so there is probably nothing to worry about. Linters never liked Pytorch anyway. |
st180330 | Hello!
I was thinking that it could be useful to save metadata about a model when you export it via TorchScript. Examples of such metadata could be the git hash that the model was trained under.
Some example code:
import torch
import torch.nn as nn

class Mymodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(10, 10, 1, 1, 0, bias=True)
        self.accuracy = 0

    @torch.jit.export
    def model_info(self):
        return {
            "git_hash": "*git-hash-info*",
            "created": "*timestamp*",
            "accuracy": f"{self.accuracy}",
            "pytorch_version": "*version*",
        }

    def forward(self, x):
        x = self.conv(x)
        return x

model = Mymodel()
model.accuracy = 10
scripted_model = torch.jit.script(model)

model_path = 'testmodel.pt'
scripted_model.save(model_path)

loaded_model = torch.jit.load(model_path)
print(loaded_model)
print(loaded_model.model_info())
I think the practice could really help track your models in many systems. I haven't seen the technique used before and would love to hear arguments against it or suggested improvements.
What kind of information would be useful to embed in a torchscript model apart from what I’ve written above? |
st180331 | Hi! Maybe this isn’t the right place for this question. I am trying to write a PyBind11 Torch extension that takes a ScriptModule as input. I have some function like this:
torch::jit::script::Module preprocess(const torch::jit::script::Module& mod) {
    torch::jit::script::Module new_mod(mod._ivalue()->name() + "_txp");
    return new_mod;
}

void add_submodule(pybind11::module_& m) {
    m.def("preprocess",
          &preprocess,
          "Preprocesses the TorchScript module",
          "model"_a);
}
This compiles fine, but when I call the function from Python I get this error:
TypeError: func_name(): incompatible function arguments. The following argument types are supported:
1. (model: torch::jit::Module) -> torch::jit::Module
Invoked with: <torch._C.ScriptModule object at 0x7fc27d657770>
How do I resolve this? |
st180332 | Solved by Benjamin_Bolte1 in post #5
Ok, turns out I was missing some important -DPYBIND11_ directives. I’ve created a minimal example repo here for posterity’s sake: GitHub - codekansas/torchscript-example: Example CMake project for TorchScript |
st180333 | More context, this works fine:
torch::Tensor preprocess(const torch::Tensor& mod) { return mod; }
PYBIND11_MODULE(MODULE_NAME, m) { m.def("preprocess", &preprocess); }
Here’s my CMake file:
cmake_minimum_required(VERSION 3.12 FATAL_ERROR)
project(txp-cpp-backend LANGUAGES CXX C)
set(LIB_NAME cpp)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wformat")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror=cpp")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wformat-security")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fstack-protector")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DMODULE_NAME=lib${LIB_NAME}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 -Wno-reorder")
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
find_package(PythonLibs REQUIRED)
find_library(TORCH_PYTHON_LIBRARY torch_python
PATHS "${TORCH_INSTALL_PREFIX}/lib")
message(STATUS "TORCH_PYTHON_LIBRARY: ${TORCH_PYTHON_LIBRARY}")
add_library(${LIB_NAME} SHARED txp.cpp)
target_compile_features(${LIB_NAME} PRIVATE cxx_std_14)
set_target_properties(${LIB_NAME} PROPERTIES OUTPUT_NAME ${LIB_NAME})
set_target_properties(${LIB_NAME} PROPERTIES POSITION_INDEPENDENT_CODE ON)
set_target_properties(${LIB_NAME}
PROPERTIES LINK_FLAGS "-Wl,-rpath,${CMAKE_INSTALL_RPATH}")
target_compile_options(
${LIB_NAME} PUBLIC -Wall -Wno-sign-compare -Wno-unused-function
-Wno-unknown-pragmas)
target_include_directories(
${LIB_NAME}
PUBLIC ${TORCH_INCLUDE_DIRS}
PUBLIC ${PYTHON_INCLUDE_DIRS})
target_link_libraries(
${LIB_NAME}
PUBLIC ${PYTHON_LIBRARIES}
PUBLIC ${TORCH_LIBRARIES}
PUBLIC ${TORCH_PYTHON_LIBRARY})
And this guy:
>>> import torch; torch.__version__
'1.11.0.dev20211121'
When I try to add #include <torch/csrc/jit/python/script_init.h> I get:
fatal error: torch/csrc/generic/Storage.h: No such file or directory
which is indeed missing from the Torch install path |
st180334 | What does the Python code look like? What I suspect is happening is: torch.jit.ScriptModule is a Python wrapper; the actual pybinded type is actually stored in the attribute `_c` on it. |
st180335 | Something like this:
script_model = torch.jit.load(cfg.model_file, map_location="cpu")
preprocessed_model = cpplib.preprocess(script_model._c)
Yep, I am using the ._c guy rather than the original module |
st180336 | Ok, turns out I was missing some important -DPYBIND11_ directives. I’ve created a minimal example repo here for posterity’s sake: GitHub - codekansas/torchscript-example: Example CMake project for TorchScript 2 |
st180337 | Hi! I’ve installed PyTorch using the standard Conda route (showing this version on my machine: pytorch=1.10.0=py3.8_cuda11.3_cudnn8.2.0_0)
I am trying to include this file:
/path/to/conda/env/lib/python3.8/site-packages/torch/include/torch/csrc/jit/python/pybind_utils.h
However, when I try to compile my project, it seems to be missing this: torch/csrc/distributed/rpc/py_rref.h
It seems as if the wheel was built with USE_DISTRIBUTED=1 but then the files are not packaged. What is the correct thing to do in this case? |
st180338 | Could you describe your use case a bit more, please?
Where do you want to include this file and which application claims the py_rref.h is missing?
It sounds as if you would like to build another application using the pre-built conda binaries? |
st180339 | Ah, I solved this by setting some flags in various places. Now getting a different error here → PyBind11 doesn't recognize `torch._C.ScriptModule` as `torch::jit::Module` 1 |
st180340 | I tried to load a model in C++ with JIT, but the following error was raised.
The error is raised in this line:
model = torch::jit::load("Resource/model.pt");
Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
CPU: registered at aten\src\ATen\RegisterCPU.cpp:5925 [kernel]
BackendSelect: registered at aten\src\ATen\RegisterBackendSelect.cpp:596 [kernel]
Named: registered at ..\..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradCPU: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradCUDA: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradXLA: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradNestedTensor: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradPrivateUse1: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradPrivateUse2: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
AutogradPrivateUse3: registered at ..\..\torch\csrc\autograd\generated\VariableType_0.cpp:9273 [autograd kernel]
Tracer: registered at ..\..\torch\csrc\autograd\generated\TraceType_0.cpp:10499 [kernel]
Autocast: fallthrough registered at ..\..\aten\src\ATen\autocast_mode.cpp:250 [backend fallback]
Batched: registered at ..\..\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ..\..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
Exception raised from reportError at ..\..\aten\src\ATen\core\dispatch\OperatorEntry.cpp:397 (most recent call first):
00007FFB821E0F9200007FFB821E0F30 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]
00007FFB821E0C0E00007FFB821E0BC0 c10.dll!c10::detail::torchCheckFail [<unknown file> @ <unknown line number>]
00007FFB3E0C357E00007FFB3E0C33C0 torch_cpu.dll!c10::impl::OperatorEntry::reportError [<unknown file> @ <unknown line number>]
00007FFB3E55517D00007FFB3E4D6FA0 torch_cpu.dll!at::native::mkldnn_sigmoid_ [<unknown file> @ <unknown line number>]
00007FFB3E749A6900007FFB3E72BAB0 torch_cpu.dll!at::zeros_outf [<unknown file> @ <unknown line number>]
00007FFB3E74625D00007FFB3E72BAB0 torch_cpu.dll!at::zeros_outf [<unknown file> @ <unknown line number>]
00007FFB3E53A67900007FFB3E4D6FA0 torch_cpu.dll!at::native::mkldnn_sigmoid_ [<unknown file> @ <unknown line number>]
00007FFB3E68127700007FFB3E681150 torch_cpu.dll!at::empty_strided [<unknown file> @ <unknown line number>]
00007FFB3E3469A300007FFB3E346630 torch_cpu.dll!at::native::to_dense_backward [<unknown file> @ <unknown line number>]
00007FFB3E34658100007FFB3E346470 torch_cpu.dll!at::native::to [<unknown file> @ <unknown line number>]
00007FFB3E94A9CD00007FFB3E72BAB0 torch_cpu.dll!at::zeros_outf [<unknown file> @ <unknown line number>]
00007FFB3E913B0200007FFB3E72BAB0 torch_cpu.dll!at::zeros_outf [<unknown file> @ <unknown line number>]
00007FFB3E99239400007FFB3E985C80 torch_cpu.dll!at::is_custom_op [<unknown file> @ <unknown line number>]
00007FFB3E9E4E8B00007FFB3E9E4DE0 torch_cpu.dll!at::Tensor::to [<unknown file> @ <unknown line number>]
00007FFB400693F000007FFB400680A0 torch_cpu.dll!torch::jit::Unpickler::readInstruction [<unknown file> @ <unknown line number>]
00007FFB4006BBE800007FFB4006BA60 torch_cpu.dll!torch::jit::Unpickler::run [<unknown file> @ <unknown line number>]
00007FFB40066CF200007FFB40066CC0 torch_cpu.dll!torch::jit::Unpickler::parse_ivalue [<unknown file> @ <unknown line number>]
00007FFB4003A0E500007FFB40039D00 torch_cpu.dll!torch::jit::readArchiveAndTensors [<unknown file> @ <unknown line number>]
00007FFB40039CD300007FFB40038460 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFB40036B3600007FFB400247C0 torch_cpu.dll!torch::jit::hasGradientInfoForSchema [<unknown file> @ <unknown line number>]
00007FFB4003868200007FFB40038460 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFB4003810400007FFB40037FE0 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FF6C222A2E500007FF6C222A270 BgRemover.exe!U2Net::U2Net [D:\MyWorks\C++\BgRemover\BgRemover\U2Net.cpp @ 75]
00007FF6C222509100007FF6C2224E00 BgRemover.exe!SegmentationNextDetection [D:\MyWorks\C++\BgRemover\BgRemover\BgRemover.cpp @ 53]
00007FF6C2225B0900007FF6C2225B00 BgRemover.exe!main [D:\MyWorks\C++\BgRemover\BgRemover\BgRemover.cpp @ 117]
00007FF6C222B57400007FF6C222B468 BgRemover.exe!__scrt_common_main_seh [D:\agent\_work\13\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 288]
00007FFB97AD703400007FFB97AD7020 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]
00007FFB98FA265100007FFB98FA2630 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>] |
st180341 | Hey, can you describe the solution? The link you provided is not working anymore. |
st180342 | This is the error I'm getting on Android while loading the model.
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.android.example.cataractdetectionapp/com.android.example.cataractdetectionapp.InferenceActivity}:
com.facebook.jni.CppException: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend.This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Vulkan, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]
Vulkan: registered at ../aten/src/ATen/native/vulkan/ops/Factory.cpp:47 [kernel]
BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:665 [kernel]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:22 [kernel]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:51 [backend fallback]
AutogradLazy: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:55 [backend fallback]
AutogradXPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradMLC: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:59 [backend fallback]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
I used the below code for loading the module:
Module module = LiteModuleLoader.load(assetFilePath(getApplicationContext(), "Model.ptl"));
The dependencies I’m using for Pytorch are as follows:
implementation 'org.pytorch:pytorch_android_lite:1.10.0'
implementation 'org.pytorch:pytorch_android_torchvision:1.10.0' |
st180343 | Hello,
We are exploring the possibility of graph surgery on a jit scripted model.
The scripted model has a graph object. The graph object has blocks, nodes and variables.
If the graph has 10 nodes, is it possible to create two graph objects with 5 nodes each? |
st180344 | This is indeed possible, although if you are looking to do some lightweight graph transformations I would recommend torch.fx |
st180345 | Hello @Michael_Suo, thanks for your idea. Unfortunately we are bound to work on the JIT scripted model. So we can’t use fx for our purposes.
Do you think this is possible without fx using only the JIT API?
Thanks & Best regards,
RB |
st180346 | Certainly it's possible; you can look at some of the passes in torch/csrc/jit/passes as reference.
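A minimal sketch of inspecting a scripted module's graph from the Python side (just node iteration, not the actual split into two graphs):
import torch
import torch.nn as nn

scripted = torch.jit.script(nn.Sequential(nn.Linear(4, 4), nn.ReLU()))
graph = scripted.graph
for node in graph.nodes():
    print(node.kind(), [inp.debugName() for inp in node.inputs()]) |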
st180347 | Hi @Michael_Suo, I am thinking about how to automatically add some fork/wait calls into the graph to enable inter-op parallelism. Do jit-trace/jit-script/FX already have this kind of feature, or is it in the plan? |
st180348 | I have a torchscripted module which I am testing out for prediction. Each instance is a dictionary of tensors. The first instance I pass into the torchscripted module correctly runs through the model and generates an acceptable output. The second instance passed to the same torchscript object causes the error below. This only occurs when running on a CUDA device, not on CPU. I have ensured it's not a problem with the data by passing in the exact same instance twice and observing the error get thrown on the second forward pass. Any idea what this could be?
Traceback (most recent call last):
File "predict.py", line 60, in <module>
output_greedy = script(batch)
File "xxx/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
RuntimeError: default_program(22): error: extra text after expected end of number
1 error detected in the compilation of "default_program".
nvrtc compilation failed:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}
template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}
extern "C" __global__
void fused_neg_add_mul(float* t0, float* aten_mul) {
{
if (512 * blockIdx.x + threadIdx.x<67 ? 1 : 0) {
float v = __ldg(t0 + (512 * blockIdx.x + threadIdx.x) % 67);
aten_mul[512 * blockIdx.x + threadIdx.x] = ((0.f - v) + 1.f) * -1.000000020040877e+20.f;
}
}
} |
st180349 | Are you scripting the same model in the same file and the second invocation of torch.jit.script(model) raises this issue?
If so, do you see the error on any model or just your custom one? Could you also post the output of python -m torch.utils.collect_env here, please? |
st180350 | Are you scripting the same model in the same file and the second invocation of torch.jit.script(model) raises this issue?
I tried the following and got the same error: scripting in the same file and then running the two prediction calls, and scripting in the file, saving to disk, reloading in the same file, and running the two inference calls. I was originally scripting with module.to_torchscript() in PyTorch Lightning, so I tried switching to invoking torch.jit.script as well. Same error.
If so, do you see the error on any model or just your custom one? Could you also post the output of python -m torch.utils.collect_env here, please?
Collecting environment information...
PyTorch version: 1.8.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 460.80
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.3.5
[pip3] torch==1.8.1
[pip3] torch4u==0.0.1
[pip3] torchmetrics==0.3.2
[conda] Could not collect |
st180351 | I tried a basic example with another module and it doesn't seem to have this issue. However, the module I am torchscripting is much more complex. Any advice on how one might proceed? I additionally tried running on an A100 with CUDA 11 and hit the same issue, where it fails on the second attempt at a forward pass.
import torch
from torch import nn
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
module = NeuralNetwork().to(device='cuda')
script = torch.jit.script(module)
input = torch.rand((1,28,28)).to(device='cuda')
out = script(input)
out = script(input)
print(out) |
st180352 | I’m getting the same issue.
I load my script model and simply loop over images. The 2nd image results in this error every time (even when it’s the same image). Is there a way forward?
I tried the example from @AndriyMulyar and the error does not occur. |
st180353 | The second loop doing this is because this is when the JIT fusers try to produce an optimized kernel by default.
This could be your environment if all JIT compilation fails, or a bug in the fuser.
Maybe you can isolate the issue by
making sure something works (from a fuser tutorial or so),
if it does, try to reduce your example by leaving out half of the computation to see if it still happens. If you do this a few times, you can replace inputs by random tensors of the same dtype, shape and stride. This might get us a shareable self-contained example.
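If the fuser is the culprit, one quick sanity check (my suggestion, using an internal and version-dependent flag, so treat it only as a debugging sketch) is to disable the TensorExpr fuser and rerun the two calls, reusing script and input from the earlier snippet in this thread:
import torch
# Internal switch: disable the NNC/TensorExpr fuser so no bespoke CUDA kernel
# gets compiled on the second invocation.
torch._C._jit_set_texpr_fuser_enabled(False)
out1 = script(input)
out2 = script(input)  # previously failed here with the nvrtc compilation error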
Best regards
Thomas |
st180354 | What is the best workaround in Torchscript (PyTorch 1.9.1) for scripting
cache = defaultdict(lambda: None)
, which results in UnsupportedNodeError: Lambda aren’t supported:
and
cache = defaultdict(self.my_callable)
, which results in RuntimeError: method cannot be used as a value:
? |
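One possible workaround (a sketch of my own, not an official recommendation) is to replace the defaultdict with a plainly typed Dict and an explicit default lookup, which TorchScript can compile:
from typing import Dict, Optional
import torch

@torch.jit.script
def lookup(cache: Dict[str, Optional[torch.Tensor]], key: str) -> Optional[torch.Tensor]:
    # Emulates defaultdict(lambda: None): missing keys fall back to None explicitly.
    if key in cache:
        return cache[key]
    return None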
st180355 | For example a simple jit’ed mse:
@torch.jit.script
def jit_mse(input, target):
return ((input - target)**2).mean()
shows the CUDA kernel fused_sub_pow in the profiler; however, it doesn't change the original graph (which still contains aten::sub and aten::pow).
This seems to make the graph property a bit useless, or is this not its intended usage?
Cheers |
st180356 | The graph is the unoptimized representation, see graph_for(input, target) for the “full deal”. So the graph property is more “how did TorchScript understand my program” than while graph_for is “what will TorchScript run”.
I’ve tried to summarize a bit of how that happens in a couple of articles on my blog: Optimizing functions in the JIT 1 and Runtime overview 1. Note that none of that is official information, so treat anything on my blog as “random person on the internet says…”.
Best regards
Thomas |
st180357 | Thank you for the quick reply @tom and also for the great articles!
It looks like torch.jit.last_executed_optimized_graph() is what I have been looking for.
Thanks |
st180358 | I’m using pytorch=1.3.1 to load model saved by pytorch=1.7.1 and received the following error:
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:131)
Is there any way to load higher version model? Thanks for your help. |
st180359 | Solved by ptrblck in post #2
PyTorch doesn’t support forward compatibility (loading data from a newer into an older release), but backwards compatibility, so you would need to update PyTorch or save the model in 1.3.1. |
st180360 | PyTorch doesn’t support forward compatibility (loading data from a newer into an older release), but backwards compatibility, so you would need to update PyTorch or save the model in 1.3.1. |
st180361 | Hello! Apologies if this is a silly question, I’m very new to PyTorch - basically, I’m trying to save a traced model with torch.jit.save and then later load it with torch.jit.load and access the size of the input tensor, but it looks like the size information goes missing between saving and loading. If I run the snippet
import torch
import torchvision
model_name = "resnet18"
model = getattr(torchvision.models, model_name)(pretrained=True)
model = model.eval()
input_shape = [1, 3, 224, 224]
input_data = torch.randn(input_shape)
traced_model = torch.jit.trace(model, input_data)
print(list(traced_model.graph.inputs()))
torch.jit.save(traced_model, "resnet18.pth")
loaded_model = torch.jit.load("resnet18.pth")
print(list(loaded_model.graph.inputs()))
the first print (corresponding to the model before saving) prints out
[self.1 defined in (%self.1 : __torch__.torchvision.models.resnet.ResNet, %input.1 : Float(1:150528, 3:50176, 224:224, 224:1, requires_grad=0, device=cpu) = prim::Param()
), input.1 defined in (%self.1 : __torch__.torchvision.models.resnet.ResNet, %input.1 : Float(1:150528, 3:50176, 224:224, 224:1, requires_grad=0, device=cpu) = prim::Param()
)]
which shows some familiar numbers about the inputs, but the second print (corresponding to the loaded model) prints
[self.1 defined in (%self.1 : __torch__.torchvision.models.resnet.ResNet, %input.1 : Tensor = prim::Param()
), input.1 defined in (%self.1 : __torch__.torchvision.models.resnet.ResNet, %input.1 : Tensor = prim::Param()
)]
The input numbers have gone missing… Does anyone know why this happens? Or maybe there is some other way of saving a model on disk and loading it later such that the information about the inputs is still accessible? I'm asking because I'm working on an ahead-of-time compiler which relies on fetching the input shape from the model. |
st180362 | Did you find out the solution? I am struggling to find the shape for this as well. |
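A possible workaround, in case it helps (this is my own sketch, not a confirmed answer from this thread): since the shape annotations on the graph inputs are not preserved across save/load, the expected input shape can be stored alongside the model via the _extra_files mechanism and read back at load time.
import json
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
traced_model = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

# Stash the input shape next to the serialized module.
extra = {"input_shape.json": json.dumps([1, 3, 224, 224])}
torch.jit.save(traced_model, "resnet18.pth", _extra_files=extra)

# On load, ask for the same file back and parse it.
files = {"input_shape.json": ""}
loaded_model = torch.jit.load("resnet18.pth", _extra_files=files)
input_shape = json.loads(files["input_shape.json"])
print(input_shape)  # [1, 3, 224, 224]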
st180363 | We use PyTorch 1.7.0+cu101 to create the TorchScript model, and I successfully loaded the TorchScript model with PyTorch 1.7.0+cu101 for inference, but when I use PyTorch 1.8.1+cu101 for inference, there are some errors:
python3 torch_test_jit_1109.py
Traceback (most recent call last):
  File "torch_test_jit_1109.py", line 35, in <module>
    main(sys.argv[1], sys.argv[2])
  File "torch_test_jit_1109.py", line 17, in main
    e2e_model = torch.jit.load(script_model)
  File "/data3/users/suziyi/Miniconda3/lib/python3.8/site-packages/torch/jit/_serialization.py", line 161, in load
    cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: Class Namespace cannot be used as a value:
  Serialized File "code/__torch__/torchaudio/sox_effects/sox_effects.py", line 5
    effects: List[List[str]], channels_first: bool=True) -> Tuple[Tensor, int]:
    in_signal = __torch__.torch.classes.torchaudio.TensorSignal.__new__(__torch__.torch.classes.torchaudio.TensorSignal)
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _0 = (in_signal).__init__(tensor, sample_rate, channels_first, )
    out_signal = ops.torchaudio.sox_effects_apply_effects_tensor(in_signal, effects)
  'apply_effects_tensor' is being compiled since it was called from 'SynthesizerTrn.forward'
  Serialized File "code/__torch__/models_torchaudio_torchscript.py", line 41
    sr: Tensor, vol: Tensor) -> Tensor:
    _0 = __torch__.torchaudio.sox_effects.sox_effects.apply_effects_tensor
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _1 = torch.tensor([(torch.size(x))[-1]], dtype=None, device=None, requires_grad=False)
    x_lengths = torch.to(_1, 4, False, False, None)
We use torchaudio to change the audio speed, volume, and so on. Please help.
When we created the TorchScript model, the PyTorch version was 1.7.0+cu101 and the torchaudio version was 0.7.0. |
st180364 | Hi
I tried to perform jit.trace
trace_model = torch.jit.trace(net,x)
trace_model.save('out.pt')
on a net that contains
import torch
from torch import nn as nn
from torch.nn import functional as F
__all__ = ['swish_me', 'SwishMe', 'mish_me', 'MishMe',
'hard_sigmoid_me', 'HardSigmoidMe', 'hard_swish_me', 'HardSwishMe']
@torch.jit.script
def swish_jit_fwd(x):
return x.mul(torch.sigmoid(x))
@torch.jit.script
def swish_jit_bwd(x, grad_output):
x_sigmoid = torch.sigmoid(x)
return grad_output * (x_sigmoid * (1 + x * (1 - x_sigmoid)))
class SwishJitAutoFn(torch.autograd.Function):
""" torch.jit.script optimised Swish w/ memory-efficient checkpoint
Inspired by conversation btw Jeremy Howard & Adam Pazske
https://twitter.com/jeremyphoward/status/1188251041835315200
Swish - Described originally as SiLU (https://arxiv.org/abs/1702.03118v3)
and also as Swish (https://arxiv.org/abs/1710.05941).
TODO Rename to SiLU with addition to PyTorch
"""
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return swish_jit_fwd(x)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_tensors[0]
return swish_jit_bwd(x, grad_output)
def swish_me(x, inplace=False):
return SwishJitAutoFn.apply(x)
class SwishMe(nn.Module):
def __init__(self, inplace: bool = False):
super(SwishMe, self).__init__()
def forward(self, x):
return SwishJitAutoFn.apply(x)
While running
trace_model.save('out.pt')
it fails at ‘SwishJitAutoFn’ with following error message:
Traceback (most recent call last):
File "rk3566Test.py", line 269, in <module>
export_pytorch_model()
File "rk3566Test.py", line 56, in export_pytorch_model
trace_model.save('./sqnet_3566.pt')
File "/home/paul/rknn2/lib/python3.6/site-packages/torch/jit/__init__.py", line 1987, in save
return self._c.save(*args, **kwargs)
RuntimeError:
Could not export Python function call 'SwishJitAutoFn'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
/home/paul/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master/geffnet/activations/activations_me.py(63): forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(704): _slow_forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(720): _call_impl
/home/paul/pytorch/AdaBins/models/unet_adaptive_bins.py(73): forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(704): _slow_forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(720): _call_impl
/home/paul/pytorch/AdaBins/models/unet_adaptive_bins.py(93): forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(704): _slow_forward
/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py(720): _call_impl
/home/paul/rknn2/lib/python3.6/site-packages/torch/jit/__init__.py(1109): trace_module
/home/paul/rknn2/lib/python3.6/site-packages/torch/jit/__init__.py(955): trace
rk3566Test.py(54): export_pytorch_model
rk3566Test.py(269): <module>
Am I missing something with trace_model.save(), or does the SwishJitAutoFn code need to be revised to be trace_model.save() friendly?
Thank you very much for your help in advance. |
st180365 | Solved by ynjiun_wang in post #2
Found the solution: change the following line setting to True:
# Set to True if wanting to use torch.jit.script on a model
_SCRIPTABLE = True
in config.py |
st180366 | Found the solution: change the following line setting to True:
# Set to True if wanting to use torch.jit.script on a model
_SCRIPTABLE = True
in config.py 5 |
st180367 | Hey,
I have been having a hard time using the export to get it on mobile. First it said LAPACK was missing. We did a PyTorch build for CPU, and then it says the supported model version is 2 - 5 but ours is 7. I assume this means libtorch needs to be rebuilt to match the export version. Does that sound correct? |
st180368 | I can’t pass cuda stream as an arg to torch.jit.script function.
Following is code snippet.
Any help?
@torch.jit.script
def f(d, s: torch.Stream, i: int):
with torch.cuda.stream(s):
#do sth
@torch.jit.script
def r(tensors: List[Tensor], num_gpus: int):
futures: List[torch.jit.Future[Tensor]] = []
streams = [
torch.cuda.Stream(torch.device(f"cuda:{i}"))
for i in range(num_gpus)
]
for i in range(num_gpus):
futures.append(torch.jit.fork(f, tensors[i], streams[i], i))
results = [torch.jit.wait(fut) for fut in futures]
return results
Got RuntimeError:
f(Tensor d, Stream s, int i) -> (Tensor):
Expected a value of type 'Stream' for argument 's' but instead found type '__torch__.torch.classes.cuda.Stream (of Python compilation unit at: 0)'.
I also tried typing torch.cuda.Stream, and got:
'__torch__.torch.cuda.streams.Stream (of Python compilation unit at: 0x3e98e70)' object has no attribute or method 'cuda_stream'. Did you forget to initialize an attribute in __init__()?:
File "/data0/fumi/.pyenv/versions/venv374/lib/python3.7/site-packages/torch/cuda/streams.py", line 98
@property
def _as_parameter_(self):
return ctypes.c_void_p(self.cuda_stream)
~~~~~~~~~~~~~~~~ <--- HERE
'Stream.___as_parameter__getter' is being compiled since it was called from '__torch__.torch.cuda.streams.Stream' |
st180369 | Hi I’m trying to understand why I’m allowed to use a member value like this in forward but not in a custom scripted function.
import torch
import torch.nn as nn
class TestClass(nn.Module):
def __init__(self, y):
self.y = y
super().__init__()
def forward(self, x):
if self.y > 0:
return torch.zeros([1,1])
else:
return torch.ones([1,1])
# @torch.jit.script
def test(self, x):
if self.y > 0:
return torch.zeros([1,1])
else:
return torch.ones([1,1])
tester = TestClass(1)
torch.jit.script(tester) |
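Not an authoritative answer, but a sketch of the usual pattern under one assumption about the cause: decorating a bound method with @torch.jit.script compiles it as a free function, so self is not typed as the module there; marking the extra method with @torch.jit.export instead lets torch.jit.script compile it together with the module, where self.y is available.
import torch
import torch.nn as nn

class TestClass(nn.Module):
    def __init__(self, y: int):
        super().__init__()
        self.y = y

    def forward(self, x):
        if self.y > 0:
            return torch.zeros([1, 1])
        return torch.ones([1, 1])

    @torch.jit.export
    def test(self, x):
        if self.y > 0:
            return torch.zeros([1, 1])
        return torch.ones([1, 1])

tester = torch.jit.script(TestClass(1))
print(tester.test(torch.zeros(1)))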
st180370 | Can a model serialized by torch.jit.trace be used for training? In the documentation it seems the ScriptModule is intended for inference and the only training example is for torch::nn::Module. I wonder whether training with ScriptModule is officially supported and whether there is documentation on that.
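To make the question concrete, here is a minimal sketch of what I mean (the parameters of a traced-and-reloaded module do seem to be accessible and trainable from Python; whether this is officially supported is exactly what I'm asking):
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
traced = torch.jit.trace(model, torch.randn(1, 4))
torch.jit.save(traced, "linear.pt")

loaded = torch.jit.load("linear.pt")
optimizer = torch.optim.SGD(loaded.parameters(), lr=0.1)

x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = torch.nn.functional.mse_loss(loaded(x), y)
loss.backward()
optimizer.step()  # the loaded ScriptModule's parameters receive gradients and get updated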
Related:
Can ScriptModule be used for training?
Can ScriptModule be used for training? The reason we ask this question is that we found some source code related back propagation in torch.jit. https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/autodiff.cpp#L1151
github.com/pytorch/pytorch
Document whether it is possible to train TorchScript modules 1
opened Mar 1, 2019 by martinruenz (labels: oncall: jit, module: docs, module: cpp, triaged)
## 📚 Documentation
I am interested in the following workflow, but my impression is that this use case is not supported (yet?):
1. Prototype a model in **Python**
2. Export model to **C++** via TorchScript (`torch.jit.trace`)
3. Train model in **C++**
Based on the documentation, it is unclear whether this workflow is supported or not. The very first sentence of https://pytorch.org/docs/master/jit.html states
> TorchScript is a way to create serializable and **optimizable models** from PyTorch code
However, it is ambiguous if "**optimizable**" refers to training or the jit compilation process here.
It seems that `torch::jit::script::Module` is treated as a special case which does not share commonality / a base class with `torch::nn::Module`. Is there also a way to access the parameters of a `torch::jit::script::Module` in order to use it for training? At first I thought "**optimizable models**" was referring to this but I am unsure now.
This issue is related to the post: https://discuss.pytorch.org/t/can-scriptmodule-be-used-for-training/35932
cc @suo @yf225 |
st180371 | From PyTorch 1.6 to 1.7.0.dev20200914, I am able to export the PyTorch Lightning Module torch script.
But starting from 1.7.0.dev20200915, it started to show error like this.
RuntimeError:
Wrong type for attribute assignment. Expected None but got Any:
Could this be because there are some changes to the JIT? If so, how can I fix it so it can be exported in 1.6 and the upcoming 1.7?
Reproducible Link:
https://colab.research.google.com/drive/1DuVwIZ1XHPdyzIUiQcP5jG1O13Zv6K7z?usp=sharing 5
Thank you in advance. |
st180372 | Solved by SplitInfinity in post #6
We added support for module and class type properties recently, and I think the issue here is that a previously existing property that included code incompatible with TorchScript is now failing to compile. There are two options:
Modify the property definition so that it is TorchScript compatible.
E… |
st180373 | @ptrblck Thank you for your reply. Actually I am working on a PR of Lightning to support PyTorch 1.7.
The tests were passing prior nightly version but failing on the recent nightly version.
Can you share some links about JIT changes? |
st180374 | I don’t know which change might have caused this error, but based on the stack trace I would start with torch/jit/_recursive.py, since concrete_type._create_methods_and_properties seems to be failing.
Based on the history of this file, there was this PR 3 on Sept. 14th which would fit into the first breaking nightly release.
This PR also introduced _create_methods_and_properties as seen here 2, so it would be my best guess.
Maybe @SplitInfinity could help out as the author of this PR. |
st180375 | We added support for module and class type properties recently, and I think the issue here is that a previously existing property that included code incompatible with TorchScript is now failing to compile. There are two options:
Modify the property definition so that it is TorchScript compatible.
Exclude the property from compilation. This can be done by adding a class attribute named __ignored_properties__ and setting it to a list of the names of properties that should be ignored (a minimal sketch follows below). You can see an example in this PR 10. In the same stack of PRs, there is another that enables the usual @ignore and @unused syntax to work with properties; I plan to land that by the end of the week. |
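A minimal sketch of option 2, using the attribute name given in this post (the property body here is just an illustrative placeholder for something TorchScript cannot compile):
import torch
import torch.nn as nn

class MyModule(nn.Module):
    # Properties listed here are skipped when the module is scripted.
    __ignored_properties__ = ["debug_info"]

    @property
    def debug_info(self):
        return {"not": "torchscript", "friendly": object()}

    def forward(self, x):
        return x + 1

scripted = torch.jit.script(MyModule())
print(scripted(torch.zeros(2)))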
st180376 | Hi @SplitInfinity, Thank you for your reply.
Setting __ignored_properties__ solves the issue, also this is good to see @ignore and @unused are WIP too.
Lastly, Thank you for your work on the PRs. |
st180377 | Hi @ydcjeff, can you explain a little bit more about where to add __ignored_properties__? |
st180378 | Hello everyone,
I would like to prune my model and then script it but I receive the following error: AttributeError: 'LnStructured' object has no attribute '__name__'
Can we prune a model and then script it?
I see that LnStructured actually add a forward pre_hook but jit.script can’t resolve its name.
cf:
Traceback (most recent call last):
File "src/model_optimizer.py", line 226, in <module>
script_module(pruned_model)
File "src/model_optimizer.py", line 20, in script_module
module = jit.script(module)
File "/venv/lib/python3.8/site-packages/torch/jit/_script.py", line 1257, in script
return torch.jit._recursive.create_script_module(
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 451, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 513, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/venv/lib/python3.8/site-packages/torch/jit/_script.py", line 587, in _construct
init_fn(script_module)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 491, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 513, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/venv/lib/python3.8/site-packages/torch/jit/_script.py", line 587, in _construct
init_fn(script_module)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 491, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 465, in create_script_module_impl
hook_stubs, pre_hook_stubs = get_hook_stubs(nn_module)
File "/venv/lib/python3.8/site-packages/torch/jit/_recursive.py", line 758, in get_hook_stubs
if pre_hook.__name__ in hook_map:
AttributeError: 'LnStructured' object has no attribute '__name__'
Here is the code to reproduce the error :
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import torch.jit as jit
class Model(nn.Module):
def __init__(self):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(10, 10),
nn.ReLU(),
nn.Linear(10, 10),
nn.Sigmoid(),
)
def forward(self, t):
return self.layers(t)
model = Model()
for _, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
prune.ln_structured(module, name='weight', amount=0.4, dim=1, n=float('-inf'))
jit.script(model)
Thanks,
Thytu |
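One possible workaround (not confirmed in this thread, so only a sketch): make the pruning permanent with prune.remove() before scripting. That folds the mask into the weight and removes the forward pre-hook that torch.jit.script trips over, at the cost of no longer re-applying the mask dynamically. Reusing the names from the snippet above:
for _, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        prune.ln_structured(module, name='weight', amount=0.4, dim=1, n=float('-inf'))
        prune.remove(module, 'weight')  # make pruning permanent and drop the pre-hook

jit.script(model)  # no LnStructured pre-hook left, so scripting can proceed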
st180379 | I’m seeing a strange behavior in torch::jit::freeze with precision loss that I’m not understanding, hope someone can clarify. The below code is pasted and ran with torch 1.9.1+cu111. I have two models that are copies of each other. They are both frozen before running inference.
If I pass the same input to both, the exact same output is produced. However, if I pass the same input twice to the first model, and only once to the second model, the results are only equal up to some precision loss. This behavior does not reproduce if I do not freeze the models. The printed frozen code is the same for both models as well. Any ideas?
import argparse
import torch
from torch import nn
import copy
class BugModule(nn.Module):
def __init__(self):
super().__init__()
self.filter = nn.Conv2d(4, 4, kernel_size=1, padding=0, bias=False, groups=4)
def forward(self, x, y):
cov_xy = self.filter(x*y) - self.filter(x) * self.filter(y)
return cov_xy
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--resolution', type=int, default=[4,4], nargs=2)
args = parser.parse_args()
model1 = BugModule()
model2 = copy.deepcopy(model1)
model1 = model1.to(device='cuda', dtype=torch.float32).eval()
model1 = torch.jit.script(model1)
model1 = torch.jit.freeze(model1)
model2 = model2.to(device='cuda', dtype=torch.float32).eval()
model2 = torch.jit.script(model2)
model2 = torch.jit.freeze(model2)
w, h = args.resolution
x = torch.randn((1, 4, h, w), device='cuda', dtype=torch.float32)
y = torch.randn((1, 4, h, w), device='cuda', dtype=torch.float32)
with torch.no_grad():
z1 = model1(x, y)
# z1 = model1(x, y) # uncomment this to see precision loss
z2 = model2(x, y)
if not torch.equal(z1, z2):
print('FAILED')
else:
print('SUCCESS') |
st180380 | What you are seeing is a combination of two effects:
Floating point arithmetic has it that it is only approximate. The difference you are seeing is within the range of accepted variation for single precision (1e-7ish), so neither of the two results is “incorrect”. Typically one would use torch.allclose or something similar rather than torch.equal to check whether the results match.
In your case, the JIT fuser - activated only in the second run of the module - has a different way of combining the three outputs of filter to compute cov_xy (the multiplication and subtraction), generating a bespoke fused kernel for it. This computes the slightly different result compared to the usual operators for * and -. Very likely, the exact order of addition differs. Note that PyTorch makes no guarantees here that any particular version runs, not even when you ask it to be deterministic (which this bit would be after two runs by default, when the JIT uses the fused kernels).
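Concretely, the check in the original repro could be relaxed to a tolerance-based comparison (the tolerance values below are my choice; the observed differences of roughly 1e-7 fall well within them):
if torch.allclose(z1, z2, rtol=1e-5, atol=1e-7):
    print('SUCCESS (equal within float32 tolerance)')
else:
    print('FAILED')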
Best regards
Thomas |
st180381 | An error occurs when executing the code below.
import torch
import torch.nn as nn
class AAA(nn.Module):
def __init__(self):
super(AAA, self).__init__()
def forward(self, x):
return x
class BBB(nn.Module):
def __init__(self, other: AAA=None):
super(BBB, self).__init__()
print(type([other]))
self.other = [other]
def forward(self, x):
src = self.other[0] # !!!Error!!!
return x
if __name__ == '__main__':
from utils.functions import init_console
init_console()
net_a = AAA()
net_b = BBB(other=net_a)
script_a = torch.jit.script(net_a)
script_b = torch.jit.script(net_b)
print('done')
Module ‘BBB’ has no attribute ‘other’ (This attribute exists on the Python module, but we failed to convert Python type: ‘list’ to a TorchScript type. Could not infer type of list element: Cannot infer concrete type of torch.nn.Module. Its type was inferred; try adding a type annotation for the attribute.):
File “C:/xxx/test.py”, line 20
def forward(self, x):
src = self.other[0]
~~~~~~~~~~ <— HERE
return x
Is there a way to make it work? |
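For what it's worth, one workaround sketch, based on my assumption about the cause (a plain Python list of modules has no TorchScript type, so the attribute is dropped): if avoiding submodule registration isn't essential, register the submodule directly (or wrap it in an nn.ModuleList) instead of keeping it in a list.
import torch
import torch.nn as nn

class AAA(nn.Module):
    def forward(self, x):
        return x

class BBB(nn.Module):
    def __init__(self, other: AAA):
        super(BBB, self).__init__()
        self.other = other          # registered as a proper submodule

    def forward(self, x):
        src = self.other(x)         # TorchScript now knows the type of self.other
        return src

script_b = torch.jit.script(BBB(AAA()))
print(script_b(torch.zeros(1)))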
st180382 | Hello, I have successfully managed to convert NVIDIA Tacotron2 using torch.jit.script. However, when I try to run it on an Android device I get a RuntimeError: NNPACK SpatialConvolution_updateOutput failed
Below is the exact error:
2020-05-29 16:25:58.166 24263-24427…/AndroidRuntime: File “…/python3.6/site-packages/torch/nn/modules/conv.py”, line 207, in forward
self.weight, self.bias, self.stride,
_single(0), self.dilation, self.groups)
return F.conv1d(input, self.weight, self.bias, self.stride,
~~~~~~~~ <— HERE
self.padding, self.dilation, self.groups)
RuntimeError: NNPACK SpatialConvolution_updateOutput failed
Jit is created with PyTorch 1.5 and python 3.6.10
The Android version is 9.
Any idea on how to resolve this problem? |
st180383 | I am trying to convert a quasi-RNN implementation to TorchScript, but I am getting this error:
RuntimeError:
NoneType cannot be used as a tuple:
File "/../rnn_models/qrnn_jit.py", line 171
for i, layer in enumerate(self.layers):
input, hn = layer(input, hidden[i])
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
next_hidden.append(hn)
Any idea what might be the issue? Any help is greatly appreciated. The code is below:
class JitQRNNLayer(jit.ScriptModule):
r"""Applies a single layer Quasi-Recurrent Neural Network (QRNN) to an input sequence.
Args:
input_size: The number of expected features in the input x.
hidden_size: The number of features in the hidden state h. If not specified, the input size is used.
save_prev_x: Whether to store previous inputs for use in future convolutional windows (i.e. for a continuing sequence such as in language modeling). If true, you must call reset to remove cached previous values of x. Default: False.
window: Defines the size of the convolutional window (how many previous tokens to look when computing the QRNN values). Supports 1 and 2. Default: 1.
zoneout: Whether to apply zoneout (i.e. failing to update elements in the hidden state) to the hidden state updates. Default: 0.
output_gate: If True, performs QRNN-fo (applying an output gate to the output). If False, performs QRNN-f. Default: True.
use_cuda: If True, uses fast custom CUDA kernel. If False, uses naive for loop. Default: True.
Inputs: X, hidden
- X (seq_len, batch, input_size): tensor containing the features of the input sequence.
- hidden (batch, hidden_size): tensor containing the initial hidden state for the QRNN.
Outputs: output, h_n
- output (seq_len, batch, hidden_size): tensor containing the output of the QRNN for each timestep.
- h_n (batch, hidden_size): tensor containing the hidden state for t=seq_len
"""
def __init__(self, input_size, hidden_size=None, save_prev_x=False, zoneout=0, window=1, output_gate=True, use_cuda=True):
super(JitQRNNLayer, self).__init__()
assert window in [1, 2], "This QRNN implementation currently only handles convolutional window of size 1 or size 2"
self.window = window
self.input_size = input_size
self.hidden_size = hidden_size if hidden_size else input_size
self.zoneout = zoneout
self.save_prev_x = save_prev_x
self.prevX = None
self.output_gate = output_gate
self.use_cuda = use_cuda
# One large matmul with concat is faster than N small matmuls and no concat
self.linear = nn.Linear(self.window * self.input_size, 3 * self.hidden_size if self.output_gate else 2 * self.hidden_size)
def reset(self):
# If you are saving the previous value of x, you should call this when starting with a new state
self.prevX = None
@jit.script_method
def forward(self, X, hidden):
seq_len, batch_size, _ = X.size()
source = None
if self.window == 1:
assert source is not None
source = X
elif self.window == 2:
# Construct the x_{t-1} tensor with optional x_{-1}, otherwise a zeroed out value for x_{-1}
Xm1 = []
Xm1.append(self.prevX if self.prevX is not None else X[:1, :, :] * 0)
# Note: in case of len(X) == 1, X[:-1, :, :] results in slicing of empty tensor == bad
if len(X) > 1:
Xm1.append(X[:-1, :, :])
Xm1 = torch.cat(Xm1, 0)
# Convert two (seq_len, batch_size, hidden) tensors to (seq_len, batch_size, 2 * hidden)
assert source is not None
source = torch.cat([X, Xm1], 2)
# Matrix multiplication for the three outputs: Z, F, O
assert source is not None
Y = self.linear(source)
# Convert the tensor back to (batch, seq_len, len([Z, F, O]) * hidden_size)
if self.output_gate:
Y = Y.view(seq_len, batch_size, 3 * self.hidden_size)
Z, F, O = Y.chunk(3, dim=2)
else:
Y = Y.view(seq_len, batch_size, 2 * self.hidden_size)
Z, F = Y.chunk(2, dim=2)
###
Z = torch.nn.functional.tanh(Z)
F = torch.nn.functional.sigmoid(F)
# If zoneout is specified, we perform dropout on the forget gates in F
# If an element of F is zero, that means the corresponding neuron keeps the old value
if self.zoneout:
if self.training:
mask = Variable(F.data.new(*F.size()).bernoulli_(1 - self.zoneout), requires_grad=False)
F = F * mask
else:
F *= 1 - self.zoneout
# Ensure the memory is laid out as expected for the CUDA kernel
# This is a null op if the tensor is already contiguous
Z = Z.contiguous()
F = F.contiguous()
# The O gate doesn't need to be contiguous as it isn't used in the CUDA kernel
# Forget Mult
# For testing QRNN without ForgetMult CUDA kernel, C = Z * F may be useful
C = JitForgetMult()(F, Z, hidden, use_cuda=self.use_cuda)
# Apply (potentially optional) output gate
if self.output_gate:
H = torch.nn.functional.sigmoid(O) * C
else:
H = C
# In an optimal world we may want to backprop to x_{t-1} but ...
if self.window > 1 and self.save_prev_x:
self.prevX = Variable(X[-1:, :, :].data, requires_grad=False)
return H, C[-1:, :, :]
class JitQRNN(jit.ScriptModule):
r"""Applies a multiple layer Quasi-Recurrent Neural Network (QRNN) to an input sequence.
Args:
input_size: The number of expected features in the input x.
hidden_size: The number of features in the hidden state h. If not specified, the input size is used.
num_layers: The number of QRNN layers to produce.
layers: List of preconstructed QRNN layers to use for the QRNN module (optional).
save_prev_x: Whether to store previous inputs for use in future convolutional windows (i.e. for a continuing sequence such as in language modeling). If true, you must call reset to remove cached previous values of x. Default: False.
window: Defines the size of the convolutional window (how many previous tokens to look when computing the QRNN values). Supports 1 and 2. Default: 1.
zoneout: Whether to apply zoneout (i.e. failing to update elements in the hidden state) to the hidden state updates. Default: 0.
output_gate: If True, performs QRNN-fo (applying an output gate to the output). If False, performs QRNN-f. Default: True.
use_cuda: If True, uses fast custom CUDA kernel. If False, uses naive for loop. Default: True.
Inputs: X, hidden
- X (seq_len, batch, input_size): tensor containing the features of the input sequence.
- hidden (layers, batch, hidden_size): tensor containing the initial hidden state for the QRNN.
Outputs: output, h_n
- output (seq_len, batch, hidden_size): tensor containing the output of the QRNN for each timestep.
- h_n (layers, batch, hidden_size): tensor containing the hidden state for t=seq_len
"""
def __init__(self, input_size, hidden_size,
num_layers=1, bias=True, batch_first=False,
dropout=0, bidirectional=False, layers=None, **kwargs):
assert bidirectional == False, 'Bidirectional QRNN is not yet supported'
assert batch_first == False, 'Batch first mode is not yet supported'
assert bias == True, 'Removing underlying bias is not yet supported'
super(JitQRNN, self).__init__()
if(layers):
self.layers = torch.nn.ModuleList(layers)
else:
self.layers = torch.nn.ModuleList([JitQRNNLayer(input_size if l == 0 else hidden_size, hidden_size, **kwargs) for l in range(num_layers)])
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = len(layers) if layers else num_layers
self.bias = bias
self.batch_first = batch_first
self.dropout = dropout
self.bidirectional = bidirectional
def reset(self):
r'''If your convolutional window is greater than 1, you must reset at the beginning of each new sequence'''
[layer.reset() for layer in self.layers]
@jit.script_method
def forward(self, input, hidden=None):
if hidden is None:
hidden = torch.zeros(self.num_layers, input.shape[1], self.hidden_size, dtype=input.dtype, device=input.device)
next_hidden = []
for i, layer in enumerate(self.layers):
input, hn = layer(input, hidden[i])
next_hidden.append(hn)
if self.dropout != 0 and i < len(self.layers) - 1:
input = torch.nn.functional.dropout(input, p=self.dropout, training=self.training, inplace=False)
next_hidden = torch.cat(next_hidden, 0).view(self.num_layers, *next_hidden[0].size()[-2:])
return input, next_hidden |
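Not a confirmed diagnosis, but one TorchScript pitfall worth checking here: a parameter whose default is None must be annotated as Optional, otherwise the compiler infers a NoneType for it, which tends to surface as confusing type errors at the call site. A tiny self-contained illustration of the required annotation (a toy module, not the QRNN itself):
from typing import Optional
import torch
import torch.jit as jit
from torch import Tensor

class Toy(jit.ScriptModule):
    @jit.script_method
    def forward(self, x: Tensor, hidden: Optional[Tensor] = None) -> Tensor:
        if hidden is None:
            hidden = torch.zeros_like(x)   # refines Optional[Tensor] to Tensor
        return x + hidden

print(Toy()(torch.ones(2)))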
st180384 | Hi all,
In the ideal PyTorch workflow from training to production deployment, where should one freeze the model? In particular, assume you are training a model that you compile to TorchScript and want to keep somewhere to use for a while into the future.
Should I torch.jit.freeze before saving the trained model, and always use the frozen saved model? Or should I save a normal TorchScript model, and torch::jit::freeze it when I load it in C++ for inference, re-freezing and optimizing it every I start an inference process? (Start up time is of no concern for my application, but backward compatibility is.)
Frozen models are clearly less flexible than unfrozen ones (`torch.jit.freeze`'d models cannot be moved to GPU with `.to()` · Issue #57569 · pytorch/pytorch · GitHub), but it is unclear to me whether:
The optimizations they apply are system dependent. (Will I get better performance by freezing on the target system?)
Frozen models are still expected to be fairly future-proof, or if they are more specific to the setup on which they were frozen.
Thanks. |
st180385 | Solved by Elias_Ellison in post #2
Hi!
Freezing optimizations should be system-independent. We introduced optimize_for_inference for system-dependent optimizations (does CUDNN exist and does it’s version work correctly with Conv-Add-Relu fusion, is MKLDNN installed). That one is still a little nascent - for now I’d just recommend us… |
st180386 | Hi!
Freezing optimizations should be system-independent. We introduced optimize_for_inference for system-dependent optimizations (does CUDNN exist and does it’s version work correctly with Conv-Add-Relu fusion, is MKLDNN installed). That one is still a little nascent - for now I’d just recommend using it with vision models.
Yes, frozen models are expected to be future-proof. Optimize for inference is not because it bakes in things wrt/the system so saving and loading it isn’t recommended / an intended use case. |
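A small sketch of that split (assuming an eval-mode vision model; the file names are illustrative):
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
frozen = torch.jit.freeze(torch.jit.script(model))   # system-independent: fine to save/archive
torch.jit.save(frozen, "resnet18_frozen.pt")

# Later, on the machine that will actually serve the model:
loaded = torch.jit.load("resnet18_frozen.pt")
fast = torch.jit.optimize_for_inference(loaded)      # system-dependent: do on the target, don't archive
out = fast(torch.randn(1, 3, 224, 224))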
st180387 | Hi @Elias_Ellison ,
Thanks for the quick answer; this is great. I was not aware of optimize_for_inference before. I will keep an eye on it as it becomes relevant to a broader class of models.
To flip the question around, are TorchScript models that are not frozen just as well qualified for being archived for later use? (If I want to save them unfrozen and then freeze them “on-demand” before I use them.) |
st180388 | There shouldn’t be any downside to storing non-frozen torchscript models – they’re intended to be forward compatible as well. |
st180389 | Hello, probably a very obscure question:
assuming we have a string which encodes a valid inlined_graph IR representation of a (JIT scripted) model, is there a way to re-assemble the original model in the form of, e.g., an nn.Module or RecursiveScriptModule or any similar complete object?
Thanks a lot,
Best regards,
RB |
st180390 | To reproduce:
Expected behaviour:
def func(obj_x, obj_ind):
new_x = torch.zeros((20, 3, 7), device=obj_x.device, dtype=torch.float32)
j = torch.zeros((1,), device=obj_x.device, dtype=torch.long)
for i, idx in enumerate(obj_ind[1:], start=1):
new_x[idx][j] = obj_x[i]
return new_x
obj_x = torch.randn(10, 7)
obj_ind = torch.tensor(list(range(0, 20, 2)))
func(obj_x, obj_ind)
>> tensor([[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
[[-0.2289, -1.1137, 0.2680, 0.0294, 0.3793, -0.1392, -0.7233],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
[[ 0.0367, 0.5694, -0.1834, 0.9413, 1.1738, -1.4721, -2.0817],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
...
Using the torchscript:
@torch.jit.script
def func(obj_x, obj_ind):
new_x = torch.zeros((20, 3, 7), device=obj_x.device, dtype=torch.float32)
j = torch.zeros((1,), device=obj_x.device, dtype=torch.long)
for i, idx in enumerate(obj_ind[1:], start=1):
new_x[idx][j] = obj_x[i]
return new_x
func(obj_x, obj_ind)
tensor([[[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.]],
...
Did I miss something? |
st180391 | Solved by ptrblck in post #2
I think you are hitting multiple issues.
The first one seems to be the “double indexing” which doesn’t assign the values to the original tensor.
Index new_x and the assignment should work:
new_x[idx, j] = obj_x[i]
However, even though new_x would contain values now, it seems that enumerate(..., … |
st180392 | I think you are hitting multiple issues.
The first one seems to be the “double indexing” which doesn’t assign the values to the original tensor.
Index new_x and the assignment should work:
new_x[idx, j] = obj_x[i]
However, even though new_x would contain values now, it seems that enumerate(..., start=idx) is not working correctly while scripting and the start value is ignored:
def fun():
for i, idx in enumerate(torch.arange(5), start=1):
print(i, idx)
fun()
@torch.jit.script
def fun():
for i, idx in enumerate(torch.arange(5), start=1):
print(i, idx)
fun()
Output:
1 tensor(0)
2 tensor(1)
3 tensor(2)
4 tensor(3)
5 tensor(4)
0 0
[ CPULongType{} ]
1 1
[ CPULongType{} ]
2 2
[ CPULongType{} ]
3 3
[ CPULongType{} ]
4 4
[ CPULongType{} ]
Would you mind creating an issue for this on GitHub? |
st180393 | Thanks for the response @ptrblck
At least now I know how to tackle the first issue. Also, I created the issue regarding the bug you've found: enumerate(..., start=idx) is not working correctly while scripting · Issue #67142 · pytorch/pytorch · GitHub
Feel free to continue our discussion on Github. |
st180394 | I have a module with some torch.jit.exported methods and some regular methods. This module has some attributes that I want to use in non-scripted methods. These attributes disappear when I script the module.
import pathlib

import torch
import torch.nn as nn

class Foo:
def __init__(self):
# Commenting out this line gets rid of the AttributeError
pathlib.Path("/tmp/tmp").mkdir(parents=True, exist_ok=True)
self.y = 5
class M(nn.Module):
def __init__(self, f):
super(M, self).__init__()
self.f = f
@torch.jit.ignore
def get_f(self):
return self.f # <-- Getting an attribute error here
m = torch.jit.script(M(Foo()))
m.get_f()
I am curious if this is a bug or there is some rationale behind this. It seems like assigning self.f = f in M.__init__ makes torch script analyze Foo.__init__ and discard the attribute if scripting it fails.
Because I never use this attribute in compiled methods, there should not be a reason to try compiling it and discarding. Maybe there is some annotation I can use to ask torch script to just keep this attribute attached to M? Is there some “good” workaround? |
st180395 | iga:
Because I never use this attribute in compiled methods, there should not be a reason to try compiling it and discarding.
I’m not sure I understand the use case completely, but you are indeed using it in the scripted m module.
If you create an eager module first, the attribute will be there:
module = M(Foo())
m = torch.jit.script(module)
m.get_f() # error
module.get_f() # works |
st180396 | Yeah, the attribute is obviously there before scripting. My questions are:
Why does scripting remove this attribute when it is used only in torch.jit.ignore'd methods?
Is there a good way to make torch.jit.script not remove this attribute? I can just do m.f = f to assign it again after scripting, but this is fairly ugly when you have many attributes. |
st180397 | I’m not sure, if there is a way to ignore attributes, which seems to be your case. I.e. if you initialize Foo as an nn.Module it would also be scripted as a RecursiveScriptModule, but I guess you don’t want that. |
st180398 | Bug
Traceback (most recent call last):
File "/Users/simon/.pyenv/versions/3.7.4/lib/python3.7/site-packages/torch/jit/__init__.py", line 1034, in trace_module
module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace)
RuntimeError: type INTERNAL ASSERT FAILED at ../torch/csrc/jit/ir.h:1266, please report a bug to PyTorch. (setType at ../torch/csrc/jit/ir.h:1266)
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x11ac06787 in libc10.dylib)
frame #1: torch::jit::Value::setType(std::__1::shared_ptr<c10::Type>) + 459 (0x11eee6eab in libtorch.dylib)
frame #2: torch::jit::Graph::createGetAttr(torch::jit::Value*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 259 (0x11ef6ee03 in libtorch.dylib)
frame #3: torch::jit::(anonymous namespace)::ConvertTracedAttrReferences::convertAttrReferencesToLocalGetAttrs(torch::jit::Block*, c10::QualifiedName const&, torch::jit::Value*) + 714 (0x11efdc3da in libtorch.dylib)
frame #4: torch::jit::FixupTraceScopeBlocks(std::__1::shared_ptr<torch::jit::Graph>&, torch::jit::script::Module*) + 316 (0x11efd935c in libtorch.dylib)
frame #5: torch::jit::tracer::trace(std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >, std::__1::function<std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> > (std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >)> const&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > (at::Tensor const&)>, bool, torch::jit::script::Module*) + 2127 (0x11f1c8fef in libtorch.dylib)
frame #6: torch::jit::tracer::createGraphByTracing(pybind11::function const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >, pybind11::function const&, bool, torch::jit::script::Module*) + 361 (0x11b7d3a49 in libtorch_python.dylib)
frame #7: void pybind11::cpp_function::initialize<torch::jit::script::initJitScriptBindings(_object*)::$_13, void, torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::script::initJitScriptBindings(_object*)::$_13&&, void (*)(torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) + 319 (0x11b814d9f in libtorch_python.dylib)
frame #8: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3372 (0x11b1479fc in libtorch_python.dylib)
<omitting python frames>
frame #13: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #18: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #23: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #27: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #35: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #40: __pyx_f_18_pydevd_frame_eval_35pydevd_frame_evaluator_darwin_37_64_get_bytecode_while_frame_eval + 6471 (0x11aa0cef7 in pydevd_frame_evaluator_darwin_37_64.cpython-37m-darwin.so)
frame #57: start + 1 (0x7fff6bfa93d5 in libdyld.dylib)
frame #58: 0x0 + 10 (0xa in ???)
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.6
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.18.2
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] Could not collect
To reproduce
from collections import defaultdict
import torch
import torch.jit
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
import numpy as np
from utils import *
class MyConv2D(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
super(MyConv2D, self).__init__()
self.weight = torch.zeros((out_channels, in_channels, kernel_size, kernel_size))
self.bias = torch.zeros(out_channels)
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = (kernel_size, kernel_size)
self.stride = (stride, stride)
def forward(self, x):
return F.conv2d(x, self.weight, self.bias, self.stride)
def extra_repr(self):
s = ('{in_channels}, {out_channels}, kernel_size={kernel_size}'
', stride={stride}')
return s.format(**self.__dict__)
class ResidualBlock(nn.Module):
def __init__(self, channels):
super(ResidualBlock, self).__init__()
self.conv = nn.Sequential(
*ConvLayer(channels, channels, kernel_size=3, stride=1),
*ConvLayer(channels, channels, kernel_size=3, stride=1, relu=False)
)
def forward(self, x):
return self.conv(x) + x
def ConvLayer(in_channels, out_channels, kernel_size=3, stride=1,
upsample=None, instance_norm=True, relu=True, trainable=False):
layers = []
if upsample:
layers.append(nn.Upsample(mode='nearest', scale_factor=upsample))
layers.append(nn.ReflectionPad2d(kernel_size // 2))
if trainable:
layers.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride))
else:
layers.append(MyConv2D(in_channels, out_channels, kernel_size, stride))
if instance_norm:
layers.append(nn.InstanceNorm2d(out_channels))
if relu:
layers.append(nn.ReLU())
return layers
class TransformNet(nn.Module):
def __init__(self, base=8):
super(TransformNet, self).__init__()
self.base = base
self.weights = []
self.downsampling = nn.Sequential(
*ConvLayer(3, base, kernel_size=9, trainable=True),
*ConvLayer(base, base * 2, kernel_size=3, stride=2),
*ConvLayer(base * 2, base * 4, kernel_size=3, stride=2),
)
self.residuals = nn.Sequential(*[ResidualBlock(base * 4) for _ in range(5)])
self.upsampling = nn.Sequential(
*ConvLayer(base * 4, base * 2, kernel_size=3, upsample=2),
*ConvLayer(base * 2, base, kernel_size=3, upsample=2),
*ConvLayer(base, 3, kernel_size=9, instance_norm=False, relu=False, trainable=True),
)
self.get_param_dict()
def forward(self, X):
y = self.downsampling(X)
y = self.residuals(y)
y = self.upsampling(y)
return y
def get_param_dict(self):
"""找出该网络所有 MyConv2D 层,计算它们需要的权值数量"""
param_dict = defaultdict(int)
for value in self.named_modules():
if isinstance(value[1], MyConv2D):
param_dict[value[0]] += int(np.prod(value[1].weight.shape))
param_dict[value[0]] += int(np.prod(value[1].bias.shape))
return param_dict
def set_my_attr(self, name, value):
        # The loop below walks step by step through a dotted name like residuals.0.conv.1 to find the corresponding weights.
target = self
for x in name.split('.'):
if x.isnumeric():
target = target.__getitem__(int(x))
else:
target = getattr(target, x)
        # Set the corresponding weights
n_weight = np.prod(target.weight.shape)
target.weight = value[:n_weight].view(target.weight.shape)
target.bias = value[n_weight:].view(target.bias.shape)
def set_weights(self, weights, i=0):
"""输入权值字典,对该网络所有的 MyConv2D 层设置权值"""
for name, param in weights.items():
self.set_my_attr(name, weights[name][i])
class MetaNet(nn.Module):
def __init__(self, param_dict):
super(MetaNet, self).__init__()
self.hidden = nn.Linear(1920, 128 * len(param_dict))
self.fc_list = []
for index, (name, params) in enumerate(param_dict.items()):
setattr(self, name, nn.Linear(128, params))
self.fc_list.append(name)
def forward(self, mean_std_features):
hidden = F.relu(self.hidden(mean_std_features))
filters = []
labels = []
for index, name in enumerate(self.fc_list):
labels.append(torch.ones(1, dtype=torch.int8) * index)
filters.append(getattr(self, name)(hidden[:, index * 128:(index + 1) * 128]))
return tuple(filters)
def test_mobile():
transform_net = TransformNet(base=32).cpu().eval()
meta_net = MetaNet(transform_net.get_param_dict()).cpu().eval()
input_1 = torch.ones(1, 1920)
meta_net_mobile = torch.jit.trace(meta_net, input_1)
meta_net_mobile.save("models/meta_net_mobile.pt")
print(meta_net_mobile.code)
if __name__ == "__main__":
test_mobile() |
st180399 | Just some additional info:
It executes fine with the versions below:
numpy==1.17.4
numpydoc==0.9.1
torch==1.3.1
torchvision==0.4.2 |