st180500
Hello! Following the official tutorial, I am trying to convert a torch model to a Torch Script module, and I am worried about how to ensure the correctness of the result. When converting a PyTorch model to Torch Script via tracing (resnet18 as an example), the official code uses an example input torch.rand(1,3,224,224) to get a traced_script_module, and then feeds the module a torch.ones(1,3,224,224) to get an output tensor. But I get different outputs when feeding the same torch.ones(1,3,224,224), just because the example torch.rand(1,3,224,224) inputs are different. How can I ensure that the results are correct when feeding real images? Thanks.
st180501
Solved by marc_q in post #5.
st180502
I cannot reproduce the issue using resnet18, as seen here:

model = models.resnet18()
x = torch.randn(1, 3, 224, 224)
model_traced = torch.jit.trace(model, x)

out = model(torch.ones(1, 3, 224, 224))
out_traced = model_traced(torch.ones(1, 3, 224, 224))

print((out - out_traced).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)

In case you are using a different model, make sure the traced model uses the same "code path" as the eager model (e.g. in case data-dependent control flow is used inside the model, or dropout etc.). EDIT: the tracer should also raise a warning, e.g. if dropout was used in training mode:

TracerWarning: Trace had nondeterministic nodes. Did you forget call .eval() on your model?
st180503
Thanks for your attention. I can reproduce this issue by running the code twice.

model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
output = traced_script_module.forward(torch.ones(1, 3, 224, 224))
traced_script_module.save("traced_resnet_model.pt")
# e.g. first run:  tensor([-0.0645,  0.1313, -0.3563, -0.0869,  0.3209], grad_fn=<SliceBackward>)
#     second run: tensor([-0.1508, -0.2974,  0.0963, -0.1596, -0.7377], grad_fn=<SliceBackward>)
print(output[0, :5])

Based on the code above, the first run gives tensor([-0.0645, 0.1313, -0.3563, -0.0869, 0.3209]), and when I run it again I get tensor([-0.1508, -0.2974, 0.0963, -0.1596, -0.7377]). The only difference between the two runs is the variable example; it seems to generate a different traced script. Besides, I compared the two binary traced_resnet_model.pt files and they are absolutely different, as shown below.

>> cmp traced_resnet_model_1.pt traced_resnet_model_2.pt
>> traced_resnet_model_1.pt traced_resnet_model_2.pt differ: char 51, line 1
st180504
I don't quite understand your posted code and explanation, since the code doesn't compare anything. Or do you mean you are getting different results by rerunning the entire script twice? In the latter case, this would be expected, as you are randomly initializing the resnet18. If you want to load the pretrained model, use pretrained=True, or load a custom state_dict otherwise.
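For reference, a minimal sketch of the reproducible setup described above: loading pretrained weights (or any fixed state_dict) before tracing makes the traced module deterministic across runs. This is only an illustration, not code from the thread.

import torch
import torchvision.models as models

# Fixed weights (pretrained or a loaded state_dict) make the traced module
# produce identical outputs across separate runs of the script.
model = models.resnet18(pretrained=True)
model.eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

out_eager = model(torch.ones(1, 3, 224, 224))
out_traced = traced(torch.ones(1, 3, 224, 224))
print((out_eager - out_traced).abs().max())  # expected to be (close to) 0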
st180505
Wow, that's it. I understand it now. When I use the pre-trained model, the traced script becomes unique and generates the same result no matter how many times I run it. Thank you for your patience!
st180506
I want to optimize a model but I meet the error: Traceback (most recent call last): File "mobile_test.py", line 129, in <module> data_loader,bn_list[0],net_config)) File "mobile_test.py", line 62, in prepare_subnet script_subnet = torch.jit.script(subnet) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_script.py", line 943, in script obj, torch.jit._recursive.infer_methods_to_compile File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 391, in create_script_module return create_script_module_impl(nn_module, concrete_type, stubs_fn) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_script.py", line 391, in _construct init_fn(script_module) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 428, in init_fn scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 404, in create_script_module_impl property_stubs = get_property_stubs(nn_module) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 697, in get_property_stubs properties_asts = get_class_properties(module_ty, self_name="RecursiveScriptModule") File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 153, in get_class_properties getter = get_jit_def(prop[1].fget, f"__{prop[0]}_getter", self_name=self_name) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 271, in get_jit_def return build_def(ctx, fn_def, type_line, def_name, self_name=self_name) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 305, in build_def build_stmts(ctx, body)) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 129, in build_stmts stmts = [build_stmt(ctx, s) for s in stmts] File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 129, in <listcomp> stmts = [build_stmt(ctx, s) for s in stmts] File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 279, in __call__ return method(ctx, node) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 445, in build_Return return Return(r, None if stmt.value is None else build_expr(ctx, stmt.value)) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 279, in __call__ return method(ctx, node) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 781, in build_Dict return DictLiteral(range, [build_expr(ctx, e) for e in expr.keys], File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 781, in <listcomp> return DictLiteral(range, [build_expr(ctx, e) for e in expr.keys], File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 278, in __call__ raise UnsupportedNodeError(ctx, node) File "/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 111, in __init__ source_range = ctx.make_range(offending_node.lineno, AttributeError: 'NoneType' object has no attribute 'lineno' And this is my code link: https://drive.google.com/drive/folders/172QoQxYoB1fUPvwjudHWgBmRv-IfE8Mg?usp=sharing 1 You can run the test1 file to see the error. 
I have located the error in the source code, but we do not know how to solve it. Kind regards.
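One pattern that can hit this exact code path (get_class_properties building a dict literal) is a Python @property whose getter returns a dict built with ** unpacking, which torch.jit.script cannot compile. A hedged workaround sketch, with a made-up property name: list the offending property in __jit_unused_properties__ (or turn it into a plain method) so the scripting pass skips it.

import torch
import torch.nn as nn

class SubNet(nn.Module):
    # Tell torch.jit.script to skip this property; the getter below uses
    # dict ** unpacking, which TorchScript does not support.
    __jit_unused_properties__ = ["config"]

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    @property
    def config(self):
        return {**{"width": 4}, **{"depth": 1}}

    def forward(self, x):
        return self.linear(x)

scripted = torch.jit.script(SubNet())  # compiles forward, ignores the property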
st180507
I’ve built Pytorch from source for Jetson Xavier successfully, and I am now interested in accelerating inference. What does the USE_TENSORRT flag accomplish? Seems like without this flag I could convert my model to ONNX, build the engine and call the NVinfer API directly? Thanks, Rich
st180508
AFAIK: Hi, when I wanted to compile PyTorch I checked this flag. AFAIK it enables your PyTorch build to use the NVIDIA TensorRT library. Hopefully any source build works, even with e.g. a 3.5 capability value in TORCH_ARCH, with USE_TENSORRT set. Quoting the TensorRT site: "You can import trained models from every deep learning framework into TensorRT. After applying optimizations, TensorRT selects platform specific kernels to maximize performance on Tesla GPUs in the data center, Jetson embedded platforms, and NVIDIA DRIVE autonomous driving platforms. With TensorRT developers can focus on creating novel AI-powered applications rather than performance tuning for inference deployment." CCMake hint: supported HW (CUDA capability > 5): docs.nvidia.com Support Matrix :: NVIDIA Deep Learning TensorRT Documentation
st180509
Anyone got to the bottom of this? I have been trying to make my pytorch models infer with tensorRT but it’s been pretty painful. The ONNX route kind of works but is not ideal. Then there are torch2trt and TRTtorch which are beta/alpha projects from nVidia. Is pytorch working to support a nvinfer/TensorRT backend?
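For reference, the ONNX route mentioned above usually looks roughly like the sketch below (model and file names are placeholders, and trtexec comes from the TensorRT installation, not from PyTorch):

import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX; dynamic_axes keeps the batch dimension flexible for the engine build.
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)

# Then build and benchmark an engine with the TensorRT CLI, for example:
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.engine --fp16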
st180510
@rafale77 You may want to check out pytorch-quantization maintained by NVIDIA. According to the documentation they support post-training quantization and quantized fine-tuning.
st180511
The USE_TENSORRT flag probably does many things in the build, but at least one of the things it does is try to build the onnx-tensorrt package from github. The thing is though, the submodule pointer in the pytorch repo still points to a 2019 tag/commit from the onnx-tensorrt repo, when there have been several releases since then. That commit builds against a rather old version of TensorRT, I think pre 7.0.x releases, so there is no way onnx-tensorrt builds in the pytorch tree against any reasonably new TensorRT API / version. This leads me to conclude that the USE_TENSORRT flag is not supported, because if it were, the pytorch maintainers would at least update the submodule pointer and document what version of TensorRT to build against. My guess is they started down some route of making a TRT inference backend for pytorch but abandoned it and haven’t cleaned up the repo or cmake/setup.py options.
st180512
For example:

import torch

hidden_dim1 = 10
hidden_dim2 = 5
tagset_size = 2

class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.line1 = torch.nn.Linear(hidden_dim1, hidden_dim2)
        self.line2 = torch.nn.Linear(hidden_dim2, tagset_size)

    def forward(self, x, y):
        out1 = self.line1(x)
        out2 = self.line2(y)
        return out1

X = torch.randn(20, hidden_dim1)
Y = torch.randn(hidden_dim1, hidden_dim2)
inputs = (X, Y)
model = MyModel()
f = './model.onnx'
torch.onnx.export(model, inputs, f, opset_version=9, example_outputs=None, input_names=["X"], output_names=["Y"], verbose=True)

graph(%X : Float(20, 10, strides=[10, 1], requires_grad=0, device=cpu),
      %line1.weight : Float(5, 10, strides=[10, 1], requires_grad=1, device=cpu),
      %line1.bias : Float(5, strides=[1], requires_grad=1, device=cpu)):
  %Y : Float(20, 5, strides=[5, 1], requires_grad=1, device=cpu) = onnx::Gemm[alpha=1., beta=1., transB=1](%X, %line1.weight, %line1.bias) # /root/.conda/envs/torch1.9/lib/python3.6/site-packages/torch/nn/functional.py:1847:0
  return (%Y)

However, the exported graph doesn't contain line2, maybe because the output of MyModel does not depend on out2 = self.line2(y)? I guess the graph is pruned by default. What should I do if I want to avoid this pruning?

Motivation: I want to do something with self.named_parameters() in model.forward(), e.g.

def check_parameters():
    # do something with the parameters by calling
    # some ops including OP1, OP2 and so on
    return

class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.line1 = torch.nn.Linear(hidden_dim1, hidden_dim2)

    def forward(self, x, y):
        out = self.line1(x)
        check_parameters()
        return out

However, the exported graph doesn't contain OP1, OP2, maybe because the output of MyModel does not depend on check_parameters()? I guess the graph is pruned by default.
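A hedged workaround sketch for the pruning question: the ONNX exporter only keeps nodes that the returned values depend on, so the simplest way to keep line2 in the graph is to make its result part of the output. This is just an illustration of that idea, not an exporter flag:

import torch

hidden_dim1, hidden_dim2, tagset_size = 10, 5, 2

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.line1 = torch.nn.Linear(hidden_dim1, hidden_dim2)
        self.line2 = torch.nn.Linear(hidden_dim2, tagset_size)

    def forward(self, x, y):
        out1 = self.line1(x)
        out2 = self.line2(y)
        # Returning out2 as well makes it reachable from the outputs,
        # so the exported graph keeps line2 instead of pruning it.
        return out1, out2

X = torch.randn(20, hidden_dim1)
Y = torch.randn(hidden_dim1, hidden_dim2)
torch.onnx.export(MyModel(), (X, Y), "model.onnx", opset_version=9,
                  input_names=["X", "Y_in"], output_names=["out1", "out2"], verbose=True)

For the check_parameters() case the same rule applies: only ops whose results feed into a returned tensor survive export, so the check would have to produce (or be folded into) an output tensor as well.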
st180513
pytorch v1.8.1. I am exporting a simple model to ONNX and noticed that the parameters of a [conv + BN + relu] block were renamed. After stepping through the export code, I find the offending change happening within torch._C._jit_pass_onnx_eval_peephole, and presumably during conv+bn fusion here. Parameters '18' and '19' are introduced after the peephole pass, and they later replace the "conv_bn_and_relu" parameter names in the final graph. Is it possible to retain the "conv_bn_and_relu" parameter names in the graph?
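One hedged workaround, assuming the renaming really comes from the conv+BN fusion in that eval-mode peephole pass: exporting with training=torch.onnx.TrainingMode.TRAINING keeps BatchNorm as a separate node, so the fusion (and the fused, renamed parameters) never happens. Whether that is acceptable depends on whether you need an inference-only graph. A minimal sketch with a toy block:

import torch
import torch.nn as nn

class ConvBnRelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_bn_and_relu = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())

    def forward(self, x):
        return self.conv_bn_and_relu(x)

model = ConvBnRelu().eval()
dummy = torch.randn(1, 3, 32, 32)

# Exporting in TRAINING mode skips the eval-time conv+BN fusion, so the
# original "conv_bn_and_relu" parameter names should survive in the graph.
torch.onnx.export(model, dummy, "model.onnx",
                  training=torch.onnx.TrainingMode.TRAINING,
                  do_constant_folding=False)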
st180514
Hello, I want to optimise a model but I meet the error: Traceback (most recent call last): File “mobile_test.py”, line 129, in data_loader,bn_list[0],net_config)) File “mobile_test.py”, line 62, in prepare_subnet script_subnet = torch.jit.script(subnet) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_script.py”, line 943, in script obj, torch.jit._recursive.infer_methods_to_compile File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py”, line 391, in create_script_module return create_script_module_impl(nn_module, concrete_type, stubs_fn) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py”, line 448, in create_script_module_impl script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_script.py”, line 391, in _construct init_fn(script_module) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py”, line 428, in init_fn scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py”, line 404, in create_script_module_impl property_stubs = get_property_stubs(nn_module) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/_recursive.py”, line 697, in get_property_stubs properties_asts = get_class_properties(module_ty, self_name=“RecursiveScriptModule”) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 153, in get_class_properties getter = get_jit_def(prop[1].fget, f"__{prop[0]}_getter", self_name=self_name) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 271, in get_jit_def return build_def(ctx, fn_def, type_line, def_name, self_name=self_name) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 305, in build_def build_stmts(ctx, body)) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 129, in build_stmts stmts = [build_stmt(ctx, s) for s in stmts] File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 129, in stmts = [build_stmt(ctx, s) for s in stmts] File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 279, in call return method(ctx, node) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 445, in build_Return return Return(r, None if stmt.value is None else build_expr(ctx, stmt.value)) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 279, in call return method(ctx, node) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 781, in build_Dict return DictLiteral(range, [build_expr(ctx, e) for e in expr.keys], File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 781, in return DictLiteral(range, [build_expr(ctx, e) for e in expr.keys], File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 278, in call raise UnsupportedNodeError(ctx, node) File “/home/yuantian/.local/lib/python3.6/site-packages/torch/jit/frontend.py”, line 111, in init source_range = ctx.make_range(offending_node.lineno, AttributeError: ‘NoneType’ object has no attribute ‘lineno’ And this is my code: fuse_model(subnet) #quantize model subnet.qconfig = torch.quantization.default_qconfig subnet.qconfig = torch.quantization.get_default_qconfig(‘qnnpack’) 
torch.backends.quantized.engine = 'qnnpack'
torch.quantization.prepare(subnet, inplace=True)
# Calibrate
print('Calibrating...')
# evaluate_ofa_subnet(subnet, imagenet_data_path, net_config, data_loader, batch_size=16)
# Convert model
torch.quantization.convert(subnet, inplace=True)
# optimize
script_subnet = torch.jit.script(subnet)
script_subnet_optimized = optimize_for_mobile(script_subnet)
return script_subnet_optimized

My code's link is here: time - Google Drive. You can run the test.py file and you should get the same error as me. So how can I solve it? Kind regards.
st180515
I am trying to perform transfer learning using one of the Silero STT models located in the torch hub. The model loads as a JIT model, which I am entirely unfamiliar with. Does anyone know how I can work with this model as I would a traditional PyTorch module? I want to be able to freeze/unfreeze layers, remove layers, etc., as well as use it in a training loop. Thank you for your help!
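For what it is worth, a loaded ScriptModule still exposes named_parameters(), parameters() and state_dict(), and it can be called inside a normal training loop, so freezing and unfreezing works much like with an eager module. A hedged sketch (the file path and the "decoder" name filter are placeholders, not part of the Silero API):

import torch

# Load the scripted model; the path stands in for the .pt file downloaded via torch.hub.
model = torch.jit.load("silero_stt.pt")

# Freeze everything, then unfreeze the parameters you want to fine-tune.
for name, p in model.named_parameters():
    p.requires_grad_("decoder" in name)  # hypothetical name filter

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)

model.train()
# for batch_input, batch_target in loader:
#     loss = criterion(model(batch_input), batch_target)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

Structural surgery (removing or replacing layers) is much harder on a ScriptModule; for that it is usually easier to rebuild the architecture in eager mode and copy the weights over with load_state_dict.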
st180516
I'm tracing the mapping network between two generators with torch.jit.trace_module. Code example:

mapping_input = torch.rand(1, 64, 504, 378).to("cuda"), inst_data
inputs = {'forward': encoder_forward_input, 'forward_encoder': encoder_forward_input, 'forward_decoder': decoder_forward_input}
inputs_mapping = {'forward': mapping_input, 'inference_forward': mapping_input}
traced_model_cuda = torch.jit.trace_module(self.mapping_net, inputs_mapping, strict=False)
torch.jit.save(traced_model_cuda, "scratch_mapping_net_traced_model_forward_strict_off.pt")
self.mapping_net = torch.jit.load("scratch_mapping_net_traced_model_forward_strict_off.pt")
self.mapping_net.cuda(self.opt.gpu_ids[0])
label_feat_map = self.mapping_net.inference_forward(label_feat.detach(), inst_data)

I've encountered this error when running the traced graph and weights on a different image of the same size.

torch.Size([1, 1, 2016, 1512])
Skip HR_input_2016.png due to an error: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/models/NonLocal_feature_mapping_model.py", line 231, in inference_forward
    _120 = torch.slice(x_unfold, 0, 0, 9223372036854775807, 1)
    _121 = torch.slice(_120, 1, 0, 9223372036854775807, 1)
    _122 = torch.view(composed_unfold, [32768, 154])
           ~~~~~~~~~~ <--- HERE
    mask_index5 = torch.to(mask_index, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=None, non_blocking=False, copy=False, memory_format=None)
    _123 = annotate(List[Optional[Tensor]], [None, None, mask_index5])
Traceback of TorchScript, original code (most recent call last):
  .../Global/models/networks.py(774): inference_forward
  .../Global/models/NonLocal_feature_mapping_model.py(197): inference_forward
  .../anaconda3/lib/python3.8/site-packages/torch/jit/_trace.py(934): trace_module
  .../Global/models/mapping_model.py(415): inference
  test.py(190): <module>
RuntimeError: shape '[32768, 154]' is invalid for input of size 1474560

inference_forward method: link

Nevertheless, the traced module runs fine for the image it was traced with.
st180517
inst_data is a mask image with one channel, passed as a tensor with size torch.Size([1, 1, 2016, 1512]). When I try mapping_input = torch.rand(1, 64, 504, 378).to("cuda"), torch.rand(1, 1, 2016, 1512).to("cuda") I get this error instead: "cannot perform reduction function max on tensor with no elements because the operation does not have an identity". Is there any way of tracing this?
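Tracing bakes in the tensor shapes seen at trace time (note the hard-coded view shape [32768, 154] in the serialized code above), so a graph traced at one resolution generally cannot run at another. A hedged sketch of the usual alternative, scripting the shape-dependent part so the shapes stay symbolic, using a toy module in place of inference_forward:

import torch

class Reshaper(torch.nn.Module):
    def forward(self, x):
        # Shapes are read from the input, so they stay symbolic under torch.jit.script,
        # whereas torch.jit.trace would freeze the concrete numbers seen at trace time.
        b = x.size(0)
        c = x.size(1)
        return x.view(b, c, -1).sum(-1)

m = torch.jit.script(Reshaper())
print(m(torch.rand(1, 3, 16, 16)).shape)  # works
print(m(torch.rand(1, 3, 32, 32)).shape)  # still works at a different resolution

If scripting the real module is not feasible, the other option is to trace once per input resolution you need to support.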
st180518
Hi, we have developed a NN JIT compiler which is capable of compiling a whole deep learning model. Now we want to add TorchScript-style JIT support transparently, like "pytorch/glow/torch_glow" did, so that the user can just do import my_jit and all JIT work is forwarded to our backend; PyTorch then only acts as a frontend that produces a TorchScript graph. When reading torch_glow's code, I found it hard to understand how it should work, so I was wondering if there is some document/tutorial about how to implement a custom JIT backend?
st180519
You could add an op with a subgraph attribute that would run the subgraph in your system, register a custom pass moving ops you want to run yourself into the subgraph. This way you can even fall back to running ops you might not support in PyTorch. Best regards Thomas
st180520
Hi Thomas, thanks. I've noticed this solution; there is also a tutorial, jott - Writing a Toy Backend Compiler for PyTorch, that talks about it. So it seems we need to create a dummy_op and a custom post pass that fuses everything in the Torch graph into this dummy_op, and then we can do the JIT work during the dispatch on dummy_op? Is this right? I also notice that there is a torch::jit::registerPrePass; is it fine to do the fusing there too? We also have an IR optimization system, and we might want to get the raw graph IR. Could fusing in the pre pass cause any issue? Another question: glow also registers a new backend torch::jit::backend<TorchGlowBackend>("glow") and a preprocess for this backend torch::jit::backend_preprocess_register("glow", preprocess);. It seems this backend isn't used during the transparent JIT, so when should I also implement such a backend, or what is the use case of torch::jit::backend? Sincerely
st180521
This is approximately what I had in mind. (And I do have a branch somewhere that I forgot about that tries to make more of the JIT accessible from Python… Sigh.) There are some passes that you would want, but it probably is OK to register a pre pass, too, if you prefer the IR at that level. Now that you mention it, using backend probably is the right way to go. It is a relatively new part of the JIT (from 2020, in particular newer than the tutorial you linked). Best regards Thomas
st180522
When I make a small project to check. It works fine… But as soon as I put the same thing in the Kaldi (Pytorch lattice rescoring 2) and run the executable file then I receive the same error as mentioned below. Error data/pytorch/rnnlm/newmodel2.pt error loading the model _ivalue_ INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/api/object.cpp":19, please report a bug to PyTorch. (_ivalue at /pytorch/torch/csrc/jit/api/object.cpp:19) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f83b939d666 in /home/rakesh/rishabh_workspace/Garbage/kaldi/tools/libtorch/lib/libc10.so) frame #1: torch::jit::Object::_ivalue() const + 0xab (0x7f83acdc42cb in /home/rakesh/rishabh_workspace/Garbage/kaldi/tools/libtorch/lib/libtorch_cpu.so) frame #2: torch::jit::Object::find_method(std::string const&) const + 0x26 (0x7f83acdc43a6 in /home/rakesh/rishabh_workspace/Garbage/kaldi/tools/libtorch/lib/libtorch_cpu.so) frame #3: <unknown function> + 0x9192f (0x5592eb17892f in lattice-lmrescore-py-rnnlm) frame #4: <unknown function> + 0x86b78 (0x5592eb16db78 in lattice-lmrescore-py-rnnlm) frame #5: <unknown function> + 0x867d8 (0x5592eb16d7d8 in lattice-lmrescore-py-rnnlm) frame #6: <unknown function> + 0x23c38 (0x5592eb10ac38 in lattice-lmrescore-py-rnnlm) frame #7: __libc_start_main + 0xe7 (0x7f836b610b97 in /lib/x86_64-linux-gnu/libc.so.6) frame #8: <unknown function> + 0x2340a (0x5592eb10a40a in lattice-lmrescore-py-rnnlm) pyrnnlm.cc #include <utility> #include <fstream> #include "pyrnnlm/pytorch-rnnlm.h" #include "util/stl-utils.h" #include "util/text-utils.h" // torch::Tensorflow includes were moved after tfrnnlm/tensorflow-rnnlm.h include to // avoid macro redefinitions. See also the note in tfrnnlm/tensorflow-rnnlm.h. #include <torch/torch.h> #include <torch/script.h> #include <iostream> #include <memory> #include <dirent.h> namespace kaldi { using std::ifstream; using py_rnnlm::KaldiPyRnnlmWrapper; using py_rnnlm::PyRnnlmDeterministicFst; // read a unigram count file of the OOSs and generate extra OOS costs for words void SetUnkPenalties(const string &filename, const fst::SymbolTable& fst_word_symbols, std::vector<float> *out) { if (filename == "") return; out->resize(fst_word_symbols.NumSymbols(), 0); // default is 0 ifstream ifile(filename.c_str()); string word; float count, total_count = 0; while (ifile >> word >> count) { int id = fst_word_symbols.Find(word); KALDI_ASSERT(id != -1); // fst::kNoSymbol (*out)[id] = count; total_count += count; } for (int i = 0; i < out->size(); i++) { if ((*out)[i] != 0) { (*out)[i] = log ((*out)[i] / total_count); } } } // Read pytorch model files // Done **** void KaldiPyRnnlmWrapper::ReadPyModel(const std::string &py_model_path, int32 num_threads) { // Need to initialise it // torch::jit::script::Module module; try { // Deserialize the ScriptModule from a file using torch::jit::load(). 
//std::cout << "Model " << py_model_path.substr(0, py_model_path.size()-1) << " /newmodel2.pt"; std::string file("/newmodel2.pt"); // Load model in the module std::string path = py_model_path + file; std::string::iterator st = std::remove(path.begin(), path.end(), ' '); path.erase(st, path.end()); std::cout << path; module = torch::jit::load(path); } catch (const c10::Error& e) { std::cerr << " error loading the model\n"; //return -1; return; } std::cout << "Language Model\n\n"; // (Samrat): Think we need few of these, not all word_id_tensor_name_ = "word_id"; context_tensor_name_ = "context"; log_prob_tensor_name_ = "log_prob"; rnn_out_tensor_name_ = "rnn_out"; rnn_states_tensor_name_ = "rnn_states"; initial_state_tensor_name_ = "initial_state"; } // Done **** // Batch_size = 1 they have hard code it KaldiPyRnnlmWrapper::KaldiPyRnnlmWrapper( const KaldiPyRnnlmWrapperOpts &opts, const std::string &rnn_wordlist, const std::string &word_symbol_table_rxfilename, const std::string &unk_prob_file, const std::string &py_model_path): opts_(opts) { ReadPyModel(py_model_path, opts.num_threads); fst::SymbolTable *fst_word_symbols = NULL; if (!(fst_word_symbols = fst::SymbolTable::ReadText(word_symbol_table_rxfilename))) { KALDI_ERR << "Could not read symbol table from file " << word_symbol_table_rxfilename; } fst_label_to_word_.resize(fst_word_symbols->NumSymbols()); for (int32 i = 0; i < fst_label_to_word_.size(); ++i) { fst_label_to_word_[i] = fst_word_symbols->Find(i); if (fst_label_to_word_[i] == "") { KALDI_ERR << "Could not find word for integer " << i << " in the word " << "symbol table, mismatched symbol table or you have discoutinuous " << "integers in your symbol table?"; } } // first put all -1's; will check later fst_label_to_rnn_label_.resize(fst_word_symbols->NumSymbols(), -1); num_total_words = fst_word_symbols->NumSymbols(); // read rnn wordlist and then generate ngram-label-to-rnn-label map oos_ = -1; { // input. ifstream ifile(rnn_wordlist.c_str()); string word; int id = -1; eos_ = 0; while (ifile >> word) { id++; rnn_label_to_word_.push_back(word); // vector[i] = word int fst_label = fst_word_symbols->Find(word); if (fst_label == -1) { // fst::kNoSymbol if (id == eos_) continue; KALDI_ASSERT(word == opts_.unk_symbol && oos_ == -1); oos_ = id; continue; } KALDI_ASSERT(fst_label >= 0); fst_label_to_rnn_label_[fst_label] = id; } } if (fst_label_to_word_.size() > rnn_label_to_word_.size()) { KALDI_ASSERT(oos_ != -1); } num_rnn_words = rnn_label_to_word_.size(); // we must have an oos symbol in the wordlist if (oos_ == -1) return; for (int i = 0; i < fst_label_to_rnn_label_.size(); i++) { if (fst_label_to_rnn_label_[i] == -1) { fst_label_to_rnn_label_[i] = oos_; } } AcquireInitialTensors(); SetUnkPenalties(unk_prob_file, *fst_word_symbols, &unk_costs_); delete fst_word_symbols; } KaldiPyRnnlmWrapper::~KaldiPyRnnlmWrapper() { } // Done void KaldiPyRnnlmWrapper::AcquireInitialTensors() { // Status status; // get the initial context; this is basically the all-0 tensor /* (Samrat): Have to figure out get_initial_state(batch_size) ? what should btchsz be ? 
*/ //auto hidden = module.get_method("get_initial_state")({torch::tensor({1})}); //initial_context_ = hidden.toTensor(); initial_context_=module.get_method("get_initial_state")({torch::tensor({1})}).toTensor(); //changed function call name (Samrat) auto bosword = torch::tensor({eos_}); auto hidden = module.get_method("single_step_rnn_out")({initial_context_, bosword}); initial_cell_ = hidden.toTensor(); // { // std::vector<torch::Tensor> state; // status = bundle_.session->Run(std::vector<std::pair<string, torch::Tensor> >(), // {initial_state_tensor_name_}, {}, &state); // if (!status.ok()) { // KALDI_ERR << status.ToString(); // } // initial_context_ = state[0]; // } // get the initial pre-final-affine layer // { // std::vector<torch::Tensor> state; // torch::Tensor bosword(tensorflow::DT_INT32, {1, 1}); // bosword.scalar<int32>()() = eos_; // eos_ is more like a sentence boundary // std::vector<std::pair<string, torch::Tensor> > inputs = { // {word_id_tensor_name_, bosword}, // {context_tensor_name_, initial_context_}, // }; // status = bundle_.session->Run(inputs, {rnn_out_tensor_name_}, {}, &state); // if (!status.ok()) { // KALDI_ERR << status.ToString(); // } // initial_cell_ = state[0]; // } } /* // Need to change ***** BaseFloat KaldiPyRnnlmWrapper::GetLogProb(int32 word, int32 fst_word, const torch::Tensor &context_in, const torch::Tensor &cell_in, torch::Tensor *context_out, torch::Tensor *new_cell) { torch::Tensor thisword(torch::Tensor, {1, 1}); thisword.scalar<int32>()() = word; std::vector<torch::Tensor> outputs; std::vector<std::pair<string, torch::Tensor> > inputs = { {word_id_tensor_name_, thisword}, {context_tensor_name_, context_in}, }; if (context_out != NULL) { // The session will initialize the outputs // Run the session, evaluating our "c" operation from the graph Status status = bundle_.session->Run(inputs, {log_prob_tensor_name_, rnn_out_tensor_name_, rnn_states_tensor_name_}, {}, &outputs); if (!status.ok()) { KALDI_ERR << status.ToString(); } *context_out = outputs[1]; *new_cell = outputs[2]; } else { // Run the session, evaluating our "c" operation from the graph Status status = bundle_.session->Run(inputs, {log_prob_tensor_name_}, {}, &outputs); if (!status.ok()) { KALDI_ERR << status.ToString(); } } float ans; if (word != oos_) { ans = outputs[0].scalar<float>()(); } else { if (unk_costs_.size() == 0) { ans = outputs[0].scalar<float>()() - log(num_total_words - num_rnn_words); } else { ans = outputs[0].scalar<float>()() + unk_costs_[fst_word]; } } return ans; } */ /* Below is my(Samrat) modified version of the above function only. Replace if you think something is incorrect. 
*/ BaseFloat KaldiPyRnnlmWrapper::GetLogProb(int32 word, int32 fst_word, const torch::Tensor &context_in, const torch::Tensor &cell_in, torch::Tensor *context_out, torch::Tensor *new_cell) { //torch::Tensor thisword(torch::Tensor, {1, 1}); //thisword.scalar<int32>()() = word; torch::Tensor thisword = torch::tensor({word}); //std::vector<torch::Tensor> outputs; // std::vector<std::pair<string, torch::Tensor> > inputs = { // {word_id_tensor_name_, thisword}, // {context_tensor_name_, context_in}, // }; auto outputs = module.get_method("single_step")({context_in, thisword}); if (context_out != NULL) { // The session will initialize the outputs // Run the session, evaluating our "c" operation from the graph // Status status = bundle_.session->Run(inputs, // {log_prob_tensor_name_, // rnn_out_tensor_name_, // rnn_states_tensor_name_}, {}, &outputs); // if (!status.ok()) { // KALDI_ERR << status.ToString(); // } *context_out = module.get_method("single_step_rnn_out")({context_in, thisword}).toTensor(); *new_cell = module.get_method("single_step_rnn_state")({context_in, thisword}).toTensor(); } //else { // Run the session, evaluating our "c" operation from the graph // Status status = bundle_.session->Run(inputs, // {log_prob_tensor_name_}, {}, &outputs); // if (!status.ok()) { // KALDI_ERR << status.ToString(); // } //} /* (Samrat): Can through error so have to check manually in testLM Hopefully expect it to return a float */ float log_prob=(float)module.get_method("single_step_log")({context_in, thisword}).toDouble(); float ans; if (word != oos_) { //ans = outputs[0].scalar<float>()(); ans = log_prob; } else { if (unk_costs_.size() == 0) { //ans = outputs[0].scalar<float>()() - log(num_total_words - num_rnn_words); ans = log_prob - log(num_total_words - num_rnn_words); } else { //ans = outputs[0].scalar<float>()() + unk_costs_[fst_word]; ans = log_prob + unk_costs_[fst_word]; } } return ans; } // Done ***** const torch::Tensor& KaldiPyRnnlmWrapper::GetInitialContext() const { return initial_context_; } const torch::Tensor& KaldiPyRnnlmWrapper::GetInitialCell() const { return initial_cell_; } int KaldiPyRnnlmWrapper::FstLabelToRnnLabel(int i) const { KALDI_ASSERT(i >= 0 && i < fst_label_to_rnn_label_.size()); return fst_label_to_rnn_label_[i]; } // Done ***** PyRnnlmDeterministicFst::PyRnnlmDeterministicFst(int32 max_ngram_order, KaldiPyRnnlmWrapper *rnnlm) { KALDI_ASSERT(rnnlm != NULL); max_ngram_order_ = max_ngram_order; rnnlm_ = rnnlm; std::vector<Label> bos; const torch::Tensor& initial_context = rnnlm_->GetInitialContext(); const torch::Tensor& initial_cell = rnnlm_->GetInitialCell(); state_to_wseq_.push_back(bos); state_to_context_.push_back(new torch::Tensor(initial_context)); state_to_cell_.push_back(new torch::Tensor(initial_cell)); wseq_to_state_[bos] = 0; start_state_ = 0; } // Done ***** PyRnnlmDeterministicFst::~PyRnnlmDeterministicFst() { for (int i = 0; i < state_to_context_.size(); i++) { delete state_to_context_[i]; } for (int i = 0; i < state_to_cell_.size(); i++) { delete state_to_cell_[i]; } } // Done ***** void PyRnnlmDeterministicFst::Clear() { // similar to the destructor but we retain the 0-th entries in each map // which corresponds to the <bos> state for (int i = 1; i < state_to_context_.size(); i++) { delete state_to_context_[i]; } for (int i = 1; i < state_to_cell_.size(); i++) { delete state_to_cell_[i]; } state_to_context_.resize(1); state_to_cell_.resize(1); state_to_wseq_.resize(1); wseq_to_state_.clear(); wseq_to_state_[state_to_wseq_[0]] = 0; } // Done ***** 
fst::StdArc::Weight PyRnnlmDeterministicFst::Final(StateId s) { // At this point, we should have created the state. KALDI_ASSERT(static_cast<size_t>(s) < state_to_wseq_.size()); std::vector<Label> wseq = state_to_wseq_[s]; BaseFloat logprob = rnnlm_->GetLogProb(rnnlm_->GetEos(), -1, // only need type; this param will not be used *state_to_context_[s], *state_to_cell_[s], NULL, NULL); return Weight(-logprob); } // Done ***** bool PyRnnlmDeterministicFst::GetArc(StateId s, Label ilabel, fst::StdArc *oarc) { KALDI_ASSERT(static_cast<size_t>(s) < state_to_wseq_.size()); std::vector<Label> wseq = state_to_wseq_[s]; torch::Tensor *new_context = new torch::Tensor(); torch::Tensor *new_cell = new torch::Tensor(); // look-up the rnn label from the FST label int32 rnn_word = rnnlm_->FstLabelToRnnLabel(ilabel); BaseFloat logprob = rnnlm_->GetLogProb(rnn_word, ilabel, *state_to_context_[s], *state_to_cell_[s], new_context, new_cell); wseq.push_back(rnn_word); if (max_ngram_order_ > 0) { while (wseq.size() >= max_ngram_order_) { // History state has at most <max_ngram_order_> - 1 words in the state. wseq.erase(wseq.begin(), wseq.begin() + 1); } } std::pair<const std::vector<Label>, StateId> wseq_state_pair( wseq, static_cast<Label>(state_to_wseq_.size())); // Attemps to insert the current <lseq_state_pair>. If the pair already exists // then it returns false. typedef MapType::iterator IterType; std::pair<IterType, bool> result = wseq_to_state_.insert(wseq_state_pair); // If the pair was just inserted, then also add it to <state_to_wseq_> and // <state_to_context_>. if (result.second == true) { state_to_wseq_.push_back(wseq); state_to_context_.push_back(new_context); state_to_cell_.push_back(new_cell); } else { delete new_context; delete new_cell; } // Creates the arc. oarc->ilabel = ilabel; oarc->olabel = ilabel; oarc->nextstate = result.first->second; oarc->weight = Weight(-logprob); return true; } } // namespace kaldi
st180523
@krrishabh Can you post this in the 'jit' category? Also, check which pytorch/libtorch version was used to generate your .pt file and which libtorch version you used to load it. This might be a version mismatch as well.
st180524
I have changed the tag from C++ to jit. libtorch is used to load the model; libtorch build version: 1.6.0.dev20200501+cu101. The model is saved with PyTorch version 1.3.1. Moreover, I have made a small project where it works fine, but in my Kaldi project the model cannot be loaded. (Both have the same configuration.)
st180525
@krrishabh I think a version mismatch might be the root cause. If you saved your .pt with 1.3.1, please use the same version of libtorch to load it. We have a similar issue here: https://github.com/pytorch/pytorch/issues/39623. Please read the comments to see how to download libtorch 1.3.1, or, if possible, upgrade your pytorch to 1.6dev; I believe 1.5 is fine as well.
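If matching the versions by upgrading one side is not possible, another hedged option is to re-save the TorchScript archive with the PyTorch release that matches the libtorch build you link against (file names are placeholders):

import torch

print(torch.__version__)  # should match the libtorch release used in the C++ project

# Loading and re-saving with the matching release regenerates the archive
# in that release's serialization format.
m = torch.jit.load("newmodel2.pt", map_location="cpu")
m.save("newmodel2_resaved.pt")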
st180526
I have upgraded the PyTorch version to 1.5 but I am getting the same error. Moreover, it was not a version problem, because when I build a small project it works correctly.
st180527
@krrishabh Since the error message says “error loading the model”, it is likely the program failed to load the model file in torch::jit::load(path). Please make sure data/pytorch/rnnlm/newmodel2.pt exists and it is accessible from the binary. If you compile the program into build directory, the path might be ../data/pytorch/rnnlm/newmodel2.pt. And the message “_ivalue_ INTERNAL ASSERT FAILED at ...” is shown when an instance of torch::jit::script::Module is not initialized (like the situation that the exception from torch::jit::load() is caught and you reference the uninitialized module instance after that).
st180528
@m4saka Did you figure out this issue? I have been suffering from the same one. torch::jit::load loads the model without a problem in an individual project, but when it is combined with another project, it shows:

error loading the model
terminate called after throwing an instance of 'c10::Error'
what(): ivalue INTERNAL ASSERT FAILED at "…/torch/csrc/jit/api/object.cpp":19, please report a bug to PyTorch.
Exception raised from _ivalue at …/torch/csrc/jit/api/object.cpp:19 (most recent call first):
st180529
I am facing the same problem. In an individual console project it runs without any problem, but in the combined project it can't load the same model.
st180530
I'm calling torch.jit.trace() on my model and it appears to be stuck in a loop at these lines:

File "/data/scratch/karima/anaconda3/envs/mother/lib/python3.7/difflib.py", line 1032, in _fancy_helper
  yield from g
File "/data/scratch/karima/anaconda3/envs/mother/lib/python3.7/difflib.py", line 1020, in _fancy_replace
  yield from self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi)
File "/data/scratch/karima/anaconda3/envs/mother/lib/python3.7/difflib.py", line 1032, in _fancy_helper
  yield from g
File "/data/scratch/karima/anaconda3/envs/mother/lib/python3.7/difflib.py", line 1020, in _fancy_replace
  yield from self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi)
File "/data/scratch/karima/anaconda3/envs/mother/lib/python3.7/difflib.py", line 1032, in _fancy_helper

My model is implemented as a DAG of Python objects that inherit from nn.Module and call the .run() functions of their child nodes, which in turn call the .forward() implementations, as in the example below:

class InterleavedSumOp(nn.Module):
    def __init__(self, operand, C_out):
        super(InterleavedSumOp, self).__init__()
        self.C_out = C_out
        self._operands = nn.ModuleList([operand])
        self.output = None

    def _initialize_parameters(self):
        self._operands[0]._initialize_parameters()

    def run(self, model_inputs):
        if self.output is None:
            operand = self._operands[0].run(model_inputs)
            self.output = self.forward(operand)
        return self.output

    def forward(self, x):
        x_shape = list(x.shape)
        x_reshaped = torch.reshape(x, (x_shape[0], x_shape[1]//self.C_out, self.C_out, x_shape[2], x_shape[3]))
        out = x_reshaped.sum(1, keepdim=False)
        return out

    def to_gpu(self, gpu_id):
        self._operands[0].to_gpu(gpu_id)
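Those difflib frames come from the trace checker: torch.jit.trace re-runs the trace and diffs the two graph dumps to report divergences, and on a large DAG that diff can take effectively forever. A hedged first step is to disable the check; note also that caching self.output between calls is a side effect the tracer cannot represent, so clearing it before tracing is probably needed as well. The Sequential below is just a stand-in for the DAG model:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())  # stand-in for the DAG model
example = torch.rand(1, 3, 224, 224)

# check_trace=False skips the re-run-and-diff step that is looping inside difflib.
traced = torch.jit.trace(model, example, check_trace=False)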
st180531
Hi, I’m trying to trace FasterRCNN to use in Pytorch Mobile on iOS. I simply trace as shown below: model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) model.eval() input_tensor = torch.rand(1,3,224,224) script_model = torch.jit.trace(model, input_tensor) script_model.save("models/fRCNN_resnet50.pt") I receive a “Only tensors or tuples of tensors can be output from traced functions (getOutput at …/torch/csrc/jit/tracer.cpp:209)” error as shown below RuntimeError: Only tensors or tuples of tensors can be output from traced functions (getOutput at ../torch/csrc/jit/tracer.cpp:209) frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x128cc29e7 in libc10.dylib) frame #1: torch::jit::tracer::TracingState::getOutput(c10::IValue const&) + 1785 (0x120164069 in libtorch.dylib) frame #2: torch::jit::tracer::exit(std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> > const&) + 232 (0x120167108 in libtorch.dylib) frame #3: torch::jit::tracer::createGraphByTracing(pybind11::function const&, torch::jit::tracer::TypedStack, pybind11::function const&, bool, torch::jit::script::Module*) + 916 (0x11c957914 in libtorch_python.dylib) frame #4: void pybind11::cpp_function::initialize<torch::jit::script::initJitScriptBindings(_object*)::$_16, void, torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::script::initJitScriptBindings(_object*)::$_16&&, void (*)(torch::jit::script::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::function, pybind11::tuple, pybind11::function, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) + 197 (0x11c993185 in libtorch_python.dylib) frame #5: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3324 (0x11c4cd92c in libtorch_python.dylib) frame #6: _PyCFunction_FastCallDict + 183 (0x10e3d0167 in Python) frame #7: call_function + 184 (0x10e452d28 in Python) frame #8: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #9: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #10: fast_function + 545 (0x10e454141 in Python) frame #11: call_function + 401 (0x10e452e01 in Python) frame #12: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #13: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #14: fast_function + 545 (0x10e454141 in Python) frame #15: call_function + 401 (0x10e452e01 in Python) frame #16: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #17: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #18: PyEval_EvalCode + 100 (0x10e448954 in Python) frame #19: builtin_exec + 548 (0x10e445fe4 in Python) frame #20: _PyCFunction_FastCallDict + 491 (0x10e3d029b in Python) frame #21: call_function + 439 (0x10e452e27 in Python) frame #22: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #23: gen_send_ex + 183 (0x10e3a7fe7 in Python) frame #24: _PyEval_EvalFrameDefault + 11552 (0x10e44b740 in Python) frame #25: gen_send_ex + 183 (0x10e3a7fe7 in Python) frame #26: _PyEval_EvalFrameDefault + 11552 (0x10e44b740 in Python) frame #27: gen_send_ex + 183 (0x10e3a7fe7 in Python) 
frame #28: _PyCFunction_FastCallDict + 560 (0x10e3d02e0 in Python) frame #29: call_function + 439 (0x10e452e27 in Python) frame #30: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #31: fast_function + 381 (0x10e45409d in Python) frame #32: call_function + 401 (0x10e452e01 in Python) frame #33: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #34: fast_function + 381 (0x10e45409d in Python) frame #35: call_function + 401 (0x10e452e01 in Python) frame #36: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #37: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #38: _PyFunction_FastCallDict + 763 (0x10e45445b in Python) frame #39: _PyObject_FastCallDict + 247 (0x10e3873e7 in Python) frame #40: _PyObject_Call_Prepend + 149 (0x10e387505 in Python) frame #41: PyObject_Call + 96 (0x10e387220 in Python) frame #42: _PyEval_EvalFrameDefault + 28250 (0x10e44f87a in Python) frame #43: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #44: fast_function + 545 (0x10e454141 in Python) frame #45: call_function + 401 (0x10e452e01 in Python) frame #46: _PyEval_EvalFrameDefault + 27670 (0x10e44f636 in Python) frame #47: gen_send_ex + 183 (0x10e3a7fe7 in Python) frame #48: builtin_next + 92 (0x10e446bcc in Python) frame #49: _PyCFunction_FastCallDict + 491 (0x10e3d029b in Python) frame #50: call_function + 439 (0x10e452e27 in Python) frame #51: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #52: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #53: fast_function + 545 (0x10e454141 in Python) frame #54: call_function + 401 (0x10e452e01 in Python) frame #55: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #56: gen_send_ex + 183 (0x10e3a7fe7 in Python) frame #57: builtin_next + 92 (0x10e446bcc in Python) frame #58: _PyCFunction_FastCallDict + 491 (0x10e3d029b in Python) frame #59: call_function + 439 (0x10e452e27 in Python) frame #60: _PyEval_EvalFrameDefault + 27511 (0x10e44f597 in Python) frame #61: _PyEval_EvalCodeWithName + 2447 (0x10e45388f in Python) frame #62: fast_function + 545 (0x10e454141 in Python) frame #63: call_function + 401 (0x10e452e01 in Python) Could someone explain to me how to properly trace or script a pretrained object detection model such as this one? I don’t know which steps I might be missing if any!
st180532
Solved by xta0 in post #7.
st180533
The MaskRCNN-based models don't support tracing due to the error you saw, but thanks to this PR you can use torch.jit.script to compile the whole network. You'll need to get the most current version of torchvision by building it from source, then you can do:

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
script_model = torch.jit.script(model)
st180534
driazati: building it from source
Hi! I know some time has passed between now and when this response was written. Would recompiling from source still be the solution for this? Will try tonight! Thanks
st180535
We had a release recently, so it should work so long as you're using the latest versions of PyTorch and torchvision.

# Clear out old versions
pip uninstall torch
pip uninstall torch
pip uninstall torchvision
# Install the most recent versions
pip install torch torchvision
st180536
driazati: script_model = torch.jit.script(model) Okay, I was able to successfuly script the model as a pt file. I proceeded to place this pt file in an iOS project via pytorch mobile bridging headers. I also changed the settings to accept an 800x800 input image since the model specifies that. However when I try to run predict/inference on Swift/iOS I get the following error. 2020-01-29 12:43:32.086469-0500 torchMobile[11047:2729940] Unknown builtin op: torchvision::_new_empty_tensor_op. Could not find any similar ops to torchvision::_new_empty_tensor_op. This op may not exist or may not be currently supported in TorchScript. : at /Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/torchvision/ops/new_empty_tensor.py:16:11 output (Tensor) """ return torch.ops.torchvision._new_empty_tensor_op(x, shape) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE Serialized at code/__torch__/torchvision/ops/new_empty_tensor.py:4:7 def _new_empty_tensor(x: Tensor, shape: List[int]) -> Tensor: _0 = ops.torchvision._new_empty_tensor_op(x, shape) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return _0 '_new_empty_tensor' is being compiled since it was called from 'interpolate' Serialized at code/__torch__/torchvision/ops/misc.py:25:2 align_corners: Optional[bool]=None) -> Tensor: _1 = __torch__.torchvision.ops.misc._output_size _2 = __torch__.torchvision.ops.new_empty_tensor._new_empty_tensor ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE _3 = uninitialized(Tensor) if torch.gt(torch.numel(input), 0): 'interpolate' is being compiled since it was called from 'GeneralizedRCNNTransform.resize' Serialized at code/__torch__/torchvision/models/detection/transform.py:79:4 target: Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]: _18 = __torch__.torchvision.models.detection.transform.resize_boxes _19 = __torch__.torchvision.ops.misc.interpolate ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE _20 = __torch__.torchvision.models.detection.transform.resize_keypoints _21 = uninitialized(Tuple[Tensor, Optional[Dict[str, Tensor]]]) 'GeneralizedRCNNTransform.resize' is being compiled since it was called from 'GeneralizedRCNNTransform.forward' at /Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/torchvision/models/detection/transform.py:47:34 "of shape [C, H, W], got {}".format(image.shape)) image = self.normalize(image) image, target_index = self.resize(image, target_index) ~~~~~~~~~~~ <--- HERE images[i] = image if targets is not None and target_index is not None: Serialized at code/__torch__/torchvision/models/detection/transform.py:29:33 pass image0 = (self).normalize(image, ) _2 = (self).resize(image0, target_index, ) ~~~~~~~~~~~~ <--- HERE image1, target_index0, = _2 _3 = torch._set_item(images0, i, image1) Fatal error: Can't find the model file!: file /Users/hxh85ki/thd-visual-ai/pytorchMobile/torchMobile/torchMobile/ViewController.swift, line 110 2020-01-29 12:43:32.088293-0500 torchMobile[11047:2729940] Fatal error: Can't find the model file!: file /Users/hxh85ki/thd-visual-ai/pytorchMobile/torchMobile/torchMobile/ViewController.swift, line 110 My model file is definitely in the project and I’m able to successfully able to run any normal image classification models on iOS. But this is my first attempt with Object Detection using Faster RCNN. Where am I going wrong? Any help would be appreciated. If necessary I could also take it to the mobile thread. 
From my understanding, if I can trace/script a pt file for mobile, pytorch mobile should be able to run it? This is a vanilla Faster RCNN Resnet50 fpn. Thanks, Haris
st180537
We don’t currently build and ship torchvision or torchaudio for mobile. We’re looking into it.
st180538
@HussainHaris David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model torch.jit.export_opnames(traced_script_module)
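For completeness, a small sketch of how that check might look end to end (MobileNet is just a stand-in for a model expected to work on mobile):

import torch
import torchvision

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# Lists the operators the traced module needs at runtime; ops outside the
# core ATen set (e.g. torchvision::* ops) will not be present in the stock
# mobile builds.
print(torch.jit.export_opnames(traced))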
st180539
@David_Reiss I tried to load a Faster RCNN model with nightly libtorch on Win10, but it failed. Is Win10 supported?
st180540
Thanks. How is shipping torchvision models for object detection or segmentation, such as this FasterRCNN, different from, say, shipping a resnet50 on mobile? Aren't resnet50 and other CV models also torchvision models? Just so I can understand better!
st180541
Detection models use special ops that are not part of the PyTorch core. Some segmentation architectures (like U-Net) work fine without those ops. We are looking into supporting them. Win10 is supported.
st180542
I've also run into this same issue using the pre-trained Faster RCNN model from pytorch, saved using:

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
script_module = torch.jit.script(model)
script_module.save("../models/frcnn_torchscript.pt")

Module.load using the Java wrappers in Android Studio gives:

Unknown builtin op: torchvision::_new_empty_tensor_op.
Could not find any similar ops to torchvision::_new_empty_tensor_op. This op may not exist or may not be currently supported in TorchScript.

Is this something I could fix by compiling the latest Pytorch Android from source?
st180543
As David stated, F-RCNN is a detection model using special ops that are not part of the PyTorch core, and it is therefore not directly scriptable for mobile deployment, unfortunately.
st180544
Ok thanks @HussainHaris. Do you have any insight on what this change did and what I could do to build it into the android libraries? It seems like there is a torchvision package available now (used in this example I was following) which uses at least some of the vision ops, so maybe there is a way for me to hack the missing ops in and unblock myself for a while?
st180545
mrpropellers: this change
I didn't see this actually; maybe @David_Reiss could shed some light on these changes for MaskRCNN scriptability? Please let me know @mrpropellers of any updates with this attempt. Currently I'm trying to add MaskRCNN Resnet50 via the MLIR/TensorFlow Lite experimental converter.
st180546
Hi @HussainHaris. It has been a while since this issue was active, so I'm wondering if you've had any more success with this in more recent versions of torch and torchvision. I'm trying to do the same thing as you, where I save a trace of a faster rcnn model, and the problem appears to be that those ops functions are not available for tracing. Are there any ways to do this now that the packages have been updated? Thank you.
st180547
I have a customized C++ operation batch_score_nms which returns a vector<at::Tensor> of size (batch_size * 3). When I trace this graph, I get the output below:

%470 : Tensor[] = torch_ipex::batch_score_nms(%bboxes, %probs, %3, %2) # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.cpu-models/inference/others/cloud/single_stage_detector/pytorch/utils.py:169:0
%471 : Tensor, %472 : Tensor, %473 : Tensor = prim::ListUnpack(%470)
%477 : Tensor[] = prim::ListConstruct(%471, %472, %473)
return (%477)

I am a little confused about why we need to ListUnpack the tensor list and then pack it again for the output. It seems redundant.
st180548
I am getting the following error while trying to load a saved model checkpoint (.pth file). RuntimeError: Error(s) in loading state_dict for DataParallel: Unexpected key(s) in state_dict: "module.scibert_layer.embeddings.position_ids" I trained my sequence labeling model in DataParallel (torch version 1.7.0) but am trying to load it without the DataParallel (torch version 1.9.0). Currently, I understand that not using DataParallel caused the issue of RuntimeError: Error(s) in loading state_dict for DataParallel:, but could it also be because I am using different versions of torch or training and loading the model checkpoint? The model is wrapped in DataParallel using the following chunk of code. if exp_args.parallel == 'true': if torch.cuda.device_count() > 1: model = nn.DataParallel(model, device_ids = [0, 1, 2, 3]) print("Using", len(model.device_ids), " GPUs!") print("Using", str(model.device_ids), " GPUs!") model.to(f'cuda:{model.device_ids[0]}') elif exp_args.parallel == 'false': model = nn.DataParallel(model, device_ids = [0]) This is my model. DataParallel( (module): SCIBERTPOSAttenCRF( (scibert_layer): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(31090, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (2): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) 
(intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (3): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (4): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (5): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (6): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (7): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): 
Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (8): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (9): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (10): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) 
(dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (lstmpos_layer): LSTM(44, 20, batch_first=True, bidirectional=True) (self_attention): MultiheadAttention( (out_proj): NonDynamicallyQuantizableLinear(in_features=40, out_features=40, bias=True) ) (lstm_layer): LSTM(808, 512, batch_first=True, bidirectional=True) (hidden2tag): Linear(in_features=1024, out_features=2, bias=True) (crf_layer): CRF(num_tags=2) ) ) How should I go ahead correctly wrapping the checkpoint in nn.DataParallel or should I use the correct version of torch that could fix this problem? I will be grateful for any help or hint.
st180549
Solved by someAdjectiveNoun in post #2 The error is solved after wrapping the model checkpoint in nn.DataParallel in the following way and fixing the version mismatch. checkpoint = torch.load('model.pth', map_location='cuda:0') model.load_state_dict( checkpoint ) model = torch.nn.DataParallel(model, device_ids=[0])
st180550
The error is solved after wrapping the model checkpoint in nn.DataParallel in the following way and fixing the version mismatch.
checkpoint = torch.load('model.pth', map_location='cuda:0')
model.load_state_dict( checkpoint )
model = torch.nn.DataParallel(model, device_ids=[0])
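For reference, another pattern that seems to work when the checkpoint was saved from a DataParallel model is to strip the module. prefix from the keys before loading into a bare model (strict=False is only needed if version differences add or remove buffers such as position_ids):
state_dict = torch.load('model.pth', map_location='cuda:0')
# drop the "module." prefix that nn.DataParallel adds to every parameter name
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
model.load_state_dict(state_dict, strict=False)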
st180551
Hi, I am writing my custom layers in C++. I have created the Linear layer and now I want to parallelize the GEMM function (I use a custom naive GEMM). Can I JIT-compile with OpenMP threads? When I try to use an OpenMP pragma for a parallel for loop, PyTorch gives me weird errors saying it does not know some of my ATen types, e.g. ('Tensor' does not name a type; did you mean 'THTensor'). Thanks in advance
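In case it helps anyone, what I have been experimenting with (not sure it is the officially recommended route) is to make sure the .cpp includes <torch/extension.h> — I suspect the 'Tensor'/'THTensor' error means the ATen headers are not visible in that translation unit — and to pass the OpenMP flags explicitly when JIT-loading the extension. A rough sketch, where the file name my_gemm.cpp is made up:
from torch.utils.cpp_extension import load

# my_gemm.cpp is assumed to #include <torch/extension.h> and to guard the
# GEMM loop with "#pragma omp parallel for"
my_gemm = load(
    name="my_gemm_cpp",
    sources=["my_gemm.cpp"],
    extra_cflags=["-O3", "-fopenmp"],
    extra_ldflags=["-lgomp"],
    verbose=True,
)
An alternative worth considering is at::parallel_for from ATen/Parallel.h, which avoids managing the OpenMP flags by hand.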
st180552
I have a model that has a GRU in it. I have traced the model but when I load it and call the forward method it works only if the length of the sequence I use is the same as the dummy input used when tracing. I get this warning when tracing: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently ​give incorrect results). My dummy sequence has a length of 25 with features of size 256 (6400). I chose this at random since the model should work for variable sequence lengths. However, when I load the model and pass a sequence with a different length to the dummy input I get this error: RuntimeError: shape ‘[-1, 30, 256]’ is invalid for input of size 6400 Is this normal (i.e. what the warning was about) and if so are there any workarounds? I suppose one workaround is to have a max size and pad the tensor when I use it but this is not optimal.
st180553
Solved by dfalbel in post #4 I think scripting works fine in that case and is probably the best option: import torch import torch.nn as nn class MyGRU (nn.Module): def __init__ (self): super(MyGRU, self).__init__() self.gru = nn.GRU(10, 32, 5) def forward (self, x): return self.gru(x) module = torch.jit.sc…
st180554
AFAICT this is exactly the case where tracing doesn't work well. See the Warning section in the torch.jit.trace documentation page: https://pytorch.org/docs/stable/generated/torch.jit.trace.html#torch.jit.trace 3
… Tracing will not record any control-flow like if-statements or loops. …
It also offers a possible solution:
In cases like these, tracing would not be appropriate and scripting 2 is a better choice. If you trace such models, you may silently get incorrect results on subsequent invocations of the model. The tracer will try to emit warnings when doing something that may cause an incorrect trace to be produced.
st180555
The problem in my case seems to be the handling of variable-sized tensors (due to variable sequence lengths). I assume then that this is internally implemented with a loop. I will look into scripting, but if the loop is in the PyTorch code, how would I do it? Another alternative that I just considered is passing the hidden state into the network, which would output the hidden state for use in future steps. That way I suppose I could use a dummy with length 1 and perform the loop outside. Would something like this work?
st180556
I think scripting works fine in that case and is probably the best option:
import torch
import torch.nn as nn

class MyGRU (nn.Module):
    def __init__ (self):
        super(MyGRU, self).__init__()
        self.gru = nn.GRU(10, 32, 5)

    def forward (self, x):
        return self.gru(x)

module = torch.jit.script(MyGRU())
print(module(torch.randn(100, 15, 10))[0].shape)
print(module(torch.randn(100, 20, 10))[0].shape)
st180557
Hi, I am planning to use GitHub - pierre-wilmot/NeuralTextureSynthesis: Code for "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses" 1 However, this seems to be done with previous Pytorch and CUDA versions. Currently I am using Pytorch 1.6.0, Python 3.7.9, Windows 10, CUDA 10.1/10.2, but got the following errors: Style Transfer Traceback (most recent call last): File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1515, in _run_ninja_build env=env) File "D:\Anaconda3\envs\tf2\lib\subprocess.py", line 512, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "main.py", line 17, in <module> cpp = torch.utils.cpp_extension.load(name="histogram_cpp", sources=["histogram.cpp", "histogram.cu"]) File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 974, in load keep_intermediates=keep_intermediates) File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1179, in _jit_compile with_cuda=with_cuda) File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1279, in _write_ninja_file_and_build_library error_prefix="Error building extension '{}'".format(name)) File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1529, in _run_ninja_build raise RuntimeError(message) RuntimeError: Error building extension 'histogram_cpp': [1/3] cl /showIncludes -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\sit e-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -c .../histogram.cpp /Fohistogram.o Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29913 for x64 Copyright (C) Microsoft Corporation. All rights reserved. 
[2/3] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_witho ut_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xco mpiler /MD -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\ tf2\lib\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA _NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=sm_75 -c .../histogram.cu -o histogram.cuda.o FAILED: histogram.cuda.o C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll _interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\tf2\li b\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HA LF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=sm_75 -c .../histogram.cu -o histogram.cuda.o D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\c10/util/ThreadLocalDebugInfo.h(12): warning: modifier is ignored on an enum specifier D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\ATen/record_function.h(18): warning: modifier is ignored on an enum specifier D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(483): error: a member with an in-class initializer must be const D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(496): error: a member with an in-class initializer must be const D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(510): error: a member with an in-class initializer must be const D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(523): error: a member with an in-class initializer must be const D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/autograd/profiler.h(97): warning: modifier is ignored on an enum specifier D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/autograd/profiler.h(126): warning: modifier is ignored on an enum specifier 4 errors detected in the compilation of "C:/Users/.../AppData/Local/Temp/tmpxft_00004a38_00000000-10_histogram.cpp1.ii". 
histogram.cu
ninja: build stopped: subcommand failed.
Does anyone have an idea how to fix this? I am also wondering if there is a general way to adapt older-version CUDA kernels to newer ones. Thanks in advance.
st180558
The error seems to be related to this issue 1 so you might want to install a newer PyTorch release or cherry-pick the fix 3 into your branch and rebuild PyTorch from source, if 1.6.0 is needed.
st180559
Hi everyone, I am wondering whether there is a way to pass custom objects to forward() when using jit.trace. I use custom objects to isolate some behaviour from the modules I pass my object to. For example some modules in my model can work with either Tensors or dictionaries of tensors as long as view() is supported. I implemented a wrapper that provides element-wise view() for dictionaries of tensors. Here are some examples that show what I want to achieve and what the problem is:
Example 1:
def foo(x, y):
    return 2 * x["a"] + y

inputs_bar = ({"a": torch.rand(3)}, torch.rand(3))
traced_foo = torch.jit.trace(foo, inputs_bar)
This works fine as dict is supported.
Example 2
class Bar:
    def __init__(self, a):
        self.a = a

    def return_modified_content(self):
        return self.a + 10

def bar(x: Bar, y):
    return 2 * x.return_modified_content() + y

inputs_bar = (Bar(torch.rand(3)), torch.rand(3))
traced_bar = torch.jit.trace(bar, inputs_bar)
Results in
traced = torch._C._create_function_from_trace(
RuntimeError: Type 'Tuple[__torch__.Bar, Tensor]' cannot be traced. Only Tensors and (possibly nested) Lists, Dicts, and Tuples of Tensors can be traced
From the error I conclude that for some reason arguments must be of some default container types.
Example 3
class Bar2(dict):
    def __init__(self, a):
        super().__init__({"a": a})
        self.a = a

    def return_modified_content(self):
        return self.a + 10

def bar2(x: Bar2, y):
    print(type(x))
    return 2 * x.return_modified_content() + y

inputs_bar2 = (Bar2(torch.rand(3)), torch.rand(3))
traced_bar2 = torch.jit.trace(bar2, inputs_bar2)
outputs <class 'dict'> and throws error AttributeError: 'dict' object has no attribute 'return_modified_content'. It seems like Bar2 is converted to dict. Is there a way to make this work?
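The workaround I have ended up with for now is to keep the custom object outside the trace: since tracing only records tensor operations, I trace a tensor-to-tensor core function and do the object handling in plain Python around it. A rough sketch based on Example 2:
def bar_core(a, y):
    # only tensors cross the trace boundary
    return 2 * (a + 10) + y

traced_core = torch.jit.trace(bar_core, (torch.rand(3), torch.rand(3)))

def bar(x: Bar, y):
    # unpack the custom object in ordinary Python, outside the traced graph
    return traced_core(x.a, y)
If the object handling itself needs to be serialized, torch.jit.script with a TorchScript class (@torch.jit.script on the class) might be a better fit, within its restrictions.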
st180560
hi sir, I’m playing other person’s crnn project ( GitHub - meijieru/crnn.pytorch: Convolutional recurrent network in pytorch )and try to convert the code to torchScript so I can run the model in the libtorch. In order not to show the "_is_full_backward_hook : NoneType " error in libtorch, I trace all the modules and output it . import torch import torch.nn as nn class BidirectionalLSTM(nn.Module): def __init__(self, nIn, nHidden, nOut): super(BidirectionalLSTM, self).__init__() tempLayer=torch.jit.trace(nn.LSTM(nIn, nHidden, bidirectional=True),torch.rand(nHidden,nIn,nIn)) self.rnn = tempLayer tempLayer2 = torch.jit.trace(nn.Linear(nHidden * 2, nOut), torch.rand(nOut,nHidden * 2)) self.embedding = tempLayer2 def forward(self, input): recurrent, _ = self.rnn(input) T, b, h = recurrent.size() t_rec = recurrent.view(T * b, h) output = self.embedding(t_rec) # [T * b, nOut] output = output.view(T, b, -1) return output class CRNN(nn.Module): def __init__(self, imgH, nc, nclass, nh, n_rnn=2, leakyRelu=False): super(CRNN, self).__init__() assert imgH % 16 == 0, 'imgH has to be a multiple of 16' ks = [3, 3, 3, 3, 3, 3, 2] ps = [1, 1, 1, 1, 1, 1, 0] ss = [1, 1, 1, 1, 1, 1, 1] nm = [64, 128, 256, 256, 512, 512, 512] cnn = nn.Sequential() def convRelu(i, batchNormalization = False): nIn = nc if i == 0 else nm[i - 1] nOut = nm[i] tempLayer = torch.jit.trace(nn.Conv2d(nIn, nOut, ks[i], ss[i], ps[i]),torch.rand(1,nIn,100,100)) cnn.add_module('conv{0}'.format(i), tempLayer) if batchNormalization: # error happened here! tempLayer = torch.jit.trace(nn.BatchNorm2d(nOut), torch.rand(20,nOut, 35, 45)) cnn.add_module('batchnorm{0}'.format(i), tempLayer) if leakyRelu: tempLayer = torch.jit.trace(nn.LeakyReLU(0.2, inplace=True), torch.rand(1, 1, 100, 100)) cnn.add_module('relu{0}'.format(i), tempLayer) else: tempLayer = torch.jit.trace(nn.ReLU(True), torch.rand(2)) cnn.add_module('relu{0}'.format(i), tempLayer) convRelu(0) tempLayer=torch.jit.trace(nn.MaxPool2d(2, 2),torch.rand(64,16,64)) cnn.add_module('pooling{0}'.format(0), tempLayer) # 64x16x64 convRelu(1) tempLayer = torch.jit.trace(nn.MaxPool2d(2, 2), torch.rand(128, 8, 32)) cnn.add_module('pooling{0}'.format(1), tempLayer) # 128x8x32 convRelu(2, True) convRelu(3) tempLayer = torch.jit.trace( nn.MaxPool2d((2, 2), (2, 1), (0, 1)), torch.rand(256,4,16)) cnn.add_module('pooling{0}'.format(2), tempLayer) # 256x4x16 convRelu(4, True) convRelu(5) tempLayer = torch.jit.trace(nn.MaxPool2d((2, 2), (2, 1), (0, 1)), torch.rand(512, 2, 16)) cnn.add_module('pooling{0}'.format(3), tempLayer) # 512x2x16 convRelu(6, True) # 512x1x16 self.cnn = cnn self.rnn = nn.Sequential( BidirectionalLSTM(512, nh, nh), BidirectionalLSTM(nh, nh, nclass)) def forward(self, input): # conv features conv = self.cnn(input) b, c, h, w = conv.size() #assert h == 1, "the height of conv must be 1" conv = conv.squeeze(2) conv = conv.permute(2, 0, 1) # [w, b, c] # rnn features output = self.rnn(conv) return output loadedModel=torch.load("/home/gino/crnn.pth") model = CRNN(32, 1, 37, 256) model.cpu() model.load_state_dict(loadedModel, True) rnnJit = torch.jit.script(model) rnnJit.save("rnnJit.pt") I want to load the pretrain model. If I change the cnn.add_module('batchnorm{0}'.format(i), nn.BatchNorm2d(nOut)) to tempLayer = torch.jit.trace(nn.BatchNorm2d(nOut), torch.rand(20,nOut, 35, 45)) cnn.add_module('batchnorm{0}'.format(i), tempLayer) it would report the error , If I ignore it , the prediction result would go wrong. 
RuntimeError: Error(s) in loading state_dict for CRNN: Missing key(s) in state_dict: "cnn.batchnorm2.num_batches_tracked", "cnn.batchnorm4.num_batches_tracked", "cnn.batchnorm6.num_batches_tracked".
If nn.BatchNorm2d is not wrapped in jit.trace, the model's forward function works fine in PyTorch, but it causes problems when the model is exported to a .pt file for the C++ libtorch. What should I do?
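One thing I am considering (not sure it is the right approach) is to keep the original eager modules unchanged, load the checkpoint first, and only convert the whole model afterwards, roughly:
model = CRNN(32, 1, 37, 256)   # plain modules, no per-layer torch.jit.trace
model.load_state_dict(torch.load("/home/gino/crnn.pth", map_location="cpu"))
model.eval()
rnnJit = torch.jit.script(model)   # or torch.jit.trace(model, dummy_image) if scripting complains
rnnJit.save("rnnJit.pt")
Alternatively, load_state_dict(loadedModel, strict=False) silences the missing num_batches_tracked keys; since BatchNorm only uses that buffer when momentum is None, leaving it at its default should not change the eval-mode output.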
st180561
Hello! I'm trying to compile a model that contains a custom op with TorchScript, using the method in the following link: https://pytorch.org/tutorials/advanced/torch-script-parallelism.html 2 However, it failed with the following error:
Python builtin <built-in method apply of FunctionMeta object at 0x55ad125f0e18> is currently not supported in TorchScript.
indicating that the apply function of the custom autograd Function in the extension code is not supported by TorchScript. Is there any way to fix this? Also, is there any plan to support apply in TorchScript soon? Thanks in advance!
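In the meantime, the workaround I am looking at is to hide the autograd.Function call from the compiler with @torch.jit.ignore, which keeps that piece running in the Python interpreter (so it will not help if the model has to be executed from C++). A hypothetical sketch, with made-up names:
import torch

class MyFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * 2

@torch.jit.ignore
def call_my_function(x: torch.Tensor) -> torch.Tensor:
    # invisible to the TorchScript compiler, executed via Python at runtime
    return MyFunction.apply(x)

class Wrapper(torch.nn.Module):
    def forward(self, x):
        return call_my_function(x) + 1

scripted = torch.jit.script(Wrapper())
For a fully serializable model, the other route I have seen mentioned is to reimplement the op in C++ with torch::autograd::Function and register it as a custom operator, so it can be called as torch.ops.* from scripted code.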
st180562
I wish to freeze the following model Recognizer_Net which is structured as: class LayerNorm_LSTM(nn.Module): def __init__(self, in_dim=512, hidden_dim=512, bidirectional=False): super(LayerNorm_LSTM, self).__init__() self.layernorm = nn.LayerNorm(in_dim) self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=bidirectional) @torch.jit.export def forward( self, input_: torch.Tensor, hidden: Tuple[torch.Tensor, torch.Tensor] ) -> Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: input_ = self.layernorm(input_) lstm_out, hidden = self.lstm(input_, hidden) return lstm_out, hidden class Recognizer_Net(nn.Module): def __init__(self, input_dim, hidden_dim, num_hidden, BN_dim, mel_mean, mel_std, bidirectional=False): super(MT_LSTM_BN_LayerNorm_Net, self).__init__() self.linear_pre1 = nn.Linear(input_dim, hidden_dim) self.linear_pre2 = nn.Linear(hidden_dim, hidden_dim) self.lstm = nn.ModuleList([ LayerNorm_LSTM(in_dim=hidden_dim, hidden_dim=hidden_dim, bidirectional=bidirectional) for i in range(num_hidden) ]) self.BN_linear = nn.Linear(hidden_dim, BN_dim) self.tanh = nn.Tanh() self.num_hidden = num_hidden self.hidden_dim = hidden_dim self.mel_mean = mel_mean self.mel_std = mel_std @torch.jit.export def init(self) -> Tuple[torch.Tensor, torch.Tensor]: hidden_states = torch.zeros(self.num_hidden, 1, self.hidden_dim) cell_states = torch.zeros(self.num_hidden, 1, self.hidden_dim) return hidden_states, cell_states @torch.jit.export def forward( self, x: torch.Tensor, hidden: Tuple[torch.Tensor, torch.Tensor] ) -> Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: hidden_states, cell_states = hidden x = (x - self.mel_mean) / self.mel_std pre_linear1 = F.relu(self.linear_pre1(x)) lstm_out = F.relu(self.linear_pre2(pre_linear1)) for l, lstm_layer in enumerate(self.lstm): lstm_out, (hidden_states[l:l+1], cell_states[l:l+1]) = lstm_layer(lstm_out, (hidden_states[l:l+1], cell_states[l:l+1])) BN_out = self.tanh(self.BN_linear(lstm_out)) return BN_out, (hidden_states, cell_states) I compile the JIT model as follows: recognizer = Recognizer_Net(input_dim=80, hidden_dim=512, num_hidden=3, BN_dim=256, mel_mean=mel_mean, mel_std=mel_std, bidirectional=False).cpu() recognizer.load_state_dict(checkpoint) recognizer.eval() net_jit = torch.jit.script(recognizer) net_jit = torch.jit.freeze(net_jit, preserved_attrs=["init"]) Now when I compare the non-JIT model with this newly frozen JIT model, I find that the returned BN_out is consistent, the returned cell_states is consistent, BUT the returned hidden_states is not consistent. Such as: input = torch.zeros((1, 300, 80), dtype=torch.float) recognizer_jit = torch.jit.load(frozen_jit_model_path, map_location='cpu') recognizer_hidden_jit = recognizer_jit.init() BN_out_jit, recognizer_hidden_jit = recognizer_jit.forward(input, recognizer_hidden_jit) recognizer = Recognizer_Net(input_dim=80, hidden_dim=512, num_hidden=3, BN_dim=256, mel_mean=mel_mean, mel_std=mel_std, bidirectional=False).cpu() recognizer.load_state_dict(checkpoint) recognizer.eval() recognizer_hidden = recognizer.init() BN_out, recognizer_hidden = recognizer.forward(input, recognizer_hidden) I get that BN_out_jit == BN_out and recognizer_hidden_jit[1] == recognizer_hidden[1], but recognizer_hidden_jit[0] != recognizer_hidden[0]. Note that I do not get this problem if I do not use the Freezing API when compiling the JIT model. Is this a bug with the Freezing API? Or am I missing some steps in the freezing of the jit model?
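One thing I noticed while debugging: the loop writes into hidden_states[l:l+1] and cell_states[l:l+1] in place, and I suspect that kind of aliasing is what the freezing optimizations mishandle. A sketch of the rewrite I am trying, which collects the per-layer states and concatenates them instead of mutating slices (same return signature, and it assumes List is imported from typing):
new_h = torch.jit.annotate(List[torch.Tensor], [])
new_c = torch.jit.annotate(List[torch.Tensor], [])
for l, lstm_layer in enumerate(self.lstm):
    lstm_out, state_l = lstm_layer(lstm_out, (hidden_states[l:l+1], cell_states[l:l+1]))
    new_h.append(state_l[0])
    new_c.append(state_l[1])
hidden_states = torch.cat(new_h, dim=0)
cell_states = torch.cat(new_c, dim=0)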
st180563
I wonder if it's possible to retrieve the names of the resulting output when calling a traced function, for example:
import torch
from collections import namedtuple

output = namedtuple("output", ["hello", "bye"])

def f (x):
    return output(x, 2*x)

traced_fn = torch.jit.trace(f, torch.tensor(1))
traced_fn(torch.tensor(2))
This returns an unnamed tuple: (tensor(2), tensor(4)). I know I can return a dictionary and disable strict mode to get named outputs, but I would like to know if this is possible with named tuples.
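For completeness, what I have found so far: with torch.jit.script and a typing.NamedTuple (with field annotations, rather than collections.namedtuple) the field names seem to be kept, at least in recent releases, so scripting may be the way to get named outputs:
import torch
from typing import NamedTuple

class Output(NamedTuple):
    hello: torch.Tensor
    bye: torch.Tensor

@torch.jit.script
def f(x: torch.Tensor) -> Output:
    return Output(x, 2 * x)

out = f(torch.tensor(2))
print(out.hello, out.bye)  # field access works if the NamedTuple type is preserved across the boundary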
st180564
When debugging a model, it’s useful to add asserts and other similar logic to catch bugs. However, these asserts can be slow, so for production use cases, it’s useful to remove them all. Running with python -O will do that. However, when exporting a model, the asserts remain in the compiled graph even with the optimized flag turned on. class DummyNet(torch.nn.Module): def __init__(self) -> None: super().__init__() self.conv = torch.nn.Conv2d(20, 20, 3) def forward(self, x): assert x.shape[1] == 20 return self.conv(x) def export(): net = DummyNet() jit_net = torch.jit.script(net) print(jit_net.graph) Produces this graph graph(%self : __torch__.DummyNet, %x.1 : Tensor): %25 : str = prim::Constant[value="AssertionError: "]() %4 : int = prim::Constant[value=1]() %6 : int = prim::Constant[value=20]() %3 : int[] = aten::size(%x.1) # <string>:7:9 %5 : int = aten::__getitem__(%3, %4) %7 : bool = aten::eq(%5, %6) = prim::If(%7) block0(): -> () block1(): = prim::RaiseException(%25) -> () %13 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self) %15 : Tensor = prim::CallMethod[name="forward"](%13, %x.1) return (%15)
st180565
That sounds like a good feature request. Would you mind creating the request on GitHub as well so that the code owners could take a look at it?
st180566
Added the ticket here Add support for Python's optimized mode to torchscript · Issue #60953 · pytorch/pytorch · GitHub 6
st180567
How do I convert a model with a custom C++/CUDA layer to TorchScript? I have a model with a mixed C++/CUDA layer created like here: Custom C++ and CUDA Extensions — PyTorch Tutorials 1.9.0+cu102 documentation 2 . I want to convert the whole model to TorchScript to use it in production. Is it possible to convert it? If yes, what is the best and proper way to do it?
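From what I have read so far, the route that seems intended for this is to register the C++/CUDA function as a custom operator with the TORCH_LIBRARY macro (instead of only exposing it through pybind11), because operators registered that way are visible to the TorchScript compiler and are serialized by name. The Python side would then look roughly like this — library and op names are made up:
import torch

# shared library built from the C++/CUDA sources; it registers the op via
# TORCH_LIBRARY(my_ops, m) { m.def("fused_layer", &fused_layer); }
torch.ops.load_library("build/libmy_ops.so")

class ModelWithCustomLayer(torch.nn.Module):
    def forward(self, x):
        return torch.ops.my_ops.fused_layer(x)

scripted = torch.jit.script(ModelWithCustomLayer())
scripted.save("model.pt")  # the .so must also be loaded in the production process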
st180568
Hi everyone, I export a model with the torch.onnx.export function. I pass 8 args (the forward function takes 8 args), but the output graph only has 6 input nodes. I understand the other 2 inputs probably don't contribute to the final outputs, so they don't appear in the exported graph. My problem is that now I don't know which 6 of the 8 args are needed. BTW, the code is from GitHub, so I don't know why those 2 args are unused.
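What I am planning to try: give explicit input_names to torch.onnx.export and then list the inputs that survived in the exported graph, which should reveal which of the 8 arguments were kept. A rough sketch (model, example_args and the names are placeholders):
import onnx
import torch

arg_names = ["arg0", "arg1", "arg2", "arg3", "arg4", "arg5", "arg6", "arg7"]
torch.onnx.export(model, example_args, "model.onnx", input_names=arg_names)

m = onnx.load("model.onnx")
print([i.name for i in m.graph.input])  # only the inputs that actually reach the outputs remain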
st180569
I quantized a model that uses modules from torch librosa. I can quantize it and pass batches through the quantized model, but when trying to save it, the module LogmelFilterBank throws the following error: cannot create weak reference to 'numpy.ufunc' object The source code for this module is the following: class LogmelFilterBank(nn.Module): def __init__(self, sr=22050, n_fft=2048, n_mels=64, fmin=0.0, fmax=None, is_log=True, ref=1.0, amin=1e-10, top_db=80.0, freeze_parameters=True): r"""Calculate logmel spectrogram using pytorch. The mel filter bank is the pytorch implementation of as librosa.filters.mel """ super(LogmelFilterBank, self).__init__() self.is_log = is_log self.ref = ref self.amin = amin self.top_db = top_db if fmax == None: fmax = sr//2 self.melW = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax).T # (n_fft // 2 + 1, mel_bins) self.melW = nn.Parameter(torch.Tensor(self.melW)) if freeze_parameters: for param in self.parameters(): param.requires_grad = False def forward(self, input): r"""Calculate (log) mel spectrogram from spectrogram. Args: input: (*, n_fft), spectrogram Returns: output: (*, mel_bins), (log) mel spectrogram """ # Mel spectrogram mel_spectrogram = torch.matmul(input, self.melW) # (*, mel_bins) # Logmel spectrogram if self.is_log: output = self.power_to_db(mel_spectrogram) else: output = mel_spectrogram return output def power_to_db(self, input): r"""Power to db, this function is the pytorch implementation of librosa.power_to_lb """ ref_value = self.ref log_spec = 10.0 * torch.log10(torch.clamp(input, min=self.amin)) log_spec -= 10.0 * np.log10(np.maximum(self.amin, ref_value)) if self.top_db is not None: if self.top_db < 0: raise librosa.util.exceptions.ParameterError('top_db must be non-negative') log_spec = torch.clamp(log_spec, min=log_spec.max().item() - self.top_db) return log_spec I cannot understand where is the weak reference being thrown or how can I prevent it. I tried using the decorator tags as in this issue 1, but could not solve the problem. Thanks for any help you can provide.
st180570
Solved by amyxguo in post #3 I fixed the issue by replacing ndarray math operators (i.e. np.add, np.sub, np.ceil, and + - etc) with torch versions (i.e.torch.add, torch.sub etc).
st180571
I encountered the same problem. Also would like to see if anyone has found a way to fix or prevent it.
st180572
I fixed the issue by replacing ndarray math operators (e.g. np.add, np.sub, np.ceil, and + - etc.) with torch versions (e.g. torch.add, torch.sub, etc.).
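For the LogmelFilterBank above, the only offending line is in power_to_db, so here is a sketch of a torch/math-only version, assuming self.amin and self.ref are plain Python floats (and using a plain ValueError to avoid touching librosa here):
import math

def power_to_db(self, input):
    log_spec = 10.0 * torch.log10(torch.clamp(input, min=self.amin))
    log_spec = log_spec - 10.0 * math.log10(max(self.amin, self.ref))
    if self.top_db is not None:
        if self.top_db < 0:
            raise ValueError('top_db must be non-negative')
        log_spec = torch.clamp(log_spec, min=log_spec.max().item() - self.top_db)
    return log_spec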
st180573
Hi everyone, is there a way to deactivate some TracerWarnings when I know the input size will not change? My code is written such that it works with variable input sizes, but a single run always uses constant-sized input. I do not want to switch to TorchScript because distributions are not properly supported and also because I do not need the functionality, since my inputs are actually constant. For example, the following gives me a warning:
all(s == x.shape[0] for s in x.shape)
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
I know why the warning makes sense, however I am wondering whether I can deactivate it since I know that in a single run the condition will be constant.
st180574
You could try to set check_trace=False or if that doesn’t help you could try to remove them using warnings.filterwarnings.
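Something along these lines (untested for your exact setup; model and example_input are placeholders):
import warnings
import torch

warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
traced = torch.jit.trace(model, example_input, check_trace=False)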
st180575
Thanks. check_trace doesn't work for me, but I will look into warnings.filterwarnings!
st180576
Hi, I am trying to do an online optimization of some parameters using torch.jit.script online, and need to torch.jit.export my operations. When I build the training procedure manually it works, but when I try to use a torch.optim it does not seem to be supported (see the added code at the bottom). In the module, manual_optim_step(x) works as expected (and minimizes the loss), but when I try to script the module with the @torch.jit.export decorator around optim_step(x) uncommented I get the following error: RuntimeError: Module 'MyOptimModule' has no attribute 'optim' (This attribute exists on the Python module, but we failed to convert Python type: 'torch.optim.sgd.SGD' to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type SGD.. Its type was inferred; try adding a type annotation for the attribute.): File "<ipython-input-5-4d7f596ef214>", line 44 loss = self.forward(x).sum().abs() self.optim.zero_grad() ~~~~~~~~~~ <--- HERE loss.backward() I do assume this simply means that torch.optim is not supported in jit.script yet. So my question is: (a) is there any plans for this? and (b) what is the best approach to train parameters during jit.script? Is it simply to reimplement gradient based optimization as I do in manual_optim_step()? import torch x = torch.rand((2,)) # Dummy input data class MyOptimModule(torch.nn.Module): def __init__(self, N, M): super(MyOptimModule, self).__init__() self.linear = torch.nn.Linear(N, M) self.optim = torch.optim.SGD(self.parameters(), lr=0.1) def forward(self, input: torch.Tensor): output = self.linear(input) return output @torch.jit.export def manual_optim_step(self, x: torch.Tensor, num_epochs: int=100): """ Inspired by the following code: https://pytorch.org/tutorials/recipes/distributed_optim_torchscript.html """ print(f" parsum before: \t {self.linear.weight.sum()}") # create some dummy loss: loss = self.forward(x).abs().sum() #print(loss) loss.backward() with torch.no_grad(): for p in [self.linear.weight, self.linear.bias]: p.add_(-0.01*p.grad) if p.grad is not None: p.grad.detach_() p.grad.zero_() #self.linear.weight.data.add_(-0.01*self.linear.weight.grad) print(f" parsum after: \t {self.linear.weight.sum()}") @torch.jit.export def optim_step(self, x: torch.Tensor): print(f" parsum before: \t {self.linear.weight.sum()}") # create some dummy loss: loss = self.forward(x).sum().abs() self.optim.zero_grad() loss.backward() print(f" gradients: \t {self.linear.weight.grad}") self.optim.step() print(f" parsum after: \t {self.linear.weight.sum()}") # %% model_with_optim = MyOptimModule(2, 3) scripted_module = torch.jit.script(model_with_optim) scripted_module.manual_optim_step(x)
st180577
Could anybody please explain the difference between TorchScript and ONNX? As far as I understand, both are serialized formats for exporting PyTorch models for faster inference on devices/environments without a Python dependency (please correct me if I am wrong). In which real-world use cases would one be preferred over the other? Thank you!
st180578
Hi I am finding that although torch.jit.script makes CPU code about 8x faster, it makes GPU code about 5x slower, compared to code on each device that is not compiled with torch.jit.script. How do I fix this - am I using the JIT wrong somehow? Test code below.
import torch
import numpy as np

size=1000000 # data size

def test(device,torchscript):

    def torch_call(x,mask,a,b):
        # some simulated workload...
        x[mask]=1
        torch.bucketize(a,b)

    if torchscript:
        torch_call = torch.jit.script(torch_call)

    # create data
    x = torch.zeros(size)
    mask = torch.randint(2,(size,))==1
    a = torch.from_numpy(np.random.random(size))
    b = torch.from_numpy(np.linspace(0,1,1000))
    if device=="cuda":
        x = x.cuda()
        mask = mask.cuda()
        a = a.cuda()
        b = b.cuda()

    # time torch_call
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch_call(x,mask,a,b) # warmup call
    start.record()
    torch_call(x,mask,a,b)
    end.record()
    torch.cuda.synchronize()
    print (f"{device} {type(torch_call)=} time={start.elapsed_time(end)}")

test("cuda",False)
test("cuda",True)
test("cpu",False)
test("cpu",True)
Results:
cuda type(torch_call)=<class 'function'> time=0.16105599701404572
cuda type(torch_call)=<class 'torch.jit.ScriptFunction'> time=0.7593600153923035
cpu type(torch_call)=<class 'function'> time=83.50847625732422
cpu type(torch_call)=<class 'torch.jit.ScriptFunction'> time=10.096927642822266
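Update: I suspect the measurement above is unfair to the scripted version — the first few calls to a torch.jit.script function go through the profiling executor (shape profiling, specialization, fusion), so the call right after a single warmup can still include optimization work. A fairer timing, I think, replaces the single timed call inside test() with a longer warmup and an average over many iterations:
for _ in range(10):          # let the profiling/fusion passes run
    torch_call(x, mask, a, b)
torch.cuda.synchronize()

iters = 100
start.record()
for _ in range(iters):
    torch_call(x, mask, a, b)
end.record()
torch.cuda.synchronize()
print(f"{device} avg time={start.elapsed_time(end) / iters}")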
st180579
I wonder if register_parameter is expected to work with ScriptModules, for example:
import torch

module = torch.jit.trace(torch.nn.Linear(10, 10), torch.randn(100, 10))
module.register_parameter("new_parameter", torch.nn.Parameter(torch.randn(10, 10)))
AttributeError: cannot assign parameter before Module.__init__() call
I don't have a specific use case for this… I am implementing a ScriptModule wrapper for R's implementation of torch and wanted to know if this is expected to work.
st180580
Solved by ptrblck in post #2 I don’t think this would be a supported use case, since the already traced model won’t have a chance to use the newly registered parameter afterwards. In the case that this parameter is indeed used in the forward (but not registered yet), tracing would raise an error due to the usage of undefined p…
st180581
I don’t think this would be a supported use case, since the already traced model won’t have a chance to use the newly registered parameter afterwards. In the case that this parameter is indeed used in the forward (but not registered yet), tracing would raise an error due to the usage of undefined parameters. On the other hand, if this parameter is never used in the forward, registering it afterwards won’t change anything, so I assume the error is expected.
st180582
@ptrblck Thanks for your explanation! In theory one could use register_parameter to overwrite weights, eg:
import torch

module = torch.jit.trace(torch.nn.Linear(10, 10), torch.randn(100, 10))
module.register_parameter("weight", torch.nn.Parameter(torch.ones(10, 10)))
And expect it to be used by the ScriptModule. This seems to work fine in non-scripted modules. But there are other ways to do it anyway.
st180583
Hi, I am trying to jit.script a module containing a parametrization as follow: class SWSConv2d(nn.Conv2d): r""" 2D Conv layer with Scaled Weight Standardization. Characterizing signal propagation to close the performance gap in unnormalized ResNets https://arxiv.org/abs/2101.08692 """ def __init__(self, in_channels: int, out_channels: int, kernel_size: int, stride: int=1, padding: int=0, padding_mode: str='zeros', dilation=1, groups: int=1, bias: bool=True): super().__init__(in_channels, out_channels, kernel_size, stride=stride, padding=padding, padding_mode=padding_mode, dilation=dilation, groups=groups, bias=bias) self.register_parametrization() def register_parametrization(self): if not TP.is_parametrized(self, 'weight'): TP.register_parametrization(self, 'weight', ScaledWeight2DStandardization(out_channels=self.weight.shape[0], use_gain=True, eps=1e-4)) def remove_parametrization(self): if TP.is_parametrized(self, 'weight'): TP.remove_parametrizations(self, 'weight', leave_parametrized=True) model_jit = torch.jit.script(model) fails with the following error: File "/Users/ganneheim/anaconda3/envs/pytorch1.9/lib/python3.7/site-packages/torch/jit/frontend.py", line 137, in <listcomp> stmts = [build_stmt(ctx, s) for s in stmts] File "/Users/ganneheim/anaconda3/envs/pytorch1.9/lib/python3.7/site-packages/torch/jit/frontend.py", line 330, in __call__ raise UnsupportedNodeError(ctx, node) torch.jit.frontend.UnsupportedNodeError: global variables aren't supported: File "/Users/ganneheim/anaconda3/envs/pytorch1.9/lib/python3.7/site-packages/torch/nn/utils/parametrize.py", line 166 def get_parametrized(self) -> Tensor: global _cache ~~~~~~ <--- HERE parametrization = self.parametrizations[tensor_name]
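The workaround I am using for now, since I only need this for inference/export, is to bake the parametrization into the weights before scripting:
import torch.nn.utils.parametrize as TP

for m in model.modules():
    if TP.is_parametrized(m, 'weight'):
        TP.remove_parametrizations(m, 'weight', leave_parametrized=True)

model_jit = torch.jit.script(model)
leave_parametrized=True keeps the standardized weights, so the forward pass should match the parametrized model; the parametrization is only lost for further training.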
st180584
Is it possible to run a trace of a model in .eval() mode which can later be used to produce the frozen model, i.e.
frozen_module = torch.jit.freeze(traced)
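For context, the sequence I have in mind is roughly this (example_input is just a dummy tensor matching the model's input):
model.eval()
with torch.no_grad():
    traced = torch.jit.trace(model, example_input)

frozen_module = torch.jit.freeze(traced)   # traced modules are ScriptModules, so this should be accepted
frozen_module.save("frozen.pt")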
st180585
Is it possible to convert a script module to a traced module with dummy input tensors? I tried the following code:
model=torch.jit.load('model.pt')
dummy=torch.rand((1,3,224,224))
traced=torch.jit.trace(model,dummy)
The following warning was thrown:
UserWarning: The input to trace is already a ScriptModule, tracing it is a no-op. Returning the object as is.
st180586
Hi All, I was wondering if it’s possible to jit a network whose output depends on a flag? For example, I have a neural network that has an internal flag self.use_det. The network is represented as a nn.Module whose forward method comprises of a few nn.Linear layers that eventually produce a batch of matrices in the shape [B,N,N] where B is the batch size and N is the number of input nodes in the input layer. However, the returned value from the network is determined by the state of the self.use_det flag. If this flag is set to False, the network a tensor of shape [B,N,N] but if self.use_det = True then it return a tensor of shape[B,2] (due to the use of using torch.slogdet on the tensor. Now, the question. Is it possible to jit a network where the output depends on this flag? Because I tried naively applying torch.jit.script(net) but I get the follwoing error, Traceback (most recent call last): File "run_mcmc.py", line 53, in <module> net = torch.jit.script(net) File "~/anaconda3/lib/python3.8/site-packages/torch/jit/_script.py", line 942, in script return torch.jit._recursive.create_script_module( File "~/anaconda3/lib/python3.8/site-packages/torch/jit/_recursive.py", line 391, in create_script_module return create_script_module_impl(nn_module, concrete_type, stubs_fn) File "~/anaconda3/lib/python3.8/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) File "~/anaconda3/lib/python3.8/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults) RuntimeError: Previous return statement returned a value of type Tensor but this return statement returns a value of type Tuple[Tensor, Tensor]: File "~/main.py", line 55 else: sign, logabsdet = self.slogdet(matrices) return sign, logabsdet ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE The forward of the class concludes with, if(self.use_det): return matrices else: sign, logabsdet = self.slogdet(matrices) return sign, logabsdet Is this possibe? Any help is apprecitated! Thank you!
st180587
Solved by ptrblck in post #2 Yes, it should work as long as you don’t change the output type. In your case you could return an empty tensor in the if path as seen here: class MyModel(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1, 10) self.fc2 = nn.Linear(1, 2) sel…
st180588
Yes, it should work as long as you don’t change the output type. In your case you could return an empty tensor in the if path as seen here: class MyModel(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1, 10) self.fc2 = nn.Linear(1, 2) self.flag = True def forward(self, x): if self.flag: out = self.fc1(x) return out, torch.empty_like(out) else: out = self.fc2(x) out2 = out + 1 return out, out2 # eager mode model = MyModel() x = torch.randn(1, 1) out = model(x) print(out[0].shape) > torch.Size([1, 10]) model.flag = False out = model(x) print(out[0].shape) > torch.Size([1, 2]) # script model = torch.jit.script(model) out = model(x) print(out[0].shape) > torch.Size([1, 2]) model.flag = True out = model(x) print(out[0].shape) > torch.Size([1, 10])
st180589
Hi ptrblck, I’ve just implemented your solution and I get an error. I was wondering if you could take a look? Traceback (most recent call last): File "main.py", line 152, in <module> X, _ = sampler(burn_in) File "~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "~/anaconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "~/src/Samplers.py", line 83, in forward self.step() File "~/anaconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "~/src/Samplers.py", line 67, in step log_a = self._log_pdf(xcand).detach_() - self._log_pdf(self.chains).detach_() #calculate log acceptance probability File "~/anaconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "~/src/Samplers.py", line 47, in _log_pdf return torch.slogdet(self.network(x)[0])[1].mul(2).detach_() File "~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) RuntimeError: nvrtc: error: failed to open libnvrtc-builtins.so.11.1. Make sure that libnvrtc-builtins.so.11.1 is installed correctly. nvrtc compilation failed: #define NAN __int_as_float(0x7fffffff) #define POS_INFINITY __int_as_float(0x7f800000) #define NEG_INFINITY __int_as_float(0xff800000) template<typename T> __device__ T maximum(T a, T b) { return isnan(a) ? a : (a > b ? a : b); } template<typename T> __device__ T minimum(T a, T b) { return isnan(a) ? a : (a < b ? a : b); } extern "C" __global__ void fused_tanh_add(double* t0, double* t1, double* aten_add) { { double v = __ldg(t1 + (64 * (((512 * blockIdx.x + threadIdx.x) / 64) % 24) + ((512 * blockIdx.x + threadIdx.x) / 1536) * 1536) + (512 * blockIdx.x + threadIdx.x) % 64); double v_1 = __ldg(t0 + (64 * (((512 * blockIdx.x + threadIdx.x) / 64) % 24) + ((512 * blockIdx.x + threadIdx.x) / 1536) * 1536) + (512 * blockIdx.x + threadIdx.x) % 64); aten_add[(64 * (((512 * blockIdx.x + threadIdx.x) / 64) % 24) + ((512 * blockIdx.x + threadIdx.x) / 1536) * 1536) + (512 * blockIdx.x + threadIdx.x) % 64] = (tanh(v)) + v_1; } } I assume this is more likely a non-pytorch issue? I’m running Ubuntu 20.04, CUDA 11.2, Driver 460.80 and pytorch is 1.8.1+cu111 (So I assume it’s running CUDA 11.1, could that be an issue?) Thank you!
st180590
This error seems to be related to this issue 6, which was already fixed, so you might want to update to the latest release (1.9.0) or the nightly.
st180591
RuntimeError: undefined value hidden_t: File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/encoders/image_encoder.py”, line 143 out = torch.cat(all_outputs, 0) return hidden_t, out, lengths ~~~~~~~~ <--- HERE when I tried to convert Pytorch to Torchscript full error message : Traceback (most recent call last): File “translate.py”, line 22, in print(“my_module(imgr) :”,my_module(imgr)) File “/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 889, in _call_impl result = self.forward(*input, **kwargs) File “translate.py”, line 18, in forward result = main(imgr) File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/bin/translate.py”, line 74, in main return translate(opt, imgr) File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/bin/translate.py”, line 18, in translate translator = build_translator(opt, logger=logger, report_score=True) File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/translate/translator.py”, line 29, in build_translator fields, model, model_opt = load_test_model(opt) File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/model_builder.py”, line 112, in load_test_model model = build_base_model(model_opt, fields, use_gpu(opt), checkpoint, File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/model_builder.py”, line 184, in build_base_model sm = torch.jit.script(model) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py”, line 942, in script return torch.jit._recursive.create_script_module( File “/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py”, line 391, in create_script_module return create_script_module_impl(nn_module, concrete_type, stubs_fn) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py”, line 448, in create_script_module_impl script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py”, line 391, in _construct init_fn(script_module) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py”, line 428, in init_fn scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py”, line 452, in create_script_module_impl create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) File “/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py”, line 335, in create_methods_and_properties_from_stubs concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults) RuntimeError: undefined value hidden_t: File “/workspace/OpenNMT-py-1.2.0_torchscript_testfuncwork/onmt/encoders/image_encoder.py”, line 143 out = torch.cat(all_outputs, 0) return hidden_t, out, lengths ~~~~~~~~ <--- HERE what I tried to convert : model.py “”" Onmt NMT Model base class definition “”" import torch.nn as nn #import os class NMTModel(nn.Module): “”" full_path = os.path.realpath(file) path, filename = os.path.split(full_path) print(path + ’ → ’ + filename + " NMTModel" + “\n”) “”" “”" Core trainable object in OpenNMT. Implements a trainable interface for a simple, generic encoder + decoder model. 
Args: encoder (onmt.encoders.EncoderBase): an encoder object decoder (onmt.decoders.DecoderBase): a decoder object """ def __init__(self, encoder, decoder): super(NMTModel, self).__init__() self.encoder = encoder self.decoder = decoder def forward(self, src, tgt, lengths, bptt=False, with_align=False): """Forward propagate a `src` and `tgt` pair for training. Possible initialized with a beginning decoder state. Args: src (Tensor): A source sequence passed to encoder. typically for inputs this will be a padded `LongTensor` of size ``(len, batch, features)``. However, may be an image or other generic input depending on encoder. tgt (LongTensor): A target sequence passed to decoder. Size ``(tgt_len, batch, features)``. lengths(LongTensor): The src lengths, pre-padding ``(batch,)``. bptt (Boolean): A flag indicating if truncated bptt is set. If reset then init_state with_align (Boolean): A flag indicating whether output alignment, Only valid for transformer decoder. Returns: (FloatTensor, dict[str, FloatTensor]): * decoder output ``(tgt_len, batch, hidden)`` * dictionary attention dists of ``(tgt_len, batch, src_len)`` """ print("model.py") dec_in = tgt[:-1] # exclude last target from inputs enc_state, memory_bank, lengths = self.encoder(src, lengths) if bptt is False: self.decoder.init_state(src, memory_bank, enc_state) dec_out, attns = self.decoder(dec_in, memory_bank, memory_lengths=lengths, with_align=with_align) return dec_out, attns def update_dropout(self, dropout): self.encoder.update_dropout(dropout) self.decoder.update_dropout(dropout) where I got Undefined value error image_encoder.py image408×898 37.7 KB Can anyone help?..
st180592
It’d be better to get a dev’s opinion on this, but I think this issue is due to the self.rnn. Whereabouts is self.rnn defined? Is it something that’s inferred? Because pytorch is saying hidden_t isn’t defined for some reason!
st180593
self.rnn was defined at image_encoder.py init def __init__(self, num_layers, bidirectional, rnn_size, dropout, image_chanel_size=3): super(ImageEncoder, self).__init__() self.num_layers = num_layers self.num_directions = 2 if bidirectional else 1 self.hidden_size = rnn_size self.layer1 = nn.Conv2d(image_chanel_size, 64, kernel_size=(3, 3), padding=(1, 1), stride=(1, 1)) self.layer2 = nn.Conv2d(64, 128, kernel_size=(3, 3), padding=(1, 1), stride=(1, 1)) self.layer3 = nn.Conv2d(128, 256, kernel_size=(3, 3), padding=(1, 1), stride=(1, 1)) self.layer4 = nn.Conv2d(256, 256, kernel_size=(3, 3), padding=(1, 1), stride=(1, 1)) self.layer5 = nn.Conv2d(256, 512, kernel_size=(3, 3), padding=(1, 1), stride=(1, 1)) self.layer6 = nn.Conv2d(512, 512, kernel_size=(3, 3), padding=(0, 0), stride=(1, 1)) self.batch_norm1 = nn.BatchNorm2d(256) self.batch_norm2 = nn.BatchNorm2d(512) self.batch_norm3 = nn.BatchNorm2d(512) src_size = 512 dropout = dropout[0] if type(dropout) is list else dropout self.rnn = nn.LSTM(src_size, int(rnn_size / self.num_directions), num_layers=num_layers, dropout=dropout, bidirectional=bidirectional) self.pos_lut = nn.Embedding(1000, src_size)
st180594
Strange, does self.rnn return 2 arguments? For some reason, Torchscript can’t find the hidden_t tensor generated by self.rnn.
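One common cause of that error is a variable that is only assigned inside a for loop or an if branch: the script compiler assumes the loop might run zero times, so the name is not defined on every path. If hidden_t is first assigned inside the loop over image rows, declaring it before the loop (and asserting it afterwards) usually fixes it. A small self-contained sketch of the pattern — the real code and shapes in image_encoder.py will differ:
from typing import Optional, Tuple
import torch

class RowEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.LSTM(512, 250, num_layers=2, bidirectional=True)

    def forward(self, rows: torch.Tensor):
        # rows: (num_rows, seq_len, batch, 512)
        hidden_t: Optional[Tuple[torch.Tensor, torch.Tensor]] = None
        outs = []
        for i in range(rows.size(0)):
            out, hidden_t = self.rnn(rows[i])
            outs.append(out)
        assert hidden_t is not None  # tells the compiler hidden_t is defined on this path
        return hidden_t, torch.cat(outs, 0)

scripted = torch.jit.script(RowEncoder())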
st180595
https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html 1 yes, I tried print hidden_t.
st180596
Hi, Is there any way to convert a torch._C.Graph text like this image630×526 15.9 KB generated by torch.jit.trace(model, input) directly into the .dot language used by networkX, GraphViz, etc. (see en.wikipedia.org, DOT (graph description language))? Thank you
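What I have so far is a rough sketch that walks graph.nodes() and emits DOT text by hand; it ignores nodes without outputs and any multi-output subtleties, and model/example_input are placeholders:
def graph_to_dot(graph):
    lines = ["digraph G {"]
    for inp in graph.inputs():
        lines.append('  "{}" [shape=ellipse];'.format(inp.debugName()))
    for node in graph.nodes():
        outs = list(node.outputs())
        if not outs:
            continue                      # e.g. nodes with no outputs such as prim::Print
        nid = outs[0].debugName()
        lines.append('  "{}" [shape=box, label="{}"];'.format(nid, node.kind()))
        for v in node.inputs():
            lines.append('  "{}" -> "{}";'.format(v.debugName(), nid))
    lines.append("}")
    return "\n".join(lines)

traced = torch.jit.trace(model, example_input)
with open("graph.dot", "w") as f:
    f.write(graph_to_dot(traced.graph))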
st180597
I’m planning on using this repo GitHub - asappresearch/sru: Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755), which contains a RNN variant that’s fast to train. I followed the installation, which was very simple, but got the following error: /home/hnguyen/sru/sru/cuda_functional.py:23: UserWarning: Just-in-time loading and compiling the CUDA kernels of SRU was unsuccessful. Got the following error: Error building extension 'sru_cuda': [1/2] /usr/local/cuda-10.1/bin/nvcc --generate-dependencies-with-compile --dependency-output sru_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=sru_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/TH -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda-10.1/include -isystem /home/hnguyen/miniconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++14 -c /home/hnguyen/sru/sru/csrc/sru_cuda_kernel.cu -o sru_cuda_kernel.cuda.o FAILED: sru_cuda_kernel.cuda.o /usr/local/cuda-10.1/bin/nvcc --generate-dependencies-with-compile --dependency-output sru_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=sru_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/TH -isystem /home/hnguyen/miniconda3/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda-10.1/include -isystem /home/hnguyen/miniconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++14 -c /home/hnguyen/sru/sru/csrc/sru_cuda_kernel.cu -o sru_cuda_kernel.cuda.o nvcc fatal : Unknown option '-generate-dependencies-with-compile' ninja: build stopped: subcommand failed. warnings.warn("Just-in-time loading and compiling the CUDA kernels of SRU was unsuccessful. " after running import torch from sru import SRU This issue was not mentioned anywhere in existing issues, probably because it’s kind of uncommon. Does someone understand what this error is trying to say?
st180598
Solved by ptrblck in post #2 The --generate-dependencies-with-compile argument was added in CUDA10.2, if I’m not mistaken, so you might need to update your local CUDA toolkit.
st180599
The --generate-dependencies-with-compile 6 argument was added in CUDA10.2, if I’m not mistaken, so you might need to update your local CUDA toolkit.