id | text
---|---
st181300 | Hey, we don’t provide static libs currently. You’ll have to build that from source. |
st181301 | tkr:
caffe2_nvrtc.lib
Unlike the relationship between .so and .a on Linux, a .lib file doesn’t necessarily refer to a static lib; it can also be an import library for the DLL. |
st181302 | Hey, we don’t provide static libs currently. You’ll have to build that from source.
Let me rephrase what I wanted to say…
What I want to do is create a working static version of libtorch.
I have built static libtorch from source with “set BUILD_SHARED_LIBS=OFF”, BUT
caffe2_nvrtc.lib is NOT created.
caffe2_nvrtc.dll is created under torch/bin.
My assumption is below.
“SHARED” is set in add_library, and because of that the DLL is created. At the same time,
BUILD_SHARED_LIBS=OFF means dllexport is not defined, so the lib is not created. The files under lib are linked into the application and the error occurs.
I now know that by using the official caffe2_nvrtc.lib and caffe2_nvrtc.dll, the application succeeds.
I’d like to know how to create a static libtorch library.
I have also tried rewriting CMakeLists.txt and built a static version of libtorch:
caffe2_nvrtc.lib is created (and this should be a static lib, right?)
caffe2_nvrtc.dll is NOT created.
Linking the above into the app ends up with the error I wrote in the beginning.
Placing the official caffe2_nvrtc.dll next to the app makes it work (meaning the static lib is not created correctly?).
Any suggestions to make this attempt succeed? |
st181303 | Maybe some of the libs are optimized away. You could try passing /WHOLEARCHIVE:caffe2_nvrtc.lib in your project to force the linker to stop doing that. |
st181304 | I thought the same way.
/WHOLEARCHIVE:caffe2_nvrtc.lib
link C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64\nvrtc.lib
I tried the above and still get the same error.
The application does not load “nvrtc64_101_0.dll”
Any other suggestion? |
st181305 | You may need to link other CUDA libs as well, like cudart64_xxx.lib. But some of them don’t have static libs, so you still need some DLLs. |
st181306 | I’ve tried a few other things and the change below to caffe2/CMakeLists.txt worked!
diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index 8025a7de3c..8e94978e72 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -561,7 +561,7 @@ if (NOT INTERN_BUILD_MOBILE OR NOT BUILD_CAFFE2_MOBILE)
${TORCH_SRC_DIR}/csrc/cuda/comm.cpp
${TORCH_SRC_DIR}/csrc/jit/tensorexpr/cuda_codegen.cpp
)
- add_library(caffe2_nvrtc SHARED ${ATen_NVRTC_STUB_SRCS})
+ add_library(caffe2_nvrtc ${ATen_NVRTC_STUB_SRCS})
target_link_libraries(caffe2_nvrtc ${CUDA_NVRTC} ${CUDA_CUDA_LIB} ${CUDA_NVRTC_LIB})
target_include_directories(caffe2_nvrtc PRIVATE ${CUDA_INCLUDE_DIRS})
install(TARGETS caffe2_nvrtc DESTINATION "${TORCH_INSTALL_LIB_DIR}")
@@ -703,6 +703,9 @@ ELSEIF(USE_CUDA)
cuda_add_library(torch_cuda ${Caffe2_GPU_SRCS})
set(CUDA_LINK_LIBRARIES_KEYWORD)
torch_compile_options(torch_cuda) # see cmake/public/utils.cmake
+ if (NOT BUILD_SHARED_LIBS)
+ target_compile_definitions(torch_cuda PRIVATE USE_DIRECT_NVRTC)
+ endif()
if (USE_NCCL)
target_link_libraries(torch_cuda PRIVATE __caffe2_nccl)
In aten/src/ATen/cuda/detail/CUDAHooks.cpp, an "#ifdef USE_DIRECT_NVRTC" directive is used.
But "USE_DIRECT_NVRTC" was not defined in any CMakeLists.txt, and because of that,
an application linked with static libtorch tries to load "caffe2_nvrtc.dll". |
st181307 | Hi,
In Python, we can use print(model) to show the network architecture; how about in C++?
Say I have loaded a JIT model with torch::jit::load under the name model, how could I do something similar in C++?
Thanks,
Rgds,
CL |
st181308 | Solved by driazati in post #2; the full snippet is quoted in the next post. |
st181309 | We don’t have any built-in utils for this, but you can do a simple version manually:
#include <torch/script.h>
#include <iostream>
#include <memory>
void tabs(size_t num) {
    for (size_t i = 0; i < num; i++) {
        std::cout << "\t";
    }
}
void print_modules(const torch::jit::script::Module& module, size_t level = 0) {
    std::cout << module.name().qualifiedName() << " (\n";
    for (const auto& module : module.get_modules()) {
        tabs(level + 1);
        print_modules(module.module, level + 1);
    }
    tabs(level);
    std::cout << ")\n";
}
int main(int argc, const char *argv[]) {
    torch::jit::script::Module container = torch::jit::load("m.pt");
    print_modules(container);
    return 0;
} |
st181310 | Hi Driazati,
Thanks for the code, it helps. Just a small typo I think: the
print_modules(module.module, level + 1);
should be
print_modules(module, level + 1);
Appreciate your help. The references for this seem to be rather sparse; could you share some good references? For example, the method (or the variable) that can return the input size of a pre-trained network, etc.
Thanks again.
Regards,
CL |
st181311 | You can find the generated docs here https://pytorch.org/cppdocs/api/namespace_torch__jit.html#namespace-torch-jit 14. Unfortunately they are pretty sparse, we have ongoing efforts to improve them.
As for getting the input size a network expects, we don’t have any utilities for that since TorchScript models aren’t specialized to specific shapes. |
st181312 | hi,
thanks again for the reply and link. Yes, I’ve visited the link before; it might be too sparse for a non-hardcore programmer. Anyway, thanks again for your reply.
rgds,
CL |
st181313 | Hi,
Hope you don’t mind, I created another topic to get more details from the JIT model. I understand that there are no shape properties for the input; I was wondering whether we could print each layer’s details as stated in:
Model Summary for libTorch jit model? jit
Hi,
Is it possible to get the model summary described in this link
, but for cpp jit model?
This is an enhancement for printing the model in this link;
Thanks.
rgds,
CL |
st181314 | hi,
adding these lines
for (const auto& parameter : module.get_parameters()) {
    tabs(level + 1);
    std::cout << parameter.name() << '\t';
    std::cout << parameter.value().toTensor().sizes() << '\n';
}
will give a summary of the network.
If anyone has a better way to show the network summary for a JIT model in C++, please feel free to share.
Thanks.
rgds,
CL |
st181315 | Does anybody know how to do this using libtorch (iOS)? module.get_parameters() is not defined. |
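A possible alternative, sketched here under the assumption of a reasonably recent libtorch (not verified against the iOS/mobile build specifically), is the named_parameters() accessor on torch::jit::Module:
#include <torch/script.h>
#include <iostream>

int main() {
    torch::jit::script::Module module = torch::jit::load("net.pt");
    // walk all parameters recursively, printing qualified name and shape
    for (const auto& p : module.named_parameters(/*recurse=*/true)) {
        std::cout << p.name << "\t" << p.value.sizes() << "\n";
    }
    return 0;
}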
st181316 | I am trying to make predictions for a 4-class classification project on the C++ side using libtorch 1.4. However, I cannot obtain the same predictions as on the Python side. Firstly, I do obtain the same input tensor values just before prediction. When I compare the output tensor values, I notice that they are different. You can find those values in this picture:
[image: Output_tensor_values_Python_and_C++ (1883×861)]
The left side shows the Python output tensor values and the prediction results for each input picture.
The right side shows the C++ output tensor values and the prediction results for each input picture.
Could you offer a solution to obtain the same output tensor values and prediction results? |
st181317 | Solved by sercan in post #3; the full explanation is quoted below. |
st181318 | Did you make sure to set the model to evaluation mode via model.eval() before executing the forward pass?
Also, I assume you’ve checked the inputs already or are you using any (random) preprocessing steps? |
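For reference, a minimal libtorch sketch of the two checks mentioned above (eval mode plus no-grad inference); the model path and the 1x3x224x224 input shape are made-up placeholders:
#include <torch/script.h>
#include <iostream>

int main() {
    torch::jit::script::Module module = torch::jit::load("model.pt");
    module.eval();                // put dropout/batchnorm layers into eval mode
    torch::NoGradGuard no_grad;   // disable gradient tracking for inference
    auto input = torch::rand({1, 3, 224, 224});
    auto output = module.forward({input}).toTensor();
    std::cout << output.sizes() << std::endl;
    return 0;
}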
st181319 | I noticed that I used opencv functions to apply normalization using this code:
subtract(image, Scalar(0.485, 0.456, 0.406), temp);
divide(temp, Scalar(0.229, 0.224, 0.225), image);
This operation changes only the first channel and does not change the other channels. For this reason, the input tensor values were in fact different. I applied the normalization directly on the tensor values with this code:
tensor_image = tensor_image.permute({ 2,0,1 });//chw
tensor_image = tensor_image.toType(torch::kFloat);
tensor_image = tensor_image.div(255.0);
//normalize
tensor_image[0] = tensor_image[0].sub_(0.485).div_(0.229);
tensor_image[1] = tensor_image[1].sub_(0.456).div_(0.224);
tensor_image[2] = tensor_image[2].sub_(0.406).div_(0.225);
Thus, I obtained the same input tensor values as on the Python side. After prediction, I obtained the same output tensor values. My problem is solved. |
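As a follow-up, an equivalent way to express the same per-channel normalization with broadcasting, as a sketch that assumes the input is a float CHW tensor already scaled to [0, 1]:
#include <torch/torch.h>

// expects a float CHW tensor with values in [0, 1]
torch::Tensor normalize_chw(torch::Tensor img) {
    // mean/std reshaped to [3, 1, 1] so they broadcast over height and width
    auto mean = torch::tensor({0.485, 0.456, 0.406}, torch::kFloat).view({3, 1, 1});
    auto std  = torch::tensor({0.229, 0.224, 0.225}, torch::kFloat).view({3, 1, 1});
    return (img - mean) / std;
}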
st181320 | I want to deploy a PyTorch YOLO model in a Windows C++ environment.
The environment is libtorch 1.4 (debug) + VS2017 + CUDA 10.1.
The C++ inference code runs successfully, but it takes about 0.16 s for
an input image; as a comparison, the time is 0.03 s in the Python environment.
I also found that the GPU usage is rather low. Any ideas on how to solve this problem? |
st181321 | How did you profile the libtorch and Python code?
Note that CUDA operations are asynchronous, so you would need to synchronize the code before starting and stopping the timer. |
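To make that concrete, a minimal timing sketch on the Python side (model and input_image are placeholders; the same idea applies in C++ by calling cudaDeviceSynchronize before reading the clock):
import time
import torch

torch.cuda.synchronize()              # wait for pending GPU work before starting the timer
start = time.perf_counter()
with torch.no_grad():
    output = model(input_image)       # placeholder model and input
torch.cuda.synchronize()              # wait until the forward pass has actually finished
print("forward took %.4f s" % (time.perf_counter() - start))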
st181322 | Plus: why do you use the debug version for benchmarking? The optimization switches are turned off in those builds. |
st181323 | I created a trace file on the Python side. When I try to load it on the C++ side, it gives me this error message:
[image: libtorch_1_5_load_error_message (1032×390)]
I couldn’t load this trace file. Could you help me? |
st181324 | I want to convert the TSM model https://github.com/mit-han-lab/temporal-shift-module 3 to ONNX. When converting, it shows me:
TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
out[:, :-1, :fold] = x[:, 1:, :fold] # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:37: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
out[:, :-1, :fold] = x[:, 1:, :fold] # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold] # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold] # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
out[:, :, 2 * fold:] = x[:, :, 2 * fold:] # not shift
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
out[:, :, 2 * fold:] = x[:, :, 2 * fold:] # not shift
this is the code snippet:
out = torch.zeros_like(x)
out[:, :-1, :fold] = x[:, 1:, :fold] # shift left
out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold] # shift right
out[:, :, 2 * fold:] = x[:, :, 2 * fold:] # not shift
I don't think this is an in-place operation.
I then get the ONNX model, but its output is different from the PyTorch output.
pytorch output:[[0.04369894 0.09070115 0.8193747 … 0.25200492 0.0048383 0.04694336]]
onnx output:[0. 0. 0. … 0.03162321 0. 0. ]] |
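One workaround that is often used for this pattern, shown here only as a sketch (not tested against that repository), is to build the shifted tensor with torch.cat instead of in-place slice assignment; x and fold are the same variables as in the snippet above, with x shaped (batch, time, channels, h, w):
import torch

def temporal_shift(x, fold):
    # zero padding for the time steps that fall off the ends after shifting
    zeros_left  = torch.zeros_like(x[:, :1, :fold])
    zeros_right = torch.zeros_like(x[:, :1, fold:2 * fold])
    left  = torch.cat([x[:, 1:, :fold], zeros_left], dim=1)            # shift left in time
    right = torch.cat([zeros_right, x[:, :-1, fold:2 * fold]], dim=1)  # shift right in time
    rest  = x[:, :, 2 * fold:]                                         # channels that are not shifted
    return torch.cat([left, right, rest], dim=2)                       # reassemble along channels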
st181325 | When using torch.jit.script, is it recommended to move the model to CPU if you are doing inference on that device? |
st181326 | @torch.jit.script
def check_init(input_data, hidden_size, prev_state):
    # type: (torch.Tensor, int, torch.Tensor) -> torch.Tensor
    batch_size = input_data.size(0)
    spatial_size_0 = input_data.size(2)
    spatial_size_1 = input_data.size(3)
    # generate empty prev_state, if None is provided
    state_size = (2, batch_size, hidden_size, spatial_size_0, spatial_size_1)
    if prev_state.size(0) == 0:
        state = torch.zeros(state_size, device=input_data.device)
    else:
        state = prev_state.view(state_size)
    return state
I am trying to export my model to ONNX and I have a function that will check if the previous state is initialized and I will initialize it based on the input size. Because I have an if statement I decorated the function with @torch.jit.script. prev_state is a tensor of dimension 5. At first I pass an empty tensor like this torch.tensor([]).view(0,0,0,0,0). In all subsequent runs, I will pass back the returned tensor. This check_init function is used at 3 different places in the network and the input_data variable is the output of one of the stages of the neural net. For the original input of the full neural net, I have set the input and output to have dynamic_axes.
The model is properly exported to a .onnx file. However, when I run it with an input, I get the following error at the second iteration (the first iteration when prev_state is of size (0,0,0,0,0) works fine):
2020-06-04 13:32:14.289010608 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running Identity node. Name:'Identity_29' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:66 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, const onnxruntime::TensorShape*, OrtValue*&, size_t) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{0,0,0,0,0} Requested shape:{2,1,64,92,120}
2020-06-04 13:32:14.289043204 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running If node. Name:'If_21' Status Message: Non-zero status code returned while running Identity node. Name:'Identity_29' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:66 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, const onnxruntime::TensorShape*, OrtValue*&, size_t) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{0,0,0,0,0} Requested shape:{2,1,64,92,120}
Traceback (most recent call last):
File "run_onnx.py", line 108, in <module>
runner.update(input_tensor, last_timestamp)
File "/home/test/image_onnx.py", line 103, in update
onnx_out = self.onnx_session.run(None, onnx_input)
File "/home/test/lib/python3.8/site-packages/onnxruntime/capi/session.py", line 111, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running If node. Name:'If_21' Status Message: Non-zero status code returned while running Identity node. Name:'Identity_29' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:66 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, const onnxruntime::TensorShape*, OrtValue*&, size_t) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{0,0,0,0,0} Requested shape:{2,1,64,92,120}
Shouldn’t the exporter ‘propagate’ the dynamic axis property to all subsequent stages of the network and realize that those variables will also have dynamic size? Any ideas on how to solve this problem?
Exporting using just TorchScript works fine but it seems ONNX is less flexible |
st181327 | Hi @Andreas_Georgiou,
This error occurs within ONNX Runtime, so it’s likely the case that you should report an issue there, and then work backwards up the stack. It’s not clear if the issue is within PyTorch ONNX export, or if the ONNX exporter is emitting a valid ONNX model and it’s a failed analysis within ONNX runtime. |
st181328 | #jit #quantization
Hello!
I am trying to convert a quantized model to Caffe2. I know that TorchScript tracing is needed before ONNX export. I am trying to convert an RCNN model the following way: 1) perform quantization, 2) trace the quantized backbone to TorchScript, 3) swap the original backbone with the quantized one (other parts of the network as they were), 4) patch and export the network with ONNX.
It is possible to convert the original (non-quantized) network to Caffe2 without any errors, but when I swap the backbone and export to ONNX I see the following:
Traceback (most recent call last):
File "./tools/torchscript_converter.py", line 158, in <module>
caffe2_model = export_caffe2_model(cfg, orig_model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 148, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 67, in export_onnx_model
export_params=True,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 525, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 364, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 317, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 277, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 359, in forward
self._force_outplace,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 345, in wrapper
outs.append(self.inner(*trace_inputs))
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 560, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 546, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/root/some_detectron2/detectron2/export/caffe2_modeling.py", line 326, in forward
features = self._wrapped_model.backbone(images.tensor)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 560, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 536, in _slow_forward
return self.forward(*input, **kwargs)
RuntimeError: Tried to trace <__torch__.densepose.modeling.layers.bifpn.BiFPN object at 0x5570e7633550> but it is not part of the active trace. Modules that are called during a trace must be registered as
submodules of the thing being traced.
Is this because I’ve mixed TracedModule with eager mode modules? |
st181329 | Yeah, according to the error message, the model being traced is calling something outside of the traced model. |
st181330 | I think step 2 here is probably wrong; you should not trace the quantized model before swapping it in. You can swap the quantized backbone in first, then do the ONNX export; there’s no need to do explicit tracing, as ONNX export will do it under the hood. |
st181331 | Thank you @wanchaol, @jerryzh168. I’ll try again
But I’ve already opened a topic with that question, and @supriyar said that it is necessary to jit.trace before onnx.export if we work with a quantized model.
Here’s the reference: ONNX export of quantized model 69 |
st181332 | Yes, I’ve tried, and it now gives other errors.
For now it is a model with a quantized backbone and the other parts untouched. It is possible to onnx.export the original model, but the quantized one behaves the following way:
Traceback (most recent call last):
File "./tools/torchscript_converter.py", line 145, in <module>
caffe2_model = export_caffe2_model(cfg, torch_model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 145, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 62, in export_onnx_model
operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 283, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 574, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 371, in forward
self._force_outplace,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 357, in wrapper
outs.append(self.inner(*trace_inputs))
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 572, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 558, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/root/some_detectron2/detectron2/export/caffe2_modeling.py", line 326, in forward
features = self._wrapped_model.backbone(images.tensor)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 572, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 558, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/DensePose_ADASE/densepose/modeling/quantize_caffe2.py", line 166, in new_forward
p5, p4, p3, p2 = self.bottom_up(x) # top->down
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 572, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 558, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/timm/models/efficientnet.py", line 350, in forward
x = self.conv_stem(x)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 572, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 558, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 37, in forward
input, self._packed_params, self.scale, self.zero_point)
RuntimeError: Tried to trace <__torch__.torch.classes.quantized.Conv2dPackedParamsBase object at 0x5632a7b49180> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced.
Why is this traceable for the original model but not for the quantized one? Any thoughts? |
st181333 | I’m facing the same error when I try to export PyTorch’s quantized MobileNetV2 model to ONNX. Are there any updates on this? |
st181334 | Have a look at this thread maybe you’ll find something relevant
[ONNX] Quantized fused Conv2d won't trace quantization
Please re-try with pytorch nightly build, we recently fixed this so you shouldn’t be seeing this error anymore.
Seems like the conv layer is not quantized so it produces onnx::Conv as opposed to the _caffe2::Int8Conv operator. Currently the onnx export path to caffe2 does not support partially quantized model, so it expects the entire pytorch model to be able to get quantized. |
st181335 | Bug
To Reproduce
Executing torch.jit.script over my model works; however, it returns a model that fails at runtime.
Looking deeper, the nn.ModuleList is losing the None elements.
Below I attach code for reproducing the error:
import os
import sys
import torch.nn as nn
import torch.nn.functional as F
import torch
from torchvision import transforms
from PIL import Image
class TestBlock(nn.Module):
    def __init__(self):
        super(TestBlock, self).__init__()
        layers = []
        layers.append(None)
        layers.append(None)
        layers.append(nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1,
                                bias=False))
        self.layer = nn.ModuleList(layers)

    def forward(self, x):
        for aux in self.layer:
            print("ENTER")
            if aux is not None:
                x = aux(x)
                print("Not None")
        return x
Creating model and tracing it:
model=TestBlock()
traced_cell=torch.jit.script(model)
Testing model with an image:
img = Image.open("test.png")
my_transforms = transforms.Compose([transforms.Resize((1002,1002)),
transforms.ToTensor(),
transforms.Normalize(
[0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
img_input= my_transforms(img).unsqueeze(0).cpu()
res=model(img_input)
This outputs the following:
ENTER
ENTER
ENTER
Not None
Traced version output:
res=traced_cell(img_input)
ENTER
Not None
Expected behavior
Get same output as original model |
st181336 | You are currently trying to script a tensor in:
traced_cell=torch.jit.script(aux)
so I assume you want to pass model instead to the method?
Try to narrow down the issue, as your current code contains more than 700 lines of code. |
st181337 | Sorry, I pasted the wrong code. I am scripting the model with torch.jit.script(model). |
st181338 | ptrblck:
Try to narrow down the issue, as your current code contains more than 700 lines of code.
I reduced the model to 15 lines producing the same error. Please look at it again @ptrblck |
st181339 | That’s great. Thanks for reducing the code.
I would assume it’s on purpose for the JIT to remove no-ops from the graph, as they won’t do anything.
What’s your use case that you need these None objects in an nn.ModuleList?
As a workaround you could probably just add nn.Identity() modules instead of Nones. |
st181340 | ptrblck:
What’s your use case that you need these None objects in an nn.ModuleList ?
This is not an architecture I defined; it is Microsoft HRNet.
But if you look at their code, they are using it to create a new list:
y_list = self.stage2(x_list)
x_list = []
for i in range(len(self.transition2)):
    if self.transition2[i] is not None:
        x_list.append(self.transition2[i](y_list[-1]))
    else:
        x_list.append(y_list[i])
self.transition2 is the module list containing the None objects.
With your workaround I should change the is not None check to not isinstance(layer, nn.Identity).
I am going to try your work-around and let you know! |
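Applied to the reduced repro from this thread, the workaround would look roughly like the sketch below; note this covers only the simple case, since in the full HRNet transition logic the None branch also selects a different input tensor, so it needs more care than a plain drop-in:
import torch
import torch.nn as nn

class TestBlock(nn.Module):
    def __init__(self):
        super(TestBlock, self).__init__()
        layers = [
            nn.Identity(),  # placeholder instead of None
            nn.Identity(),  # placeholder instead of None
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False),
        ]
        self.layer = nn.ModuleList(layers)

    def forward(self, x):
        for aux in self.layer:
            x = aux(x)  # nn.Identity is a no-op, so the None check is no longer needed
        return x

scripted = torch.jit.script(TestBlock())
print(scripted(torch.randn(1, 3, 32, 32)).shape)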
st181341 | The previous approach works.
However, the model runs fast on the first image; if we pass an image again, it never ends! In addition, the TorchScript model is also slower on the first image. |
st181342 | Thanks for the update and the code snippet.
The nn.Identity() approach might work, but looks quite hacky given the new code snippet.
However, I’m not familiar with the model, so don’t know which approach would be best to make is scriptable and would suggest to create an issue in their GitHub.
WaterKnight:
However, the model works fast with first image. If we pass an image again it never ends! In addition, TorchScript model is also slower at first image.
I understand that the eager model is working for a single iteration and hangs in the second one?
While the scripted model is slower in the first iteration and works fine afterwards? |
st181343 | ptrblck:
I understand that the eager model is working for a single iteration and hangs in the second one?
While the scripted model is slower in the first iteration and works fine afterwards?
The model works well in eager mode. After scripting it, the first iteration takes about 6 seconds, the second one around 3 minutes, and from the third one on about 1 second.
I think it is being optimized. Is there any way of pre-optimizing in a Flask REST API, or of disabling the optimization? Or of saving the optimized version, so that when it gets loaded it is already optimized? |
st181344 | You could try to use
torch._C._jit_set_profiling_executor(False)
torch._C._jit_set_profiling_mode(False)
at the beginning of your script to disable the optimization.
However, if your Flask application is running longer than a couple of seconds, the startup time could probably be ignored. |
st181345 | ptrblck:
However, how if your Flask application is running longer than a couple of seconds, the startup time could probably be ignored.
I don’t understand it.
My Flask application has several models loaded. It runs inference over the same image with different models; the model names are given in a list. |
st181346 | Since the second iteration seems to run for 3 minutes, I would ask you to create an issue here 1, as it doesn’t seem right. |
st181347 | If the first iteration would be a warmup time (ignoring the 3 minutes, which seems to be a bug), then you would only pay the cost once. Every other time the prediction would use the optimized graph and should be fast. |
st181348 | Ah okey. Thank you very much for the info and all your help in the forums!
What issue should I submit? Upload traced model or the model definition code?
I have an issue open for None objects dissapearing fron inside nn.ModuleList |
st181349 | If possible, a minimal code snippet, which is executable and shows the JIT behavior, where the second iteration takes 3 minutes, while the first one finishes in 6 seconds. |
st181350 | I don’t know if I will be able to paste a minimal snippet. This post was created with the part of nn.ModuleList containing Nones.
I have tried to measure times again:
Without tracing:
CPU times: user 12.8 s, sys: 1.41 s, total: 14.2 s
Wall time: 2.32 s
Scripted First Iteration:
CPU times: user 15 s, sys: 1.77 s, total: 16.8 s
Wall time: 4.64 s
Scripted Second Iteration:
CPU times: user 5min 8s, sys: 1.14 s, total: 5min 9s
Wall time: 5min
Scripted Third Iteration:
CPU times: user 11 s, sys: 1.18 s, total: 12.2 s
Wall time: 2.02 s |
st181351 | ptrblck:
I would ask you to create an issue here
Done.
github.com/pytorch/pytorch: [JIT] Scripted model second run cost 5 mins, after that 1-2 secs (opened Jun 3, 2020 by WaterKnight1998, label: jit): "This model uses an nn.ModuleList that used to contains None objects. After scripting the model None disappear. I opened an..." |
st181352 | Hi all,
It seems that .graph_for no longer runs optimizations with PyTorch 1.5.
For example, from the following simple code, graph_for dumps different graphs between PyTorch 1.4 and PyTorch 1.5.
$ cat small.py
import torch
def f(x,y): return x + y * 3
print(torch.jit.script(f).graph_for(torch.rand(2, 2, device='cuda'), torch.rand(2, 2, device='cuda')))
PyTorch 1.4:
$ python small.py
graph(%x.1 : Float(*, *),
%y.1 : Float(*, *)):
%6 : Float(*, *) = prim::FusionGroup_0(%x.1, %y.1)
return (%6)
with prim::FusionGroup_0 = graph(%0 : Float(*, *),
%4 : Float(*, *)):
%2 : int = prim::Constant[value=1]()
%5 : int = prim::Constant[value=3]() # small.py:3:27
%6 : Float(*, *) = aten::mul(%4, %5) # small.py:3:23
%3 : Float(*, *) = aten::add(%0, %6, %2) # small.py:3:19
return (%3)
PyTorch 1.5:
$ python small.py
graph(%x.1 : Tensor,
%y.1 : Tensor):
%3 : int = prim::Constant[value=3]() # small.py:3:27
%2 : int = prim::Constant[value=1]()
%4 : Tensor = aten::mul(%y.1, %3) # small.py:3:23
%5 : Tensor = aten::add(%x.1, %4, %2) # small.py:3:19
return (%5)
Is this expected? With PyTorch 1.5, I don’t see any fusion group. |
st181353 | Found the description of the change in the release note: https://github.com/pytorch/pytorch/releases/tag/v1.5.0 4.
But even with torch._C._jit_set_profiling_mode(True), I don’t see any fusion group in the output.
$ cat small.py
import torch
torch._C._jit_set_profiling_mode(True)
def f(x,y): return x + y * 3
print(torch.jit.script(f).graph_for(torch.rand(2, 2, device='cuda'), torch.rand(2, 2, device='cuda')))
$ python small.py
graph(%x.1 : Tensor,
%y.1 : Tensor):
%2 : int = prim::Constant[value=3]() # small.py:4:27
%3 : int = prim::Constant[value=1]()
%4 : Tensor = prim::profile(%y.1)
%5 : Tensor = aten::mul(%4, %2) # small.py:4:23
%6 : Tensor = prim::profile(%x.1)
%7 : Tensor = prim::profile(%5)
%8 : Tensor = aten::add(%6, %7, %3) # small.py:4:19
%9 : Tensor = prim::profile(%8)
= prim::profile()
return (%9) |
st181354 | Could you try to run the code in the latest nightly build?
If you can still reproduce this issue, feel free to create an issue here 5 including the reproducible code snippets. |
st181355 | Hello!
I heard PyTorch has supported Deformable Convolution out of the box since the 1.4 release; I just cannot tell by looking at the code whether it is version 1 or version 2 (https://github.com/CharlesShang/DCNv2/tree/master 72).
My question is: does the support for Deformable Convolution mean I can JIT it to TorchScript?
And if I add the DCv2 layer to PyTorch from the aforementioned repository, can I convert it to TorchScript?
Thank you. |
st181356 | It looks like the custom kernels in the repo linked are bound using pybind (https://github.com/CharlesShang/DCNv2/blob/master/src/vision.cpp 38). So they will not work with TorchScript as is. However, it is possible to register the operator with TorchScript following https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html 75, which would work with TorchScript. |
st181357 | It works fine.
GitHub: xi11xi19/CenterNet2TorchScript ("centernet pytorch model to torch script model"). |
st181358 | Which libtorch version did you use to compile dcn_v2_cuda_forward_v2? I got an error when compiling:
candidate: constexpr torch::jit::RegisterOperators::RegisterOperators(torch::jit::RegisterOperators&&)
/usr/lib/libtorch_abi11_14/include/torch/csrc/jit/custom_operator.h:16:18: note: candidate expects 1 argument, 2 provided
CMakeFiles/dcn_v2_cuda_forward_v2.dir/build.make:1509: recipe for target 'CMakeFiles/dcn_v2_cuda_forward_v2.dir/vision.cpp.o' failed |
st181359 | In newer versions the operator registration API changed; you should write it like this:
static auto registry = torch::RegisterOperators("my_ops::warp_perspective", &warp_perspective);
Change your first line. |
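For context, a minimal registration sketch along the lines of the custom-op tutorial linked earlier; the warp_perspective body here is only a placeholder:
#include <torch/script.h>

// placeholder implementation; a real kernel would do the actual warping
torch::Tensor warp_perspective(torch::Tensor image, torch::Tensor warp) {
    return image.clone();
}

// makes my_ops::warp_perspective callable from TorchScript
static auto registry =
    torch::RegisterOperators("my_ops::warp_perspective", &warp_perspective);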
st181360 | Now it works fine (PyTorch 1.4), thanks. By the way, did you try PyTorch 1.5? I got this error:
Could not export Python function call ‘_DCNv2’. Remove calls to Python functions before export |
st181361 | I tried to trace the CenterTrack model, but I got this error:
TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[26, 79, 149] (-1.9391770362854004 vs. -1.9395387172698975) and 669 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class) |
st181362 | Hi,
Is it possible to get the model summary described in this link
Is there similar pytorch function as model.summary() as keras?
Yes, you can get exact Keras representation, using this code.
Example for VGG16
from torchvision import models
from summary import summary
vgg = models.vgg16()
summary(vgg, (3, 224, 224))
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 224, 224] 1792
ReLU-2 [-1, 64, 224, 224] …
, but for cpp jit model?
This is an enhancement for printing the model in this link;
Print network architecture in cpp jit jit
We don’t have any built-in utils for this, but you can do a simple version manually:
#include <torch/script.h>
#include <iostream>
#include <memory>
void tabs(size_t num) {
for (size_t i = 0; i < num; i++) {
std::cout << "\t";
}
}
void print_modules(const torch::jit::script::Module& module, size_t level = 0) {
std::cout << module.name().qualifiedName() << " (\n";
for (const auto& module : module.get_modules()) {
tabs(level + 1);
print_modules(module.module, level + 1);
…
Thanks.
rgds,
CL |
st181363 | Just for sharing, this is one way of showing the network architecture for a JIT model in C++.
// torchex1.cpp : This file contains the 'main' function. Program execution
// begins and ends there.
//
#include <torch/script.h>
#include <iostream>
#include <inttypes.h>
#include <memory>

void tabs(size_t num) {
    for (size_t i = 0; i < num; i++) {
        std::cout << "\t";
    }
}

void print_modules(const torch::jit::script::Module& module, size_t level = 0) {
    // std::cout << module.name().qualifiedName() << " (\n";
    std::cout << module.name().name() << " (\n";
    for (const auto& parameter : module.get_parameters()) {
        tabs(level + 1);
        std::cout << parameter.name() << '\t';
        std::cout << parameter.value().toTensor().sizes() << '\n';
    }
    for (const auto& module : module.get_modules()) {
        tabs(level + 1);
        print_modules(module, level + 1);
    }
    tabs(level);
    std::cout << ")\n";
}

int main(int argc, const char* argv[]) {
    torch::jit::script::Module container = torch::jit::load("net.pt");
    print_modules(container);
    return 0;
}
The output looks like:
net (
    conv1 (
        weight  [10, 1, 5, 5]
        bias    [10]
    )
    conv2 (
        weight  [20, 10, 5, 5]
        bias    [20]
    )
    conv2_drop (
    )
    fc1 (
        weight  [50, 320]
        bias    [50]
    )
    fc2 (
        weight  [10, 50]
        bias    [10]
    )
)
It seems like a clumsy way; if anyone has a smarter way, please feel free to share.
thanks.
rgds,
CL |
st181364 | It also looks like there is script::Module::dump() which will print something like this, you can toggle it to include the sections that are relevant to you:
void dump(
    bool print_method_bodies,  // you probably want this to be `false` for a summary
    bool print_attr_values,
    bool print_param_values) const;

module __torch__.M {
  parameters {
  }
  attributes {
    training = True
  }
  methods {
    method forward {
      graph(%self : ClassType<M>,
            %x.1 : Tensor):
        %3 : Tensor = prim::CallMethod[name="other_fn"](%self, %x.1) # ../test.py:36:15
        return (%3)
    }
    method other_fn {
      graph(%self : ClassType<M>,
            %x.1 : Tensor):
        %4 : int = prim::Constant[value=1]()
        %3 : int = prim::Constant[value=10]() # ../test.py:33:19
        %5 : Tensor = aten::add(%x.1, %3, %4) # ../test.py:33:15
        return (%5)
    }
  }
  submodules {
  }
} |
st181365 | When I use the code, I run into the error "class torch::jit::script::Module has no members name and get_parameters()". |
st181366 | I opened this issue because torch.jit.script is not working with HRNet:
github.com/pytorch/pytorch: [JIT] Expected integer literal for index: 155 (opened May 28, 2020 by WaterKnight1998, label: jit): "I have tried to convert this model to TorchScript using torch.jit.script. However, I am getting this issue: RuntimeError: Expected integer literal..." |
st181367 | I read this doc 2 and found that torch.set_num_threads and torch.set_num_interop_threads can only be called before running a scripted model.
[image: threads (870×457)]
Is there a way to control the number of threads more flexibly in TorchScript, so that for a model's different modules we can set a different number of inter-op and intra-op threads?
Thanks for any answers! |
st181368 | Solved by Michael_Suo in post #2
No, there’s no option today for controlling parallelism on a module level |
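For completeness, the two global knobs mentioned in the question look like this; they must be set early, before the scripted model runs, and the numbers and path are placeholders:
import torch

torch.set_num_threads(4)            # intra-op parallelism (threads within one operator)
torch.set_num_interop_threads(2)    # inter-op parallelism (independent operators in parallel)

model = torch.jit.load("model.pt")  # placeholder path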
st181369 | Hello,
I am trying to modify an existing TorchScript model for model compression. For example, I want to change several parameters of the aten::_convolution operators. To explain the question more clearly, suppose we traced a model in the following way:
net = torchvision.models.resnet18().cuda()
data = torch.ones(1, 3, 224, 224)
traced = torch.jit.trace(net, data)
The question is: can I modify the TorchScript model by directly modifying traced.graph so that the new model can be loaded and run in a different place? I also noticed that there is a 'code' variable in 'traced'; what is this variable for? Do I need to modify the 'code' variable at the same time?
Thanks ~ |
st181370 | What can be expected from JIT for simple linear models?
It’s possible I’m using JIT trace_module wrong, but here’s my model:
model = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)
model = model.to(DEVICE)
model = torch.jit.trace_module(
    model, {'forward': torch.randn(BATCH_SIZE, 784, device=DEVICE)})

for inputs, targets in data:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)
    loss.backward()
    optimizer.step()
A simple linear model for MNIST.
I have a few observations:
From using Nsight systems, it’s quite clear that without JIT, the performance is quite bad. This is normal since the kernels are very small, so there’s nothing much for the CPU or GPU to do.
I’d expect JIT to be able to fuse quite a few of these kernels though. However, when using trace as shown above, the situation is exactly the same as without JIT. How come? Am I using JIT correctly here? And what kernels can I expect to be fused for this (simple) model ? |
st181371 | Thanks for bringing this to our attention.
Supporting matmul + relu fusion on CPU, among other types of fusion, is on our roadmap for the next few months. Then we might look into the GPU. We will let you know when it is close to being ready. |
st181372 | Thanks for the quick answer and yes, it would be good to know when this is ready ! |
st181373 | torch::Tensor hidden = module.get_method("get_initial_state")({torch::tensor({20})});
/home/rakesh/rishabh_workspace/Garbage/LMtesting/LMtesting.cpp:65:64: error: conversion from ‘ c10::IValue ’ to non-scalar type ‘ at::Tensor ’ requested
torch::Tensor hidden = module.get_method(“get_initial_state”)({torch::tensor({20})}) ;
^ |
st181374 | Solved by Michael_Suo in post #2; see the next post. |
st181375 | get_method returns an IValue, which you can sort of think of as like a PyObject. You need to explicitly cast it to the C++ type you want, so you should do hidden.toTensor() to get an at::Tensor. |
st181376 | For example :
auto hidden_map = module.get_method("get_initial_state")({torch::tensor({1})});
torch::Tensor t = hidden_map.toTensor() |
st181377 | I’m trying to write a custom LSTM with TorchScript. Naturally, my first impulse was to take the fastRNNs benchmark custom_lstms file 7 and to run it. However, it fails in both torch 1.4 and 1.5.
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File “C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/test.py”, line 298, in forward
for rnn_layer in self.layers:
state = states[i]
output, out_state = rnn_layer(output, state)
~~~~~~~~~ <— HERE
output_states += [out_state]
i += 1
File “C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/test.py”, line 238, in forward
for direction in self.directions:
state = states[i]
out, out_state = direction(input, state)
~~~~~~~~~ <— HERE
outputs += [out]
output_states += [out_state]
File “C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/test.py”, line 210, in forward
def forward(self, input, state):
# type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
inputs = reverse(input.unbind(0))
~~~~~~~ <— HERE
outputs = jit.annotate(List[Tensor], [])
for i in range(len(inputs)):
File “C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/test.py”, line 90, in reverse
def reverse(lst):
# type: (List[Tensor]) -> List[Tensor]
return lst[::-1]
~~~~~~~~ <— HERE
RuntimeError: invalid vector subscript
To my understanding, negative strides are neither supported, nor on the roadmap (according to this). So this doesn’t surprise me, but why is that reverse function there then?
Am I missing something, is there a trick to reversing a Tensor like that?
Obviously, that would make creating a backwards layer easier (and probably faster than torch.flip, since it allocates new memory apparently) |
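One TorchScript-friendly workaround, sketched here without any claim about how it compares to torch.flip performance-wise, is to index the list backwards instead of slicing with a negative step:
from typing import List

import torch
from torch import Tensor

@torch.jit.script
def reverse(lst: List[Tensor]) -> List[Tensor]:
    out = torch.jit.annotate(List[Tensor], [])
    for i in range(len(lst)):
        out.append(lst[len(lst) - 1 - i])  # walk the list from the back
    return out

print(reverse([torch.tensor(0), torch.tensor(1), torch.tensor(2)]))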
st181378 | torch/csrc/jit/ir/ir.h explains the concept of sub-blocks of a node and states that inputs to a block that represents control flow act as equivalent phi-nodes, by defining a new Value to represent any term that has multiple definitions depending on how control flowed.
Is there an example where I can see the creation of these new Values?
Thanks! |
st181379 | The easiest way to see example IR is to compile a small TorchScript snippet using torch.jit.script() and call .graph on the resulting object. |
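For instance, scripting a tiny function with an if statement produces a prim::If node whose two sub-blocks each compute their own version of y, and the node output is the new Value that stands in for it:
import torch

@torch.jit.script
def f(x: torch.Tensor, flag: bool) -> torch.Tensor:
    if flag:
        y = x + 1
    else:
        y = x - 1
    return y * 2

# the printed IR contains a prim::If node with two sub-blocks; its output is
# the single Value that the rest of the graph uses for y
print(f.graph)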
st181380 | The code below is not working in torch.jit.script:
def forward(self, x):
    start = time.time() * 1000
    y = self.compute_conv1d(x)
    print("cost %d ms" % (time.time() * 1000 - start))
    return y
error
RuntimeError:
Python builtin <built-in function time> is currently not supported in Torchscript:
Since time.time() is not supported in TorchScript, is there any way to measure each module's cost inside a scripted model? I'm running on a CPU machine.
Thanks |
st181381 | Solved by googlebot in post #2; see the next post. |
st181382 | something like
@jit.ignore
def jtime() -> float:
    return time.time()
autograd.profiler.profile + export_chrome_trace is another method |
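Expanding the first suggestion into a runnable sketch of timing a submodule from inside a scripted forward via an ignored helper; the module and shapes are made up for illustration:
import time

import torch
import torch.nn as nn

@torch.jit.ignore
def jtime() -> float:
    # runs in the Python interpreter even when called from TorchScript
    return time.time() * 1000.0

class Timed(nn.Module):
    def __init__(self):
        super(Timed, self).__init__()
        self.conv = nn.Conv1d(4, 4, kernel_size=3, padding=1)

    def forward(self, x):
        start = jtime()
        y = self.conv(x)
        print("conv cost", jtime() - start, "ms")
        return y

m = torch.jit.script(Timed())
m(torch.randn(1, 4, 16))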
st181383 | I’ve been trying to run torch.jit.script(rpc_model) on the RNN parallel model described at Pytorch’s RPC tutorial 2, but have been hitting issues.
My attempt consists of adding script_model = torch.jit.script(model) right before the training loop and commenting out the loop, as I don't need to train the model.
The first error I got is related to TorchScript not supporting *args and **kwargs from _remote_method and _call_method. In an attempt to bypass this error, I changed RNNModel.forward to a simple emb = rpc_sync(self.emb_table_rref.owner(), EmbeddingTable.forward, args=input) (and commented out all other calls to local and remote layers) just to see whether this layer could be converted to TorchScript. It failed with a similar issue, but not due to any public-facing dictionary: torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported:. Here is the full stack:
$ mpirun -np 2 python rnn_jit.py
Traceback (most recent call last):
File “rnn_jit.py”, line 156, in
run_worker()
File “rnn_jit.py”, line 145, in run_worker
_run_trainer()
File “rnn_jit.py”, line 116, in _run_trainer
traced = torch.jit.script(model)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/init.py”, line 1261, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 361, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 279, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 568, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/init.py”, line 1290, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 568, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/init.py”, line 1287, in script
ast = get_jit_def(obj)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 173, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 206, in build_def
build_stmts(ctx, body))
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 129, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 129, in
stmts = [build_stmt(ctx, s) for s in stmts]
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 181, in call
return method(ctx, node)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 363, in build_If
build_stmts(ctx, stmt.body),
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 129, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 129, in
stmts = [build_stmt(ctx, s) for s in stmts]
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 181, in call
return method(ctx, node)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 288, in build_Assign
rhs = build_expr(ctx, stmt.value)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 181, in call
return method(ctx, node)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/frontend.py”, line 464, in build_Call
raise NotSupportedError(kw_expr.range(), ‘keyword-arg expansion is not supported’)
torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported:
File “/opt/conda/lib/python3.7/site-packages/torch/distributed/rpc/api.py”, line 476
if qualified_name is not None:
fut = _invoke_rpc_builtin(dst_worker_info, qualified_name, rf, *args, **kwargs)
~~~~~~ <--- HERE
elif isinstance(func, torch.jit.ScriptFunction):
fut = _invoke_rpc_torchscript(dst_worker_info.name, func, args, kwargs)
‘rpc_sync’ is being compiled since it was called from ‘RNNModel.forward’
File “rnn_jit.py”, line 68
# pass input to the remote embedding table and fetch emb tensor back
# emb = _remote_method(EmbeddingTable.forward, self.emb_table_rref, input)
emb = rpc_sync(self.emb_table_rref.owner(), EmbeddingTable.forward, args=input)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
# output, hidden = self.rnn(emb, hidden)
# pass output to the rremote decoder and get the decoded output back
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[29685,1],1]
Exit code: 1 |
st181384 | Hey @Thiago.Crepaldi
TorchScript integration with RPC is still experimental, and we are working on closing the gaps. Currently, in v1.5, applications can run TorchScript functions using RPC, e.g., rpc_sync/rpc_async/remote(to, my_script_func, args=(...)). But, within a script function only rpc_async can be called. See the test below:
github.com
pytorch/pytorch/blob/dad552666e1aaff174b549a38fce24b517c53d21/torch/testing/_internal/distributed/rpc/jit/rpc_test.py#L146-L154 2
@torch.jit.script
def call_rpc_with_profiling(handle: Tensor, dst_worker_name: str) -> Tensor:
    # Call rpc_async from within ScriptFunction and ensure that we can attach
    # profiling callbacks. Note that handle here is a Tensor representation of
    # RecordFunction.
    fut = rpc.rpc_async(dst_worker_name, one_arg, (torch.tensor(1),))
    torch.ops.profiler._call_end_callbacks_on_jit_fut(handle, fut)
    ret = fut.wait()
    return ret
Could you please try if using rpc_async works for you?
The issue below is tracking the progress on adding native RemoteModule support. Please feel free to comment there.
github.com/pytorch/pytorch: [Design][RFC] RemoteModule API Design (opened Apr 23, 2020 by xush6528; labels: jit, module: rpc, triaged): "FB internal link, [Design] RemoteModule. Goal: Provide a wrapper nn.Module API that serves these purposes. Users should feel like using remote..." |
st181385 | Thanks for the quick reply, Shen Li. Using rpc_async, I got a similar error:
$ mpirun -np 2 python rnn_jit.py
Traceback (most recent call last):
File “rnn_jit.py”, line 156, in
run_worker()
File “rnn_jit.py”, line 145, in run_worker
_run_trainer()
File “rnn_jit.py”, line 116, in _run_trainer
script_model = torch.jit.script(model)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/init.py”, line 1261, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 361, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File “/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 279, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
rpc_async(dst_worker_name, user_callable, args, kwargs)does not support kwargs yet:
File “rnn_jit.py”, line 68
# pass input to the remote embedding table and fetch emb tensor back
# emb = _remote_method(EmbeddingTable.forward, self.emb_table_rref, input)
emb = rpc_async(self.emb_table_rref.owner(), EmbeddingTable.forward, args=input)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
# output, hidden = self.rnn(emb, hidden)
# pass output to the rremote decoder and get the decoded output back
Line 68 refers to the rpc_async call inside the forward method
emb = rpc_async(self.emb_table_rref.owner(), EmbeddingTable.forward, args=input) |
st181386 | This script is a repro for the issue
import torch
import os
import torch.distributed as dist
from torch.distributed.rpc import rpc_sync
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed.rpc as rpc
from torch import optim #XXX include this
from torch.distributed.optim import DistributedOptimizer #XXX include this
from torch.distributed.rpc import RRef, rpc_async, remote
import torch.distributed.autograd as dist_autograd #XXX include this
def get_local_rank():
return int(os.environ['OMPI_COMM_WORLD_LOCAL_RANK'])
def _parameter_rrefs(module):
param_rrefs = []
for param in module.parameters():
param_rrefs.append(RRef(param))
return param_rrefs
class EmbeddingTable(nn.Module):
r"""
Encoding layers of the RNNModel
"""
def __init__(self, ntoken, ninp, dropout):
super(EmbeddingTable, self).__init__()
self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(ntoken, ninp).cuda()
self.encoder.weight.data.uniform_(-0.1, 0.1)
def forward(self, input):
return self.drop(self.encoder(input.cuda())).cpu() # XXX: extra ')'
class Decoder(nn.Module):
def __init__(self, ntoken, nhid, dropout):
super(Decoder, self).__init__()
self.drop = nn.Dropout(dropout)
self.decoder = nn.Linear(nhid, ntoken)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-0.1, 0.1)
def forward(self, output):
return self.decoder(self.drop(output))
class RNNModel(nn.Module):
def __init__(self, ps, ntoken, ninp, nhid, nlayers, dropout=0.5):
super(RNNModel, self).__init__()
# setup embedding table remotely
self.emb_table_rref = rpc.remote(ps, EmbeddingTable, args=(ntoken, ninp, dropout))
# setup LSTM locally
self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)
# setup decoder remotely
self.decoder_rref = rpc.remote(ps, Decoder, args=(ntoken, nhid, dropout))
def forward(self, input, hidden):
# pass input to the remote embedding table and fetch emb tensor back
# emb = _remote_method(EmbeddingTable.forward, self.emb_table_rref, input) # Original call
emb = rpc_async(self.emb_table_rref.owner(), EmbeddingTable.forward, args=input) # adapted call
output, hidden = self.rnn(emb, hidden)
# pass output to the rremote decoder and get the decoded output back
# decoded = _remote_method(Decoder.forward, self.decoder_rref, output) # Original call
decoded = rpc_async(self.decoder_rref.owner(), Decoder.forward, args=output) # adapted call
return decoded, hidden
def parameter_rrefs(self):
remote_params = []
# get RRefs of embedding table
# remote_params.extend(_remote_method(_parameter_rrefs, self.emb_table_rref)) # Original call
remote_params.extend(rpc_async(_parameter_rrefs, self.emb_table_rref))
# create RRefs for local parameters
remote_params.extend(_parameter_rrefs(self.rnn))
# get RRefs of decoder
# remote_params.extend(_remote_method(_parameter_rrefs, self.decoder_rref)) # Original call
remote_params.extend(rpc_async(_parameter_rrefs, self.decoder_rref)) # Adapted call
return remote_params
def _run_trainer():
ntoken = 10
ninp = 2
nhid = 3
nlayers = 4
model = RNNModel('ps', ntoken, ninp, nhid, nlayers) # XXX: no rnn.
script_model = torch.jit.script(model)
print(script_model)
def run_worker():
world_size=2
rank=get_local_rank()
os.environ['MASTER_ADDR'] = '10.123.134.28'
os.environ['MASTER_PORT'] = '21234'
if rank == 1:
rpc.init_rpc("trainer", rank=rank, world_size=world_size)
_run_trainer()
else:
rpc.init_rpc("ps", rank=rank, world_size=world_size)
# parameter server do nothing
pass
# block until all rpcs finish
rpc.shutdown()
if __name__=="__main__":
run_worker() |
st181387 | I followed this link cpu_threading_torchscript_inference to try to enable inter-op multithreading. In my test script, I called torch.set_num_interop_threads(8) to use all my CPU cores and ran 8 Conv1d operators with torch.jit._fork. The result shows that torch.jit._fork is not working and all operators run sequentially.
During the test, CPU usage is almost 100%, which means that multithreading is not actually enabled.
I got the warning "Access to a protected member _fork of a class" for torch.jit._fork; does this matter?
Any answers are appreciated, and thank you for your help!
My core script
def forward(self, x, threads, seq):
iterval = seq // threads
conv_res = []
conv_threads = []
start = time.time() * 1000
for i in range(threads):
start_inner = time.time() * 1000
y = torch.jit._fork(self.compute, (x[:, :, i * iterval:(i + 1) * iterval]))
print("fork %d cost %d ms" % (i, time.time() * 1000 - start_inner))
conv_threads.append(y)
print("fork totally cost %d ms" % (time.time() * 1000 - start))
start = time.time() * 1000
for i in range(threads):
conv_res.append(torch.jit._wait(conv_threads[i]))
print("wait cost %d ms" % (time.time() * 1000 - start))
return conv_res
Test result
fork 0 cost 5 ms
fork 1 cost 5 ms
fork 2 cost 4 ms
fork 3 cost 5 ms
fork 4 cost 4 ms
fork 5 cost 5 ms
fork 6 cost 5 ms
fork 7 cost 5 ms
fork totally cost 41 ms
wait cost 0 ms
Full script
import time
import torch
import threading
import torch.nn as nn
from torch.nn.utils import weight_norm
class MyConvParallel(nn.Module):
def __init__(self, *args, **kwargs):
super().__init__()
self.cell = nn.Conv1d(*args, **kwargs)
self.cell.weight.data.normal_(0.0, 0.02)
def compute(self, x):
return self.cell(x)
def forward(self, x, threads, seq):
iterval = seq // threads
conv_res = []
conv_threads = []
start = time.time() * 1000
for i in range(threads):
start_inner = time.time() * 1000
y = torch.jit._fork(self.compute, (x[:, :, i * iterval:(i + 1) * iterval]))
print("fork %d cost %d ms" % (i, time.time() * 1000 - start_inner))
conv_threads.append(y)
print("fork totally cost %d ms" % (time.time() * 1000 - start))
start = time.time() * 1000
for i in range(threads):
conv_res.append(torch.jit._wait(conv_threads[i]))
print("wait cost %d ms" % (time.time() * 1000 - start))
return conv_res
def main():
#print(*torch.__config__.show().split("\n"), sep="\n");exit(0)
intro_threads = 1
inter_threads = 8
dim = 64
kernels = 3
seq = 80000
torch.set_num_threads(intro_threads)
torch.set_num_interop_threads(inter_threads)
MyCell = MyConvParallel(dim, dim, kernel_size=kernels, stride=1)
MyCell.eval()
inputs = []
iter = 1000
for i in range(iter):
inputs.append(torch.rand(1, dim, seq))
start = time.time() * 1000
for i in range(iter):
print(i)
y = MyCell(inputs[i], inter_threads, seq)
#print(y)
end = time.time() * 1000
print('cost %d ms per iter\n' % ((end - start) / iter))
if __name__ == "__main__":
main() |
st181388 | Solved by googlebot in post #2
That only works in TorchScript, try MyCell=torch.jit.script(MyCell) |
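To spell that suggestion out, here is a stripped-down sketch: a stand-in for the MyConvParallel module above, reduced to two chunks, with the timing prints removed since arbitrary modules like time are generally not scriptable. Eagerly, torch.jit._fork apparently just runs the callable on the calling thread (which is why each fork above took ~5 ms); once the module is compiled, the fork/wait pair becomes a real asynchronous task on the inter-op pool:
import torch
import torch.nn as nn

class ForkedConv(nn.Module):
    def __init__(self, dim: int, kernel: int):
        super().__init__()
        self.cell = nn.Conv1d(dim, dim, kernel_size=kernel)

    def compute(self, x: torch.Tensor) -> torch.Tensor:
        return self.cell(x)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half = x.size(2) // 2
        # fork the first half, run the second half on the current thread;
        # note that chunking a conv like this drops cross-boundary context,
        # just as in the original script
        fut = torch.jit._fork(self.compute, x[:, :, :half])
        y2 = self.compute(x[:, :, half:])
        y1 = torch.jit._wait(fut)
        return torch.cat([y1, y2], dim=2)

torch.set_num_threads(1)
torch.set_num_interop_threads(8)
m = torch.jit.script(ForkedConv(64, 3).eval())  # compiled: fork now runs asynchronously
with torch.no_grad():
    out = m(torch.rand(1, 64, 80000))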
st181389 | Hi,
I’ve written a model in Python, translated it to TorchScript with torch.jit.script(model), and serialized with torch.jit.save(). In a C++ program, I can load the model with torch::jit::load() and use the inference with model.forward().
Is there a way of accessing other (than forward) methods from the model in the C++ domain?
A direct model.my_method() call would require the C++ compiler to know how to access this function. If the model is loaded during execution, it may have functions with unknown names, and the compiler (i.e., I) doesn't know how to use them. Is there a way to accomplish this, e.g., something like model.call('my_method', method_args)? |
st181390 | Solved by zdevito in post #2
You can use mymodule.get_method("name_of_your_method")(method_args); |
st181391 | Excellent! My guess wasn't far off, but somehow I didn't manage to find the correct function.
Many thanks! |
st181392 | Hi, everyone,
#jit #quantization
I am struggling with exporting a quantized model from PyTorch to Caffe2. For now it is clear that a previously quantized model should be traced and then exported via ONNX to Caffe2. But what if it is impossible (for some reason) to trace some part of the network, which might be non-quantized? Could we possibly mix TorchScript modules and eager-mode modules in ONNX export?
I also have another question, though it is not related to the topic name. I've tried to convert the whole traced model, but ONNX throws errors that I am unable to understand. The steps are the following:
I’ve traced quantized PyTorch model
Called torch.onnx.export with opset_version=11, operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK (both steps are sketched below)
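Roughly, the two steps look like this; quant_model and example_input are placeholders for the actual Detectron2 model and inputs, so this is only a sketch of the call shapes, not the real conversion code:
import io
import torch

traced = torch.jit.trace(quant_model, example_input)   # step 1: trace the quantized model

f = io.BytesIO()
torch.onnx.export(                                      # step 2: export through ONNX
    traced,
    example_input,
    f,
    opset_version=11,
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
    example_outputs=traced(example_input),
)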
That produces an error:
File "./tools/torchscript_converter.py", line 118, in <module>
onnx_model = export_onnx_model_with_torchscript(cfg, torch_model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 180, in export_onnx_model_with_torchscript
return Caffe2Tracer(cfg, model, inputs).export_onnx_with_torchscript()
File "/root/some_detectron2/detectron2/export/api.py", line 138, in export_onnx_with_torchscript
return export_onnx_model_impl(traced_model, (inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 67, in export_onnx_model
export_params=True,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 552, in _export
_check_onnx_proto(proto)
RuntimeError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec: input: "441" input: "7" output: "442" op_type: "Conv" attribute { name: "dilations" ints: 1 ints: 1 type: INTS } attribute { name: "group" i: 32 type: INT } attribute { name: "
kernel_shape" type: INTS } attribute { name: "pads" ints: 1 ints: 1 ints: 1 ints: 1 type: INTS } attribute { name: "strides" ints: 1 ints: 1 type: INTS }
And the debug logs there is:
...
%442 : Tensor = onnx::Conv[dilations=[1, 1], group=32, kernel_shape=annotate(List[int], []), pads=[1, 1, 1, 1], strides=[1, 1]](%441, %7) # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/conv.py:348:0
%443 : Tensor = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%442, %8, %9, %10, %11) # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/functional.py:1957:0
%444 : Tensor = onnx::Relu(%443) # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/functional.py:1061:0
...
If it is impossible to tell what's wrong, could you please guide me on how to localize the issue? |
st181393 | #jit #quantization #mobile
Hello everyone,
After I was guided on how to deploy quantized models on mobile, I decided to give a quantized TorchScript model a try.
What I have is an EfficientNet backbone that was quantized with the QAT tools and the qnnpack config. After quantization I scripted it with torch.jit.script and saved it for later deployment. I also have a traced model without quantization.
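For context, a minimal sketch of that pipeline (the tiny Sequential here is only a placeholder for the EfficientNet backbone, and the fine-tuning loop is omitted); leaving a BatchNorm unfused like this is what ends up as a quantized batch norm op in the scripted graph:
import torch
import torch.nn as nn

# Placeholder stand-in for the backbone; QuantStub/DeQuantStub are omitted for brevity,
# so this toy model only illustrates the conversion steps.
float_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()).train()
float_model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
qat_model = torch.quantization.prepare_qat(float_model)
# ... QAT fine-tuning would happen here ...
quantized = torch.quantization.convert(qat_model.eval())

scripted = torch.jit.script(quantized)
torch.jit.save(scripted, 'quant_backbone.pt')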
I’ve tried both on mobile.
In the second case everything works well, but in the first case I get the following errors:
HWHMA:/data/local/Detectron2Mobile # ./speed_benchmark_torch --model=./ts_models/orig_backbone.pt --input_dims="1,3,512,768" --input_type=float --warmup=10 --iter=10
Starting benchmark.
Running warmup runs.
Main runs.
Main run finished. Milliseconds per iter: 2146.44. Iters per second: 0.465887
/speed_benchmark_torch --model=./ts_models/quant_backbone.pt --input_dims="1,3,512,768" --input_type=float --warmup=10 --iter=10 <
terminating with uncaught exception of type torch::jit::ErrorReport:
Unknown builtin op: quantized::batch_norm.
Here are some suggestions:
quantized::batch_norm2d
quantized::batch_norm3d
The original call is:
...<calls>...
Serialized File "code/__torch__/torch/nn/quantized/modules/batchnorm.py", line 14
_1 = self.running_mean
_2 = self.bias
input = ops.quantized.batch_norm(argument_1, self.weight, _2, _1, _0, 1.0000000000000001e-05, 0.44537684321403503, 129)
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return input
Aborted
There’s no op quantized::batch_norm but there are quantized::batch_norm2d/3d. In the working case traced code looks the following way:
def forward(self,
argument_1: Tensor) -> Tensor:
_0 = self.running_var
_1 = self.running_mean
_2 = self.bias
input = torch.batch_norm(argument_1, self.weight, _2, _1, _0, False, 0.10000000000000001, 1.0000000000000001e-05, True)
return input
What is happening in the failing case? Is it a wrong substitution while tracing, or is there an issue with quantized::batch_norm? |
st181394 | Solved by supriyar in post #3
For the first question - are you using nightly build of pytorch? The op name was updated in https://github.com/pytorch/pytorch/pull/36494 to quantized.batch_norm2d to make it more consistent with the implementation. You might have to re-do the QAT convert with the same pytorch build to make sure you… |
st181395 | UPD:
I’ve tried to convert traced graph to Caffe2. And have run into same problem.
Traced with torch.jit.script model cannot be converted to Caffe2 because of batchnorm.
model = torch.jit.load(buf)
f = io.BytesIO()
torch.onnx.export(model, x, f, example_outputs=outputs,
operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
f.seek(0)
where model is the traced backbone. This fails with:
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict)
157 torch.onnx.symbolic_helper._quantized_ops.clear()
158 # Unpack quantized weights for conv and linear ops and insert into graph.
--> 159 torch._C._jit_pass_onnx_unpack_quantized_weights(graph, params_dict)
160
161 # Insert permutes before and after each conv op to ensure correct order.
RuntimeError: false INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1586761698468/work/torch/csrc/jit/passes/onnx/unpack_quantized_weights.cpp:99, please report a bug to PyTorch. Unrecognized quantized operator while trying to compute q_scale for operator quantized::batch_norm |
st181396 | For the first question - are you using a nightly build of pytorch? The op name was updated in https://github.com/pytorch/pytorch/pull/36494 to quantized.batch_norm2d to make it more consistent with the implementation. You might have to re-do the QAT convert with the same pytorch build to make sure you get the same op name.
For the second question - We currently do not have the quantized pytorch to caffe2 conversion flow working for the quantized::batch_norm2d operator. Mainly due to the fact that caffe2 quantized ops currently don’t have this operator. |
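One way to check which name actually ended up in a saved model is to list its root operators. A small sketch, assuming torch.jit.export_opnames (the helper normally used to build mobile op lists) is available in your build:
import torch

scripted = torch.jit.load('ts_models/quant_backbone.pt')
ops = torch.jit.export_opnames(scripted)
print([name for name in ops if 'batch_norm' in name])
# an older build serializes quantized::batch_norm, a nightly one quantized::batch_norm2d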
st181397 | @supriyar thank you!
There was a version right below that PR. As I can see now, quantized::batch_norm2d appeared and the model works on the mobile device. |
st181398 | @supriyar would you mind discussing how to convert models for now? Is it possible to exclude the bn operators and reach the same functionality with only int8_conv_op_relu?
As it was done in this example model https://github.com/caffe2/models/tree/master/resnet50_quantized
UPD: I think the answer is to use the fused modules from torch.nn.intrinsic. |
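A sketch of that idea, i.e. fusing Conv+BN(+ReLU) before QAT so that the converted graph only contains fused quantized conv ops and no standalone batch norm; the module names in the fuse list are placeholders and have to match the real submodule names of the backbone:
import torch

# float_model is the unquantized backbone; the names below are illustrative only
fused = torch.quantization.fuse_modules(
    float_model,
    [['features.0.conv', 'features.0.bn', 'features.0.relu']],
)
fused.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
qat_model = torch.quantization.prepare_qat(fused.train())
# ... QAT fine-tuning ...
quantized = torch.quantization.convert(qat_model.eval())  # Conv+BN+ReLU becomes one quantized conv_relu op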
st181399 | Hello!
I’m currently trying to export a model to ONNX from PyTorch. I have some layers inside a model that are not supported by the ONNX and I don’t really want them to be inside ONNX graph anyway, so I’m trying to use keep_initializers_as_inputs=True in order to fool tracer that my variables are actually inputs and inside custom nn.Module implementation’s forward, I’ve added this:
if onnx.is_in_onnx_export():
gamma = torch.zeros([int(x.shape[2]), int(x.shape[3])]).clone().detach().requires_grad_(True)
beta = torch.zeros([int(x.shape[2]), int(x.shape[3])]).clone().detach().requires_grad_(True)
else:
but those initializers aren't getting traced as graph inputs, but rather as constants that are then baked into the graph. I've tried various approaches, but all of them failed. Can somebody hint at how one can convert intermediate variables to graph inputs? |
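One workaround that is sometimes used instead of relying on keep_initializers_as_inputs (a sketch only; MyLayer and the tensor names are placeholders, not the actual module from the question): make the tensors explicit forward arguments and pass them to the export call, so the tracer records them as graph inputs rather than folding them into constants.
import torch
import torch.nn as nn

class MyLayer(nn.Module):
    # placeholder for the custom module; only the signature matters here
    def forward(self, x, gamma, beta):
        return x * gamma + beta

x = torch.rand(1, 3, 8, 8)
gamma = torch.zeros(8, 8)
beta = torch.zeros(8, 8)

torch.onnx.export(
    MyLayer(), (x, gamma, beta), "layer.onnx",
    input_names=["x", "gamma", "beta"],  # the extra tensors now appear as graph inputs
    opset_version=11,
)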