st180700 | I wouldn’t call it a restriction, but maybe an unwanted usage.
Modules containing parameters are initialized before they can be used.
E.g. conv and linear layers contain trainable parameters and you would create objects first:
conv = nn.Conv2d(3, 6, 3, 1, 1)
lin = nn.Linear(10, 10)
Later in your code you would then use these modules and feed an activation to them:
out_conv = conv(in_conv)
out_lin = lin(in_lin)
The common use case is to create these layers in the __init__ method of your custom module and use them in the forward.
However, you could of course create them outside of the __init__ and pass them to it or even pass them to the forward method.
The issue in your code arises because you are recreating the modules in each forward pass, which resets their parameters every time the forward pass is executed.
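To make the difference concrete, here is a minimal sketch (the class names and layer sizes are made up for illustration):
import torch.nn as nn

class BadModel(nn.Module):
    # a new nn.Linear with freshly initialized weights is created on every call,
    # so the parameters are reset in each forward pass and nothing can be trained
    def forward(self, x):
        lin = nn.Linear(10, 10)
        return lin(x)

class GoodModel(nn.Module):
    # the layer is created once in __init__; its parameters persist across calls
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(10, 10)

    def forward(self, x):
        return self.lin(x)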
The nn tutorial might explain it in more detail. |
st180701 | Modules are computation blocks with state, can we put it this way? But simple ops like add do not hold states. And what you explained makes sense because during training we need to save states for every iteration.
But TorchScript is often used for inference. I don’t see why we cannot script or trace a forward function with a submodule defined in it. From my understanding, scripting or tracing a Function (or a Module) does not save state like weights or biases, right? It will just record a pure function path. |
st180702 | Yes, the first explanation of “stateful” modules makes sense.
I’m not sure how TorchScript is related to this. Note that you surely can re-initialize modules in the forward pass, if you explicitly don’t want to train these layers and want to create new random parameters.
A scripted model should respect this workflow (even if it’s wrong from the point of view of training the model). |
st180703 | Hello there,
I am trying to compile an SSD-based object detector using torch.jit.script(model), and I am getting this error message for the following code segment:
# apply multibox head to source layers
# sources: python list (length: 7)
# self.loc: nn.ModuleList (length: 7)
# self.conf: nn.ModuleList (length: 7)
for (x, l, c) in zip(sources, self.loc, self.conf):
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
loc.append(l(x).permute(0, 2, 3, 1).contiguous())
conf.append(c(x).permute(0, 2, 3, 1).contiguous())
How can I fix this?
Note: In an earlier segment of the code I had to use for k, v in enumerate(module_list): x = v(x) to work around an “expected integer literal for index” error, and that worked. However, this error is different.
Thanks. |
st180704 | Solved by blackberry in post #2. |
st180705 | Solved.
In case someone encounters the same issue, here is how to change the code to be able to compile it:
for i, (l, c) in enumerate(zip(self.loc, self.conf)):
x = sources[i]
loc.append(l(x).permute(0, 2, 3, 1).contiguous())
conf.append(c(x).permute(0, 2, 3, 1).contiguous()) |
st180706 | Hi there,
I’m trying to write a custom nms operator for TorchScript. The original code is here: https://github.com/lucastabelini/LaneATT/tree/main/lib/nms
My code for building the nms operator is here: https://github.com/hthoai/LaneATT/tree/torchscript/lib/nms_torchscript
I got this error:
[screenshot of the build error, 1393×371]
Does anyone know how to fix it? |
st180707 | It seems that Python.h is missing on your system, so you might want to install e.g. libpython-dev. |
st180708 | I try to export the Decoder model of a Seq2Seq model. The definition of the class is as follows:
class Decoder(MarianPreTrainedModel):
def __init__(self, config: MarianConfig):
super().__init__(config)
self.dropout = config.dropout
self.layerdrop = config.decoder_layerdrop
self.padding_idx = config.pad_token_id
self.max_target_positions = config.max_position_embeddings
self.embed_positions = MarianSinusoidalPositionalEmbedding(
config.max_position_embeddings,
config.d_model,
self.padding_idx,
)
self.layers = nn.ModuleList([MarianDecoderLayer(config) for _ in range(config.decoder_layers)])
self.init_weights()
....
def forward(
self,
inputs_embeds: torch.Tensor,
encoder_hidden_states: torch.Tensor,
encoder_attention_mask: torch.Tensor,
decoder_past_key_values: PAST_KEY_VALUES_TYPE = None,
encoder_past_key_values: PAST_KEY_VALUES_TYPE = None,
):
....
The Module is modified from huggingface MarianMTModel.
When I export the model to ONNX, the exported model loses the arg encoder_hidden_states.
I have debugged it in the export function, and I find that the arg is lost after this line:
params_dict = torch._C._jit_pass_onnx_eliminate_unused_items(graph, params_dict)
This is line 509 in torch/onnx/utils.py.
Before the function, I print the graph:
graph(%decoder_input_embeds : Float(*, *, 512, strides=[1024, 512, 1], requires_grad=0, device=cpu),
%encoder_hidden_states : Float(*, *, 512, strides=[3584, 512, 1], requires_grad=0, device=cpu),
%encoder_attention_mask : Long(*, *, strides=[7, 1], requires_grad=0, device=cpu),
%decoder_cache_values : Float(*, 8, *, 64, strides=[1536, 192, 64, 1], requires_grad=0, device=cpu),
%encoder_cache_values : Float(*, 8, *, 64, strides=[3584, 448, 64, 1], requires_grad=0, device=cpu),
%embed_positions.weight : Float(512, 512, strides=[512, 1], requires_grad=0, device=cpu),
%layers.0.self_attn.k_proj.weight : Float(512, 512, strides=[512, 1], requires_grad=1, device=cpu),
After the function, I print the graph:
graph(%decoder_input_embeds : Float(*, *, 512, strides=[1024, 512, 1], requires_grad=0, device=cpu),
%encoder_attention_mask : Long(*, *, strides=[7, 1], requires_grad=0, device=cpu),
%decoder_cache_values : Float(*, 8, *, 64, strides=[1536, 192, 64, 1], requires_grad=0, device=cpu),
%encoder_cache_values : Float(*, 8, *, 64, strides=[3584, 448, 64, 1], requires_grad=0, device=cpu),
%embed_positions.weight : Float(512, 512, strides=[512, 1], requires_grad=0, device=cpu),
So why is the arg encoder_hidden_states lost? Are there some approaches to solve this?
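For reference, here is a minimal sketch of how I check which inputs survived in the exported file (the output path is just a placeholder). My suspicion is that if encoder_hidden_states does not influence the traced outputs, the exporter prunes it as unused:
import onnx

# list the graph inputs that remain after export
onnx_model = onnx.load("decoder.onnx")  # placeholder path of the exported model
print([inp.name for inp in onnx_model.graph.input]) |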
st180709 | When using JNI from Java, it seems it is not possible to allocate tensor objects on the GPU, and the PyTorch JNI will always allocate the tensors on the CPU.
It’s also not possible to move the model to GPU memory, or if the traced model uses a GPU, it’s not possible to move it back to CPU.
Can someone confirm that my understanding is correct? |
st180710 | It’s possible to do that using the C++ API, which we can now use from Java with the JavaCPP Presets for PyTorch:
github.com/bytedeco/javacpp-presets (master/pytorch): “The missing Java distribution of native C++ libraries” |
st180711 | What does that mean?
These are the commands I ran.
Here’s the full error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1203, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 195, in make_strong_submodule
new_strong_submodule = recursive_script(module)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 116, in recursive_script
return create_constant_iterable_module(mod)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 233, in create_constant_iterable_module
modules[key] = recursive_script(submodule)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 181, in create_method_from_fn
stub = torch.jit.script_method(fn, _jit_internal.createResolutionCallbackFromClosure(fn))
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1280, in script_method
ast = get_jit_def(fn, self_name="ScriptModule")
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 169, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 209, in build_def
build_stmts(ctx, body))
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 346, in build_For
[build_expr(ctx, stmt.iter)], build_stmts(ctx, stmt.body))
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 352, in build_If
build_stmts(ctx, stmt.body),
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 279, in build_Expr
return ExprStmt(build_expr(ctx, value))
File "/home/oywa/.local/lib/python3.6/site-packages/torch/jit/frontend.py", line 184, in __call__
raise UnsupportedNodeError(ctx, node)
torch.jit.frontend.UnsupportedNodeError: Yield aren't supported:
at /home/oywa/.local/lib/python3.6/site-packages/torch/nn/modules/module.py:980:16
>>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module)
"""
memo = set()
for name, module in self._modules.items():
if module is not None and module not in memo:
memo.add(module)
yield name, module
~ <--- HERE
'__torch__.torchvision.models.densenet.___torch_mangle_79._DenseBlock.forward' is being compiled since it was called from '__torch__.torchvision.models.densenet.___torch_mangle_67.DenseNet.forward'
at /home/oywa/.local/lib/python3.6/site-packages/torchvision/models/densenet.py:155:8
def forward(self, x):
features = self.features(x)
~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
out = F.relu(features, inplace=True)
out = F.adaptive_avg_pool2d(out, (1, 1))
out = torch.flatten(out, 1)
out = self.classifier(out)
return out |
st180712 | torchvision was recently made 100% TorchScript compatible, but only in the most recent version. Can you check your torchvision and PyTorch versions?
import torch
print(torch.__version__)
import torchvision
print(torchvision.__version__) |
st180713 | Hi, everyone!
Issue Summary:
I’m trying to convert the ESM-1b protein transformer model from PyTorch to ONNX. I run into an issue when I want to provide an extra argument to the model.
Conversion Steps, Code, and Error Message:
Here is my conversion script (named convert_onnx_esm.py):
import os
import torch
import torch.onnx
import argparse
from esm.pretrained import load_model_and_alphabet_local
parser = argparse.ArgumentParser()
parser.add_argument("--model-path", type=str, required=True)
parser.add_argument("--converted-model-path", type=str, required=True)
commandline_args = parser.parse_args()
model, alphabet = load_model_and_alphabet_local(commandline_args.model_path)
batch_converter = alphabet.get_batch_converter()
data = [
("protein1", "VLAGG"),
("protein2", "KALTARQ"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
# an example forward pass would be: model(batch_tokens, repr_layers=[33])
# the conversion works if (batch_tokens, [33]) is changed to just: batch_tokens
with torch.no_grad():
torch.onnx.export(model,
(batch_tokens, [33]),
commandline_args.converted_model_path,
use_external_data_format=True,
opset_version=11,
do_constant_folding=True,
input_names=["inputs"],
output_names=["outputs"],
dynamic_axes={"inputs": [0, 1]}
)
Which I run like follows:
export MODEL_PATH=/tmp/models/esm/esm1b_t33_650M_UR50S.pt
export CONVERTED_GRAPH_PATH=/tmp/models/onnx_esm/graph.onnx
mkdir -p $(dirname $MODEL_PATH) $(dirname $CONVERTED_GRAPH_PATH)
curl https://dl.fbaipublicfiles.com/fair-esm/models/esm1b_t33_650M_UR50S.pt --output /tmp/models/esm/esm1b_t33_650M_UR50S.pt # this is a 7Gb file
python convert_onnx_esm.py --model-path $MODEL_PATH --converted-model-path $CONVERTED_GRAPH_PATH
And the error I get after running the conversion script:
RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: int
Environment:
Cuda 11.2, CudNN 8.1.1.33, and Python 3.8.5 with packages:
fair-esm==0.3.0
onnx==1.8.1
onnxconverter-common==1.6.0
onnxruntime-gpu==1.7.0
onnxruntime-tools==1.6.0
torch==1.9.0.dev20210318
Additional Info:
To no avail, I also tried changing the arguments (batch_tokens, [33]) to the dictionary format:
(batch_tokens, {"tokens": batch_tokens, "repr_layers": [33]})
And finally, in case it’s helpful, the nn.Module.forward() method definition starts like this:
def forward(self, tokens, repr_layers=[], need_head_weights=False, return_contacts=False)
Thank you for any tips or pointers! |
st180714 | I realized that passing the argument as a list is not valid. A tensor is needed, like this:
torch.onnx.export(model,
(batch_tokens, torch.tensor([33])),
converted_model_path,
use_external_data_format=True,
...
)
Unfortunately, torch.tensor([33]) is as good as [] as far as the model’s behavior is concerned.
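One possible workaround is to fix the extra Python-level arguments inside a small wrapper module and export that instead. This is only a sketch: the wrapper is hypothetical and it assumes the usual ESM output dictionary with a "representations" entry:
import torch

class ESMWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, tokens):
        # the extra arguments are baked in here instead of being passed to export
        out = self.model(tokens, repr_layers=[33])
        return out["representations"][33]  # assumes the standard ESM output dict

wrapped = ESMWrapper(model)
torch.onnx.export(wrapped,
                  (batch_tokens,),
                  converted_model_path,
                  use_external_data_format=True,
                  opset_version=11,
                  input_names=["inputs"],
                  output_names=["outputs"],
                  dynamic_axes={"inputs": [0, 1]}) |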
st180715 | Hi guys, this is my first post so don’t be too hard on me. I am currently playing around with the BiFuse network (GitHub - Yeh-yu-hsuan/BiFuse: [CVPR2020] BiFuse: Monocular 360 Depth Estimation via Bi-Projection Fusion) and would like to convert the already trained network with TorchScript so I can run inference in C++. Since the model has a certain complexity, I used torch.jit.script. However, this throws me the following error:
Traceback (most recent call last):
File "/home/anonym/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/convert.py", line 39, in <module>
main()
File "/home/anonym/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/convert.py", line 34, in main
jit = torch.jit.script(model)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 942, in script
return torch.jit._recursive.create_script_module(
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 391, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
unexpected expression on left-hand side of assignment:
File "/data/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/Utils/SpherePad.py", line 84
def forward(self, inputs):
[bs, c, h, w] = inputs.shape
~~~~~~~~~~~~ <--- HERE
#assert bs % 6 == 0 and h == w
key = '(%d,%d,%d)' % (h, w, self.pad_size)
Process finished with exit code 1
I was able to remove this error, but this error appears afterwards:
Traceback (most recent call last):
File "/home/anonym/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/convert.py", line 39, in <module>
main()
File "/home/anonym/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/convert.py", line 34, in main
jit = torch.jit.script(model)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 942, in script
return torch.jit._recursive.create_script_module(
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 391, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/anonym/python Projekte/WV_Depthestimation/lib/python3.8/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
Module 'SpherePad' has no attribute 'data' (This attribute exists on the Python module, but we failed to convert Python type: 'dict' to a TorchScript type. Dictionary inputs must have entries. Its type was inferred; try adding a type annotation for the attribute.):
File "/data/python Projekte/WV_Depthestimation/WV_Depthestimation/convert2torchscript/Utils/SpherePad.py", line 87
#assert bs % 6 == 0 and h == w
key = '(%d,%d,%d)' % (h, w, self.pad_size)
if key not in self.data:
~~~~~~~~~ <--- HERE
theta = 2 * np.arctan((0.5 * h + self.pad_size) / (0.5 * h))
e2c_ori = Equirec2Cube(1, 2*h, 4*h, h, 90)
I am a bit stuck here, because conversion to TorchScript is new to me. I hope there is someone who can help me.
My program to create the torchscript is:
import torch
from LoadData import MyData
from torch.utils.data import DataLoader
from Utils.ModelSaver import BaseSaver as ModelSaver
from Utils.CETransform import CETransform
from models.FCRN import MyModel as BiFuse
### configuration ###
path_img = "../data/"
path_weights = "models/weights"
def main():
img_data = MyData(path_img)
dataset = DataLoader(
img_data
)
saver = ModelSaver(path_weights)
model = BiFuse(
layers=50,
decoder="upproj",
output_size=None,
in_channels=3,
pretrained=True
).cuda()
saver.LoadLatestModel(model, None)
model.eval()
gpu_num = torch.cuda.device_count()
CE = CETransform()
data = next(iter(dataset))
#torch.jit.script_method(model)
jit = torch.jit.script(model)
if __name__ == '__main__':
main()
These error occurs in the SpherePad.py module:
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from .Equirec2Cube import Equirec2Cube
class SpherePad(nn.Module):
def __init__(self, pad_size):
super(SpherePad, self).__init__()
self.pad_size = pad_size
self.data = {}
# pad order: up, down, left, right sides
# use yes/no flag to choose flip/transpose or not
# notation: #face-#side_#flip-hor_#flip_ver_#transpose
# transpose is applied first
self.relation = {
'back': ['top-up_yes_yes_no', 'down-down_yes_yes_no', 'right-right_no_no_no', 'left-left_no_no_no'],
'down': ['front-down_no_no_no', 'back-down_yes_yes_no', 'left-down_yes_no_yes', 'right-down_no_yes_yes'],
'front': ['top-down_no_no_no', 'down-up_no_no_no', 'left-right_no_no_no', 'right-left_no_no_no'],
'left': ['top-left_yes_no_yes', 'down-left_no_yes_yes', 'back-right_no_no_no', 'front-left_no_no_no'],
'right': ['top-right_no_yes_yes', 'down-right_yes_no_yes', 'front-right_no_no_no', 'back-left_no_no_no'],
'top': ['back-up_yes_yes_no', 'front-up_no_no_no', 'left-up_no_yes_yes', 'right-up_yes_no_yes']
}
def _GetLoc(self, R_lst, grid_lst, K):
out = {}
pad = self.pad_size
f, cx, cy = K['f'], K['cx'], K['cy']
K_mat = torch.FloatTensor(
np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]]))
grid_front = grid_lst[2] # 1 x h x h x 3
orders = ['back', 'down', 'front', 'left', 'right', 'top']
for i, face in enumerate(orders):
out[face] = {}
for j, connect_side in enumerate(['up', 'down', 'left', 'right']):
connected_face = self.relation[face][j].split('-')[0]
idx = orders.index(connected_face)
R_world_to_connected = R_lst[idx] # 3 x 3
R_world_to_itself = R_lst[i] # 3 x 3
R_itself_to_connected = torch.matmul(
R_world_to_connected, R_world_to_itself.transpose(0, 1))
new_grid = torch.matmul(
grid_front, R_itself_to_connected.transpose(0, 1))
proj = torch.matmul(new_grid, K_mat.transpose(0, 1))
x = proj[:, :, :, 0:1] / proj[:, :, :, 2:3]
y = proj[:, :, :, 1:2] / proj[:, :, :, 2:3]
x = (x - cx) / cx
y = (y - cy) / cy
xy = torch.cat([x, y], dim=3) # 1 x h x w x 2
out[face][connect_side] = {}
x = xy[:, :, :, 0:1]
y = xy[:, :, :, 1:2]
'''
mask1 = np.logical_and(x >= -1.01, x <= 1.01)
mask2 = np.logical_and(y >= -1.01, y <= 1.01)
mask = np.logical_and(mask1, mask2)
'''
mask1 = (x >= -1.01) & (x <= 1.01)
mask2 = (y >= -1.01) & (y <= 1.01)
mask = mask1 & mask2
xy = torch.clamp(xy, -1, 1)
if connect_side == 'up':
out[face][connect_side]['mask'] = mask[:, :pad, :, :]
out[face][connect_side]['xy'] = xy[:, :pad, :, :]
elif connect_side == 'down':
out[face][connect_side]['mask'] = mask[:, -pad:, :, :]
out[face][connect_side]['xy'] = xy[:, -pad:, :, :]
elif connect_side == 'left':
out[face][connect_side]['mask'] = mask[:, :, :pad, :]
out[face][connect_side]['xy'] = xy[:, :, :pad, :]
elif connect_side == 'right':
out[face][connect_side]['mask'] = mask[:, :, -pad:, :]
out[face][connect_side]['xy'] = xy[:, :, -pad:, :]
return out
def forward(self, inputs):
# here the first error occurred
# [bs, c, h, w] = inputs.shape
# removed the error
bs, c, h, w = inputs.shape
#assert bs % 6 == 0 and h == w
key = '(%d,%d,%d)' % (h, w, self.pad_size)
# here the second error occurred
if key not in self.data:
theta = 2 * np.arctan((0.5 * h + self.pad_size) / (0.5 * h))
e2c_ori = Equirec2Cube(1, 2*h, 4*h, h, 90)
e2c = Equirec2Cube(
1, 2*h, 4*h, h+2*self.pad_size, theta/np.pi * 180)
R_lst = [x.transpose(0, 1) for x in e2c.R_lst]
grid_lst = e2c.grid_lst
K = e2c_ori.intrisic
self.data[key] = self._GetLoc(R_lst, grid_lst, K)
pad = self.pad_size
orders = ['back', 'down', 'front', 'left', 'right', 'top']
out = []
for i, face in enumerate(orders):
this_face = inputs[i::6]
this_face = F.pad(this_face, (pad, pad, pad, pad))
repeats = this_face.shape[0]
for j, connect_side in enumerate(['up', 'down', 'left', 'right']):
connected_face_name = self.relation[face][j].split('-')[0]
connected_face = inputs[orders.index(connected_face_name)::6]
mask = self.data[key][face][connect_side]['mask'].cuda().repeat(repeats, 1, 1, c).permute(0, 3, 1, 2)
xy = self.data[key][face][connect_side]['xy'].cuda().repeat(repeats, 1, 1, 1)
interpo = F.grid_sample(connected_face, xy, mode='bilinear')
if connect_side == 'up':
this_face[:, :, :pad, :][mask] = interpo[mask]
elif connect_side == 'down':
this_face[:, :, -pad:, :][mask] = interpo[mask]
elif connect_side == 'left':
this_face[:, :, :, :pad][mask] = interpo[mask]
elif connect_side == 'right':
this_face[:, :, :, -pad:][mask] = interpo[mask]
out.append(this_face)
out = torch.cat(out, dim=0)
[bs, c, h, w] = out.shape
out = out.view(-1, bs//6, c, h, w).transpose(0,
1).contiguous().view(bs, c, h, w)
return out |
st180716 | DSL:
self.data = {}
TorchScript cannot infer the type of an empty dictionary (because there are no keys and values from which to infer the type). Try adding an annotation for this member at class scope:
class SpherePad(nn.Module):
data: Dict[str, ...]
I can’t tell for sure based on your code, but the value type here seems to be some nested dictionary type.
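Judging from what _GetLoc stores in your code (out[face][connect_side]['mask'] and ['xy'] tensors), the full annotation would presumably look something like the sketch below; the exact nesting is my guess and not something I have verified against the model:
from typing import Dict
import torch
from torch import nn

class SpherePad(nn.Module):
    # key -> face -> side -> {'mask', 'xy'} -> tensor
    data: Dict[str, Dict[str, Dict[str, Dict[str, torch.Tensor]]]]

    def __init__(self, pad_size: int):
        super(SpherePad, self).__init__()
        self.pad_size = pad_size
        self.data = {} |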
st180717 | Hi,
is there any method that can show whether a model is using JIT or not? I mean, with this call torch.jit._state.disable() it will disable JIT model initialization after that.
I just want to make sure the model is loaded using JIT. |
st180718 | Solved by SplitInfinity in post #2. |
st180719 | torch.jit._state.disable() and torch.jit._state.enable() are not meant to be user-facing APIs and should not be called. All user-facing APIs are exposed in torch/jit/__init__.py as torch.jit.XXX.
One thing you can do is check the type of the Module or function you have loaded. TorchScript modules are of type RecursiveScriptModule in Python, and functions are of type ScriptFunction. In addition, the serialization formats used by regular PyTorch and TorchScript are not the same, so if a model is successfully loaded using torch.jit.load, then it is a TorchScript model.
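A quick sketch of these checks (RecursiveScriptModule is a subclass of ScriptModule, so an isinstance check against either works):
import torch

scripted_mod = torch.jit.script(torch.nn.ReLU())
print(isinstance(scripted_mod, torch.jit.ScriptModule))  # True for scripted/loaded modules

@torch.jit.script
def plus_one(x: torch.Tensor) -> torch.Tensor:
    return x + 1

print(isinstance(plus_one, torch.jit.ScriptFunction))  # True for scripted functions |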
st180720 | Hi folks. I am pretty new to PyTorch and TorchScript, and I am very confused by the tutorial material on the subject. For instance, the “Using Scripting to Convert Modules” section of this doc: Introduction to TorchScript — PyTorch Tutorials 1.8.1+cu102 documentation
The first two “Out:” blocks of that section seem identical to me, aside from variable/parameter names, even though the second one supposedly contains the branching logic that the first one is missing. So, am I going crazy, or is there something subtle happening there, or is that doc incorrect? |
st180721 | The outputs don’t make the difference obvious, but the warning raised by tracing the model would give you some information, what could fail using this approach:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.sum() > 0:
This warning should be raised while tracing the MyDecisionGate. |
st180722 | The documentation is incorrect because the control flow is in the innermost module and that is not the one for which code is being printed. I have had a PR up to fix this for a while but couldn’t get a review for a while, and I haven’t merged it since it’s been accepted. I will get this merged today. |
st180723 | Hi,
How could I get the Operator from torch::jit::module?
I want to traverse the whole module and estimate the costs on every op (since I want to get custom costs result, so using torch.autograd.profiler is not that applicable).
// some_cpp_code.cpp
<...>
m.def("get_cost", [](const torch::jit::Module& mod) {
auto forward_method = mod.get_method("forward");
assert(forward_method.graph());
Costs cost;
// Traverse the graph, is there any more intuitive way?
for (Node * n : forward_method.graph()->nodes()) {
auto kind = n->kind();
// Calculate the cost
switch (kind){
case aten::add:
cost += my_add_cost( n->inputs(), n->outputs());
case aten::conv2d:
cost += my_conv2d_cost(n->inputs(), n->outputs());
//...
}
});
The above way could (possibly) work, but it makes the code messy and actually I don’t need that much information. I just need the Operator and inputs and outputs. The ideal function call would be something like :
Costs my_cost(operator, inputs, outputs){
// Using a map to register correct cost function
}
// cost function
Costs my_conv2d_cost(operator, inputs, outputs){
// Get other information about this op
auto padding = inputs[4]; // Is this right?
}
I have 3 questions:
Does PyTorch offer any way to traverse the whole module and get every op? The above code would have to use a switch case to handle every case, which is quite annoying. Also, it will encounter many nodes like prim::constant, and those nodes are not really interesting.
Is the getOperation(const Node* node = nullptr) function the one I need? It returns the operation, but actually I don’t quite understand how to use it; it doesn’t filter out the prim::constant nodes as stated before, right?
How could I get the correct inputs, like the code shown of my_conv2d_cost, is this the right way? |
st180724 | No, but you can use Node::maybeOperator to skip nodes that don’t have operators (like prim nodes).
No; getOperation returns an Operation, which is an alias for std::function<void(Stack*)>. This is a callable function that executes the operation, but will not be useful for cost analysis. Consider using getOperator or maybeOperator; these functions return an Operator that has an associated schema (Operator::schema()) that can be used to determine which operator it is (essentially the same as n->kind() from your example).
Take a look at the functions in jit/ir.h; you will find functions there that help you access node inputs and their types. However, in the case of tensor types, you might not always find complete shape and dtype information (it depends on how the graph is produced and when your analysis code runs). |
st180725 | @SplitInfinity
Thanks for the reply! Could you give me some hints about this line?
However, in the case of tensor types, you might not always find complete shape and dtype information (it depends on how the graph is produced and when your analysis code runs).
I have also found that it is hard to get the info from an incomplete tensor (conv2d’s weight tensor, for example). Could you give me some hints on how PyTorch handles this case? Rerunning the full module does not seem like a good idea for large modules, in my opinion. |
st180726 | Unfortunately, there is no way to propagate shape information other than running the model. We are working on ways to let users add this information to graphs directly but this won’t be released anytime soon. |
st180727 | Thanks for your work!
We are working on ways to let users add this information to graphs directly
Do you mean the structured kernel definitions? If so, then maybe I could try to refer to existing PRs? |
st180728 | No, that is different. There is ongoing work on a tensor DSL but unfortunately there’s no public RFC or something I can share. |
st180729 | I want to pass the traced module to C++ and get its input tensors’ shape. To do this, I use the following code :
# test.py
import torch
import custom_cpp_func
import torchvision.models as models
rn18 = models.resnet18()
traced_rn18 = torch.jit.trace(rn18, (torch.randn(5, 3, 224, 224),))
torch._C._jit_pass_inline(traced_rn18.graph)
custom_cpp_func.print_shape(traced_rn18._c)
// custom_op.cpp
int64_t print_shape(const torch::jit::Module &mod)
{
mod.eval(); // Error here
auto forward_method = mod.get_method("forward");
assert(forward_method.graph());
//...
std::vector<torch::jit::IValue> new_inputs;
new_inputs.push_back(torch::randn({5, 3, 224, 224}));
const c10::IValue result = mod.forward(new_inputs).toTensor(); //Error here
}
The above code does not compile; it produces the following error:
error: 'this' argument to member function 'eval' has type 'const torch::jit::Module', but function is not marked const
mod.eval();
^~~
.../torch/csrc/jit/api/module.h:182:8: note: 'eval' declared here
void eval() {
error: 'this' argument to member function 'forward' has type 'const torch::jit::Module', but function is not marked const
const c10::IValue result = mod.forward(new_inputs).toTensor();
torch/csrc/jit/api/module.h:111:10: note: 'forward' declared here
IValue forward(std::vector<IValue> inputs) {
Could someone tell me what’s wrong with this? Why can’t I just pass the module as an argument? Must I store the module and re-load it in C++?
Secondly, how could I run the module to make sure the incomplete tensors get correct shape information? |
st180730 | Solved by Stonepia in post #2. |
st180731 | Well, I found that this is a stupid question. I wrongly passed the module as const; changing it to
int64_t print_shape(torch::jit::Module &mod)
would solve the problem. |
st180732 | Can’t trace the model using torch.jit.trace. This is a resnet 101 based segmentation model.
I am using python 3.7, torch 1.8, rtx 3070 8gb.
My code:
Net=FCN.Net(CatDic.CatNum)
Net.load_state_dict(torch.load('./model.torch', map_location=torch.device('cuda')), strict=False)
Net.eval()
c = torch.jit.trace(Net, torch.randn(1, 640, 640, 3).cuda())
My neural network structure:
class Net(nn.Module):
def __init__(self, CatDict):
super(Net, self).__init__()
self.Encoder = models.resnet101(pretrained=True)
self.PSPScales = [1, 1 / 2, 1 / 4, 1 / 8]
self.PSPLayers = nn.ModuleList()
for Ps in self.PSPScales:
self.PSPLayers.append(nn.Sequential(
nn.Conv2d(2048, 1024, stride=1, kernel_size=3, padding=1, bias=True)))
self.PSPSqueeze = nn.Sequential(
nn.Conv2d(4096, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(512, 512, stride=1, kernel_size=3, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()
)
self.SkipConnections = nn.ModuleList()
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(1024, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()))
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(512, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(256, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
# ------------------Skip squeeze applied to the (concat of upsample+skip conncection layers)-----------------------------------------------------------------------------
self.SqueezeUpsample = nn.ModuleList()
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(1024, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()))
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(256 + 512, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(256 + 256, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.OutLayersList =nn.ModuleList()
self.OutLayersDict={}
for f,nm in enumerate(CatDict):
self.OutLayersDict[nm]= nn.Conv2d(256, 2, stride=1, kernel_size=3, padding=1, bias=False)
self.OutLayersList.append(self.OutLayersDict[nm])
def forward(self,Images, UseGPU = True, TrainMode=False, FreezeBatchNormStatistics=False):
RGBMean = [123.68,116.779,103.939]
RGBStd = [65,65,65]
if TrainMode:
tp=torch.FloatTensor
else:
self.half()
tp=torch.HalfTensor
self.eval()
#InpImages = torch.autograd.Variable(torch.from_numpy(Images), requires_grad=False).transpose(2,3).transpose(1, 2).type(torch.FloatTensor)
InpImages = torch.autograd.Variable(Images, requires_grad=False).transpose(2,3).transpose(1, 2).type(tp)
if FreezeBatchNormStatistics==True: self.eval()
if UseGPU:
InpImages=InpImages.cuda()
self.cuda()
else:
self=self.cpu()
self.float()
InpImages=InpImages.type(torch.float).cpu()
for i in range(len(RGBMean)): InpImages[:, i, :, :]=(InpImages[:, i, :, :]-RGBMean[i])/RGBStd[i] # normalize image values
x=InpImages
SkipConFeatures=[] # Store features map of layers used for skip connection
x = self.Encoder.conv1(x)
x = self.Encoder.bn1(x)
x = self.Encoder.relu(x)
x = self.Encoder.maxpool(x)
x = self.Encoder.layer1(x)
SkipConFeatures.append(x)
x = self.Encoder.layer2(x)
SkipConFeatures.append(x)
x = self.Encoder.layer3(x)
SkipConFeatures.append(x)
x = self.Encoder.layer4(x)
PSPSize=(x.shape[2],x.shape[3]) # Size of the original features map
PSPFeatures=[] # Results of various of scaled procceessing
for i,PSPLayer in enumerate(self.PSPLayers): # run PSP layers scale features map to various of sizes apply convolution and concat the results
NewSize=(np.array(PSPSize)*self.PSPScales[i]).astype(np.int)
if NewSize[0] < 1: NewSize[0] = 1
if NewSize[1] < 1: NewSize[1] = 1
y = nn.functional.interpolate(x, tuple(NewSize), mode='bilinear')
y = PSPLayer(y)
y = nn.functional.interpolate(y, PSPSize, mode='bilinear')
PSPFeatures.append(y)
x=torch.cat(PSPFeatures,dim=1)
x=self.PSPSqueeze(x)
for i in range(len(self.SkipConnections)):
sp=(SkipConFeatures[-1-i].shape[2],SkipConFeatures[-1-i].shape[3])
x=nn.functional.interpolate(x,size=sp,mode='bilinear') #Resize
x = torch.cat((self.SkipConnections[i](SkipConFeatures[-1-i]),x), dim=1)
x = self.SqueezeUpsample[i](x)
self.OutLbDict = {}
ret_arr = np.eye(640, 640)
for nm in self.OutLayersDict:
l=self.OutLayersDict[nm](x)
l = nn.functional.interpolate(l,size=InpImages.shape[2:4],mode='bilinear') # Resize to original image size
tt, Labels = l.max(1) # Find label per pixel
self.OutLbDict[nm] = Labels
array = np.asarray(self.OutLbDict[nm].cpu())
resx = np.reshape(array, ((array.shape)[2], (array.shape)[1]))
ret_arr = list(ret_arr + resx*10)
return ret_arr
I get an error:
RuntimeError: Tracer cannot infer type of [array([..])]
:Could not infer type of list element: Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type ndarray.
If I remove all numpy arrays from the code, then I get a different error:
C:\anaconda3\lib\site-packages\torch\jit_trace.py in _check_trace(check_inputs, func, traced_func, check_tolerance, strict, force_outplace, is_trace_module, _module_class)
517 diag_info = graph_diagnostic_info()
518 if any(info is not None for info in diag_info):
→ 519 raise TracingCheckError(*diag_info)
520
521
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
Graph diff:
graph(%self.1 : __torch__.FCN_NetModel.Net,
%Images : Tensor):
%2 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="SqueezeUpsample"](%self.1)
%3 : __torch__.torch.nn.modules.container.Sequential = prim::GetAttr[name="2"](%2)
%4 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="SkipConnections"](%self.1)
%5 : __torch__.torch.nn.modules.container.Sequential = prim::GetAttr[name="2"](%4)
........
The interesting thing is that if I run c = torch.jit.trace(Net, torch.randn(1, 640, 640, 3).cuda()) again, the last error does not occur and the tracing is successful. But this traced model doesn’t work. I would be grateful for your help. |
st180733 | As you’ve already explained, the first error is raised if numpy arrays are used instead of tensors.
The second one is raised if your forward pass is data-dependent and could change for different inputs.
Tracing a model records all operations for the provided input and does not allow executing data-dependent conditions inside the model, etc. If you want to use conditions, loops, etc. (as is suggested by the error message), you could torch.jit.script the model instead.
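A minimal sketch of what the scripting call could look like for the model above (assuming the numpy-based parts of the forward are first replaced with pure tensor operations, since TorchScript cannot compile numpy calls):
import torch

net = FCN.Net(CatDic.CatNum).cuda().eval()  # the model from the question
scripted = torch.jit.script(net)            # compiles forward including its control flow
scripted.save("fcn_scripted.pt")            # placeholder file name
loaded = torch.jit.load("fcn_scripted.pt") |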
st180734 | As for the first and second errors, could you provide me with an example, even an approximate one, preferably based on the code in the question? Thank you very much; I roughly understood the problem, but I still have some doubts and misunderstandings. |
st180735 | Based on the graph diff in the error message, the issue seems to be that one invocation of your module by the tracer calls self.SqueezeUpsample[2] and self.SkipConnections[2] but the next does not. But I cannot pinpoint in your code where this might be happening. self.SqueezeUpsample and self.skipConnections are used in for loops, but those loops have a deterministic number of iterations… |
st180736 | Andrew1:
Can’t trace the model using torch.jit.trace. This is a resnet 101 based segmentation model.
I am using python 3.7, torch 1.8, rtx 3070 8gb.
My code:
Net=FCN.Net(CatDic.CatNum)
Net.load_state_dict(torch.load('./model.torch', map_location=torch.device('cuda')), strict=False)
Net.eval()
c = torch.jit.trace(Net, torch.randn(1, 640, 640, 3).cuda())
My neural network structure:
class Net(nn.Module):
def __init__(self, CatDict):
super(Net, self).__init__()
self.Encoder = models.resnet101(pretrained=True)
self.PSPScales = [1, 1 / 2, 1 / 4, 1 / 8]
self.PSPLayers = nn.ModuleList()
for Ps in self.PSPScales:
self.PSPLayers.append(nn.Sequential(
nn.Conv2d(2048, 1024, stride=1, kernel_size=3, padding=1, bias=True)))
self.PSPSqueeze = nn.Sequential(
nn.Conv2d(4096, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(512, 512, stride=1, kernel_size=3, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()
)
self.SkipConnections = nn.ModuleList()
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(1024, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()))
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(512, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.SkipConnections.append(nn.Sequential(
nn.Conv2d(256, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
# ------------------Skip squeeze applied to the (concat of upsample+skip conncection layers)-----------------------------------------------------------------------------
self.SqueezeUpsample = nn.ModuleList()
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(1024, 512, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU()))
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(256 + 512, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.SqueezeUpsample.append(nn.Sequential(
nn.Conv2d(256 + 256, 256, stride=1, kernel_size=1, padding=0, bias=False),
nn.BatchNorm2d(256),
nn.ReLU()))
self.OutLayersList =nn.ModuleList()
self.OutLayersDict={}
for f,nm in enumerate(CatDict):
self.OutLayersDict[nm]= nn.Conv2d(256, 2, stride=1, kernel_size=3, padding=1, bias=False)
self.OutLayersList.append(self.OutLayersDict[nm])
def forward(self,Images, UseGPU = True, TrainMode=False, FreezeBatchNormStatistics=False):
RGBMean = [123.68,116.779,103.939]
RGBStd = [65,65,65]
if TrainMode:
tp=torch.FloatTensor
else:
self.half()
tp=torch.HalfTensor
self.eval()
#InpImages = torch.autograd.Variable(torch.from_numpy(Images), requires_grad=False).transpose(2,3).transpose(1, 2).type(torch.FloatTensor)
InpImages = torch.autograd.Variable(Images, requires_grad=False).transpose(2,3).transpose(1, 2).type(tp)
if FreezeBatchNormStatistics==True: self.eval()
if UseGPU:
InpImages=InpImages.cuda()
self.cuda()
else:
self=self.cpu()
self.float()
InpImages=InpImages.type(torch.float).cpu()
for i in range(len(RGBMean)): InpImages[:, i, :, :]=(InpImages[:, i, :, :]-RGBMean[i])/RGBStd[i] # normalize image values
x=InpImages
SkipConFeatures=[] # Store features map of layers used for skip connection
x = self.Encoder.conv1(x)
x = self.Encoder.bn1(x)
x = self.Encoder.relu(x)
x = self.Encoder.maxpool(x)
x = self.Encoder.layer1(x)
SkipConFeatures.append(x)
x = self.Encoder.layer2(x)
SkipConFeatures.append(x)
x = self.Encoder.layer3(x)
SkipConFeatures.append(x)
x = self.Encoder.layer4(x)
PSPSize=(x.shape[2],x.shape[3]) # Size of the original features map
PSPFeatures=[] # Results of various of scaled procceessing
for i,PSPLayer in enumerate(self.PSPLayers): # run PSP layers scale features map to various of sizes apply convolution and concat the results
NewSize=(np.array(PSPSize)*self.PSPScales[i]).astype(np.int)
if NewSize[0] < 1: NewSize[0] = 1
if NewSize[1] < 1: NewSize[1] = 1
y = nn.functional.interpolate(x, tuple(NewSize), mode='bilinear')
y = PSPLayer(y)
y = nn.functional.interpolate(y, PSPSize, mode='bilinear')
PSPFeatures.append(y)
x=torch.cat(PSPFeatures,dim=1)
x=self.PSPSqueeze(x)
for i in range(len(self.SkipConnections)):
sp=(SkipConFeatures[-1-i].shape[2],SkipConFeatures[-1-i].shape[3])
x=nn.functional.interpolate(x,size=sp,mode='bilinear') #Resize
x = torch.cat((self.SkipConnections[i](SkipConFeatures[-1-i]),x), dim=1)
x = self.SqueezeUpsample[i](x)
self.OutLbDict = {}
ret_arr = np.eye(640, 640)
for nm in self.OutLayersDict:
l=self.OutLayersDict[nm](x)
l = nn.functional.interpolate(l,size=InpImages.shape[2:4],mode='bilinear') # Resize to original image size
tt, Labels = l.max(1) # Find label per pixel
self.OutLbDict[nm] = Labels
array = np.asarray(self.OutLbDict[nm].cpu())
resx = np.reshape(array, ((array.shape)[2], (array.shape)[1]))
ret_arr = list(ret_arr + resx*10)
return ret_arr
I get an error:
RuntimeError: Tracer cannot infer type of [array([..])]
:Could not infer type of list element: Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type ndarray.
If I remove all numpy arrays from the code, then I get a different error:
C:\anaconda3\lib\site-packages\torch\jit_trace.py in _check_trace(check_inputs, func, traced_func, check_tolerance, strict, force_outplace, is_trace_module, _module_class)
517 diag_info = graph_diagnostic_info()
518 if any(info is not None for info in diag_info):
→ 519 raise TracingCheckError(*diag_info)
520
521
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
Graph diff:
graph(%self.1 : __torch__.FCN_NetModel.Net,
%Images : Tensor):
%2 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="SqueezeUpsample"](%self.1)
%3 : __torch__.torch.nn.modules.container.Sequential = prim::GetAttr[name="2"](%2)
%4 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="SkipConnections"](%self.1)
%5 : __torch__.torch.nn.modules.container.Sequential = prim::GetAttr[name="2"](%4)
........
The interesting thing is that if I run c = torch.jit.trace (Net, torch.randn (1, 640, 640, 3) .cuda ()) again, the last error does not occur and the tracing is successful. But this traced model doesn’t work. I would be grateful for your help.
Your information is very interesting. Thank you for sharing |
st180737 | How can I work around the errors?
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) → (Tensor):
Expected a value of type ‘Tensor’ for argument ‘self’ but instead found type ‘Optional[Tuple[]]’.
aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) → (Tensor):
Expected a value of type ‘Tensor’ for argument ‘self’ but instead found type ‘Optional[Tuple[]]’.
aten::add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) → (Tensor(a!)):
Expected a value of type ‘Tensor’ for argument ‘self’ but instead found type ‘Optional[Tuple[]]’.
aten::add.t(t[] a, t[] b) → (t[]):
Could not match type Optional[Tuple[]] to List[t] in argument ‘a’: Cannot match List[t] to Optional[Tuple[]].
aten::add.str(str a, str b) → (str):
Expected a value of type ‘str’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
aten::add.int(int a, int b) → (int):
Expected a value of type ‘int’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
aten::add.float(float a, float b) → (float):
Expected a value of type ‘float’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
aten::add.int_float(int a, float b) → (float):
Expected a value of type ‘int’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
aten::add.float_int(float a, int b) → (float):
Expected a value of type ‘float’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
aten::add(Scalar a, Scalar b) → (Scalar):
Expected a value of type ‘number’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
add(float a, Tensor b) → (Tensor):
Expected a value of type ‘float’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
add(int a, Tensor b) → (Tensor):
Expected a value of type ‘int’ for argument ‘a’ but instead found type ‘Optional[Tuple[]]’.
The original call is:
File “…/transformers/src/transformers/models/bert/modeling_bert.py”, line 546
for i, layer_module in enumerate(self.layer):
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
layer_head_mask = head_mask[i] if head_mask is not None else None |
st180738 | It seems that the Tuple[Tensor, Tensor] is actually Optional[Tuple[Tensor, Tensor]]; you need to refine it so that TorchScript can safely conclude it is not none:
if all_hidden_states is not None:
...
or
assert all_hidden_states is not None |
st180739 | Hi there,
How could I go through the module and print its tensor size?
For example, I want to do the following thing:
void print_op_input(torch::jit::Module & mod){
auto fwd_method = mod.get_method("forward");
for (torch::jit::Node * nd : fwd_method.graph()->nodes()){
if (nd->kind().is_aten()){
auto inputs = nd->inputs();
// Question: How to access the input?
// The inputs here are of type ArrayRef<Value *>
}
}
}
The question is, how could I access the input? It is a Value*, not an IValue. How could I access its true stored value? |
st180740 | Solved by hidefromkgb in post #2
I think what you need is toIValue(), or even IValue(toIValue()) that removes c10::optional<> from c10::optional<IValue>.
No idea why we even need this whole c10::optional<> construct when IValue can simply return .isNone() == true — which it actually does after the proposed conversion if the value … |
st180741 | I think what you need is toIValue(), or even IValue(toIValue()) that removes c10::optional<> from c10::optional<IValue>.
No idea why we even need this whole c10::optional<> construct when IValue can simply return .isNone() == true — which it actually does after the proposed conversion if the value was empty. |
st180742 | The solution proposed by @hidefromkgb should work, but keep in mind that toIValue(const Value* v) will return an IValue only for Value*s that point to constants. |
st180743 | I try to TorchScript my function with the @torch.jit.script decorator, which itself calls other functions from another module. But I ended up with the following error.
File "/venv/lib/python3.7/site-packages/torch/jit/__init__.py", line 1551, in script
  fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/venv/lib/python3.7/site-packages/torch/jit/_recursive.py", line 583, in try_compile_fn
  return torch.jit.script(fn, _rcb=rcb)
File "/venv/lib/python3.7/site-packages/torch/jit/__init__.py", line 1547, in script
  ast = get_jit_def(obj, obj.__name__)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 185, in get_jit_def
  return build_def(ctx, fn_def, type_line, def_name, self_name=self_name)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 219, in build_def
  build_stmts(ctx, body))
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in build_stmts
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in <listcomp>
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 384, in build_If
  build_stmts(ctx, stmt.body),
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in build_stmts
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in <listcomp>
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 384, in build_If
  build_stmts(ctx, stmt.body),
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in build_stmts
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in <listcomp>
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 385, in build_If
  build_stmts(ctx, stmt.orelse))
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in build_stmts
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in <listcomp>
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 384, in build_If
  build_stmts(ctx, stmt.body),
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in build_stmts
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 126, in <listcomp>
  stmts = [build_stmt(ctx, s) for s in stmts]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 413, in build_With
  return With(r, build_withitems(ctx, stmt.items), build_stmts(ctx, stmt.body))
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 121, in build_withitems
  items = [build_withitem(ctx, i) for i in items]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 121, in <listcomp>
  items = [build_withitem(ctx, i) for i in items]
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 193, in __call__
  return method(ctx, node)
File "/venv/lib/python3.7/site-packages/torch/jit/frontend.py", line 282, in build_withitem
  end = start + len(item.context_expr.id)
AttributeError: 'Call' object has no attribute 'id'
The qualified_name in torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj)) refers to a simple function which does not call any PyTorch module.
I use PyTorch 1.6.0 with Python 3.7. Any idea? |
st180744 | Solved by SplitInfinity in post #3
The problem is that the first implementation of WithItemBuilder that you linked makes use of features of the ast module that were introduced in python3.8. The second one should be compatible with 3.8 as well as older versions and was introduced because of a previous bug report that WithItemBuilder d… |
st180745 | It seems changing the WithItemBuilder class from (PyTorch 1.6.0):
class WithItemBuilder(Builder):
@staticmethod
def build_withitem(ctx, item):
lineno = item.context_expr.lineno
start = item.context_expr.col_offset
op_vars = item.optional_vars
if op_vars:
end = op_vars.col_offset + len(op_vars.id)
else:
end = start + len(item.context_expr.id)
r = ctx.make_range(lineno, start, end)
return WithItem(r, build_expr(ctx, item.context_expr), build_expr(ctx, op_vars) if op_vars else None)
to (PyTorch master branch)
class WithItemBuilder(Builder):
@staticmethod
def build_withitem(ctx, item):
lineno = item.context_expr.lineno
start = item.context_expr.col_offset
end = start + len(pretty_node_names[ast.With])
op_vars = item.optional_vars
r = ctx.make_range(lineno, start, end)
return WithItem(r, build_expr(ctx, item.context_expr), build_expr(ctx, op_vars) if op_vars else None)
would resolve the issue; however, I am not sure if this is really the right fix for the problem. |
st180746 | The problem is that the first implementation of WithItemBuilder that you linked makes use of features of the ast module that were introduced in python3.8. The second one should be compatible with 3.8 as well as older versions and was introduced because of a previous bug report that WithItemBuilder did not work in 3.6. |
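To illustrate the failure mode (my own minimal sketch, not part of the original fix): when the with-item is a call expression such as with torch.no_grad():, the parsed context expression is an ast.Call node, which has no .id attribute, hence the AttributeError above.
import ast

tree = ast.parse("with torch.no_grad():\n    pass\n")
item = tree.body[0].items[0]      # the single ast.withitem of the with-statement
print(type(item.context_expr))    # an ast.Call node, not an ast.Name
# item.context_expr.id would raise: AttributeError: 'Call' object has no attribute 'id'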
st180747 | Hi,
Currently I get this error when using chunk in forward after the first iteration, but if chunk is replaced with split (see the snippet just below), the error does not happen. Is there any fix for this problem?
Is split the same as chunk in terms of backward?
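For reference, the split-based variant is just this one changed line (a sketch of my change; hidden_size is the per-gate width, so splitting dim 1 into pieces of hidden_size gives the same three views as chunk(3, 1)):
# gates has shape (batch, 3 * hidden_size), so split(hidden_size, 1)
# yields the same three slices as gates.chunk(3, 1)
m, o, i = gates.split(self.hidden_size, 1)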
This is the minimal code I could use to reproduce it:
import torch
from torch import jit
from torch.nn import Parameter
class CustomRNNCell(jit.ScriptModule):
def __init__(self, input_size, hidden_size):
super(CustomRNNCell, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.weight_ih = Parameter(torch.randn(3 * hidden_size, input_size))
self.weight_hh = Parameter(torch.randn(3 * hidden_size, hidden_size))
self.bias_ih = Parameter(torch.randn(3 * hidden_size))
self.bias_hh = Parameter(torch.randn(3 * hidden_size))
@jit.script_method
def forward(self, input, state):
# type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
hx, cx = state
gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
m, o, i = gates.chunk(3, 1)
m = torch.sigmoid(m)
o = torch.tanh(o)
i = torch.tanh(i)
cy = (1 - m) * cx + (m * i)
hy = (1 - o) * i + (o * cx)
return hy, (hy, cy)
torch.autograd.set_detect_anomaly(mode=True)
cell = CustomRNNCell(
input_size=1280,
hidden_size=256
)
for i in range(20):
x = torch.randn(8, 1280)
state = (
torch.zeros(8, 256),
torch.zeros(8, 256)
)
out, _ = cell(x, state)
print(i)
out.mean().backward()
and the error message
0
1
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py:147: UserWarning: Error detected in torch::jit::(anonymous namespace)::DifferentiableGraphBackward. Traceback of forward call that caused the error:
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 845, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 451, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 434, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-23-7bc819a85eda>", line 47, in <module>
out, _ = cell(x, state)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-7bc819a85eda> in <module>()
48
49 print(i)
---> 50 out.mean().backward()
1 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
145 Variable._execution_engine.run_backward(
146 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 147 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
148
149
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: tensor does not have a device |
st180748 | This is a bug in the autodiff, I would recommend to file an issue on the PyTorch github (and crosslink here and the issue). As someone who sometimes looks into PyTorch issues, thank you for making a reproducing example. These are gold to anyone trying to fix things!
Best regards
Thomas |
st180749 | Hi Thomas, big thanks for the response.
Just to confirm something: I actually want to implement this paper 1; the code above was just a test to reproduce the bug, but it is the basis of this forward code:
@jit.script_method
def forward(self, x, state):
# type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
hx, cx = state
xh = (
torch.mm(x, self.weight_ih.t()) + self.bias_ih +
torch.mm(hx, self.weight_hh.t()) + self.bias_hh
)
i, m, o = xh.chunk(3, 1)
m = m + (self.weight_ch_m * cx)
o = o + (self.weight_ch_o * cx)
i = torch.tanh(i)
m = torch.sigmoid(m)
o = torch.sigmoid(o)
# Base on Formula
h = (1 - m) * cx + (m * i)
c = (1 - o) * i + (o * cx)
return h, (h, c)
Since the h will be h + (c * 0) to keep the gradient connected in backward, is the implementation of this code correct for the paper 1 in terms of forward and backward, or is there something wrong with my implementation?
Any response will be appreciated. Thanks. |
st180750 | Hi,
I am trying to get a sense for the level of support in TorchScript for back propagation graphs produced by using autograd. If anyone can provide a quick summary, and/or pointers to how one can experiment with this - it would be much appreciated.
Ljubisa |
st180751 | TorchScript has full support for PyTorch’s tape-based autograd. You can call backward() on your tensors if you are recording gradients and it should work. |
st180752 | Hi Michael,
Thanks for the prompt response. I am interested in tracing through the backward graph using TorchScript and dumping the IR for the autodiff-ed backprop graph, for full graph optimization in a separate framework. To be precise - on the example of a backward op for a matmul, I'd expect to get the appropriately transposed matmul relevant to the backward pass in the dumped IR graph. Would you expect this to be possible?
Ljubisa |
st180753 | Ah, we do not have a public API for exposing a static backward graph, as PyTorch relies on dynamic autograd for doing automatic differentiation. We do have an internal API for symbolic differentiation (see torch/csrc/jit/runtime/autodiff.cpp) which you can play with, but it is not complete and we don't have any stability guarantees about it |
st180754 | Hi Michael,
My understanding is that TorchScript is converted to SSA IR and then executed. How does autograd work on the SSA IR?
Do you mean autodiff.cpp isn't enabled for TorchScript now? |
st180755 | Hi @Michael_Suo,
Curious if you could elaborate on how “dynamic” plays into this — if a compiled TorchScript model has been through profile-guided optimization and had all of the control flow stripped out, the actual autograd graph structure should be the same at each inference pass, yes?
When I run autograd with create_graph = True, is a graph being created that PyTorch knows how to execute, or only how to differentiate further?
The motivation for these questions is models whose inference pass involves a gradient of their initial prediction, and the hope that it might be possible to save/cache the compute graph representing their gradient as a model in its own right.
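For concreteness, a toy sketch of the pattern I mean (the energy function and all names here are made up, not a real model):
import torch

def energy(x):
    # stand-in for the model's initial prediction
    return (x ** 2).sum()

def forward_with_grad(x):
    x = x.detach().requires_grad_(True)
    e = energy(x)
    # create_graph=True keeps this gradient differentiable, so an outer
    # training loss can still backpropagate through it
    (grad_x,) = torch.autograd.grad(e, x, create_graph=True)
    return grad_x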
Thanks for your help! |
st180756 | Hi,
is it a known limitation that jit.trace will ignore temporary requires_grad = False?
Here is an example:
# EXAMPLE 1
import torch
from torch import nn, jit
from torch.optim import SGD
inputs = torch.tensor([2.0], device="cuda")
model = nn.Linear(1, 1, bias=False).to("cuda")
optimizer = SGD(model.parameters(), lr=1e-1)
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.model = model
def forward(self, x):
param = next(self.parameters())
param.requires_grad = True
x = self.model(x).mean()
param.requires_grad = False
return x
c = MyModule()
forward = jit.trace(c, (inputs,))
result = forward(inputs)
result.mean().backward()
optimizer.step()
optimizer.zero_grad()
print("It does not work fine!")
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
But when I switch the requires_grad flag:
# EXAMPLE 2
import torch
from torch import nn, jit
from torch.optim import SGD
inputs = torch.tensor([2.0], device="cuda")
model = nn.Linear(1, 1, bias=False).to("cuda")
optimizer = SGD(model.parameters(), lr=1e-1)
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.model = model
def forward(self, x):
param = next(self.parameters())
param.requires_grad = False # True --> False
x = self.model(x).mean()
param.requires_grad = True # False --> True
return x
c = MyModule()
forward = jit.trace(c, (inputs,))
result = forward(inputs)
result.mean().backward()
optimizer.step()
optimizer.zero_grad()
print("It does work fine!")
It works fine!
However, when I run it without jit, it runs as expected, which is: example 1 runs fine and example 2 fails with an error! |
st180757 | Solved by tjoseph in post #3
For everyone wondering: requires_grad is not supposed to work. trace only tracks tensor operations, not attributes. See [JIT] jit.trace does not support parameter.requires_grad? · Issue #53515 · pytorch/pytorch · GitHub
Anyone know a solution? This is something that is supported in tensorflow, but … |
st180758 | Additional short question: is there something like torch.jit.ignore for tracing? |
st180759 | For everyone wondering: requires_grad is not supposed to work. trace only tracks tensor operations, not attributes. See [JIT] jit.trace does not support parameter.requires_grad? · Issue #53515 · pytorch/pytorch · GitHub 2
Anyone know a solution? This is something that is supported in tensorflow, but not in PyTorch it seems. Makes optimizing and training my model via jit very hard even though it would profit from jit. |
st180760 | Anyone know a solution?
script or trace deeper modules or functions - e.g. compiling(tracing) “infrastructure” MyModule makes no sense. Use @jit.ignore to break recursive compilation.
you can combine scripting & tracing, if you trace functions, e.g.:
import torch
from torch import distributions as D, jit

def _beta_rsample_3d(a, b):
    return D.Beta(a, b).rsample()

beta_rsample_3d = jit.trace(_beta_rsample_3d, (torch.ones(1, 1, 1), torch.ones(1, 1, 1)), check_trace=False) |
st180761 | Thank you for answering!
Can you explain why it does not make sense? Obviously here I use it only to showcase a minimal example, but in a more complex model, these kind of modules will exist. The solution you suggested is something I really want to avoid since it will clutter my model. I would not want to change my model to support scripting/tracing. If tracing would include tensor attributes I could just pass my whole forward pass to jit.trace which would properly separate optimization from model semantics. jit.script would be fine since it can be used at least with decorators, but unfortunately the missing support of torch.distributions does not allow jit.script.
btw: Do you know whether torch.jit.script will track tensor attributes (i.e. requires_grad) or is that something that the jit is generally not supposed to do and it is only meant for deployment? |
st180762 | tjoseph:
Can you explain why it does not make sense?
JIT speedups mostly come from tensor operations - like fused math operations and some optimizations specific to tensors, ops or layers. Classes like above are closer to non-computational wrappers - similar to training loop code, there is not much to optimize there.
tjoseph:
Do you know whether torch.jit.script will track tensor attributes (i.e. requires_grad) or is that something that the jit is generally not supposed to do and it is only meant for deployment?
It is read-only (and used as meta-information in compilation). Instead, you can detach() tensors, and for parameters - you’d have to set it from outside.
tjoseph:
If tracing would include tensor attributes I could just pass my whole forward pass to jit.trace which would properly separate optimization from model semantics. jit.script would be fine since it can be used at least with decorators, but unfortunately the missing support of torch.distributions does not allow jit.script.
Dunno, I find jit.trace too dangerous / limiting to use on module trees. As for torch.distributions, I mostly stopped using these wrappers when coding for performance. |
st180763 | Edit: Sorry, I didn’t see you already reported it as an issue and got more input there, too.
Best regards
Thomas |
st180764 | Hey, thank you for updating!
I see what you mean by my example not making sense. I just wanted to showcase the toggling of the requires_grad flag.
I am wondering why .detach() is supported, but not requires_grad. To me both are orthogonal ways to influence gradient backpropagation. I guess it is just a practical limitation of how tracing works?
Why do you regard jit.trace as being dangerous/limiting? I am learning a lot here. Btw if you have some resource where I can read up on the practical use of JIT I would appreciate you pointing me there! |
st180765 | param.requires_grad_(b) may also work, but frankly this may not be an anticipated use case. Intuitively, you own tensors created in functions, and can toggle requires_grad freely, but parameters are owned by a module, so it is not clear if toggling works cleanly from inside.
tjoseph:
Why do you regard jit.trace as being dangerous/limiting?
as you said above, "trace only tracks tensor operations", and this limits the passable Python code pretty severely. "dangerous" is associated with the hardcoding of non-tensor values as constants, though this issues warnings.
Contrived example that fails (late) with jit.trace:
k = x.size(0)
ls = [x[i] for i in range(k)] |
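To spell out what goes wrong there (a sketch I put together, not from the original post): trace converts x.size(0) to a plain Python value, so the loop length is baked into the graph for whatever shape was used at trace time.
import torch

def frames(x):
    k = x.size(0)
    return torch.stack([x[i] for i in range(k)])  # Python loop: unrolled during tracing

traced = torch.jit.trace(frames, torch.randn(3, 2))
print(traced(torch.randn(5, 2)).shape)  # torch.Size([3, 2]): the length 3 was baked in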
st180766 | googlebot:
k = x.size(0)
ls = [x[i] for i in range(k)]
Yes. So while requires_grad_ (always use the method!) could probably be supported, the list size here inherently is invisible to PyTorch in scripting.
The summary I always have is (I think it’s also in ch 15 of our book):
In tracing you can do anything you want, but the JIT won’t (and can’t) try to understand it all.
In scripting you can only do what the JIT understands, but the JIT will do it all.
This is inherent and to me it means that the second part would want to be extended in scope.
Best regards
Thomas |
st180767 | Thank you again for the explanation @tom @googlebot !
Something like this would solve my problem just fine:
c = MyModule()
# Tell tracer to also track some parameters
forward = jit.trace(c, (inputs,), track=c.parameters())
To me tracing looks like the more elegant solution if the requirements for tracing are met, which they are for me except that I need to record the parameters' requires_grad.
I guess currently there is nothing much I can do except either change my code to support scripting or not use the jit. |
st180768 | tom:
requires_grad_ (always use the method!)
Could you tell me why to use the method? In PyTorch docs requires_grad is set directly all the time. |
st180769 | tjoseph:
Could you tell me why to use the method? In PyTorch docs requires_grad is set directly all the time.
First, let me admit that there is a modicum of taste here.
I sometimes misspell requires.
I find myself thinking of “set requires grad to true” as an operation I apply to a tensor rather than the change of a data member. In other words, setting the gradient requirement is more “typical use of an object-modifying-method” than "typical use of =" to me.
Best regards
Thomas |
st180770 | I just found the same issue while using torch.jit.script in PyTorch 1.7.1. The easiest workaround is to just call x.requires_grad_(True) to change the value. |
st180771 | I am using scripting to convert a model to a TorchScript module. How can I disable the grad calculation in the predict method of the module?
I tried the standard no_grad() and set_grad_enabled() options. But it seems both are not supported in a jit.script_method. |
st180772 | Did you find a solution? I’m facing the same problem and getting OOM error (I can see when I loop over with torch.no_grad(): that the GPU memory increases) |
st180773 | I have succeeded in adding my custom primitives to PyTorch via RegisterOperators and RegisterPass from within PyBind11. Models from TorchVision that I load and transform using torch.jit.trace definitely have them — I see their console output and the output tensor bears clear marks of their work.
However, when I try to torch.jit.save the traced model and torch.jit.load it back some other time, my prims are nowhere to be seen.
What am I doing wrong? How do I use my own primitives in a network loaded using torch.jit.load? |
st180774 | Hi,
I can see a new module in development called torch.fx with symbolic trace capability.
Could you please explain the difference between torch.fx and torch.jit.script()?
Thanks & Regards |
st180775 | Solved by James_Reed in post #4
Hi @zetyquickly,
Could you please describe what does it mean?
This concept is introduced in the documentation: torch.fx — PyTorch 1.8.0 documentation. FX produces valid Python nn.Module instances from its Graph representation
Why does one need to translate Python to Python?
FX emphasizes gen… |
st180776 | Hello @mathmanu,
torch.fx is different from TorchScript in that it is a platform for Python-to-Python transformations of PyTorch code. TorchScript, on the other hand, is more targeted at moving PyTorch programs outside of Python for deployment purposes. In this sense, FX and TorchScript are orthogonal to each other, and can even be composed with each other (e.g. transform PyTorch programs with FX, then subsequently export to TorchScript for deployment).
Please stay tuned for more information about FX early next year. Note that FX is very unstable at this point and we do not recommend building off of the code in master at this time |
st180777 | James_Reed:
Python-to-Python transformations
Could you please describe what that means?
Python code generation is what makes FX a Python-to-Python (or Module-to-Module) transformation toolkit. For each Graph IR, we can create valid Python code matching the Graph’s semantics. This functionality is wrapped up in GraphModule 1, which is a torch.nn.Module instance that holds a Graph as well as a forward method generated from the Graph.
Why does one need to translate Python to Python? Is IR the same IR that we acquire during jit.trace? |
st180778 | Hi @zetyquickly,
Could you please describe what that means?
This concept is introduced in the documentation: torch.fx — PyTorch 1.8.0 documentation 52. FX produces valid Python nn.Module instances from its Graph representation
Why does one need to translate Python to Python?
FX emphasizes generating Python code so that it can be used within the existing PyTorch eager ecosystem. That is to say, code transformed by FX is not locked into one specific runtime (e.g. TorchScript) and all the normal tooling that can be used with normal PyTorch modules can be used with FX-generated modules.
Is IR the same IR that we acquire during jit.trace ?
No, the IR is not the same as that produced by jit.trace, intentionally so. FX is an entirely separate system that is superior to jit.trace in several ways:
FX deeply integrates into the Python runtime, so it can better acquire accurate program representations, whereas jit.trace often silently captures wrong representations
FX’s Graph IR is much simpler since it represents the code at a higher level (torch.nn module calls are preserved rather than traced through). This is much easier to work with and understand and we have seen major productivity improvements using this IR |
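To make the Python-to-Python point concrete, here is a minimal sketch against the public torch.fx API from the 1.8 docs (the toy module is mine, not from the thread):
import torch
import torch.fx
from torch import nn

class M(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

gm = torch.fx.symbolic_trace(M())  # GraphModule: an nn.Module that owns a Graph
print(gm.graph)                    # high-level Graph IR (relu and add are visible as calls)
print(gm.code)                     # valid Python regenerated from the Graph
scripted = torch.jit.script(gm)    # it is still a module, so it can go through TorchScript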
st180779 | I have a loss function which is torchscripted using torch.jit.script decorator. Before torchscripting it works fine but after torchscripting it fails in the backward path due to some in-place operation. However, I cannot find any in-place operation in my loss function. Any idea? |
st180780 | Hi,
In the TorchScript sample RNN 4 a FusionGroup was shown when printing the graph for LSTMCell, but when I try to run the TorchScript LSTMCell from the site and print the graph, it does not show a FusionGroup. Is that normal, or is something missing to show the FusionGroup, like torch._C._jit_set_profiling_executor(False)? |
st180781 | Solved by googlebot in post #2
It is TensorExprGroup with the new executor, and these should appear after the second invocation (for CUDA; CPU fusion is off by default).
torch._C._jit_set_profiling_executor(False) may also be a good idea with varying shapes. |
st180782 | It is TensorExprGroup with the new executor, and these should appear after the second invocation (for CUDA; CPU fusion is off by default).
torch._C._jit_set_profiling_executor(False) may also be a good idea with varying shapes. |
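A quick way to check this yourself (a sketch; it assumes a CUDA build and a recent PyTorch, and the group name depends on which fuser is active):
import torch

@torch.jit.script
def fused(x, y):
    return torch.sigmoid(x * y + y)

x = torch.randn(32, 256, device="cuda")
y = torch.randn(32, 256, device="cuda")
for _ in range(3):
    fused(x, y)  # profiling runs; fusion only shows up after warm-up
# look for TensorExprGroup (new executor) or FusionGroup (legacy fuser) nodes
print(torch.jit.last_executed_optimized_graph())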
st180783 | While exploring a torch.jit.trace() of the standard ResNet50 from torchvision.models I stumbled upon a peculiar structure at the very end of the network:
<...>
|
________v_________ ________ _________
| | | | | |
| aten::avg_pool2d | | Int(0) | | Int(-1) |
|__________________| |________| |_________|
| | |
| _____v______ __________v__________
| | | | |
|--------------> aten::size |---> prim::ListConstruct |
| |____________| |_____________________|
| |
| ____________ |
| | | |
`--------------> aten::view <--------------'
|____________|
|
v _________
#==============# <--|_weights_|
# aten::linear # _________
#==============# <--|_biases__|
What exactly is it needed for?
Why cannot aten::avg_pool2d be plugged directly into aten::linear?
At the moment I'm not too well-versed in PyTorch — but in those ML frameworks I'm familiar with, nothing prevents such direct connections between 2D Average Pooling and Inner Product. |
st180784 | Solved by tom in post #2
I think with more recent PyTorch/TorchVision you’d get aten::flatten instead – this a traced version of torch.view(x, [x.size(0), -1]) to move from n,c,h,w to n,features.
Keras does this flattening implicitly when you use “GlobalAveragePooling” because it actually removes the spatial dimensions whi… |
st180785 | I think with more recent PyTorch/TorchVision you'd get aten::flatten instead – this is a traced version of torch.view(x, [x.size(0), -1]) to move from n,c,h,w to n,features.
Keras does this flattening implicitly when you use “GlobalAveragePooling” because it actually removes the spatial dimensions which PyTorch’s pooling does not. |
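A minimal sketch of the equivalence (the 2048-channel shape is just ResNet50's feature width after global average pooling):
import torch

x = torch.randn(8, 2048, 1, 1)   # N, C, H, W after the global average pooling
a = x.view(x.size(0), -1)        # keep the batch dim, merge the rest
b = torch.flatten(x, 1)          # what newer torchvision ResNet code uses
assert a.shape == b.shape == (8, 2048)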
st180786 | torch.view(x, [x.size(0), -1])
Do I get it right that this just tells torch.view to keep the batch count and merge the remaining dimensions of x together?
Damn, this explains so much. Thank you! |
st180787 | hidefromkgb:
Do I get it right that this just tells torch.view to keep the batch count and merge the remaining dimensions of x together?
Yes, this is exactly what it does. |
st180788 | There is a quantized network in ONNX format that I'd like to convert to TorchScript.
I know that such conversions are usually done backwards, from TS to ONNX, but I want to test my TS integration on some quantized models and I was told the easiest way is to use ONNX.
Does ONNX use the same execution pipeline as TorchScript which I can interface via RegisterPass and RegisterOperators?
If not, then how do I perform an ONNX → TorchScript conversion?
If this is impossible, how do I acquire and trace a quantized, preferably asymmetric, ResNet50 model? |
st180789 | Hello,
can I use torch.jit.trace after I have already assigned the optimizer? I am currently seeing a loss explosion when I add torch.jit.trace.
Example pseudo code:
model = Model() # Model is an nn.Module
loss_fn = LossFn(model) # LossFn is an nn.Module without any parameters
optimizer = SGD(model.parameters())
batch = next(dataloader)
loss_fn = torch.jit.trace(loss_fn, (batch,))
# Training
for batch in dataloader:
loss = loss_fn(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad() |
st180790 | Hi,
I am wondering what the expected time for kernel calls is. PyTorch kernel calls are asynchronous, so the GPU will do work while the CPU can already launch new kernels. I would expect something like the forward pass to be very quick in Python as long as there are no synchronization points. Obviously, when I need the result, I would have to wait for the GPU.
However, when I benchmarked this by simply measuring time in python for different operations it seemed like these were blocking. E.g.
prediction.log_prob(target).mean()
takes longer than the forward pass of my model for some reason.
Any idea whether this is to be expected? |
st180791 | How did you measure these operations? E.g. are you seeing a long kernel launch for this stand-alone operation or as part of a larger workload etc.? |
st180792 | Sorry for not being clear. Here are some examples that I ran on a GTX1080TI with PyTorch 1.8.
Maybe I am understanding something wrong about asynchronous execution.
from timeit import default_timer as timer
import torch
from torch import nn, jit
from torch.distributions import Normal
from tqdm import tqdm
class Timer:
def __init__(self, name=None):
self.name = name
self.start = None
self.end = None
def __enter__(self):
self.start = timer()
def __exit__(self, *args, **kwargs):
end = timer()
duration = end - self.start
print("DURATION:", self.name, duration)
Example 1: Simple multiplication
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
with Timer("kernel launch"):
for i in tqdm(range(512)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = input_gpu * parameter_gpu
torch.cuda.synchronize()
DURATION: kernel launch 0.040197614999669895
DURATION: kernel complete 14.410601890999715
Example 2: Simple multiplication with backward pass
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
with Timer("kernel launch + cpu->gpu transfer"):
with Timer("kernel launch"):
for i in tqdm(range(512)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = input_gpu * parameter_gpu
result.mean().backward()
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
torch.cuda.synchronize()
DURATION: kernel launch 5.375270492999334
DURATION: kernel launch + cpu->gpu transfer 5.3755872629990336
DURATION: kernel complete 6.740538608999486
Example 3: torch.distributions.Normal .sample() seems to block:
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
with Timer("kernel launch"):
for i in tqdm(range(512)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = Normal(input_gpu, 1).sample() * parameter_gpu
torch.cuda.synchronize()
DURATION: kernel launch 5.955725089000225
DURATION: kernel complete 7.688505447999887
Example 4: torch.normal() launches quicker?
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
with Timer("kernel launch"):
for i in tqdm(range(512)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = (torch.normal(input_gpu, std=1)) * parameter_gpu
torch.cuda.synchronize()
DURATION: kernel launch 1.862492633000329
DURATION: kernel complete 7.237628412999584
Example 5: log_prob also blocks?
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
with Timer("kernel launch"):
for i in tqdm(range(512)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = Normal(input_gpu, 1).log_prob(parameter_gpu)
torch.cuda.synchronize()
DURATION: kernel launch 6.612539947000187
DURATION: kernel complete 8.380056750000222 |
st180793 | Here, distributions are blocking because of scalar parameters that are being wrapped in tensors (and copied to the GPU, blocking Python) - this is avoidable with pre-created tensors (even better is to use simplified formulas for scale=1).
I'm not sure about the backward() snippet, I think writing parameter_cpu.grad is blocking. But a sync before starting backprop may also make sense… |
st180794 | Update for example 5 with @googlebot explanation. Still far from what I would expect…
with Timer("kernel complete"):
input_cpu = torch.rand(32, 512, 512).pin_memory()
parameter_cpu = torch.rand(32, 512, 512, requires_grad=True).pin_memory()
stddev = torch.ones_like(input_cpu, device="cuda")
with Timer("kernel launch"):
for i in tqdm(range(N_ITERATIONS)):
input_gpu = input_cpu.to("cuda", non_blocking=True)
parameter_gpu = parameter_cpu.to("cuda", non_blocking=True)
result = Normal(input_gpu, stddev).log_prob(parameter_gpu)
torch.cuda.synchronize()
With N_ITERATIONS = 512
DURATION: kernel launch 4.855967344999954
DURATION: kernel complete 7.614574080998864
With N_ITERATIONS = 2048
DURATION: kernel launch 23.592640275999656
DURATION: kernel complete 26.401106097000593 |
st180795 | I suspect you’re using 1.8 and this change affects blocking :
Enable distribution validation by default for torch.distributions (#48743)
This may slightly slow down some models. Concerned users may disable validation by using torch.distributions.Distribution.set_default_validate_args(False) or by disabling individual distribution validation via MyDistribution(…, validate_args=False).
FYI, previous version doesn’t block in this snippet, unless validate_args=[“scale”] is added |
st180796 | I have to correct myself. These are some results for example 5:
With set_default_validate_args(True) and N_ITERATIONS=32
DURATION: kernel launch 0.4117265930000258
DURATION: kernel complete 1.801716031000069
With set_default_validate_args(False) and N_ITERATIONS=32
DURATION: kernel launch 0.019914979000077437
DURATION: kernel complete 1.801976835000005
With set_default_validate_args(True) and N_ITERATIONS=512
DURATION: kernel launch 6.590126628999997
DURATION: kernel complete 8.072248363000085
With set_default_validate_args(False) and N_ITERATIONS=512
DURATION: kernel launch 4.870119396000064
DURATION: kernel complete 7.648874833000036
Is there something else I am missing? There seems to be a big difference in launch time depending on N_ITERATIONS. |
st180797 | I guess you’re overloading the cuda launch queue. Or garbage collection triggers cuda sync somehow. |
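For anyone timing individual ops in a loop like this, CUDA events avoid guessing where the synchronization happens (a sketch, assuming a GPU is available; the matmul is just a placeholder for the op under test):
import torch

x = torch.randn(1024, 1024, device="cuda")
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
out = x @ x                  # the op under test; substitute your own here
end.record()
torch.cuda.synchronize()     # wait for the GPU before reading the events
print(start.elapsed_time(end), "ms on the GPU")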
st180798 | Hi,
does anyone know whether there is some information somewhere about whether/when torch.Distributions will receive jit support?
Best
Tim |
st180799 | As far as I know, you can use torch.jit.trace on torch.Distributions. And there are tests covering this use case. torch.jit.script, however, doesn’t fully work with torch.Distributions yet. Hope this helps. |