st181100 | Hello everyone.
I am working with Torchscript on PyTorch 1.5.0.
I want to automatically grab input shape & dtype information from a deserialized, traced torchscript model with the following code line:
[(i.debugName().split('.')[0], i.type().sizes(), i.type().scalarType()) for i in
list(reloaded_module.graph.inputs())[1:]]
The problem is that this gives me None for both the size and the scalarType when I deserialize the model.
On the other hand, when I execute the same line of code on a non-deserialized model (e.g. a just-in-time traced model that has not been serialized and deserialized), I get the desired results.
You should be able to reproduce this through just a few lines of code.
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True)
input_batch = "get an image from somewhere"  # preprocess the image into a batch tensor before tracing
traced_path = "./test.pt"
traced_module = torch.jit.trace(model,input_batch)
traced_module.save(traced_path)
reloaded_module = torch.jit.load(traced_path)
print([(i.debugName().split('.')[0], i.type().sizes(), i.type().scalarType()) for i in list(traced_module.graph.inputs())[1:]])
print([(i.debugName().split('.')[0], i.type().sizes(), i.type().scalarType()) for i in list(reloaded_module.graph.inputs())[1:]])
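One possible workaround, sketched here under the assumption that the shapes are only needed as metadata: record them yourself at trace time, while the graph still carries complete tensor types, and store them in a sidecar file next to the serialized model (the file name is made up for illustration).
import json

meta = [(i.debugName().split('.')[0], list(i.type().sizes()), i.type().scalarType())
        for i in list(traced_module.graph.inputs())[1:]]   # using traced_module from above

with open(traced_path + ".meta.json", "w") as f:
    json.dump(meta, f)

# after torch.jit.load(traced_path), read the metadata back:
with open(traced_path + ".meta.json") as f:
    restored_meta = json.load(f)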
Any suggestions on solutions? Am I missing something, or is this actually a bug in Torch serialization/deserialization?
Best regards,
Knight3 |
st181101 | I went through the official doc of TorchScript (https://pytorch.org/docs/stable/jit.html) but didn't clearly understand what the advantage of Tracing over Scripting is.
As far as I understand, both jit.script and jit.trace can convert existing nn.Module instances into TorchScript. However, Tracing cannot handle control flow such as if/for, and it also requires an example input. The inability to handle control flow sounds like a huge deal breaker.
The only disadvantage of Scripting I noticed is that it cannot handle several builtin modules like RNN/GRU.
Are there any reasons to use Tracing over Scripting?
Thank you.
BTW, can LSTM be scripted? It is a little odd that RNN and GRU cannot be scripted but LSTM can. |
st181102 | Tracing lets you use dynamic behavior in Python since it just records tensor operations. This may work better for your use case, but as you pointed out there are some fundamental limitations such as the inability to trace control flow / Python values. They both should be pretty easy to try out on your codebase (i.e. call trace() or script() on a module). If tracing does not work out of the box it is unlikely that you can maintain the semantics of your model and get it working under tracing (i.e. if you use control flow). Scripting may require some work to get your model using only features supported by the compiler, but you will probably be able to get it working with some code changes.
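To illustrate the "easy to try out" point, both entry points can be exercised on a small module in a few lines (a minimal sketch; TinyModel is made up here):
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

m = TinyModel()
traced = torch.jit.trace(m, torch.randn(2, 3))   # records the ops run for this example input
scripted = torch.jit.script(m)                   # compiles the Python source directly
print(traced.graph)
print(scripted.graph)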
GRU and LSTM can both be compiled on master/nightly (AFAIK only LSTM is in the v1.2.0 release); we haven't had many requests for RNN yet. If you'd like to see it (or something else in PyTorch) be script-able that isn't already, please file an issue on GitHub. |
st181103 | @driazati Thank you for reply.
Do you mean that Tracing can handle almost all Python features/libraries except for for/while/if-style control flow, while Scripting can only handle a subset of Python features (aka TorchScript)? I still suspect there are many Python features that cannot be used with Tracing, and therefore the difference between Tracing and Scripting is very small. Could you kindly give me an example where we should use Tracing over Scripting? |
st181104 | Tracing can handle anything that uses only PyTorch tensors and PyTorch operations. If someone passed a PyTorch tensor to a Pandas dataframe and did some operations, tracing wouldn’t capture that (though neither would script at this point), so there are limitations. If the only data flowing around your computations are tensors and there is no control flow, tracing is probably the way to go. Otherwise, use scripting.
The pytext library uses a mix of scripting and tracing, and it all generally works well since they can be mixed together pretty seamlessly. |
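A minimal sketch of that mixing (the module and tensor shapes are made up here): trace the tensor-only part, and script the wrapper that needs control flow.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    def forward(self, x):
        return torch.relu(x)

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        # the tensor-only part is traced...
        self.backbone = torch.jit.trace(Backbone(), torch.randn(1, 4))

    def forward(self, x, double: bool):
        y = self.backbone(x)
        # ...while the wrapper, which has control flow, is scripted
        if double:
            y = y * 2
        return y

mixed = torch.jit.script(Wrapper())
print(mixed(torch.randn(1, 4), True))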
st181105 | I have got similar confusion.
It seems that even if "the only data flowing around your computations are tensors and there is no control flow", we can still use scripting. Does this mean we can actually use scripting in all cases? Is there any speed difference between the two?
Thanks. |
st181106 | Right, the only thing that would work in tracing but not scripting is use of Python language features and dynamic behavior that script mode doesn’t support. Since scripting compiles your code, you may have to do some work to make the compiler happy (e.g. add type annotations). But it is easy to try, just pass your module to torch.jit.script. The two compile to the same IR under the hood, so the speed should be about the same. |
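A small sketch of the "make the compiler happy" part mentioned above: where an argument is not a Tensor, scripting needs an explicit annotation (the function here is made up for illustration).
import torch
from typing import List

@torch.jit.script
def total_rows(chunks: List[torch.Tensor]) -> int:
    # without the List[...] annotation the compiler would assume a Tensor argument
    n = 0
    for c in chunks:
        n += c.size(0)
    return n

print(total_rows([torch.zeros(2, 3), torch.zeros(4, 3)]))  # 6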
st181107 | Hi,
I don't understand what you mean by dynamic behaviour and Python language features that are supported in trace but not in script. Also,
what is the difference between Torchscript compiler and JIT compiler?
Scripting a function or `nn.Module` will inspect the source code,
compile it as TorchScript code using the TorchScript compiler.
Trace a function and return an executable
that will be optimized using just-in-time compilation.
I request you to explain those in detail.
Thanks. |
st181108 | Hi, would you be able to give an example of dynamic behavior / Python language features that script mode doesn’t support? It sounds a bit abstract to me compared to the control flow statements that I know only work with script mode. |
st181109 | It was also unclear to me in which cases trace would be superior to or more useful than scripting. I finally found an example in my own code:
import torch
from torch import nn

class MyModule(nn.Module):
    def __init__(self, return_b=False):
        super().__init__()
        self.return_b = return_b

    def forward(self, x):
        a = x + 2
        if self.return_b:
            b = x + 3
            return a, b
        return a

model = MyModule(return_b=True)

# Will work
traced = torch.jit.trace(model, (torch.randn(10, ), ))

# Will fail
scripted = torch.jit.script(model)
This can easily be changed to be scriptable, but if you know the control flow is static once the model is exported, tracing will work just fine. |
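As a quick check of what was actually captured above, printing the traced code makes the baked-in behavior visible: only the branch taken during tracing (return_b=True) remains, and the if itself is gone.
print(traced.code)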
st181110 | Is there a way to retrieve the device type and device ID of a model, from inside a custom JIT graph pass? In other words, if a customer does a model.to("cuda:0"), is there any way to retrieve “cuda, 0” from inside a graph pass? Please let me know. |
st181111 | This depends on the state of the graph.
For example, in a freshly traced graph, the values (mostly) have a complete tensor type, and that includes the device. Similarly, dimensioned tensor types (with scalar type, dimension (1d, 2d, 3d, etc.), device, and requires_grad) carry that information, but more incomplete values don't.
Best regards
Thomas |
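To make that concrete from Python (a small sketch, assuming a CUDA build and device; this only illustrates where the information lives, it is not the C++ pass itself): the types attached to the values of a freshly traced graph can be inspected directly, and complete tensor types carry the device, as noted above.
import torch

m = torch.jit.trace(torch.nn.Linear(4, 4).cuda(), torch.randn(2, 4, device="cuda"))
for v in m.graph.inputs():
    # complete TensorTypes printed here include scalar type, sizes and device (e.g. cuda:0)
    print(v.debugName(), v.type())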
st181112 | This would be done in a custom graph pass during inference. The graph pass only has a reference to std::shared_ptr<Graph>. But assuming this is during inference, is it possible to retrieve the device type and device ID (in the case of CUDA)? This could be a scripted or a traced model. |
st181113 | Hello,
I am playing with a very simple dense layer implementation using torch.addmm, and it seems that torch.jit.trace transforms the addmm op into a sequence of mm and add ops, leading to a performance drop on CPU:
import torch
from torch.autograd import profiler

torch.set_num_threads(1)

def dense_layer(input, w, b):
    return torch.addmm(input=b, mat1=input, mat2=w)

if __name__ == '__main__':
    torch.random.manual_seed(1234)
    a = torch.randn(100000, 10)
    b = torch.randn(10, 10)
    c = torch.randn(10)
    with profiler.profile() as prof:
        for i in range(1000):
            dense_layer(a, b, c)
    print(prof.key_averages().table(sort_by='cpu_time_total', row_limit=5))
    traced = torch.jit.trace(dense_layer, (a, b, c))
    with profiler.profile() as prof2:
        for i in range(1000):
            traced(a, b, c)
    print(prof2.key_averages().table(sort_by='cpu_time_total', row_limit=5))
And the output of this script on EC2 Windows with Intel Xeon E5-2686 v4 @ 2.30GHz:
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
addmm 99.89% 5.603s 100.00% 5.609s 5.609ms 1000
expand 0.05% 2.927ms 0.09% 4.772ms 4.772us 1000
as_strided 0.03% 1.845ms 0.03% 1.845ms 1.845us 1000
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
Self CPU time total: 5.609s
----------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
----------- --------------- --------------- --------------- --------------- --------------- ---------------
add 65.54% 5.190s 65.81% 5.212s 5.212ms 1000
mm 33.91% 2.685s 34.19% 2.708s 2.708ms 1000
empty 0.29% 23.118ms 0.29% 23.118ms 11.559us 2000
----------- --------------- --------------- --------------- --------------- --------------- ---------------
Self CPU time total: 7.919s
Same script run on Fedora 32 with Intel Core i7-8700K CPU @ 3.70GHz:
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
addmm 99.87% 1.863s 100.00% 1.866s 1.866ms 1000
expand 0.06% 1.164ms 0.10% 1.895ms 1.895us 1000
as_strided 0.04% 731.363us 0.04% 731.363us 0.731us 1000
-------------- --------------- --------------- --------------- --------------- --------------- ---------------
Self CPU time total: 1.866s
----------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
----------- --------------- --------------- --------------- --------------- --------------- ---------------
add 73.35% 1.430s 73.40% 1.431s 1.431ms 1000
mm 26.50% 516.765ms 26.60% 518.530ms 518.530us 1000
empty 0.09% 1.726ms 0.09% 1.726ms 0.863us 2000
----------- --------------- --------------- --------------- --------------- --------------- ---------------
Self CPU time total: 1.950s
On the Fedora machine the difference is less pronounced, but it is still there. What is the reason for this transformation? At least on CPU, both mm and addmm call the same gemm, so it seems more reasonable to expand the vector to cover the whole matrix and call gemm afterwards. Is it because of a focus on GPU? Is there any way to produce a CPU-efficient trace for such a case? |
st181114 | Solved by tom in post #2
There is a secret linear function
def dense_layer(input, w, b):
    return torch.ops.aten.linear(input, w, b)
more seriously, I think that whatever causes addmm to be split could be considered a bug. |
st181115 | There is a secret linear function
def dense_layer(input, w, b):
    return torch.ops.aten.linear(input, w, b)
more seriously, I think that whatever causes addmm to be split could be considered a bug. |
st181116 | The secret function is also being split in trace mode.
I will raise the issue then; however, this does not look unintentional. |
st181117 | Ah, my bad. If you script that, it works. (even if you trace the scripted function, I think) |
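Putting the two replies together, a quick check (a sketch, not from the thread) is to script the linear-based version and inspect its graph, which keeps a single aten::linear call instead of an mm/add pair:
import torch

@torch.jit.script
def dense_layer(input, w, b):
    # note: linear treats w as (out_features, in_features), i.e. it multiplies by w.t()
    return torch.ops.aten.linear(input, w, b)

print(dense_layer.graph)  # shows one aten::linear node rather than mm followed by add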
st181118 | Moreover, running torch.jit.script(dense_layer) with the former implementation raises an error:
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<string>", line 3, in dense_layer
def addmm(self: Tensor, mat1: Tensor, mat2: Tensor, beta: number = 1.0, alpha: number = 1.0):
return self + mat1.mm(mat2)
~~~~~~~ <--- HERE
def batch_norm(input : Tensor, running_mean : Optional[Tensor], running_var : Optional[Tensor], training : bool, momentum : float, eps : float) -> Tensor:
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Edit: I was just using the wrong signature without keywords. |
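For reference, the positional form that matches the schema shown in the traceback (self, mat1, mat2) would look like this:
def dense_layer(input, w, b):
    # addmm(self, mat1, mat2): computes b + input @ w, with b broadcast
    return torch.addmm(b, input, w)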
st181119 | Hi, I have a module composed of conv2d and relu.
In profiling mode, we can find one prim::DifferentiableGraph node in the scripted module. However, no prim::DifferentiableGraph is found if we trace this module.
Could someone explain to me why there is this different behaviour between script and trace?
The code:
from __future__ import division
import argparse
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(MyModule, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

def test(mode):
    print("*" * 10, mode, "*" * 10)
    ConvRelu = MyModule(3, 32, kernel_size=3, stride=1)
    x = torch.randn((1, 3, 8, 8))
    x.requires_grad = True
    if mode == 'script':
        m = torch.jit.script(ConvRelu)
    else:
        m = torch.jit.trace(ConvRelu, x)
    print('Conv2d+Relu Graph:\n', m.graph_for(x))
    print('Conv2d+Relu Graph:\n', m.graph_for(x))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--mode', '-m', required=True, choices=['script', 'trace'], help="to script or to trace the module")
    args = parser.parse_args()
    torch._C._jit_set_profiling_mode(True)
    torch._C._jit_set_profiling_executor(True)
    test(args.mode)
Script:
python script_trace.py -m script
********** script **********
Conv2d+Relu Graph:
graph(%self : __torch__.MyModule,
%x.1 : Tensor):
%2 : int[] = prim::Constant[value=[0, 0]]()
%3 : int[] = prim::Constant[value=[1, 1]]()
%4 : int = prim::Constant[value=1]() # /home/pytorch/torch/nn/modules/conv.py:343:47
%5 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self)
%6 : Tensor = prim::GetAttr[name="weight"](%5)
%7 : Tensor? = prim::GetAttr[name="bias"](%5)
%8 : Tensor = prim::profile(%x.1)
%9 : Tensor = prim::profile(%6)
%10 : Tensor = aten::conv2d(%8, %9, %7, %3, %2, %3, %4) # /home/pytorch/torch/nn/modules/conv.py:345:15
%11 : Tensor = prim::profile(%10)
%result.2 : Tensor = aten::relu(%11) # /home/pytorch/torch/nn/functional.py:1063:17
%13 : Tensor = prim::profile(%result.2)
= prim::profile()
return (%13)
Conv2d+Relu Graph:
graph(%self : __torch__.MyModule,
%x.1 : Tensor):
%4 : int = prim::Constant[value=1]() # /home/pytorch/torch/nn/modules/conv.py:343:47
%3 : int[] = prim::Constant[value=[1, 1]]()
%2 : int[] = prim::Constant[value=[0, 0]]()
%21 : int = prim::BailoutTemplate_0()
%18 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%21, %x.1, %self)
%5 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self)
%6 : Tensor = prim::GetAttr[name="weight"](%5)
%19 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%21, %6, %5, %18)
%7 : Tensor? = prim::GetAttr[name="bias"](%5)
%10 : Tensor = aten::conv2d(%18, %19, %7, %3, %2, %3, %4) # /home/pytorch/torch/nn/modules/conv.py:345:15
%20 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%21, %10)
%result.2 : Float(1, 32, 6, 6) = prim::DifferentiableGraph_1(%20)
return (%result.2)
with prim::BailoutTemplate_0 = graph(%self : __torch__.MyModule,
%x.1 : Tensor):
%2 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%x.1, %self)
%3 : int[] = prim::Constant[value=[0, 0]]()
%4 : int[] = prim::Constant[value=[1, 1]]()
%5 : int = prim::Constant[value=1]() # /home/pytorch/torch/nn/modules/conv.py:343:47
%6 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self)
%7 : Tensor = prim::GetAttr[name="weight"](%6)
%8 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%7, %6, %2)
%9 : Tensor? = prim::GetAttr[name="bias"](%6)
%10 : Tensor = aten::conv2d(%2, %8, %9, %4, %3, %4, %5) # /home/pytorch/torch/nn/modules/conv.py:345:15
%11 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%10)
%result.2 : Float(1, 32, 6, 6) = aten::relu(%11) # /home/pytorch/torch/nn/functional.py:1063:17
return (%result.2)
with prim::DifferentiableGraph_1 = graph(%0 : Float(1, 32, 6, 6)):
%result.3 : Float(1, 32, 6, 6) = aten::relu(%0) # /home/pytorch/torch/nn/functional.py:1063:17
return (%result.3)
We can find one prim::DifferentiableGraph node when we print the graph for the second time.
Trace:
python script_trace.py -m trace
********** trace **********
Conv2d+Relu Graph:
graph(%self.1 : __torch__.MyModule,
%input.1 : Tensor):
%7 : None = prim::Constant(), scope: __module.conv
%6 : int[] = prim::Constant[value=[1, 1]]()
%5 : int[] = prim::Constant[value=[0, 0]]()
%4 : bool = prim::Constant[value=0](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%3 : int = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%2 : bool = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%23 : int = prim::BailoutTemplate_0()
%20 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%23, %input.1, %self.1)
%8 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self.1)
%9 : Tensor = prim::GetAttr[name="weight"](%8)
%21 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%23, %9, %20)
%input : Tensor = aten::_convolution(%20, %21, %7, %6, %5, %6, %4, %5, %3, %4, %4, %2), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%22 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%23, %input)
%14 : Float(1, 32, 6, 6) = aten::relu(%22), scope: __module.relu # /home/pytorch/torch/nn/functional.py:1063:0
return (%14)
with prim::BailoutTemplate_0 = graph(%self.1 : __torch__.MyModule,
%input.1 : Tensor):
%2 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%input.1, %self.1)
%3 : bool = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%4 : int = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%5 : bool = prim::Constant[value=0](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%6 : int[] = prim::Constant[value=[0, 0]]()
%7 : int[] = prim::Constant[value=[1, 1]]()
%8 : None = prim::Constant(), scope: __module.conv
%9 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self.1)
%10 : Tensor = prim::GetAttr[name="weight"](%9)
%11 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%10, %2)
%input : Tensor = aten::_convolution(%2, %11, %8, %7, %6, %7, %5, %6, %4, %5, %5, %3), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%13 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%input)
%14 : Float(1, 32, 6, 6) = aten::relu(%13), scope: __module.relu # /home/pytorch/torch/nn/functional.py:1063:0
return (%14)
Conv2d+Relu Graph:
graph(%self.1 : __torch__.MyModule,
%input.1 : Tensor):
%7 : None = prim::Constant(), scope: __module.conv
%6 : int[] = prim::Constant[value=[1, 1]]()
%5 : int[] = prim::Constant[value=[0, 0]]()
%4 : bool = prim::Constant[value=0](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%3 : int = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%2 : bool = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%23 : int = prim::BailoutTemplate_0()
%20 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%23, %input.1, %self.1)
%8 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self.1)
%9 : Tensor = prim::GetAttr[name="weight"](%8)
%21 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%23, %9, %20)
%input : Tensor = aten::_convolution(%20, %21, %7, %6, %5, %6, %4, %5, %3, %4, %4, %2), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%22 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%23, %input)
%14 : Float(1, 32, 6, 6) = aten::relu(%22), scope: __module.relu # /home/pytorch/torch/nn/functional.py:1063:0
return (%14)
with prim::BailoutTemplate_0 = graph(%self.1 : __torch__.MyModule,
%input.1 : Tensor):
%2 : Float(1, 3, 8, 8) = prim::BailOut[index=0](%input.1, %self.1)
%3 : bool = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%4 : int = prim::Constant[value=1](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%5 : bool = prim::Constant[value=0](), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%6 : int[] = prim::Constant[value=[0, 0]]()
%7 : int[] = prim::Constant[value=[1, 1]]()
%8 : None = prim::Constant(), scope: __module.conv
%9 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv"](%self.1)
%10 : Tensor = prim::GetAttr[name="weight"](%9)
%11 : Float(32, 3, 3, 3) = prim::BailOut[index=1](%10, %2)
%input : Tensor = aten::_convolution(%2, %11, %8, %7, %6, %7, %5, %6, %4, %5, %5, %3), scope: __module.conv # /home/pytorch/torch/nn/modules/conv.py:346:0
%13 : Float(1, 32, 6, 6) = prim::BailOut[index=2](%input)
%14 : Float(1, 32, 6, 6) = aten::relu(%13), scope: __module.relu # /home/pytorch/torch/nn/functional.py:1063:0
return (%14)
In this case, there is no prim::DifferentiableGraph node in the graph.
PyTorch commit:
commit b58f89b2e4b4a6dc9fbc0c00e608de0f4db52267
changes made to the threshold:
diff --git a/torch/csrc/jit/runtime/graph_executor_impl.h b/torch/csrc/jit/runtime/graph_executor_impl.h
index a2fd10c..f4f43e2 100644
--- a/torch/csrc/jit/runtime/graph_executor_impl.h
+++ b/torch/csrc/jit/runtime/graph_executor_impl.h
@@ -40,8 +40,8 @@ bool getAutodiffSubgraphInlining();
// Tunable parameters for deciding when to create/keep subgraphs of
// differentiable code
-const size_t autodiffSubgraphNodeThreshold = 2;
-const size_t autodiffSubgraphInlineThreshold = 5;
+const size_t autodiffSubgraphNodeThreshold = 1;
+const size_t autodiffSubgraphInlineThreshold = 1;
Thanks. |
st181120 | Suppose I have the following graph and example code.
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, x):
        z = x.squeeze(0)
        z.add_(2)
        x.mul_(5)
        return x

if __name__ == "__main__":
    script = torch.jit.script(Net().eval())
    print(str(script))
    print("-" * 10)
    for node in script.graph.nodes():
        print(node)
Is the loop guaranteed to iterate over the nodes in an order that preserves the semantics of the in-place operations in the graph?
In this simple example it's enough that the add is looped over before the mul, but the general case is more important to me.
I do get the correct order when running this script, but I'm interested in whether this is guaranteed. |
st181121 | Solved by tom in post #2
Reordering is done while checking dependencies, including side effects:
Best regards
Thomas |
st181122 | Reordering is done while checking dependencies, including side effects:
github.com
pytorch/pytorch/blob/58a7e73a95865e5c02a581703c39edd671495a6c/torch/csrc/jit/ir/alias_analysis.h#L98-L108
// Move 'n' (already in the graph) after 'movePoint' in the topological order.
//
// Tries to preserve value dependencies, so other nodes might be moved. We
// make two guarantees about the postcondition of the node list:
// - `n` is directly after `movePoint`.
// - only nodes between `n` and `movePoint` have been moved.
//
// Returns `false` if it's impossible to move `n` after `MovePoint` without
// violating dependencies, otherwise executes the move and returns `true`
TORCH_API bool moveAfterTopologicallyValid(Node* n, Node* movePoint);
TORCH_API bool moveBeforeTopologicallyValid(Node* n, Node* movePoint);
Best regards
Thomas |
st181123 | Hello All,
I am trying to trace the fasterrcnn-resnet50-fpn model and save it in a .pt file, later to be read and used by libtorch. I went through the thread "jit trace of fasterRCNN" and saw that the PR to add scriptability of fasterRCNN was merged.
I am trying to trace it using the following code (PyTorch 1.6.0):
import torch
import torchvision
from PIL import Image
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
example = tuple(list(torch.rand((3, 640, 480))))
traced_Script = torch.jit.trace(model, example_inputs=example)
traced_Script.save("/Models/fasterrcnn.pt")
and run into the following error -
TypeError: forward() takes from 2 to 3 positional arguments but 4 were given
The torchvision documentation says we are supposed to provide the input as a list, so I went ahead with that.
Could anyone please tell me what could be going wrong?
TIA |
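Two observations, offered as a sketch rather than a definitive fix: tuple(list(torch.rand((3, 640, 480)))) unbinds the tensor along its first dimension into three separate tensors, so trace passes three positional arguments to forward(), which explains the arity error above. Since the detection models are scriptable (per the PR mentioned), scripting instead of tracing is one way around it; the scripted model then takes a List[Tensor] of 3xHxW images.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
scripted = torch.jit.script(model)       # script instead of trace
scripted.save("fasterrcnn.pt")

loaded = torch.jit.load("fasterrcnn.pt")
predictions = loaded([torch.rand(3, 640, 480)])  # list of images, each a 3xHxW tensor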
st181124 | Hi all,
Is it possible to define and save a custom operator which can potentially take any number of inputs of any type and similarly return any number of outputs of any type? For example, these could be nodes representing/containing a subgraph attribute.
Also, I wanted to understand why the PyTorch JIT IR doesn't support saving an operator node's attributes.
P.S: This topic is similar to Saving FusionGroup as part of ScriptModule |
st181125 | Hi Vamshi,
For any number of inputs/outputs of any type, have you tried List[Any]? Your operator would need some way of knowing the exact type of each element though, which can be done by passing some metadata input or attribute.
I am not entirely sure what you mean by “JIT IR doesn’t support saving operator node’s attributes”, since node attributes are saved as far as I know. Please clarify. |
st181126 | Hi @Yanan_Cao: Thanks for the response. I had a couple of questions about this
I couldn’t find any example of any operator with schema List[Any] in 1.4 or 1.5. Could you please point to any example that I can follow?
A List by definition expects all elements to be of the same type, right? So how could this be used if the operator takes in "int, bool, float, tensor, lists" in any order and any number? |
st181127 | Does it have to be 1.4 or 1.5?
Here is an example, not sure if it is in 1.5 or 1.6
github.com
pytorch/pytorch/blob/8094228f26cdef24529bffadd6a6e43b56df2d6e/torch/csrc/jit/runtime/register_prim_ops_fulljit.cpp#L674
// round((x_e + x_r)/2)*2 = x_e + round(x_r/2)*2, where x_e is an even integer,
// x_r is either 0.5 of 1.5, round(x_r/2)*2 results a 0 or 2, so the final
// result will always be a even number. Due to symmetricity, it also applies to
// negative cases.
double round_to_even(double a) {
// NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
return a - std::floor(a) == 0.5 ? (std::round(a * 0.5) * 2.0) : std::round(a);
}
RegisterOperators reg2({
// registered as Any[] so that heterogenous tuples can be called with len()
Operator(
"aten::len.any(Any[] a) -> int",
listLen,
aliasAnalysisFromSchema()),
// these ops have a specialized implementation for the list element type
#define CREATE_SPECIALIZED_LIST_OPS(decl_type, value_type) \
Operator( \
"aten::remove." decl_type "(" decl_type \
"[](a!) self, \
I think it can be heterogeneous. Indeed, it is not a common use case though; let us know if you hit any rough edges. |
st181128 | @Yanan_Cao: Thanks for the pointers. These were very helpful. Using the type Any seems like a flexible way. It wasn’t supported in 1.4.x though.
I am defining my custom operator as varargs.
my::Customop(...) -> (...)
This seems to work to save multiple inputs and multiple outputs of different types. Is this a recommended way to represent an operator, or should I look out for any corner case? |
st181129 | Given your requirement, I think this is probably the only way to represent the schema of your operator. Yes, you should look out for corner cases given that the version you are using is slightly older. |
st181130 | Is it possible to get the values of intermediate ops in the torchscript graph? Or is the only way to modify the source model to keep intermediate outputs in the actual output (or something like this)? |
st181131 | Hello
In order to measure the computation time of a deep learning model when using the GPU, we have to keep in mind the asynchronicity of GPU operations. Thus, we should use torch.cuda.Event or torch.autograd.profiler in Python.
I found torch::autograd::profiler in libtorch, but there is no API documentation explaining how to use the functions in that library. It looks like the C++ version of torch.autograd.profiler, which measures model computation time. However, while there are many examples of measuring computation time in Python, it is hard to find any example of measuring computation time on the GPU in C++.
(This might be because, unlike me, the majority of users are already familiar with C++ and the C++ library.)
Could someone provide a simple example of measuring the computation time of a TorchScript model, considering both CPU time and GPU time, in C++?
FYI, in order to correctly measure computation time in Python 3, please refer to this link:
How to measure time in PyTorch
I have seen lots of ways to measure time in PyTorch. But what is the most proper way to do it now (both for cpu and cuda)?
Should I clear the memory cache if I use timeit?
And is it possible to get accurate results if I’m computing on a cluster? And is it a way to make this results reproducible?
And what is better: timeit or profiler?
Many thanks!
I copied this question here |
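For the Python side the thread links to, the usual event-based pattern looks like this (a minimal sketch; `model` and `inputs` are assumed names and should already live on the GPU):
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
output = model(inputs)
end.record()

torch.cuda.synchronize()        # wait for the asynchronous GPU work to finish
print(start.elapsed_time(end))  # elapsed time in milliseconds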
st181132 | std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
model.forward(inputs).toTensor();
cudaDeviceSynchronize(); // from cuda_runtime.h; wait for pending GPU work before stopping the clock
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << " ms" << std::endl;
This is my solution for measuring the elapsed time of model computation when using the GPU.
If there is a better way, please suggest it.
Thanks! |
st181133 | I got an error, c10::Error: Couldn't find an operator for aten::dropout(Tensor input, float p, bool train), when I tried to load a TorchScript model with the libtorch C++ API. The linked libtorch library was compiled with the script/build_pytorch_android.sh script.
target platform: android arm64-v8a
pytorch tag: v1.6.0
If I link the program against the x86 prebuilt libtorch library and run it on an x86 platform, everything works well. |
st181134 | I'm using the same traced model in both C++ and Python, but I'm getting different outputs.
Python Code
import cv2
import numpy as np
import torch
import torchvision
from torchvision import transforms as trans
# device for pytorch
device = torch.device('cuda:0')
torch.set_default_tensor_type('torch.cuda.FloatTensor')
model = torch.jit.load("traced_facelearner_model_new.pt")
model.eval()
# read the example image used for tracing
image=cv2.imread("videos/example.jpg")
test_transform = trans.Compose([
    trans.ToTensor(),
    trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
resized_image = cv2.resize(image, (112, 112))
tens = test_transform(resized_image).to(device).unsqueeze(0)
output = model(tens)
print(output)
C++ code
#include <iostream>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

int main()
{
    try
    {
        torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt");
        model.to(torch::kCUDA);
        model.eval();
        cv::Mat visibleFrame = cv::imread("example.jpg");
        cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112));
        at::Tensor tensor_image = torch::from_blob(visibleFrame.data, { 1, visibleFrame.rows, visibleFrame.cols, 3 }, at::kByte);
        tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
        tensor_image = tensor_image.to(at::kFloat);
        tensor_image[0][0] = tensor_image[0][0].sub(0.5).div(0.5);
        tensor_image[0][1] = tensor_image[0][1].sub(0.5).div(0.5);
        tensor_image[0][2] = tensor_image[0][2].sub(0.5).div(0.5);
        tensor_image = tensor_image.to(torch::kCUDA);
        std::vector<torch::jit::IValue> input;
        input.emplace_back(tensor_image);
        // Execute the model and turn its output into a tensor.
        auto output = model.forward(input).toTensor();
        output = output.to(torch::kCPU);
        std::cout << "Embds: " << output << std::endl;
        std::cout << "Done!\n";
    }
    catch (std::exception e)
    {
        std::cout << "exception" << e.what() << std::endl;
    }
}
Python output:
tensor([[-1.6270e+00, -7.8417e-02, -3.4403e-01, -1.5171e+00, -1.3259e+00,
-1.1877e+00, -2.0234e-01, -1.0677e+00, 8.8365e-01, 7.2514e-01,
2.3642e+00, -1.4473e+00, -1.6696e+00, -1.2191e+00, 6.7770e-01,
-3.0230e-01, -1.5904e+00, 1.7433e+00, -7.8862e-01, 3.9448e-01,
-1.7189e+00, 1.1014e+00, -2.2981e+00, -5.1542e-01, -1.1593e-01,
6.5024e-01, -6.8557e-01, -7.0064e-01, -1.0784e+00, -7.7883e-01,
1.3773e+00, -1.5619e+00, -2.0540e-01, 1.2147e+00, 7.3867e-01,
1.1110e+00, 1.0524e-01, -1.1249e+00, -5.0620e-01, -5.2198e-01,
1.3556e+00, -1.5315e+00, 1.0446e-01, 9.1795e-01, 2.7186e+00,
-6.9594e-01, 7.4122e-01, 1.4757e+00, 1.2925e-01, -2.6900e-01,
1.5588e+00, -1.0609e+00, -2.0121e-01, -6.8162e-01, 1.1572e-01,
-1.7430e-01, -1.4399e+00, 1.4873e+00, 1.1772e+00, 8.0879e-01,
-1.3121e-01, -2.0003e+00, -7.4500e-02, -4.1007e-01, -1.2315e+00,
-1.1150e+00, -2.1979e+00, -1.2252e+00, -1.5357e+00, 2.3477e+00,
-1.9694e+00, 1.8873e+00, 3.2776e-01, -7.6457e-01, -1.7912e+00,
5.7192e-01, -2.5461e-01, -6.7235e-01, -3.1392e+00, -8.8816e-01,
-6.2070e-01, -7.2750e-01, 2.4999e-01, 1.1434e+00, 1.0114e+00,
3.4786e-01, 9.9722e-01, -4.8731e-01, -5.6572e-01, 1.2642e+00,
-4.4803e-01, -1.4394e+00, -1.8629e-01, 5.3590e-01, 1.4678e+00,
8.5147e-02, -2.0793e+00, -2.8566e-01, 2.9678e-01, -3.4123e-01,
3.1120e-01, 7.2252e-01, 2.7816e+00, 1.0683e+00, -3.1785e+00,
-6.7824e-01, -1.7665e-02, 5.2761e-01, 1.1141e-01, -1.6249e+00,
-2.0966e+00, 1.2752e+00, -8.8363e-01, -1.9442e+00, 1.5579e+00,
5.6738e-01, -3.4520e-01, 9.1841e-01, 7.5063e-02, -1.6585e+00,
2.5177e-01, -1.3581e+00, 3.4045e-01, 1.2807e+00, -3.7098e-01,
5.8744e-01, 9.2038e-01, -4.1502e-01, -1.4006e+00, 1.3954e+00,
-1.1765e+00, 1.3100e+00, 2.1202e+00, 3.0595e+00, 1.7250e-01,
-5.0746e-01, -1.1361e+00, 1.3437e+00, -8.2815e-02, -1.0477e+00,
8.5581e-01, 2.4402e+00, 1.6616e+00, -1.9156e+00, 4.2771e-01,
1.7761e+00, 1.5104e-01, -2.7037e-01, -6.1427e-02, -1.0483e+00,
-2.2830e-01, 3.9742e-01, -6.7260e-01, 2.4361e+00, -7.6196e-01,
1.0965e+00, 1.4753e+00, 8.5338e-01, 4.5726e-01, -1.8667e-01,
-1.1761e+00, -8.8821e-02, 1.3202e-01, 1.5002e+00, -4.9365e-01,
-1.0977e+00, -2.9104e-02, -3.5381e-01, -2.2095e-01, 9.3996e-01,
-1.0770e+00, 9.3767e-01, 2.2430e+00, -7.1536e-01, -7.0468e-01,
-2.1124e+00, -2.7435e+00, 1.7995e+00, 4.1688e-01, 4.2249e-01,
1.1487e-01, -1.1160e-01, 2.0495e+00, -1.6678e+00, -2.2310e+00,
3.1619e-01, -1.0459e-01, -5.3289e-01, -3.8420e-01, -1.3272e+00,
-4.5785e-01, -1.3917e+00, 1.3051e-01, -1.6694e+00, 2.3753e+00,
7.4885e-01, 2.2261e+00, 3.5489e-01, 2.2460e+00, -7.0667e-01,
-3.1920e-01, 2.7467e-01, -1.4723e-01, 2.2449e-01, 3.0860e-01,
-5.6551e-01, 1.3486e+00, -1.0313e+00, -1.8844e-01, -5.4212e-01,
-8.9150e-01, 2.1663e-01, -2.3341e-02, 5.4041e-01, -2.8048e-01,
-8.5421e-01, -1.3455e+00, -5.4566e-03, 3.3249e-01, 3.2633e-02,
-7.2821e-01, -2.1179e+00, -4.3671e-01, 1.6922e-01, -1.5222e+00,
-8.1076e-01, -4.5145e-01, 1.0031e+00, 3.8981e-01, -7.5108e-01,
1.2772e+00, 1.0216e+00, -8.8832e-02, 7.2678e-01, 2.3863e-01,
-7.2614e-01, -9.3102e-01, 1.0179e-01, -3.1820e-01, 1.7549e+00,
2.4568e-02, -2.4448e-01, 6.6527e-01, 8.9161e-01, 2.4075e-01,
7.7993e-01, -2.9786e-01, 3.7189e-01, -1.8534e+00, 1.2161e+00,
-1.4340e-01, -8.4045e-01, -1.7490e-02, -6.3605e-02, -2.6961e-01,
-6.0356e-02, 1.6479e-02, 8.4313e-02, 1.2867e+00, -1.8166e+00,
-4.4236e-01, 1.9492e+00, 7.5414e-02, -1.1048e+00, 3.2055e-01,
1.6554e+00, 1.6603e+00, 5.2739e-01, -8.8670e-02, -3.8753e-01,
1.1036e+00, -8.2550e-02, 1.5303e+00, 7.2115e-01, 6.3496e-01,
-5.9476e-01, -1.7111e+00, -7.4406e-02, 1.2575e+00, 1.0652e+00,
3.3742e-01, -6.1574e-01, -7.7878e-01, -1.5626e+00, 2.0075e+00,
7.8007e-01, 2.3359e+00, -5.8407e-01, -3.6670e-02, -1.8357e+00,
-8.5492e-01, -7.9237e-02, -3.4835e+00, 1.8853e-01, -6.3243e-01,
-1.4143e-01, -1.5573e+00, 1.3054e+00, 7.2289e-02, -3.3197e-01,
-4.2815e-01, -9.9560e-01, 4.8308e-02, -1.0704e+00, 4.6133e-02,
-2.7710e-01, 6.3607e-01, -1.2849e-01, -5.8321e-01, -6.4198e-01,
6.8877e-01, 4.4855e-01, -9.9281e-01, -1.9603e-01, -1.3646e-01,
-1.5132e+00, -1.8551e+00, 2.9994e+00, 1.9747e+00, -8.8294e-01,
1.0297e+00, 5.4850e-01, 2.2204e+00, -1.9871e-02, 1.6224e+00,
-1.3714e+00, -1.9999e-01, -1.8371e-01, 9.8869e-01, 1.7765e+00,
2.1239e+00, 1.6547e-01, -3.8542e-01, 1.1274e+00, -3.9524e+00,
-1.8184e-01, -9.8598e-01, -1.2485e-01, -7.8307e-01, 1.5246e+00,
-2.3675e-01, 7.5133e-01, -1.8204e+00, 1.1964e+00, 6.9412e-01,
-3.4246e+00, -6.2488e-01, -2.0008e-01, -1.4634e-01, 3.6126e-01,
-6.2960e-01, 1.2811e+00, -2.0820e-01, -2.6770e-01, 1.0875e+00,
-1.8656e+00, -1.7223e+00, -1.6199e+00, -1.6023e+00, 1.1000e-03,
5.5017e-01, 1.9496e+00, 7.6847e-01, -1.2796e+00, 2.4125e+00,
-1.0207e+00, 1.4682e+00, 6.9706e-04, -3.1195e-01, 8.4523e-01,
1.1639e+00, 1.0964e+00, 8.0490e-01, 3.7047e-01, 4.5071e-01,
1.0288e+00, -1.0690e+00, -1.0394e+00, -6.6745e-01, -2.9959e-01,
1.2548e+00, -1.3682e+00, -1.3584e+00, -1.2101e+00, -9.2314e-01,
-1.6717e+00, 1.9204e-01, -5.1889e-01, 6.6319e-01, -3.5625e-02,
3.5143e+00, 7.8116e-01, -8.7697e-01, -3.8530e-01, 2.0860e+00,
-1.5915e+00, -8.9022e-01, -5.0295e-01, -1.2801e+00, 1.8433e-01,
-6.9138e-01, 7.6171e-01, 2.1874e-01, -9.5043e-01, 1.3584e+00,
-1.0811e+00, 3.7449e-01, 1.4505e+00, 1.4932e+00, -1.0532e+00,
-3.7828e-01, 1.7716e+00, 1.8390e-01, -1.4419e+00, 1.0288e+00,
-1.6216e-01, -1.9189e+00, -1.0210e+00, 7.4068e-01, 7.0265e-01,
1.6574e+00, 3.3080e-01, -2.9631e+00, 1.9505e-01, -2.5233e-01,
-2.0795e+00, -1.4711e+00, -1.9923e+00, 3.1158e+00, 2.3007e+00,
-1.4851e+00, -1.3739e+00, -3.8031e-01, 1.3879e+00, 6.2704e-01,
4.0849e-01, 5.2626e-01, -5.3517e-01, 6.4794e-01, 1.3874e+00,
1.1729e+00, -6.2420e-02, 1.6669e-01, 3.7647e-02, -1.8886e+00,
7.9953e-01, 9.9094e-02, 3.3523e-01, 6.6596e-01, -2.0243e+00,
6.9878e-01, 1.0356e+00, 4.0730e-01, -4.5905e-01, 2.0120e+00,
-5.4535e-02, -1.4968e+00, 1.5344e-01, -2.9665e-01, 3.0098e-01,
5.8679e-01, 2.0437e-01, -1.8587e+00, 6.7893e-02, 7.3112e-01,
3.5927e-01, 1.2785e+00, 4.0530e-01, 8.8397e-01, 1.0595e+00,
-6.2867e-01, 9.6102e-01, -1.6319e+00, 3.6489e-01, -4.1222e-01,
1.8157e+00, -2.3874e+00, -2.0938e+00, -5.5133e-01, 1.8377e+00,
-1.0041e+00, 7.4509e-02, 1.0751e+00, 1.6144e+00, -7.9048e-01,
-8.2033e-01, -3.3595e+00, 1.1192e+00, -3.6376e-01, -5.9706e-02,
-1.5762e+00, -7.6090e-01, -5.4732e-01, -2.5771e-01, -5.6112e-02,
-8.0445e-01, -1.9105e+00, 4.5630e-01, 2.2545e+00, -1.7567e+00,
-1.3612e+00, 1.2470e+00, 3.2429e-01, 1.2829e+00, 2.1712e+00,
1.6078e+00, 1.1831e+00, 7.4726e-02, 3.6741e-01, -6.8770e-01,
-7.1650e-01, 1.7661e-01]], device=‘cuda:0’,
grad_fn=)
C++ output
Embds: Columns 1 to 8 -84.6285 -14.7203 17.7419 47.0915 31.8170 57.6813 3.6089 -38.0543
Columns 9 to 16 3.3444 -95.5730 90.3788 -10.8355 2.8831 -14.3861 0.8706 -60.7844
Columns 17 to 24 30.0367 -43.1165 -5.6550 33.2033 -1.1758 105.3884 -9.8710 17.8346
Columns 25 to 32 17.0933 66.6854 119.4765 79.3748 30.2875 -77.4174 0.3317 -4.0767
Columns 33 to 40 -2.8686 -30.3538 -51.4344 -54.1199 -94.5696 -33.0847 -19.5770 54.3094
Columns 41 to 48 9.1542 1.8090 84.0233 -34.8189 79.6485 109.4215 10.2912 -47.0976
Columns 49 to 56 37.7219 -15.3790 -16.3427 22.2094 -110.2703 -47.8214 -40.3721 49.5144
Columns 57 to 64 7.0735 -69.1642 -87.2891 2.4904 -114.2314 -34.6742 77.0583 47.5493
Columns 65 to 72 -12.7955 -12.1884 -70.9220 61.2372 -23.0823 -14.9402 13.1899 77.5274
Columns 73 to 80 14.8980 3.9681 -12.4636 -2.8313 -26.5012 18.7349 -81.2809 27.7805
Columns 81 to 88 4.6502 -18.6308 -65.8188 -7.8959 -84.8021 18.9902 55.9421 -3.1461
Columns 89 to 96 -68.0309 -121.0718 -39.6810 79.0844 44.7410 5.4263 -55.5766 -46.9981
Columns 97 to 104 107.5576 -64.8779 -38.2952 27.7137 -3.9070 27.3118 -6.6422 -13.3164
Columns 105 to 112 104.2085 0.5082 -78.4771 -19.8312 -38.7756 -52.0113 55.9654 -14.9233
Columns 113 to 120 -9.7707 52.0167 -44.6636 -98.1208 4.3471 72.7285 1.8963 -15.4767
Columns 121 to 128 -15.4205 -42.2256 170.4943 -79.3618 -1.6385 11.5500 59.1987 -65.9982
Columns 129 to 136 -9.0985 33.3904 98.2815 -74.2509 11.8020 -89.1567 34.4861 43.4928
Columns 137 to 144 -56.4307 11.7731 -16.7437 31.0511 -46.6434 -20.9232 26.8300 3.2606
Columns 145 to 152 61.6599 -21.9810 -70.2742 -15.0909 -41.5298 -30.9954 -76.2638 0.6642
Columns 153 to 160 2.6916 47.7454 26.7200 21.0140 -44.8855 -6.4925 -65.3175 -45.4141
Columns 161 to 168 -17.8177 -31.5315 -32.9688 11.2705 -58.3355 -83.6264 -56.9800 -41.5826
Columns 169 to 176 14.9421 -66.3415 -19.4020 -8.9205 34.7736 -1.2142 -22.5419 40.3070
Columns 177 to 184 51.2629 37.0988 -84.1648 112.5778 -51.5290 56.4389 -17.4903 42.5482
Columns 185 to 192 57.6678 -29.1431 63.6813 17.9877 -59.6995 31.1782 -43.9503 42.7553
Columns 193 to 200 29.6934 -19.0927 -74.4936 -90.7978 -75.4938 41.4866 9.0591 52.9187
Columns 201 to 208 -89.2584 -50.5271 -46.8471 -67.3429 -1.2110 21.3874 86.3426 -33.9398
Columns 209 to 216 46.3358 17.8981 -100.1674 -50.8498 -55.5474 -42.1486 2.6009 79.9036
Columns 217 to 224 73.3729 41.6763 -82.8588 -2.8996 17.4613 -166.8535 68.3080 42.2190
Columns 225 to 232 -75.3225 -27.0393 40.7027 133.1041 -10.1574 85.9142 -17.5571 -11.0445
Columns 233 to 240 -46.6592 36.1900 -25.5837 23.5690 111.7863 116.6611 -3.4232 -14.3296
Columns 241 to 248 -10.1717 -26.3160 110.0413 -74.1527 66.8889 54.4394 -8.4007 -80.9817
Columns 249 to 256 -52.5828 0.9547 -78.9718 19.8881 68.5607 4.6896 82.5919 11.0848
Columns 257 to 264 -48.9090 49.7747 -90.9747 -22.6597 82.9919 -31.0079 33.3777 -80.8728
Columns 265 to 272 20.9312 24.9726 58.8175 -57.3928 -36.9511 41.7683 -22.7457 18.0902
Columns 273 to 280 33.3806 12.2698 -48.8019 -64.5811 -22.4971 13.0827 25.2252 -69.3366
Columns 281 to 288 -31.1383 9.3472 -41.4773 -45.0921 -29.0197 20.8469 -18.5003 101.1813
Columns 289 to 296 21.4998 -41.0139 13.0072 14.5900 47.8082 8.7939 -1.6898 -65.2906
Columns 297 to 304 98.5455 -36.5257 -13.4876 31.5104 67.0052 20.0974 80.6973 -59.4268
Columns 305 to 312 -9.8725 109.9801 -11.7113 76.0156 19.4814 -54.8399 -58.3198 -22.0197
Columns 313 to 320 -11.4874 -40.5763 -90.6195 61.3063 2.9030 -38.8599 49.8093 63.7094
Columns 321 to 328 -57.7285 41.2222 35.4600 21.2505 29.7755 40.5168 -36.1677 -35.7411
Columns 329 to 336 55.7660 46.6989 56.3559 -109.1042 -56.7988 -16.9920 32.8174 50.5294
Columns 337 to 344 13.8572 92.8637 59.6933 -0.8193 -69.0457 14.8087 20.9237 29.3850
Columns 345 to 352 -59.0192 -19.3695 -47.4750 1.2323 -18.9492 -63.6595 46.3948 1.5139
Columns 353 to 360 80.1003 -116.6856 18.4157 43.6484 14.6691 -26.1271 -60.0532 10.0214
Columns 361 to 368 -17.5375 11.3292 -6.1891 -2.1459 -24.8204 0.0574 147.1159 56.4644
Columns 369 to 376 20.6844 99.9769 -2.2026 45.3141 -5.9111 22.8332 -26.9914 -54.8931
Columns 377 to 384 13.0211 -22.7115 -55.9605 -102.6626 -41.1080 37.0626 64.1098 -87.8013
Columns 385 to 392 -4.5324 116.3614 -13.5869 29.3998 -29.8993 -19.1788 89.5348 33.3830
Columns 393 to 400 47.5617 -47.8952 -115.5733 18.6636 70.4700 38.7836 52.9221 -26.4590
Columns 401 to 408 57.7344 -46.9924 -107.3308 -104.5425 93.0818 -38.1794 28.5326 63.8123
Columns 409 to 416 -21.0296 -53.7937 46.5247 10.2387 -12.8996 85.9877 53.1290 48.6895
Columns 417 to 424 -66.8464 -2.3867 22.6467 7.4483 21.0441 -94.1917 -42.1939 15.9525
Columns 425 to 432 53.8263 113.8375 61.6334 -104.5839 -20.7676 78.8139 -22.6948 -127.5196
Columns 433 to 440 26.8981 20.7751 38.6938 0.1248 -14.7045 -67.0021 -51.5681 -8.1669
Columns 441 to 448 19.7874 -48.3975 -32.2947 81.1478 48.5060 -85.6838 -17.2948 4.0231
Columns 449 to 456 17.8500 173.0746 -8.2571 20.8623 -7.1263 78.6013 18.4043 6.9401
Columns 457 to 464 -55.3688 28.4737 21.1565 142.7567 -89.0954 -30.7984 62.5072 26.2824
Columns 465 to 472 -40.7608 -53.0610 -23.0218 2.4569 58.6491 -60.6084 15.7515 -54.9259
Columns 473 to 480 -44.9702 -8.3017 -71.4793 -84.7397 -114.3832 -15.3010 54.4510 -32.4508
Columns 481 to 488 75.7713 22.8518 -35.4634 -48.0759 -31.5085 -8.1592 6.5577 -23.7090
Columns 489 to 496 -0.2302 -68.3007 26.5670 -28.0143 -21.5935 -55.7180 -5.6677 56.4317
Columns 497 to 504 61.9337 9.6666 -12.2558 -60.3430 -30.2482 31.4843 71.7933 -8.8972
Columns 505 to 512 36.8830 -31.1061 51.6818 8.2866 1.7214 -2.9263 -37.4330 48.5854
[ CPUFloatType{1,512} ]
Using
Pytorch 1.6.0
Libtorch 1.6.0
Windows 10
Cuda 10.1 |
st181135 | Hi,
instead of using
tensor_image[0][0] = tensor_image[0][0].sub(0.5).div(0.5);
tensor_image[0][1] = tensor_image[0][1].sub(0.5).div(0.5);
tensor_image[0][2] = tensor_image[0][2].sub(0.5).div(0.5);
Just try tensor_image = tensor_image.sub(0.5).div(0.5)
Also, it’s better to monitor the output tensor’s sum or mean rather than all the values in the output tensor. |
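One further difference worth checking, as an observation beyond the reply above: trans.ToTensor() converts a uint8 HWC image to a float CHW tensor scaled to [0, 1], while the C++ path (from_blob on kByte, then .to(at::kFloat)) keeps values in [0, 255], so the two pipelines normalize differently scaled inputs. A small Python sketch of that scaling difference:
import numpy as np
import torch
from torchvision import transforms as trans

img = (np.random.rand(112, 112, 3) * 255).astype(np.uint8)

via_totensor = trans.ToTensor()(img)                        # float32, CHW, values in [0, 1]
via_bytes = torch.from_numpy(img).permute(2, 0, 1).float()  # float32, CHW, values in [0, 255]

print(via_totensor.max().item(), via_bytes.max().item())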
st181136 | I see that there is a freeze_module pass in 1.5.x and later in PyTorch.
Is there a similar pass in 1.4.x version of PyTorch which inlines the graph and resolves the weights and biases in the PyTorch graph into constants?
Does running _jit_pass_lower_graph(), and converting all the tensor parameters returned by this pass into constants in the graph have the same effect? |
st181137 | The freeze_module pass was added in 1.5, so there isn't anything 100% equivalent in 1.4. The _jit_pass_lower_graph does something similar, but freezing goes beyond it to try to simplify the graph based on the changes, and to verify that the conversion to constants was a correct transform. |
st181138 | @zdevito : Thanks for the reply. If I had to achieve something similar to freeze_module in PyTorch 1.4.x, is running _jit_pass_lower_graph() and converting the returned parameters into graph constants a good approach? Or is there a better mechanism to achieve this? Porting freeze_module to 1.4.x seems cumbersome as it relies on AliasDB APIs which don’t exist in 1.4.x. |
st181139 | Amazon Elastic Inference supports PyTorch and currently uses a modified version of 1.3.1. However, the JIT it uses seems to be a really old version as it says it requires version 1 torchscript models. The models that I’m able to create locally are version 3, and it seems to be that way no matter what version of PyTorch I use. When trying to script() or trace() the models on AWS, it seems to not actually work.
Ideally, I'd be able to convert the models I have locally to version 1 torchscript models so I could copy them to my AWS server and run inference with them. Does anybody know how to do that? |
st181140 | I'm trying to convert an LSTM decoder with beam search to ONNX, but I got some TracerWarnings. Is it possible to operate on an int item taken from a single-element input tensor later in my code, like this?
def forward(self, img_features, beam_size):
    '''
    Arguments:
        img_features (tensor): Extracted features from the encoder module. (batch_size, feature_pixels = 7x7, encoder_dim = 2048)
        beam_size (tensor): Number of top candidates to consider for beam search. (int, = 3 or =6 or =9)
    '''
    item = beam_size.item()
    # some code...
    output = torch.tensor([item])
    return output
Now I get these warnings and a constant output from the ONNX model:
TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
item = beam_size.item()
TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
output = torch.tensor([item])
I tried using @torch.jit.script, but it didn't help either. |
st181141 | Could you post your real use case, i.e. how the Python integer is used?
Based on your current code snippet you should be able to just use the tensor instead of converting it to an integer, but I assume it’s just an example and not your real use case. |
st181142 | OK, this is just one of my problems with the ONNX conversion of LSTM beam search, and converting a tensor to a Python number is the first of them. When I export this to ONNX, I get a constant output from the model because of incorrect tracing. I found that I should script it with torch.jit.script, but those examples are not easy to understand.
def forward(self, img_features, beam_size):
    """
    Function to generate the caption for the corresponding encoded image using beam search to provide the most optimal caption
    combination.
    Arguments:
        img_features (tensor): Extracted features from the encoder module. (batch_size, feature_pixels = 7x7, encoder_dim = 2048)
        beam_size (tensor): Number of top candidates to consider for beam search. (int = 3 or 6 or 9)
    Output:
        sentence (tensor): ordered list of words of the final optimal caption (list)
    """
    beam_size = beam_size.item()  #!Converting a tensor to a Python number
    prev_words = torch.zeros(beam_size, 1).long()
    sentences = prev_words
    top_preds = torch.zeros(beam_size, 1)
    completed_sentences = []
    completed_sentences_preds = []
    step = 1
    h, c = self.get_init_lstm_state(img_features)
    while True:
        embedding = self.embedding(prev_words).squeeze(1)
        context = self.attention(img_features, h)[0]
        gate = self.sigmoid(self.f_beta(h))
        gated_context = gate * context
        lstm_input = torch.cat((embedding, gated_context), dim=1)
        h, c = self.lstm(lstm_input, (h, c))
        output = self.deep_output(h)
        output = top_preds.expand_as(output) + output
        if step == 1:
            top_preds, top_words = output[0].topk(beam_size, 0, True, True)
        else:
            top_preds, top_words = output.view(-1).topk(beam_size, 0, True, True)
        prev_word_idxs = top_words / output.size(1)
        next_word_idxs = top_words % output.size(1)
        sentences = torch.cat((sentences[prev_word_idxs], next_word_idxs.unsqueeze(1)), dim=1)
        incomplete = [idx for idx, next_word in enumerate(next_word_idxs) if next_word != 1]  #!Converting a tensor to a Python boolean
        complete = list(set(range(len(next_word_idxs))) - set(incomplete))  #!Converting a tensor to a Python index
        if len(complete) > 0:
            completed_sentences.extend(sentences[complete].tolist())  #!Converting a tensor to a Python list
            completed_sentences_preds.extend(top_preds[complete])  #!Converting a tensor to a Python index
        beam_size -= len(complete)
        if beam_size == 0:
            break
        sentences = sentences[incomplete]
        h = h[prev_word_idxs[incomplete]]
        c = c[prev_word_idxs[incomplete]]
        img_features = img_features[prev_word_idxs[incomplete]]
        top_preds = top_preds[incomplete].unsqueeze(1)
        prev_words = next_word_idxs[incomplete].unsqueeze(1)
        if step > 50:
            break
        step += 1
    idx = completed_sentences_preds.index(max(completed_sentences_preds))  #!Converting a tensor to a Python boolean
    sentence = completed_sentences[idx]
    sentence = torch.tensor(sentence)  #!torch.tensor results are registered as constants
    return sentence |
st181143 | Is scripting working for you now or are you still facing some issues?
I’m not an expert in using ONNX, but could you try to use tensors instead of Python literals via e.g.:
beam_size = torch.tensor([5])
prev_words = torch.zeros(beam_size, 1).long() |
st181144 | Yes, your case is working, but in this line, for example:
top_preds, top_words = output[0].topk(beam_size, 0, True, True)
topk()'s first argument must be an int, not a Tensor, so I still need this option (converting the tensor to a single number).
I tried to script it like this:
from torch import Tensor

@torch.jit.script
def get_item(x: Tensor):
    item = x.item()
    return item

item = get_item(beam_size)
but got this error:
RuntimeError: get_item() Expected a value of type 'Tensor' for argument 'x' but instead found type 'int'.
Position: 0
Value: 1
Declaration: get_item(Tensor x) -> (Scalar)
Cast error details: Unable to cast Python instance of type <class 'int'> to C++ type 'at::Tensor'
What do you think a possible solution for these problems could be? |
st181145 | The last error message points towards an integer input, while a tensor is expected.
Did you accidentally pass x.item() to get_item()? |
st181146 | Sorry, I forgot to save the changes in the module.
Now, when I try to export this model to ONNX with the scripted get_item(beam_size) function above, I get
RuntimeError: Tracer cannot set value trace for type Int. Supported types are tensor, tensor list, and tuple of tensors.
Is the problem on the ONNX side, or am I doing something wrong with torch.jit.script that leads to this error? |
st181147 | I’m not sure, but the standalone PyTorch script seems to work:
from torch import Tensor

@torch.jit.script
def get_item(x: Tensor):
    item = x.item()
    return item

beam_size = torch.randn(1)
item = get_item(beam_size)

print(get_item.graph)
> graph(%x.1 : Tensor):
    %item.1 : Scalar = aten::item(%x.1) # <ipython-input-109-1655baf12d25>:4:11
    return (%item.1)

print(item)
> 0.18689797818660736 |
st181148 | Yes, this works in practice, but not in the ONNX export, unfortunately.
Anyway, thanks for your help, you are the best. |
st181149 | What is the purpose of the @torch.jit.script decorator? Why does adding the decorator "@torch.jit.script" result in an error, while I can call torch.jit.script on that module? E.g. this fails:
import torch

@torch.jit.script
class MyCell(torch.nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, h):
        new_h = torch.tanh(self.linear(x) + h)
        return new_h, new_h

my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
"C:\Users\Administrator\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torch\jit\__init__.py", line 1262, in script
raise RuntimeError("Type '{}' cannot be compiled since it inherits"
RuntimeError: Type '<class '__main__.MyCell'>' cannot be compiled since it inherits from nn.Module, pass an instance instead
While the following code works well:
class MyCell(torch.nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, h):
        new_h = torch.tanh(self.linear(x) + h)
        return new_h, new_h

my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h) |
st181150 | @torch.jit.script can be used as a decorator on functions to script them. So, these two code snippets are roughly equivalent:
Decorator
@torch.jit.script
def fn(a: int):
    return a + 1
fn(3) # fn here is a scripted function
Function Call
def fn(a: int):
    return a + 1
s_fn = torch.jit.script(fn)
s_fn(3) # s_fn here is a scripted function
This decorator can also be used on classes that extend object to script them (known as Torchscript classes).
Because only instances of Modules can be scripted, @torch.jit.script cannot be used as a decorator on a subclass of Module. You must create an instance of the Module and pass it to torch.jit.script in order to script it. |
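For completeness, a small sketch of the class case mentioned above (a TorchScript class extends object, not nn.Module, and its non-Tensor method arguments need type annotations; the class here is made up for illustration):
import torch

@torch.jit.script
class Accumulator(object):
    def __init__(self):
        self.total = 0

    def add(self, x: int) -> int:
        self.total += x
        return self.total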
st181151 | Fairseq Transformer models modify the passed incremental state dict without returning it (OTOH huggingface Transformers return it). ONNX export seems to drop these operations. Is it possible to still export these correctly?
import torch
from typing import Any, Dict, List, Optional
from torch import Tensor

class InplaceModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embeddings = torch.nn.Linear(1, 10, bias=False)

    def forward(self, x, state: Dict[str, Dict[str, Tensor]]):
        state['new_level_1'] = dict()
        state['new_level_1']['new_level_2'] = x
        return self.embeddings(x)

dummy_state = {'level_1': {'level_2': torch.tensor([[3]], dtype=torch.float32)}}
dummy_state = torch.jit.annotate(Dict[str, Dict[str, Optional[torch.Tensor]]], dummy_state)
dummy_input = torch.tensor([[3]], dtype=torch.float32)
model = torch.jit.trace_module(InplaceModule(), dict(forward=(dummy_input, dummy_state,)))
print(model.code)

def forward(self,
    input: Tensor,
    argument_2: Dict[str, Dict[str, Tensor]]) -> Tensor:
  return (self.embeddings).forward(input, ) |
st181152 | import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class QuantizableModel(nn.Module):
    def __init__(self, *args, **kwargs):
        super(QuantizableModel, self).__init__()
        # self.module = ConvBNReLU(3, 64)
        self.conv = nn.Conv2d(3, 2, 3, 1, 1, groups=1, bias=True)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        # weight initialization
        nn.init.kaiming_normal_(self.conv.weight, mode='fan_out')
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.dequant(x)
        return x

if __name__ == "__main__":
    model = QuantizableModel().eval()
    inp = torch.randn(1, 3, 224, 224)
    model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    torch.quantization.prepare(model, inplace=True)
    # Dummy calibration
    model(inp)
    torch.quantization.convert(model, inplace=True)
    print("before traced: ", model.state_dict().keys())
    traced_model = torch.jit.trace(model, inp).eval()
    print("after traced: ", traced_model.state_dict().keys())
The output of running the above code in pytorch 1.5 is:
before traced: odict_keys(['conv.weight', 'conv.scale', 'conv.zero_point', 'conv.bias', 'quant.scale', 'quant.zero_point'])
after traced: odict_keys(['conv._packed_params', 'quant.scale', 'quant.zero_point'])
The output of running the same code in pytorch 1.6 is:
before traced: odict_keys(['conv.weight', 'conv.bias', 'conv.scale', 'conv.zero_point', 'quant.scale', 'quant.zero_point'])
after traced: odict_keys(['quant.scale', 'quant.zero_point'])
Parameters of the quantized model are missing from the state_dict after tracing in PyTorch 1.6. Is this a bug or a feature? |
st181153 | So I’m in the process of deploying a torch.jit traced model into an x86 environment. I’d ideally like to do this in Python, but I see the tutorials are in C++. Is it possible to unlock the full performance of the FBGEMM backend from Python, or should I really be using C++?
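For reference, the Python-side path I have in mind is roughly the following sketch (the file name, input shape, and engine selection here are placeholders for my setup, not actual code):
import torch

torch.backends.quantized.engine = 'fbgemm'   # select the x86 FBGEMM kernels
model = torch.jit.load("traced_model.pt")
model.eval()
with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))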
A 20-30% performance hit is acceptable, but if C++ will be multiple times faster I’ll go with that. |
st181154 | Hi everyone,
I’ve come across some interesting behavior regarding TorchScript and index ops. The following two functions do the same job. However, upon inspection of the TorchScript code, one can see that in the latter the JIT compiler completely removes an assignment operator: data_sigma[data_sigma < 1e-12].fill_(1.0)
In [13]: @torch.jit.script
...: def fit(data: Tensor):
...: # Reduce all but the last dimension
...: # pylint:disable=unnecessary-comprehension
...: reduction_dims = [i for i in range(data.dim() - 1)]
...: # pylint:enable=unnecessary-comprehension
...:
...: data_mu = torch.mean(data, dim=reduction_dims, keepdim=True)
...: data_sigma = torch.std(data, dim=reduction_dims, keepdim=True)
...: data_sigma = torch.where(data_sigma < 1e-12, torch.ones_like(data_sigma), data_sigma)
...: return data_mu, data_sigma
...:
In [14]: print(fit.code)
def fit(data: Tensor) -> Tuple[Tensor, Tensor]:
reduction_dims = annotate(List[int], [])
for i in range(torch.sub(torch.dim(data), 1)):
_0 = torch.append(reduction_dims, i)
data_mu = torch.mean(data, reduction_dims, True, dtype=None)
data_sigma = torch.std(data, reduction_dims, True, True)
_1 = torch.lt(data_sigma, 9.9999999999999998e-13)
_2 = torch.ones_like(data_sigma, dtype=None, layout=None, device=None, pin_memory=None, memory_format=None)
data_sigma0 = torch.where(_1, _2, data_sigma)
return (data_mu, data_sigma0)
In [15]: @torch.jit.script
...: def fit(data: Tensor):
...: # Reduce all but the last dimension
...: # pylint:disable=unnecessary-comprehension
...: reduction_dims = [i for i in range(data.dim() - 1)]
...: # pylint:enable=unnecessary-comprehension
...:
...: data_mu = torch.mean(data, dim=reduction_dims, keepdim=True)
...: data_sigma = torch.std(data, dim=reduction_dims, keepdim=True)
...: #data_sigma = torch.where(data_sigma < 1e-12, torch.ones_like(data_sigma), data_sigma)
...: data_sigma[data_sigma < 1e-12].fill_(1.0)
...: return data_mu, data_sigma
...:
In [16]: print(fit.code)
def fit(data: Tensor) -> Tuple[Tensor, Tensor]:
reduction_dims = annotate(List[int], [])
for i in range(torch.sub(torch.dim(data), 1)):
_0 = torch.append(reduction_dims, i)
data_mu = torch.mean(data, reduction_dims, True, dtype=None)
data_sigma = torch.std(data, reduction_dims, True, True)
return (data_mu, data_sigma)
Does anyone have a good explanation for this behavior? I’m worried now that the same may be happening in other parts of my code. Such assignments are important when avoiding numerical precision errors.
Cheers,
Ângelo |
st181155 | Solved by Michael_Suo in post #3
Actually: I believe this is correct behavior. Indexing a tensor with another tensor produces a copy of the original tensor. So in the code:
data_sigma[data_sigma < 1e-12].fill_(1.0)
you are performing fill_ on a copy of data_sigma, then immediately throwing it away. The TorchScript compiler is cor… |
st181156 | Yes, this is definitely concerning—do you mind filing a bug on Github and we can follow up there? Thanks! |
st181157 | Actually: I believe this is correct behavior. Indexing a tensor with another tensor produces a copy of the original tensor. So in the code:
data_sigma[data_sigma < 1e-12].fill_(1.0)
you are performing fill_ on a copy of data_sigma, then immediately throwing it away. The TorchScript compiler correctly recognizes that this is work whose result you never use, and thus we remove it as an optimization.
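You can see the same behavior in eager mode; here is a quick sketch of my own (not your code) with a toy data_sigma:
import torch

data_sigma = torch.tensor([1e-15, 0.5, 2.0])
data_sigma[data_sigma < 1e-12].fill_(1.0)          # fills a copy; data_sigma is untouched
print(data_sigma)  # tensor([1.0000e-15, 5.0000e-01, 2.0000e+00])

data_sigma.masked_fill_(data_sigma < 1e-12, 1.0)   # in-place alternative that does modify it
# or: data_sigma[data_sigma < 1e-12] = 1.0
print(data_sigma)  # tensor([1.0000, 0.5000, 2.0000]) |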
st181158 | Hi.
I eventually want to convert the darts/cnn model to TFLite.
First of all, I tried to convert it to ONNX with the code below.
import torch
import torch.nn as nn
import genotypes
from model import NetworkCIFAR as Network
genotype = eval("genotypes.%s" % 'DARTS')
model = Network(36, 10, 20, True, genotype)
model.load_state_dict(torch.load('./weights.pt'))
model = model.cuda()
onnx_model_path = './darts_model.onnx'
dummy_input = torch.randn(8,3,32,32)
input_names = ['image_array']
output_names = ['category']
torch.onnx.export(model,dummy_input, onnx_model_path,
input_names=input_names, output_names=output_names)
However, it couldn't be converted.
Error is below.
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/onnx/__init__.py", line 168, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 69, in export
use_external_data_format=use_external_data_format)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 488, in _export
fixed_batch_size=fixed_batch_size)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 334, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 291, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, _force_outplace=False, _return_inputs_states=True)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 278, in _get_trace_graph
outs = ONNXTracedModule(f, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/XXXX_darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 361, in forward
self._force_outplace,
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 351, in wrapper
out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type NoneType
Does onnx.export not support DARTS?
Could you please tell me how to fix this so it converts to ONNX?
Finally, I have also asked this same question in a DARTS GitHub issue, sorry for the cross-post.
Thank you!
My environment
Ubuntu 16.04
Python 3.6.10
CUDA 9.0
Pytorch 0.3.1(to search model), 1.5.1(to convert to ONNX) |
st181159 | Solved by ptrblck in post #4
Yes, sorry for the unclear naming. By “eager” mode I meant the normal Python usage.
Good to see the model is working generally.
Could you , for the sake of debugging, remove the logits_aux from the forward and just return the logits and retry to export the model? |
st181160 | Based on the error message, it seems that an intermediate activation is None instead of a valid tensor.
Is your model working fine in PyTorch eager mode and JIT (without the ONNX export)? |
st181161 | Thanks for your reply.
Is it correct to understand PyTorch eager mode as normal mode?
If so, the output of the model(input) is below. I think it is correct.
>>> out = model(dummy_input)
>>> out
(tensor([[-0.0391, -0.0840, 0.1382, -0.0397, 0.0157, -0.0448, -0.0603, -0.0823,
0.0025, -0.0009],
[ 0.0257, 0.0031, -0.1880, 0.0768, -0.1047, -0.0392, 0.1393, -0.0419,
-0.0437, 0.0032],
[ 0.0090, 0.0097, -0.0768, -0.0383, -0.0220, -0.2048, -0.1315, 0.0117,
-0.0538, -0.0613],
[ 0.0438, -0.1284, 0.0325, 0.0441, 0.0736, 0.1941, -0.0407, -0.0634,
0.1074, -0.0407],
[ 0.0440, 0.0194, 0.0147, 0.0859, 0.2149, -0.0393, 0.1640, 0.0369,
-0.1021, -0.1820],
[-0.3142, -0.0726, -0.0694, -0.1064, -0.1595, 0.2461, 0.1174, 0.2102,
0.1790, 0.2188],
[-0.0522, 0.0327, -0.1626, -0.0955, 0.0625, -0.0061, 0.0662, 0.0667,
0.1003, 0.0635],
[ 0.1054, -0.0456, 0.0922, 0.0559, 0.1422, -0.1924, -0.2107, -0.0572,
-0.0424, -0.1007]], device='cuda:0', grad_fn=<AddmmBackward>), tensor([[-0.0779, -0.4483, 0.5459, -0.4263, 0.3033, -0.0147, 0.1823, 0.2561,
0.3321, -0.8131],
[-0.1185, -0.1932, 0.2465, 0.3930, -0.0634, 0.2440, -0.0587, -0.5931,
0.0938, 0.2163],
[ 0.0699, -0.2207, 0.5958, -0.0778, -0.1024, -0.1841, 0.5211, 0.0760,
0.2308, 0.1463],
[ 0.1858, -0.0432, 0.3188, -0.0905, 0.1415, -0.6925, 0.1487, -0.2300,
1.0883, 0.1186],
[-0.1471, 0.1120, 0.3354, -0.3918, 0.0748, -0.5318, 0.0106, -0.4543,
1.2513, 0.1778],
[ 0.4499, 0.0425, 0.3949, -0.8790, -0.1463, -0.4942, -0.4362, -0.3380,
0.3257, 0.2104],
[ 0.2962, 0.0098, 0.6569, 0.0520, 0.1627, -0.4044, 0.2104, -0.2278,
0.2411, 0.0337],
[ 0.0591, -0.0795, 0.9120, -0.5483, -0.2887, -0.2304, -0.3799, -0.5769,
0.5903, -0.6071]], device='cuda:0', grad_fn=<AddmmBackward>))
Regarding the JIT, I don’t know much about it, so could you please tell me how to check that?
Here's the code; is it possible to convert to ONNX even if 'forward' has an 'if' in it?
class NetworkCIFAR(nn.Module):
def __init__(self, C, num_classes, layers, auxiliary, genotype):
super(NetworkCIFAR, self).__init__()
self._layers = layers
self._auxiliary = auxiliary
stem_multiplier = 3 #RGB
C_curr = stem_multiplier*C #C is input_channels
self.stem = nn.Sequential(
nn.Conv2d(3, C_curr, 3, padding=1, bias=False),
nn.BatchNorm2d(C_curr)
)
C_prev_prev, C_prev, C_curr = C_curr, C_curr, C
self.cells = nn.ModuleList()
reduction_prev = False
for i in range(layers):
if i in [layers//3, 2*layers//3]:
C_curr *= 2
reduction = True
else:
reduction = False
cell = Cell(genotype, C_prev_prev, C_prev, C_curr, reduction, reduction_prev)
reduction_prev = reduction
self.cells += [cell]
C_prev_prev, C_prev = C_prev, cell.multiplier*C_curr
if i == 2*layers//3:
C_to_auxiliary = C_prev
if auxiliary:
self.auxiliary_head = AuxiliaryHeadCIFAR(C_to_auxiliary, num_classes)
self.global_pooling = nn.AdaptiveAvgPool2d(1)
self.classifier = nn.Linear(C_prev, num_classes)
def forward(self, input):
logits_aux = None
s0 = s1 = self.stem(input)
for i, cell in enumerate(self.cells):
s0, s1 = s1, cell(s0, s1, self.drop_path_prob)
if i == 2*self._layers//3:
if self._auxiliary and self.training:
logits_aux = self.auxiliary_head(s1)
out = self.global_pooling(s1)
logits = self.classifier(out.view(out.size(0),-1))
return logits, logits_aux
I’m sorry for all of the questions.
Thank you for your time. |
st181162 | crook52:
Is it correct to understand PyTorch eager mode as normal mode?
Yes, sorry for the unclear naming. By “eager” mode I meant the normal Python usage.
Good to see the model is working generally.
Could you, for the sake of debugging, remove the logits_aux from the forward, just return the logits, and retry exporting the model? |
st181163 | It looks like it fails in tracing; can you try torch.jit.trace and see if it works or not? |
st181164 | I’m sorry for my late reply.
After removing logits_aux as you advised, it worked!!!
I cannot thank you enough!
However, why couldn't it be converted with logits_aux?
class AuxiliaryHeadCIFAR(nn.Module):
def __init__(self, C, num_classes):
"""assuming input size 8x8"""
super(AuxiliaryHeadCIFAR, self).__init__()
self.features = nn.Sequential(
nn.ReLU(inplace=True),
nn.AvgPool2d(5, stride=3, padding=0, count_include_pad=False), # image size = 2 x 2
nn.Conv2d(C, 128, 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 768, 2, bias=False),
nn.BatchNorm2d(768),
nn.ReLU(inplace=True)
)
self.classifier = nn.Linear(768, num_classes)
def forward(self, x):
x = self.features(x)
x = self.classifier(x.view(x.size(0),-1))
return x |
st181165 | Thank you for your reply, and sorry for my late reply.
ebarsoum:
It looks like it fails in tracing,
Yes, torch.jit.trace doesn’t work.
>>> torch.jit.trace(model,dummy_input)
/home/XXXX/darts/cnn/eval-EXP-20200710-150423/utils.py:105: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = Variable(torch.cuda.FloatTensor(x.size(0), 1, 1, 1).bernoulli_(keep_prob))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 875, in trace
check_tolerance, _force_outplace, _module_class)
File "/home/XXXX/darts/cnn/eval-EXP-20200710-150423/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 1027, in trace_module
module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace)
RuntimeError: Only tensors, lists and tuples of tensors can be output from traced functions
If I remove logits_aux following ptrblck's advice, it works well.
I don't know why. So, I would appreciate it if you could let me know when you find out.
Thank you! |
st181166 | My best guess is that tracing the model didn't go through the condition where logits_aux is set to a tensor, so it stayed None until the return statement. This could happen, e.g. if you called model.eval() before exporting it.
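Here is a tiny toy sketch of my own (not the DARTS code) showing how a None in the returned tuple triggers exactly that error in eval mode:
import torch

class Toy(torch.nn.Module):
    def forward(self, x):
        logits_aux = None
        if self.training:          # False after .eval(), so logits_aux stays None
            logits_aux = x * 2
        return x + 1, logits_aux

m = Toy().eval()
# Raises: RuntimeError: Only tensors, lists and tuples of tensors can be output from traced functions
torch.jit.trace(m, torch.randn(1)) |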
st181167 | Thank you for your comment!
The point is that I have no choice but to remove the logits_aux when tracing the model. |
st181168 | I have a question related to this project https://github.com/NathanUA/U-2-Net/blob/7e5ff7d4c3becfefbb6e3d55916f48c7f7f5858d/u2net_test.py#L104
I can trace the net like this:
traced_script_module = torch.jit.trace(net, inputs_test)
traced_script_module.save("traced_model.pt")
print(inputs_test.size()) # shows (1, 3, 320, 320)
Now I'm trying to run the model in a C++ application. I was able to do this in a prior project: https://github.com/DBraun/PyTorchTOP-cpumem. I used CMake and built in debug mode by doing
SET DEBUG=1 before the CMake instructions.
In the C++ project for U-2-Net, I can load the model into a module with no errors. When I call
torchinputs.clear();
torchinputs.push_back(torch::ones({1, 3, 320, 320 }, torch::kCUDA).to(at::kFloat));
module.forward(torchinputs); // error
I get
Unhandled exception at 0x00007FFFD8FFA799 in TouchDesigner.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000EA677F1B30. occurred
The error is at https://github.com/pytorch/pytorch/blob/4c0bf93a0e61c32fd0432d8e9b6deb302ca90f1e/torch/csrc/jit/api/module.h#L112. It says inputs has size 0. However, I'm pretty sure I've passed non-empty data (1, 3, 320, 320) to module->forward(): https://github.com/DBraun/PyTorchTOP-cpumem/blob/f7cd16cb84021a7fc3681cad3a66c2bd7551a572/src/PyTorchTOP.cpp#L294
This is the stack trace at module->forward(torchinputs)
I thought it might be a DLL issue but I’ve copied all DLLs from libtorch/lib
I can confirm GPU stuff is available and that when I traced the module I was using CUDA.
LoadLibraryA("c10_cuda.dll");
LoadLibraryA("torch_cuda.dll");
try {
std::cout << "CUDA: " << torch::cuda::is_available() << std::endl;
std::cout << "CUDNN: " << torch::cuda::cudnn_is_available() << std::endl;
std::cout << "GPU(s): " << torch::cuda::device_count() << std::endl;
}
catch (std::exception& ex) {
std::cout << ex.what() << std::endl;
}
Trying to fix the runtime exception on module->forward, I thought maybe @torch.jit.script needed to be in some of the functions in the U-2-Net project, like here: https://github.com/NathanUA/U-2-Net/blob/7e5ff7d4c3becfefbb6e3d55916f48c7f7f5858d/model/u2net.py#L24. I was worried about calling shape[2:] in a function without @torch.jit.script. Should I not be worried?
Any advice is appreciated!
I've also followed all the instructions here: "An unhandled exception Microsoft C++ exception: c10::Error at memory location". |
st181169 | Solved by DBraun in post #11
It was an issue with DLLs…
TouchDesigner has its own DLLs in C:/Program Files/Derivative/TouchDesigner/bin. These DLLs get loaded when TouchDesigner opens.
My custom plugin is in Documents/Derivative/Plugins and all of the libtorch DLLs are also there. My thought was that having everything in this… |
st181170 | Have you moved your model to CUDA? The model will be on CPU by default if you call torch::jit::load. |
st181171 | Thanks for your suggestion. I tried
module = torch::jit::load("traced_model.pt", torch::kCUDA);
module.to(torch::kCUDA);
but got the same results. I have the debug dlls and library etc, perfectly ready for some more debugging. Anything more I can do to help?
I’m stepping through line-by-line. I noticed that the module.forward() call takes about 18 seconds before the exception and this happens even when I know I’m giving it a wrongly sized Tensor:
torchinputs.push_back(torch::ones({1, 1, 1, 1}, torch::kCUDA).to(torch::kFloat)); // intentionally wrong size
module.forward(torchinputs);
If I change everything in my code to cpu, it doesn’t throw a runtime error. So I must be not succeeding in making sure everything is CUDA. I also tried following everything here https://github.com/pytorch/pytorch/issues/19302 4
Why isn’t this sufficient for having everything in CUDA?
auto module = torch::jit::load("traced_model.pt", torch::kCUDA);
for (auto p : module.parameters()) {
std::cout << p.device() << std::endl; // cuda:0
}
auto finalinput = torch::ones({ 1, 3, 320, 320 }, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA));
std::cout << "finalinput device: " << finalinput.device() << std::endl; // cuda:0
torchinputs.push_back(finalinput);
auto forward_result = module.forward(torchinputs); // std::runtime_error
^ and merely changing both of those two references to kCPU instead of kCUDA doesn't throw an error. |
st181172 | I read everything here https://pytorch.org/tutorials/advanced/cpp_export.html and tried at::kCUDA instead of torch::kCUDA. I tried the nightly debug 1.5 libtorch but encountered other problems that I couldn't solve, so I need to stick with 1.4 for now. |
st181173 | Hi, I was facing the same issues (trying to run in Unreal Engine 4.25).
Did you manage to solve this? |
st181174 | Not yet. Maybe the info in the GitHub issue will help you, although it didn't work for me: https://github.com/NathanUA/U-2-Net/issues/29 |
st181175 | Putting LoadLibraryA("torch_cuda.dll"); early in my code allowed me to start using the nightly debug build of libtorch 1.5, but I’m still stuck on module.forward.
I also put -INCLUDE:?warp_size@cuda@at@@YAHXZ in Linker>All Options>Additional Options.
and LoadLibraryA("c10_cuda.dll"); early too. Is there something else I can try?
Update again: I can trace a style transfer model and that works in my code here, but the traced U2Net model doesn’t work.
Below is how I wrapped the model. Is it ok to use x[:,0,:,:] or does that break the jit trace? I’m also concerned about this line https://github.com/NathanUA/U-2-Net/blob/b77cd6da3204efcb03e18e15dd3b9eb24d47f969/model/u2net.py#L24
def normPRED(d):
ma = torch.max(d)
mi = torch.min(d)
dn = (d-mi)/(ma-mi)
return dn
class ModelWrapper(nn.Module):
def __init__(self, u2netmodel):
super(ModelWrapper,self).__init__()
self.u2netmodel = u2netmodel
def forward(self, x):
# my code doesn't use ToTensorLab in the data loader
# https://github.com/NathanUA/U-2-Net/blob/b77cd6da3204efcb03e18e15dd3b9eb24d47f969/data_loader.py#L208
# so do the normalization in this wrapper
x = x / torch.max(x)
r = (x[:,0,:,:]-0.485)/0.229
g = (x[:,1,:,:]-0.456)/0.224
b = (x[:,2,:,:]-0.406)/0.225
img = torch.stack((r,g,b), 1)
d1,d2,d3,d4,d5,d6,d7= self.u2netmodel(img)
return normPRED(d1)
# paraphrasing u2net_test.py
net = U2NET(3,1)
net.cuda()
wrapper = ModelWrapper(net)
wrapper.cuda()
wrapper.to('cuda') # just in case?
print('is cuda: ' + str(next(wrapper.parameters()).is_cuda)) # True
inputs_test = data_test['image']
inputs_test = inputs_test.type(torch.FloatTensor)
inputs_test = inputs_test.cuda()
print("inputs size: " + str(inputs_test.size())) # [1, 3, 320, 320]
d1 = wrapper(inputs_test)
# save d1 as an image and the image is great!
# save_output(img_name_list[i_test],pred,prediction_dir)
traced = torch.jit.trace(wrapper, inputs_test)
traced.save("traced_model.pt") |
st181176 | If I use torch.jit.script instead of torch.jit.trace:
sm = torch.jit.script(wrapper)
torch.jit.save(sm, "traced_model.pt")
I realized I could put print statements and that they would show up in the c++ console. I put some surrounding the execution of torch.stack in my ModelWrapper. It turns out this is the moment it throws the exception. Same thing happens when trying to write it with torch.cat followed by torch.unsqueeze (the cat fails).
Weirdly I am able to do this inside the ModuleWrapper’s forward:
something = torch.stack([torch.randn([2, 3, 4]), torch.randn([2, 3, 4])])
When I removed the torch.stack call from ModelWrapper, using print statements I was able to pinpoint a failure within the u2netmodel forward on a call to cat. https://github.com/NathanUA/U-2-Net/blob/b77cd6da3204efcb03e18e15dd3b9eb24d47f969/model/u2net.py#L87
So my question now is why can’t I do torch.stack or torch.cat in this module? Is it because these things allocate memory whereas calls to Conv2D don’t allocate? I’ve used print statements to make sure everything’s on cuda etc. Something about cloning? Python execution of the same model is working fine. |
st181177 | I realized I should just try the following code:
auto thing1 = torch::ones({ 1, 3, 5, 5 }, torch::kCUDA).to(torch::kFloat32);
auto thing2 = torch::ones({ 1, 3, 5, 5 }, torch::kCUDA).to(torch::kFloat32);
auto thing3 = torch::cat({ thing1, thing2 }, 1);
I get this error on torch::cat
Unhandled exception at 0x00007FFA2C4FA799 in TouchDesigner.exe: Microsoft C++ exception: c10::Error at memory location 0x00000046B7DF54D0.
at this line:
https://github.com/pytorch/pytorch/blob/7cdf786a07b2ca434983a0a508e2e42c75a4697d/aten/src/ATen/core/op_registration/hacky_wrapper_for_legacy_signatures.h#L100
template<class FuncPtr, class... ParametersBeforeTensorOptions, class... ParametersAfterTensorOptions>
struct with_scattered_tensor_options_<FuncPtr, guts::typelist::typelist<ParametersBeforeTensorOptions...>, guts::typelist::typelist<ParametersAfterTensorOptions...>> final {
static decltype(auto) wrapper(
ParametersBeforeTensorOptions... parameters_before,
optional<ScalarType> scalar_type,
optional<Layout> layout,
optional<Device> device,
optional<bool> pin_memory,
ParametersAfterTensorOptions... parameters_after) {
return (*FuncPtr::func_ptr())(
std::forward<ParametersBeforeTensorOptions>(parameters_before)...,
TensorOptions().dtype(scalar_type).device(device).layout(layout).pinned_memory(pin_memory),
std::forward<ParametersAfterTensorOptions>(parameters_after)...
);
}
};
}
template<class FuncPtr>
Note that I’m running this code inside a DLL compiled for TouchDesigner and TouchDesigner is using CUDA 10.1 to match my libtorch build. If I run the same code inside a simple exe, there’s no error. Any clue what’s going on or clue to proceed? Thank you! |
st181178 | It was an issue with DLLs…
TouchDesigner has its own DLLs in C:/Program Files/Derivative/TouchDesigner/bin. These DLLs get loaded when TouchDesigner opens.
My custom plugin is in Documents/Derivative/Plugins and all of the libtorch DLLs are also there. My thought was that having everything in this Plugins folder would be sufficient: If my custom dll looked for a dependent DLL it would find it there as a sibling. However, I needed to paste the libtorch DLLs into TouchDesigner’s own bin folder. I didn’t trace down which specific DLL was the dealbreaker. Whichever one is most relevant to torch::cat I guess… |
st181179 | Hi,
I want to run MLP model on my accelerator.
My plan is to add a custom jit operation that operates like the nn.Linear layer but faster by using my accelerator.
However, the accelerator only supports int8 operation, so I need some quantization as well.
How can I add the custom operation that utilizes torch.quantization?
Thanks in advance! |
st181180 | Is it possible to cast intlist (int []) to a tensor and vice versa in the JIT scriptmodule’s graph? Curious to know if we can insert any operator which does this for us. |
st181181 | tensor, probably:
@torch.jit.script
def fn(l : List[int]):
return torch.tensor(l)
fn.graph
gives
graph(%l.1 : int[]):
%4 : bool = prim::Constant[value=0]()
%2 : None = prim::Constant()
%5 : Tensor = aten::tensor(%l.1, %2, %2, %4)
return (%5)
Best regards
Thomas |
st181182 | I was going through the source code and found the following two operators
"aten::_list_to_tensor(int[] self) -> Tensor"
"aten::_tensor_to_list(Tensor self) -> int[]"
I don’t see these used anywhere in the code itself. Can these be used to convert an int[] to Tensor and if needed a Tensor to int[]? |
st181183 | you can use::
torch.jit.script
def foo(x: torch.Tensor):
out: List[int] = x.tolist()
return out |
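For example, foo(torch.arange(5)) returns [0, 1, 2, 3, 4]. |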
st181184 | Does anyone know why this code gives the following error when loading the ONNX model?
import torch
import torch.nn as nn
import onnxruntime

@torch.jit.script
def bar(x):
zero = torch.tensor(0.0, dtype=torch.float32)
one = torch.tensor(1.0, dtype=torch.float32)
if x.eq(zero):
y = zero
else:
y = one
return y
class Foo(nn.Module):
def forward(self, x):
return bar(x)
foo = Foo()
dummy_x = torch.tensor(0.0, dtype=torch.float32)
torch.onnx.export(foo, dummy_x, "./foo.onnx", input_names=["x"], output_names=["y"])
foo_onnx = onnxruntime.InferenceSession("./foo.onnx")
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: D:\2\s\onnxruntime\core\graph\graph.cc:912 onnxruntime::Graph::InitializeStateFromModelFileGraphProto This is an invalid model. Graph output (1) does not exist in the graph. |
st181185 | I just tested this repro with PyTorch and ONNXRuntime nightly.
The error I see is:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Type Error: Type ‘tensor(float)’ of input parameter (x) of operator (Equal) in node (Equal_3) is invalid.
However, by changing dtype to torch.int in the test, this ONNXRuntime error is no longer thrown.
Looks like ONNX Equal op takes float input data type, so this might be an issue with ONNXRuntime.
Will follow up on this. |
st181186 | Actually setting opset_version=11 would fix this issue. ONNX Equal op supports float types starting from opset 11. |
st181187 | Got it. The change is as follows:
torch.onnx.export(foo, dummy_x, "D:/foo.onnx", input_names=["x"], output_names=["y"], opset_version=11)
Now it runs correctly:
x = np.asarray(1.0, dtype=np.float32)
foo_onnx.run(["y"], {"x": x}) |
st181188 | I am building a custom C++ shared library that uses pytorch C++ libraries. But I am getting one undefined symbol - “typeinfo for torch::jit::GraphAttr”. I have included torch and c10 into my target_link_libraries. Am I missing additional pytorch libraries to link ? |
st181189 | Solved by yetanadur in post #2
resolved the issue. It seems when I build pytorch code with
python setup.py install the shared object executables are installed without execution permission on files - as
-rw-r--r-- . After manually correcting file permissions with
chmod +x <file.so> , the link issues are resolved. |
st181190 | resolved the issue. It seems when I build pytorch code with
python setup.py install the shared object executables are installed without execution permission on files - as
-rw-r--r-- . After manually correcting file permissions with
chmod +x <file.so> , the link issues are resolved. |
st181191 | Hi, I am working on a quantized model in C++. I have trained and quantized the model in Python and loaded it into C++ (post-training quantization). I wonder if I can parse the jitted model parameters (TorchScript format) in C++? I could not find any layer-unpacking modules in torch::jit::script::Module.
After loading the model, I can dump the scriptModule modules and parameters using (torch::jit::script::Module) m->dump():
--------------------------------------------------------------------------------------------------------------------
dumping module module __torch__.Net {
parameters {
}
attributes {
training = False
(Here->) fc1 = <__torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU object at 0x5555566b75f0>
Relu1 = <__torch__.torch.nn.modules.linear.Identity object at 0x5555566b3d40>
fc2 = <__torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU object at 0x5555566a7030>
Relu2 = <__torch__.torch.nn.modules.linear.Identity object at 0x5555567b1440>
droput2 = <__torch__.torch.nn.modules.dropout.Dropout object at 0x5555567b1640>
fc3 = <__torch__.torch.nn.quantized.modules.linear.Linear object at 0x5555567b22c0>
quant = <__torch__.torch.nn.quantized.modules.Quantize object at 0x5555567b2ad0>
dequant = <__torch__.torch.nn.quantized.modules.DeQuantize object at 0x5555567b8350>
logMax = <__torch__.torch.nn.modules.activation.LogSoftmax object at 0x5555567b87c0>
}
methods {
method forward {
graph(%self.1 : __torch__.Net,
%x.1 : Tensor):
%7 : int = prim::Constant[value=-1]() # ~//MNIST_PyTorch_Quantize.py:50:19
%8 : int = prim::Constant[value=784]() # ~//MNIST_PyTorch_Quantize.py:50:23
%3 : __torch__.torch.nn.quantized.modules.Quantize = prim::GetAttr[name="quant"](%self.1)
%x0.1 : Tensor = prim::CallMethod[name="forward"](%3, %x.1) # :0:0
%9 : int[] = prim::ListConstruct(%7, %8)
%x1.1 : Tensor = aten::view(%x0.1, %9) # ~//MNIST_PyTorch_Quantize.py:50:12
%12 : __torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU = prim::GetAttr[name="fc1"](%self.1)
%x2.1 : Tensor = prim::CallMethod[name="forward"](%12, %x1.1) # :0:0
%16 : __torch__.torch.nn.modules.linear.Identity = prim::GetAttr[name="Relu1"](%self.1)
%x3.1 : Tensor = prim::CallMethod[name="forward"](%16, %x2.1) # :0:0
%20 : __torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU = prim::GetAttr[name="fc2"](%self.1)
%x4.1 : Tensor = prim::CallMethod[name="forward"](%20, %x3.1) # :0:0
%24 : __torch__.torch.nn.modules.linear.Identity = prim::GetAttr[name="Relu2"](%self.1)
%x5.1 : Tensor = prim::CallMethod[name="forward"](%24, %x4.1) # :0:0
%28 : __torch__.torch.nn.modules.dropout.Dropout = prim::GetAttr[name="droput2"](%self.1)
%x6.1 : Tensor = prim::CallMethod[name="forward"](%28, %x5.1) # :0:0
%32 : __torch__.torch.nn.quantized.modules.linear.Linear = prim::GetAttr[name="fc3"](%self.1)
%x7.1 : Tensor = prim::CallMethod[name="forward"](%32, %x6.1) # :0:0
%36 : __torch__.torch.nn.quantized.modules.DeQuantize = prim::GetAttr[name="dequant"](%self.1)
%x8.1 : Tensor = prim::CallMethod[name="forward"](%36, %x7.1) # :0:0
%40 : __torch__.torch.nn.modules.activation.LogSoftmax = prim::GetAttr[name="logMax"](%self.1)
%42 : Tensor = prim::CallMethod[name="forward"](%40, %x8.1) # :0:0
return (%42)
}
}
submodules {
module __torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU {
parameters {
}
attributes {
training = False
in_features = 784
out_features = 512
scale = 0.048926487565040588
zero_point = 0
_packed_params = <__torch__.torch.nn.quantized.modules.linear.LinearPackedParams object at 0x5555566bcc40>
}
methods {
method forward {
graph(%self.1 : __torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU,
%input.1 : Tensor):
%4 : __torch__.torch.nn.quantized.modules.linear.LinearPackedParams = prim::GetAttr[name="_packed_params"](%self.1)
%5 : Tensor = prim::GetAttr[name="_packed_params"](%4)
%7 : float = prim::GetAttr[name="scale"](%self.1)
%9 : int = prim::GetAttr[name="zero_point"](%self.1)
%Y_q.1 : Tensor = quantized::linear_relu(%input.1, %5, %7, %9) # ~/python3.7/site-packages/torch/nn/intrinsic/quantized/modules/linear_relu.py:29:14
return (%Y_q.1)
}
}
submodules {
(+) module __torch__.torch.nn.quantized.modules.linear.LinearPackedParams {
}
}
module __torch__.torch.nn.modules.linear.Identity {} (+)
module __torch__.torch.nn.intrinsic.quantized.modules.linear_relu.LinearReLU {} (+)
module __torch__.torch.nn.modules.linear.Identity {} (+)
module __torch__.torch.nn.modules.dropout.Dropout {} (+)
module __torch__.torch.nn.quantized.modules.linear.Linear {} (+)
module __torch__.torch.nn.quantized.modules.Quantize {} (+)
module __torch__.torch.nn.quantized.modules.DeQuantize {} (+)
module __torch__.torch.nn.modules.activation.LogSoftmax {} (+)
} // end of submodules
} // end of dumping module module __torch__.Net
--------------------------------------------------------------------------------------------------------
Notes: (+) means there are collapsed lines omitted to save space.
Torch version 1.6.0+.
My Questions:
1- During model load, are the module layers packed again to a format like in torch::nn::Linear and torch::nn::Conv1d … etc? how can I access them?
2- Are the pointers printed in the dump (like the line marked with “(Here->)” ) for Python objects and methods or they are C++ objects and methods? are they cast-able to the formats in torch::nn:* ?
3- What is the recommended procedure to structure the model again in terms of the number of layers , the attributes\configuration of each layer and the corresponding trained weights from the jitted\torchscript format ? |
st181192 | 1- During model load, are the module layers packed again to a format like in torch::nn::Linear and torch::nn::Conv1d … etc? how can I access them?
What is torch::nn::Linear? Are you talking about the PyTorch C++ API? The weights are packed in linear modules; you can use https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L34 to get the unpacked weights.
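For example, a rough Python sketch with a freshly constructed quantized Linear (my own illustration with made-up sizes, not your loaded model):
import torch
import torch.nn.quantized as nnq

qlin = nnq.Linear(784, 512)
w, b = qlin._weight_bias()   # unpacked quantized weight and float bias
print(w.shape, w.dtype)      # torch.Size([512, 784]) torch.qint8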
Are the pointers printed in the dump (like the line marked with “(Here->)” ) for Python objects and methods or they are C++ objects and methods? are they cast-able to the formats in torch::nn:* ?
These are TorchScript objects I think. I’m not sure about the relationship between the pytorch c++ API and TorchScript, cc @Michael_Suo could you comment on this?
What is the recommended procedure to structure the model again in terms of the number of layers , the attributes\configuration of each layer and the corresponding trained weights from the jitted\torchscript format ?
Not sure I understand the question, could you be more concrete? |
st181193 | jerryzh168:
what is torch::nn::Linear ? are you talking about the pytorch c++ API? the weights are packed in linear modules, you can use https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L34 to get the unpacked weights.
Yes, I mean the C++ API. As mentioned in the description, I have trained the model in Python and exported it as TorchScript to C++. Is the function "_weight_bias" accessible in the TorchScript format (I can see that it has the decorator @torch.jit.export)? If yes, can you please show how to use it with loaded TorchScript models?
Rephrasing question 3: Given a model that is trained in Python, saved as TorchScript and loaded into the C++ frontend, I want to extract the number of layers, the type of each layer (Linear, Conv1d, …), the size of each layer and the weights associated with each layer… How can I do that? |
st181194 | k.osama:
torchscript format () (I can see that it is has the decorator @torch.jit.export) ? If yes, can you show please how to use
Here is an example of calling a method on a TorchScript Module: https://codebrowser.bddppq.com/pytorch/pytorch/torch/csrc/jit/passes/quantization/insert_quant_dequant.cpp.html#978
If by C++ API you mean TorchScript, then this should work. There is another C++ API for authoring models in C++ (https://pytorch.org/docs/stable/cpp_index.html#authoring-models-in-c) which I'm not familiar with.
For question 3: the API of torch::jit::Module can be found in https://codebrowser.bddppq.com/pytorch/pytorch/torch/csrc/jit/api/module.h.html#torch::jit::Module |
st181195 | Hi, I am trying to create a TorchScript module of Facebook's deep learning recommendation model (DLRM) using the torch.jit.script() method. The conversion fails owing to the following runtime error:
RuntimeError:
cannot call a value of type 'Tensor':
File "dlrm_s_pytorch.py", line 275
# return x
# approach 2: use Sequential container to wrap all layers
return layers(x)
~~~~~~ <--- HERE
'DLRM_Net.apply_mlp' is being compiled since it was called from 'DLRM_Net.sequential_forward'
File "dlrm_s_pytorch.py", line 343
def sequential_forward(self, dense_x, lS_o, lS_i):
# process dense features (using bottom mlp), resulting in a row vector
x = self.apply_mlp(dense_x, self.bot_l)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
# debug prints
# print("intermediate")
'DLRM_Net.sequential_forward' is being compiled since it was called from 'DLRM_Net.forward'
File "dlrm_s_pytorch.py", line 337
def forward(self, dense_x, lS_o, lS_i):
if self.ndevices <= 1:
return self.sequential_forward(dense_x, lS_o, lS_i)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
else:
return self.parallel_forward(dense_x, lS_o, lS_i)
To recreate the error:
Clone the DLRM repository and install the requirements.
<activate virtual environment>
git clone https://github.com/facebookresearch/dlrm.git
cd dlrm
pip install -r requirements.txt
Add the following line in dlrm_s_pytorch.py after line 179 to solve a type conversion issue:
n = n.item()
Add the following snippet in dlrm_s_pytorch.py after the architecture object is initialized:
dlrm_jit = torch.jit.script(dlrm)
sys.exit() # successful exit after compiling, no need to train
Run the below command:
python dlrm_s_pytorch.py --arch-sparse-feature-size=32 --arch-embedding-size="70446-298426-33086-133729-61823" --data-size=20480 --arch-mlp-bot="256-256-128-32" --arch-mlp-top="256-64-1" --max-ind-range=400000 --data-generation=random --loss-function=bce --nepochs=5 --round-targets=True --learning-rate=1.0 --mini-batch-size=2048 |
st181196 | Hello,
I tried to compare the runtime performance of a convolution layer I implemented from scratch vs. the TorchScript version of that layer vs. the torch.nn.Conv2d() module, for 100 iterations with input (128, 3, 28, 28), out_channels=64, kernel_size=3.
Convolution layer from scratch in CUDA -> 9.366 seconds
torchscript convolution layer from scratch in CUDA -> 6.636 seconds
torch.nn.conv2d() -> 475.614 milliseconds.
Is there any problem in my approach, and how can I optimize it further? I request your help with this problem.
My code
import torch
import torch.nn as nn

class conv2D(nn.Module):
def __init__(self, in_channel, out_channel, kernel_size):
super(conv2D,self).__init__()
self.weight = torch.nn.Parameter(torch.ones(out_channel,in_channel,kernel_size, kernel_size))
self.bias = torch.nn.Parameter(torch.zeros(out_channel))
self.kernel_size = kernel_size
self.in_channel = in_channel
self.out_channel = out_channel
def forward(self, image):
img_height = image.shape[3]
img_width = image.shape[2]
batch_size = image.shape[0]
out_height = img_height-self.kernel_size+1
out_width = img_width-self.kernel_size+1
output = torch.zeros(batch_size,self.out_channel,out_width,out_height)
for k in range(batch_size):
for i in range(out_height):
for j in range(out_width):
temp = torch.sum(image[k,:,j:j+self.kernel_size,i:i+self.kernel_size]*self.weight,dim=(1,2,3))
output[k,:,i,j]=torch.add(temp,self.bias)
return output
x = torch.ones(128,3,28,28).to("cuda")
c = conv2D(3,64,3).to("cuda")
c_s = torch.jit.script(c).to("cuda")
c_s(x)
Scripting the model and running with a sample input to get an optimized graph
with torch.autograd.profiler.profile(use_cuda=True) as prof:
with torch.no_grad():
for i in range(100):
c(x)
print(prof.table())
Profiling both the scripted and normal method.
with torch.autograd.profiler.profile(use_cuda=True) as prof:
with torch.no_grad():
for i in range(100):
c_s(x)
print(prof.table()) |
st181197 | Convolutions are not easy to optimize, and I'm not sure there is any JIT compiler in the wild that is currently able to optimize these layers to competitive performance (please let me know if you find one). |
st181198 | I am currently studying TorchScript. I came across a technique called profile-guided optimization that is carried out in TorchScript, which gathers information about every tensor and its operations.
Profile-guided optimization uses Prim:Profile to record this information. My question is: is there any way to visualize the graph (intermediate representation) with Prim:Profile and Prim:guard in PyTorch 1.5?
Please help me with this. |
st181199 | Solved by eellison in post #4
old_prof_exec_state = torch._C._jit_set_profiling_executor(True)
old_prof_mode_state = torch._C._jit_set_profiling_mode(True)
this enables the profiling executor.
torch._C._jit_set_num_profiled_runs(num_runs)
how many profiling runs we want to do before we optimize the graph.
@torch.jit.scr… |