st181400 | Is there any way for me to change the names of the input variables of a traced module? I have looked into the documentation of torch.jit.trace, and it seems they cannot be set beforehand either, e.g. with a dictionary.
Basically I'm trying to mimic the behavior of the input_names argument in torch.onnx.export |
st181401 | I don't believe we expose a way to do this. Names are not semantically meaningful in a JIT graph anyway; they are only for human readability. |
st181402 | Thanks for the reply.
What I want to do is use keyword arguments as inputs to forward.
In my traced module there are 3 arguments; the names of the first two have already been properly inferred by tracing, but the last one has not.
My question now becomes: how does the tracer infer the argument name for this? |
st181403 | The tracer looks in the Python interpreter frame state's local variables (f_locals) for names (see here). So if, for a given traced value, it finds a corresponding local, it will add that as the name in the graph.
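A quick way to see this in action (a minimal sketch of my own; the exact names you get can vary by version):
import torch

def f(x):
    hidden = torch.relu(x)  # "hidden" is a local in the Python frame, so the
    return hidden * 2       # tracer can pick it up as the value's name

traced = torch.jit.trace(f, torch.randn(3))
print(traced.graph)  # the relu output should appear as %hidden |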
st181404 | Hey, I'm quite stuck trying to run a torch::jit::Graph with the C++ API; I have already searched for docs or implementations but failed.
My main purpose is to divide a JIT graph into subgraphs (I would like to control which nodes go to each subgraph). My plan is to parse a graph into nodes, create new graphs from each sequence of nodes (with my own grouping decisions), and then run each graph one by one.
This is what I have already tried:
#include <torch/script.h>
#include <iostream>
int main() {
  torch::jit::script::Module my_model = torch::jit::load("<MY_PATH>/model.pkl");
  torch::jit::script::Method m = my_model.get_method("forward");
  auto g = m.graph();
  auto cu = std::make_shared<torch::jit::script::CompilationUnit>();
  c10::QualifiedName name("forward");
  torch::Function *fn = cu->create_function(std::move(name), g);
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::rand({1, 1, 3, 3}));
  auto output = fn->operator()(inputs);
  std::cout << output << std::endl;
}
based on model.pkl that was produced by the python code below:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(1, 1, 3)

    def forward(self, x):
        return self.conv(x)

scripted_foo = torch.jit.script(Net())
scripted_foo.save('model.pkl')
This C++ approach produces the following error:
libc++abi.dylib: terminating with uncaught exception of type c10::Error: forward() Expected a value of type '__torch__.Net' for argument 'self' but instead found type 'Tensor'.
Position: 0
Declaration: forward(__torch__.Net self, Tensor x) -> (Tensor) (checkArg at ../aten/src/ATen/core/function_schema_inl.h:194)
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x1038ce787 in libc10.dylib)
frame #1: c10::FunctionSchema::checkArg(c10::IValue const&, c10::Argument const&, c10::optional<unsigned long>) const + 719 (0x11285ee6f in libtorch.dylib)
frame #2: c10::FunctionSchema::checkAndNormalizeInputs(std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, c10::IValue, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, c10::IValue> > > const&) const + 228 (0x11285ddf4 in libtorch.dylib)
frame #3: torch::jit::Function::operator()(std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, c10::IValue, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, c10::IValue> > > const&) + 49 (0x11285d221 in libtorch.dylib)
frame #4: main + 686 (0x1037eb59e in deep_project)
frame #5: start + 1 (0x7fff700f0cc9 in libdyld.dylib)
frame #6: 0x0 + 1 (0x1 in ???)
Process finished with exit code 6
I assume that I need to push a self of type __torch__.Net somehow, but I don't know how; I would be happy for some help.
Thanks a lot. |
st181405 | Solved by Michael_Suo in post #3
For methods like this, as in Python, the first argument to the graph is self, which represents the module object instance. The Module API takes care of adding self to the stack, see here.
st181406 | For methods like this, as in Python, the first argument to the graph is self, which represents the module object instance. The Module API takes care of adding self to the stack, see here. In other words, call the method through the Module (e.g. my_model.forward(inputs), or my_model.get_method("forward")(inputs)), and self is inserted for you. |
st181407 | Thanks a lot, it works. I have another question though: do you know the cleanest way to create a graph from a couple of nodes? |
st181408 | Probably using the base Graph API for node creation and insertion is the cleanest. For an example of how to take "foreign" Node*s not owned by a graph and copy them into the graph, the Graph::copy method is a useful example: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.cpp#L680 |
st181409 | Hi,
I am trying to script a PyTorch model and am running into the following issue.
My original code uses subscripting as follows:
for i in range(self.n_layers):
    self.in_layers[i](a),
    self.cond_layers[i](b) ...
Both in_layers and cond_layers are of type torch.nn.ModuleList().
When trying to compile I receive an error that these types are not subscriptable. To work around this issue, I created a new function that receives a torch.nn.ModuleList and iterates over each item to find the right one. I am trying to use MyPy annotations to get this to compile as follows:
def moduleListChoice(self, ml, choice):
    # type: (torch.nn.ModuleList, int)
    moduleNumber = 0
    for module in ml:
        if moduleNumber == choice:
            return module
        moduleNumber = moduleNumber + 1
    print("Module not found")
The problem is that I can't figure out what the type should be for the first argument; torch.nn.ModuleList doesn't work. Any thoughts on how to get this to work?
Thanks. |
st181410 | This PR should have added support for indexing into nn.ModuleList. Which PyTorch version are you using? If you are using an older one (pre-1.5), could you update to the latest stable version and rerun your script, please?
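For reference, a minimal sketch of the pattern that scripts cleanly (my own example, assuming PyTorch >= 1.5; TorchScript compiles the loop by unrolling it over the submodules):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, n_layers=3):
        super(Net, self).__init__()
        self.in_layers = nn.ModuleList(nn.Linear(8, 8) for _ in range(n_layers))

    def forward(self, x):
        # Iterating (or enumerating) a ModuleList is scriptable.
        for i, layer in enumerate(self.in_layers):
            x = layer(x)
        return x

scripted = torch.jit.script(Net())
print(scripted(torch.randn(2, 8)).shape)  # torch.Size([2, 8]) |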
st181411 | Hi,
I am trying to script some PyTorch code that utilizes ModuleList. The code makes use of subscripting of a ModuleList object as follows:
for i in range(self.n_layers):
    acts = fused_add_tanh_sigmoid_multiply(
        self.in_layers[i](...),
        self.cond_layers[i](...),
        torch.IntTensor([self.n_channels]))
I am met with the following error:
RuntimeError:
'module' object is not subscriptable:
at /workspace/tacotron2/waveglow/model.py:147:16
    for i in range(self.n_layers):
        acts = fused_add_tanh_sigmoid_multiply(
            self.in_layers[i](...),
            ~~~~~~~~~~~~~~~~ <--- HERE
            self.cond_layers[i](...),
            torch.IntTensor([self.n_channels]))
Any help would be greatly appreciated.
Thanks. |
st181412 | Is this a double post from here, or are these errors different in some sense? |
st181413 | I have trained a model containing a GRU. When I try to convert it to JIT, I found that the outputs of the original model and the JIT model are different. This problem can be reproduced with the code below. It's very strange that other operations (like fc/conv) are fine, i.e. the output of print(model(y) - jit_model(y)) is a zero matrix, while for the GRU it is not.
import torch
from torch import nn

class gruModel(nn.Module):
    def __init__(self):
        super(gruModel, self).__init__()
        self.biGRU = nn.GRU(256*5, 100, num_layers=1, bidirectional=True, batch_first=True, bias=True)
        # self.fc = nn.Linear(256*5, 200)

    def forward(self, x):
        # x = self.fc(x)
        x, _ = self.biGRU(x, torch.zeros(2, x.size(0), 100, device=x.device))
        return x

if __name__ == '__main__':
    y = torch.rand([1, 256, 1280]).cuda()
    model = gruModel()
    model = torch.nn.DataParallel(model).to(torch.device('cuda'))
    model.eval()
    traced_script = torch.jit.trace(model.module, y)
    traced_script.eval()
    traced_script.save("gru_jit.pt")
    jit_model = torch.jit.load("gru_jit.pt")
    print(model(y) - jit_model(y)) |
st181414 | Solved by lzj9072 in post #2
I have found the solution…
If the model contains a GRU layer, you should load the model on the CPU and then move it to the GPU:
model = torch.jit.load(model_path, map_location=torch.device("cpu"))
model = model.cuda() |
st181415 | I have found the solution…
If the model contains a GRU layer, you should load the model on the CPU and then move it to the GPU:
model = torch.jit.load(model_path, map_location=torch.device("cpu"))
model = model.cuda() |
st181416 | Hi @lzj9072, glad you found the solution to your problem. There's an open issue for this that you can follow: Problem in converting GRU to jit |
st181417 | Hi everyone,
I'm trying to use the deformable convolution C++ extensions from the mmdetection repo without a setup.py, by compiling them just in time with torch.utils.cpp_extension.load() as suggested here. However, I'm having some trouble giving the load() function the correct path.
My folder structure is as follows:
├── dcn
│ ├── deform_conv.py
│ ├── deform_pool.py
│ ├── __init__.py
│ └── src
│ ├── deform_conv_cuda.cpp
│ ├── deform_conv_cuda_kernel.cu
│ ├── deform_pool_cuda.cpp
│ └── deform_pool_cuda_kernel.cu
└── test_dcn.py
In test_dcn.py I import the deformable convolutions with from dcn import DeformConvPack and in the file deform_conv.py I inserted the following at the top:
# deform_conv.py
from torch.utils.cpp_extension import load
deform_conv_cuda = load(name='deform_conv_cuda', sources=['src/deform_conv_cuda.cpp', 'src/deform_conv_cuda_kernel.cu'])
# Rest of the code
# ...
I expected this to work but the compilation fails because the .cpp and .cu files cannot be found. If I instead specify the sources as 'dcn/src/deform_conv_cuda.cpp' and 'dcn/src/deform_conv_cuda_kernel.cu' it works.
Could someone explain the logic behind this to me? Thank you very much =) |
st181418 | Basically, the working directory of the process and the Python path used for module imports are distinct things; since compilation creates child processes, and the module calling load() lives in an entirely different directory, the working directory is used as the base for relative source paths. |
st181419 | Thanks for the answer! In the meantime I came to the same conclusion and used the following snippet as a workaround. That way I can import the nn.Module defined in deform_conv.py from anywhere I want without having to worry about getting the path right. I'm just a bit surprised that the torch.utils.cpp_extension.load() function doesn't take care of that.
# deform_conv.py
import os
from torch.utils.cpp_extension import load
parent_dir = os.path.dirname(os.path.abspath(__file__))
sources = ['src/deform_conv_cuda.cpp', 'src/deform_conv_cuda_kernel.cu'] # Paths of sources relative to this file
abs_sources = [os.path.join(parent_dir, source) for source in sources] # Absolute paths of sources
deform_conv_cuda = load(name='deform_conv_cuda', sources=abs_sources) # JIT compilation of extensions |
st181420 | Hey, guys
I trained an image segmentation model and want to do inference using libtorch. The model predicts a mask from an input image.
I traced the model for later C++ usage.
#define CV_8UC3 CV_MAKETYPE(CV_8U,3)
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/types_c.h>
#include <cuda.h>
#include <cuda_runtime.h>

using namespace cv;
using namespace std;

int main() {
    std::string model_path = "D:/project/WDD/model_cpu.pt";
    std::string image_path = "D:/data/glass_crack/converted/84/img.png";
    torch::jit::script::Module model = torch::jit::load(model_path);
    assert(module != nullptr);
    std::cout << "load model sucessfully.\n";
    // load img and normalize
    Mat img = imread(image_path, 1);
    cv::cvtColor(img, img, CV_BGR2RGB);
    if (img.empty())
    {
        printf("could not show image...");
        return -1;
    }
    cv::Mat img_float;
    img.convertTo(img_float, CV_32FC3, 1.0f / 255.0f);
    auto tensor_image = torch::from_blob(img_float.data, { 1, img.cols, img.rows, 3 });
    tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
    // normalize
    tensor_image[0][0] = tensor_image[0][0].sub(0.485).div(0.229);
    tensor_image[0][1] = tensor_image[0][1].sub(0.456).div(0.224);
    tensor_image[0][2] = tensor_image[0][2].sub(0.406).div(0.225);
    std::vector<torch::jit::IValue> inputs;
    inputs.emplace_back(tensor_image);
    // Execute the model and turn its output into a tensor.
    at::Tensor out_tensor = model.forward(inputs).toTensor();
    // convert result to CV mat and save
    out_tensor = out_tensor.squeeze().detach();
    out_tensor = out_tensor.mul(255).clamp(0, 255).to(torch::kU8);
    std::cout << out_tensor.sizes() << '\n';
    cv::Mat resultImg(img.rows, img.cols, CV_8UC3);
    std::memcpy((void*)resultImg.data, out_tensor.data_ptr(), sizeof(torch::kU8) * out_tensor.numel());
    imwrite("landscape_output.jpg", resultImg);
    std::cout << "Done!\n";
    while (1);
}
It loaded the model and ran forward successfully; however, the problem is the output:
The output is not as expected, totally nonsense…
Can someone help? |
st181421 | Solved by ptrblck in post #4
Thanks for the update!
The current output still looks as if the copying of the data is running into a row/columns mismatch.
You see these diagonal “edges”, which would correspond to the desired width, but are shifted in each row.
I would recommend to check the shape of the output, the created Ope… |
st181422 | Could you try to permute the dimensions again to NHWC before converting the output to an OpenCV array? |
st181423 | thanks for reply~
the output mask has a shape of [N, C, H, W], here N =1, C=1.
after
out_tensor = out_tensor.squeeze().detach();
out_tensor’s shape: [H, W].
As you recommended:
out_tensor = out_tensor.squeeze(0).detach().permute({ 1,2,0 });
I used this line of code instead, so the shape becomes [H, W, 1]. The mask:
not working…
I found that C = 1, so I changed
cv::Mat resultImg(img.rows, img.cols, CV_8UC3);
to
cv::Mat resultImg(img.rows, img.cols, CV_8UC1);
then the predicted mask:
… |
st181424 | Thanks for the update!
The current output still looks as if the copying of the data is running into a row/column mismatch.
You see these diagonal "edges", which would correspond to the desired width, but are shifted in each row.
I would recommend checking the shape of the output and of the created OpenCV image, and making sure that the output tensor is contiguous in memory. |
st181425 | problem solved. thanks for help.
Yes, the row/col problem. the seg model should outputs a mask which has the same shape of input image. However, a input image with 450x650 has a predicted mask of shape 448x648. python predicts the same shape but libtorch does not.
so I modified:
cv::Mat resultImg(out_tensor.sizes()[0], out_tensor.sizes()[1], CV_8UC1);
problem solved. |
st181426 | Good to hear you solved the issue!
However, why doesn’t libtorch output the same shape as the Python model?
Did you narrow down this issue? It sounds like a bug. |
st181427 | Caffe has a different implementation of convolutional layers compared with PyTorch.
If the libtorch code comes from Caffe, that would explain it. |
st181428 | Hi,
I am trying to convert a torch model to TorchScript. I have removed all unsupported operations from my model (the ones flagged by torch.jit.script(model)), but now the conversion fails with the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-d068ba517b85> in <module>
1 import torch
----> 2 torchscript_model = torch.jit.script(fa)
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1201
1202 if isinstance(obj, torch.nn.Module):
-> 1203 return torch.jit.torch.jit._recursive.recursive_script(obj)
1204
1205 qualified_name = _qualified_name(obj)
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in recursive_script(mod, exclude_methods)
171 filtered_methods = filter(ignore_overloaded, methods)
172 stubs = list(map(make_stub, filtered_methods))
--> 173 return copy_to_script_module(mod, overload_stubs + stubs)
174
175
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in copy_to_script_module(original, stubs)
93 setattr(script_module, name, item)
94
---> 95 torch.jit._create_methods_from_stubs(script_module, stubs)
96
97 # Now that methods have been compiled, take methods that have been compiled
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in create_method_from_fn(module, fn)
183 # We don't want to call the hooks here since the graph that is calling
184 # this function is not yet complete
--> 185 torch.jit._create_methods_from_stubs(module, (stub,))
186 return stub
187
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in make_strong_submodule(field, module, parent)
193
194 # Convert the module to a ScriptModule
--> 195 new_strong_submodule = recursive_script(module)
196
197 # Install the ScriptModule on the python side
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in recursive_script(mod, exclude_methods)
171 filtered_methods = filter(ignore_overloaded, methods)
172 stubs = list(map(make_stub, filtered_methods))
--> 173 return copy_to_script_module(mod, overload_stubs + stubs)
174
175
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in copy_to_script_module(original, stubs)
93 setattr(script_module, name, item)
94
---> 95 torch.jit._create_methods_from_stubs(script_module, stubs)
96
97 # Now that methods have been compiled, take methods that have been compiled
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in create_method_from_fn(module, fn)
183 # We don't want to call the hooks here since the graph that is calling
184 # this function is not yet complete
--> 185 torch.jit._create_methods_from_stubs(module, (stub,))
186 return stub
187
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/_recursive.py in create_method_from_fn(module, fn)
183 # We don't want to call the hooks here since the graph that is calling
184 # this function is not yet complete
--> 185 torch.jit._create_methods_from_stubs(module, (stub,))
186 return stub
187
~/.conda/envs/pytorch/lib/python3.6/site-packages/torch/jit/__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
RuntimeError: bad optional access
I have no idea how to investigate or solve this problem. Is there any way to get to the root of the issue?
Thanks,
Julien |
st181429 | This looks like a bug; would you mind filing an issue on GitHub with a repro so we can work on a fix? |
st181430 | Thanks for your answer.
After investigating the model (commenting out blocks of code until the error is gone), I have traced the error back to creating a tensor manually from a blend of 0-dim tensors and floats.
E.g. torch.tensor([[my_tensor, my_tensor]]) works
E.g. torch.tensor([[my_tensor, 0.0]]) raises the error
I have solved it in my case by using:
torch.cat([my_tensor.view(1), torch.tensor([0.0])]).unsqueeze(0)
or something similar.
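For anyone hitting the same thing, here is an eager-mode sketch of the workaround (my_tensor is an illustrative 0-dim tensor):
import torch

my_tensor = torch.tensor(1.5)  # 0-dim tensor
# Instead of torch.tensor([[my_tensor, 0.0]]), which triggered the error when
# scripted, build the same [1, 2] tensor from tensor pieces only:
row = torch.cat([my_tensor.view(1), torch.tensor([0.0])]).unsqueeze(0)
print(row)  # tensor([[1.5000, 0.0000]])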
Julien |
st181431 | That’s great debugging!
An issue would still be good, as the error message wasn’t really helpful. |
st181432 | I am building a library that converts a TorchScript module to MXNet Gluon. Given that the two languages are very similar, I have an easy time parsing the .code and recreating the model. However, I am not able to extract the constants that appear in the code (e.g. CONSTANTS.c0). Any idea how I can obtain their values from the JIT-compiled module?
Thanks a lot! |
st181433 | Are you trying to extract the constants from the serialized binary or in C++? If it's the former, you'll have to parse the constants.pkl in the serialized zip file, which is in the same format as Python's pickler (our reader is here). |
st181434 | Shouldn’t be too hard to add an endpoint that also passes back the constants table, given that we populate it with any call to PythonPrint. @ifeherva can you file a feature request on github for this? |
st181435 | I am currently using the .code attribute of the TracedModule class. For example, a resnet50 backbone with a constant scaling factor looks like this:
def forward(self,
            input: Tensor) -> Tensor:
    _0 = self.model.relu
    _1 = (self.model.bn1).forward((self.model.conv1).forward(input, ), )
    _2 = self.model.layer1
    _3 = (self.model.maxpool).forward((_0).forward(_1, ), )
    _4 = self.model.layer3
    _5 = (self.model.layer2).forward((_2).forward(_3, ), )
    _6 = self._pool
    _7 = (self.model.layer4).forward((_4).forward(_5, ), )
    _8 = (_6).forward(_7, )
    _9 = ops.prim.NumToTensor(torch.size(_8, 0))
    x = torch.view(_8, [int(_9), -1])
    return torch.mul(x, CONSTANTS.c0)
I didn't consider the zip file yet; I thought there was a way to get it 'on-the-fly'.
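For now I can get at them on-the-fly with something like the hedged sketch below (it assumes the Python bindings expose Value.toIValue, which recent builds do): walk the graph and collect the values of prim::Constant nodes, which is where the CONSTANTS.cN entries in .code come from.
import torch

def graph_constants(mod):
    # Collect the IValues of all prim::Constant nodes in the module's graph.
    consts = []
    for node in mod.graph.nodes():
        if node.kind() == "prim::Constant":
            consts.append(node.output().toIValue())
    return consts |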
st181436 | Feature request filed: "Add TracedModule attribute for the constants table" (github.com/pytorch/pytorch), opened Apr 14, 2020 by ifeherva.
🚀 Feature
TracedModule currently returns the compiled code via the .code member. This code can possibly contain constants (e.g. CONSTANTS.c0), however... |
st181437 | Hi,
Is there any way to convert a torch._C.Graph to a torch.jit.Graph? Or is there a way to generate a dot file from a torch._C.Graph? I would like to create a dot file from tracing, as in the make_dot_from_trace function, but it only takes a torch.jit.Graph as input.
When I ran my model through torch.jit.trace(model, input), I got an object whose .graph was of type torch._C.Graph, which throws an error when I pass it as a parameter to the make_dot_from_trace function. |
st181438 | Hi, we don't have a torch.jit.Graph object to begin with, I think; all graph objects are torch._C.Graph. So if pytorchviz is using torch.jit.Graph, that could already be an issue; you should probably submit an issue or a fix to the pytorchviz repo instead. |
st181439 | I am using Java for making inferences. However, it consumes too many CPU threads; is it possible to set an upper limit, as is possible in Python and C++? I cannot find any reference on how to do so.
Thank you, Jakob |
st181440 | I want to create some custom GRU variants (mainly Layer normalized).
I was following some blog posts and benchmarks (like https://github.com/pytorch/pytorch/blob/master/benchmarks/fastrnns/custom_lstms.py) and created the following small benchmark: https://gist.github.com/usamec/af21be7b83e6b1a3f38c26136af811f3. There it seems that the JITed GRU is 10 times slower than the cuDNN implementation (but 3 times faster than non-JITed), using a GeForce RTX 2080 Ti.
It is advertised that at least the JIT forward pass runs at a speed similar to cuDNN, so what am I doing wrong?
(I tried running it multiple times, so results are not affected by cold start). |
st181441 | As far as I know, cuDNN is the fastest in all circumstances. And JIT optimizes the code as it runs, so later batches run faster than the first few batches. |
st181442 | can you post your benchmark script? 10 times slower is very weird as we don’t expect that much. |
st181443 | There is a linked script in the original post: https://gist.github.com/usamec/af21be7b83e6b1a3f38c26136af811f3 |
st181444 | Also, when I increase the size of the GRU to 1024 features, the relative difference is much smaller (32 ms JIT, 25 ms cuDNN). |
st181445 | I have the same problem, though my variant is closer to RNN than GRU, so I would think it would be even easier to optimize. Increasing the size has the same effect of shrinking the performance gap, but I need the feature size to be relatively small.
Would be great to figure out how to improve performance of jit RNNs. It’s a really simple model (and inference in C++ is lightning fast), so I’m not sure why it’s so slow. |
st181446 | Could you please open an issue on GitHub and paste the number here? I will be taking a look at this very shortly. |
st181447 | Done: "JITed GRU too slow" (github.com/pytorch/pytorch), opened Apr 3, 2020 by usamec.
🐛 Bug
It is advertised that the forward pass of JITed RNNs (e.g. GRU) is as fast as the cuDNN implementation. But it is not... |
st181448 | Hello,
are there any plans to support jit compilation of code using named tensor operations?
IMHO, named tensors are potentially great, but are in a weird place: rewriting old code seems not worth it, especially given the list of unsupported subsystems. But even for new code that is big and complex enough to require dimension reorderings, one has to choose between JIT and named tensors beforehand. To me, JIT seems more valuable (and even more so for simpler code fragments that require no more than the occasional head/tail [un]squeeze). |
st181449 | Hi @googlebot,
We've discussed it, but there are no immediate plans. However, it wouldn't be that hard to add. Maybe by the next release? |
st181450 | Take detectron2 as an example. I have converted a model from detectron2 to TorchScript with torch.jit.script; its input is annotated with detectron2.structures.box. Because there is no detectron2 Python package in the deployment environment of the TorchScript module, I defined another input data class, x.box, which has exactly the same code as detectron2.structures.box. But when this new input class is passed to the TorchScript module, an error occurs:
RuntimeError: classType INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/python/pybind_utils.h:742, please report a bug to PyTorch.
Does this mean that I must pass the same input data class (detectron2.structures.box) to the model? Or is there any way to make it work?
Thanks a lot:) |
st181451 | I have figured out the reason. It is because the data class of the output of the TorchScript module is not supported in the deployment environment.
So is there any way to register, in the deployment environment, a new data class consisting of the same code as the original one, and to construct a link between them? |
st181452 | Are you able to post a repro for the internal assert failure?
It sounds like you're asking for structural typing instead of nominal typing. TorchScript is nominally typed, so we don't have a blessed way to do exactly what you're asking. (It does look like some version of structural typing is supported, but that is likely an accident.)
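To make the distinction concrete, here is a plain-Python analogy (illustrative only, not a TorchScript API):
class BoxA:
    def __init__(self, tensor):
        self.tensor = tensor

class BoxB:  # identical body, different name
    def __init__(self, tensor):
        self.tensor = tensor

# Under nominal typing, a BoxB is not a BoxA: types are identified by name.
# Under structural typing, BoxB would be accepted wherever BoxA is expected,
# since the two have the same shape. TorchScript uses the former, which is
# why a re-declared copy of detectron2.structures.Box is rejected.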
detectron2.structures.box should be included with the serialized code of your TorchScript model, so you should be able to reconstruct it inside the JIT (though the API to do this may not exist/be pretty). Once we fix the above bug we can probably figure out some hack to make this still work. |
st181453 | The code is still being modified, but I can illustrate the problem with some sample code:
In the development environment where the TorchScript is exported, we have:
from detectron2.structures import Box

class MyModule(nn.Module):
    def forward(self, ...) -> Box:
        some_of_codes
then we export torchscript:
model = MyModule()
torch.jit.script(model).save('scripted_model.pt')
In the deployment environment, we have:
model = torch.jit.load('scripted_model.pt')
input = some_data_of_model
output = model(input)
Since the data type 'detectron2.structures.Box' is not available here, the RuntimeError mentioned above appears.
I can see that the 'detectron2.structures.Box' class is available in the serialized code of my TorchScript model, but it seems that I cannot use it to construct an instance that can receive the output of the TorchScript model. |
st181454 | I made a custom GRU following this custom LSTM. Compared with the PyTorch native GRU, I am able to reproduce the outputs/loss, but not the gradients.
The custom GRU layer is the following:
class GRUCell(jit.ScriptModule):
    __constants__ = ['ngate']

    def __init__(self, input_size, hidden_size):
        super(GRUCell, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.ngate = 3
        self.w_ih = Parameter(torch.randn(self.ngate * hidden_size, input_size))
        self.w_hh = Parameter(torch.randn(self.ngate * hidden_size, hidden_size))
        self.b_ih = Parameter(torch.randn(self.ngate * hidden_size))
        self.b_hh = Parameter(torch.randn(self.ngate * hidden_size))

    @jit.script_method
    def forward(self, inputs, hidden):
        # type: (Tensor, Tensor) -> Tensor
        gi = torch.mm(inputs, self.w_ih.t()) + self.b_ih
        gh = torch.mm(hidden, self.w_hh.t()) + self.b_hh
        i_r, i_i, i_n = gi.chunk(self.ngate, 1)
        h_r, h_i, h_n = gh.chunk(self.ngate, 1)
        resetgate = torch.sigmoid(i_r + h_r)
        inputgate = torch.sigmoid(i_i + h_i)
        newgate = torch.tanh(i_n + resetgate * h_n)
        hy = newgate + inputgate * (hidden - newgate)
        return hy


class GRULayer(jit.ScriptModule):
    def __init__(self, cell, *cell_args):
        super(GRULayer, self).__init__()
        self.cell = cell(*cell_args)

    @jit.script_method
    def forward(self, inputs, out):
        # type: (Tensor, Tensor) -> Tensor
        inputs = inputs.unbind(0)
        outputs = torch.jit.annotate(List[Tensor], [])
        for i in range(len(inputs)):
            out = self.cell(inputs[i], out)
            outputs += [out]
        return torch.stack(outputs)
The outputs and gradients are compared:
torch.manual_seed(10)
seq_len = 5
batch = 5
input_size = 3
num_classes = 2
hidden_size = num_classes
criterion = nn.CrossEntropyLoss()
inp = torch.randn(seq_len, batch, input_size)
label = torch.randint(low=0, high=num_classes, size=(batch,))
state = torch.randn(batch, hidden_size)

rnn = GRULayer(GRUCell, input_size, hidden_size)
out = rnn(inp, state)
out = out[-1]
loss = criterion(out, label)
loss.backward()
gradients = [x.grad for x in rnn.parameters()]

# Control: pytorch native GRU
native_gru = nn.GRU(input_size, hidden_size, 1, batch_first=False)
native_gru_state = state.unsqueeze(0)
for native_gru_param, custom_param in zip(native_gru.all_weights[0], rnn.parameters()):
    assert native_gru_param.shape == custom_param.shape
    with torch.no_grad():
        native_gru_param.copy_(custom_param)

native_gru_out, native_gru_out_state = native_gru(inp, native_gru_state)
native_gru_out = native_gru_out[-1, :, :]
native_gru_loss = criterion(native_gru_out, label)
native_gru_loss.backward()
native_gru_gradients = [x.grad for x in native_gru.all_weights[0]]

print("loss is", loss.item())
print("loss difference is", (loss - native_gru_loss).max().item())
print("gradient differences are")
for x, y in zip(gradients, native_gru_gradients):
    # print(x.abs().max().item())
    # print(y.abs().max().item())
    print((x - y).abs().max().item())
Here are the outputs:
loss is 0.5983431935310364
loss difference is 0.0
gradient differences are
0.16684868931770325
0.10169483721256256
0.08745706081390381
0.06843984127044678
I think the problem lies in GRULayer, as I am able to reproduce the gradients in a single GRUCell (e.g. by setting seq_len = 1). What makes the problem weirder is that I am able to reproduce the gradients of the PyTorch native LSTM layers with the original custom LSTM provided in the link.
Your help is very much appreciated! |
st181455 | Is there any way to convert PyTorch IR (JIT) to MLIR? Does IR also exist in eager mode? |
st181456 | Hi
Maybe I’m doing something wrong, but I’ve noticed a continuous increase in the memory usage when calling torch.jit.trace(model, inputs) multiple times in the same process.
I’m using the following test code:
# Utilities
import os
import psutil
# JIT trace test
import torch
import torchvision.models as model_zoo

with torch.no_grad():
    # Create a simple resnet
    model = model_zoo.resnet18()
    model.eval()  # note: the parentheses matter, a bare model.eval is a no-op
    # Create sample input
    sample_input = torch.randn(size=[2, 3, 480, 640], requires_grad=False)
    # Get process id
    process = psutil.Process(os.getpid())
    # Repeat tracing
    for i in range(0, 1000):
        tr_model = torch.jit.trace(model, sample_input, check_trace=False)
        print("Iter: {} = {}".format(i, process.memory_full_info()))
I’m using PyTorch 1.4.0 installed with conda.
I get the same problem with check_trace=True and check_trace=False |
st181457 | Hi
Looking at the output of the script, it seems that there is not a constant increase in the memory usage.
For example, if I run the test above with check_trace=True for 80 iterations, I get the following output:
Iter: 0 = pfullmem(rss=326701056, vms=729473024, shared=75661312, text=2265088, lib=0, data=278097920, dirty=0, uss=319164416, pss=320976896, swap=0)
Iter: 1 = pfullmem(rss=334843904, vms=737370112, shared=76226560, text=2265088, lib=0, data=285995008, dirty=0, uss=326877184, pss=328689664, swap=0)
Iter: 2 = pfullmem(rss=342360064, vms=744742912, shared=76226560, text=2265088, lib=0, data=293367808, dirty=0, uss=334262272, pss=336074752, swap=0)
Iter: 3 = pfullmem(rss=379133952, vms=781746176, shared=76226560, text=2265088, lib=0, data=330371072, dirty=0, uss=371142656, pss=372955136, swap=0)
Iter: 4 = pfullmem(rss=407904256, vms=810496000, shared=76226560, text=2265088, lib=0, data=359120896, dirty=0, uss=399900672, pss=401713152, swap=0)
Iter: 5 = pfullmem(rss=417579008, vms=820097024, shared=76226560, text=2265088, lib=0, data=368721920, dirty=0, uss=409509888, pss=411322368, swap=0)
Iter: 6 = pfullmem(rss=408051712, vms=810266624, shared=76226560, text=2265088, lib=0, data=358891520, dirty=0, uss=399818752, pss=401631232, swap=0)
Iter: 7 = pfullmem(rss=427544576, vms=829927424, shared=76226560, text=2265088, lib=0, data=378552320, dirty=0, uss=419491840, pss=421304320, swap=0)
Iter: 8 = pfullmem(rss=442355712, vms=844673024, shared=76226560, text=2265088, lib=0, data=393297920, dirty=0, uss=434245632, pss=436058112, swap=0)
Iter: 9 = pfullmem(rss=449597440, vms=852045824, shared=76226560, text=2265088, lib=0, data=400670720, dirty=0, uss=441626624, pss=443439104, swap=0)
Iter: 10 = pfullmem(rss=449785856, vms=852045824, shared=76226560, text=2265088, lib=0, data=400670720, dirty=0, uss=441634816, pss=443447296, swap=0)
Iter: 11 = pfullmem(rss=457093120, vms=859418624, shared=76226560, text=2265088, lib=0, data=408043520, dirty=0, uss=449015808, pss=450828288, swap=0)
Iter: 12 = pfullmem(rss=457015296, vms=859418624, shared=76226560, text=2265088, lib=0, data=408043520, dirty=0, uss=449028096, pss=450840576, swap=0)
Iter: 13 = pfullmem(rss=496234496, vms=898740224, shared=76226560, text=2265088, lib=0, data=447365120, dirty=0, uss=488226816, pss=490039296, swap=0)
Iter: 14 = pfullmem(rss=511221760, vms=913670144, shared=76226560, text=2265088, lib=0, data=462295040, dirty=0, uss=503164928, pss=504977408, swap=0)
Iter: 15 = pfullmem(rss=511164416, vms=913670144, shared=76226560, text=2265088, lib=0, data=462295040, dirty=0, uss=503173120, pss=504985600, swap=0)
Iter: 16 = pfullmem(rss=550715392, vms=952991744, shared=76226560, text=2265088, lib=0, data=501616640, dirty=0, uss=542507008, pss=544319488, swap=0)
Iter: 17 = pfullmem(rss=550686720, vms=952991744, shared=76226560, text=2265088, lib=0, data=501616640, dirty=0, uss=542515200, pss=544327680, swap=0)
Iter: 18 = pfullmem(rss=550268928, vms=952578048, shared=76226560, text=2265088, lib=0, data=501202944, dirty=0, uss=542109696, pss=543922176, swap=0)
Iter: 19 = pfullmem(rss=559927296, vms=962408448, shared=76226560, text=2265088, lib=0, data=511033344, dirty=0, uss=551948288, pss=553760768, swap=0)
Iter: 20 = pfullmem(rss=530780160, vms=932917248, shared=76226560, text=2265088, lib=0, data=481542144, dirty=0, uss=522661888, pss=524474368, swap=0)
Iter: 21 = pfullmem(rss=530845696, vms=932917248, shared=76226560, text=2265088, lib=0, data=481542144, dirty=0, uss=522661888, pss=524474368, swap=0)
Iter: 22 = pfullmem(rss=535318528, vms=937832448, shared=76226560, text=2265088, lib=0, data=486457344, dirty=0, uss=527446016, pss=529258496, swap=0)
Iter: 23 = pfullmem(rss=550354944, vms=952578048, shared=76226560, text=2265088, lib=0, data=501202944, dirty=0, uss=542322688, pss=544135168, swap=0)
Iter: 24 = pfullmem(rss=557764608, vms=959950848, shared=76226560, text=2265088, lib=0, data=508575744, dirty=0, uss=549695488, pss=551507968, swap=0)
Iter: 25 = pfullmem(rss=565141504, vms=967323648, shared=76226560, text=2265088, lib=0, data=515948544, dirty=0, uss=557068288, pss=558880768, swap=0)
Iter: 26 = pfullmem(rss=572645376, vms=974696448, shared=76226560, text=2265088, lib=0, data=523321344, dirty=0, uss=564441088, pss=566253568, swap=0)
Iter: 27 = pfullmem(rss=579878912, vms=982069248, shared=76226560, text=2265088, lib=0, data=530694144, dirty=0, uss=571813888, pss=573626368, swap=0)
Iter: 28 = pfullmem(rss=594636800, vms=997076992, shared=76226560, text=2265088, lib=0, data=545701888, dirty=0, uss=586567680, pss=588380160, swap=0)
Iter: 29 = pfullmem(rss=602152960, vms=1004449792, shared=76226560, text=2265088, lib=0, data=553074688, dirty=0, uss=593948672, pss=595761152, swap=0)
Iter: 30 = pfullmem(rss=641359872, vms=1043771392, shared=76226560, text=2265088, lib=0, data=592396288, dirty=0, uss=633151488, pss=634963968, swap=0)
Iter: 31 = pfullmem(rss=616792064, vms=1019195392, shared=76226560, text=2265088, lib=0, data=567820288, dirty=0, uss=608706560, pss=610519040, swap=0)
Iter: 32 = pfullmem(rss=624295936, vms=1026568192, shared=76226560, text=2265088, lib=0, data=575193088, dirty=0, uss=616083456, pss=617895936, swap=0)
Iter: 33 = pfullmem(rss=641503232, vms=1043771392, shared=76226560, text=2265088, lib=0, data=592396288, dirty=0, uss=633294848, pss=635107328, swap=0)
Iter: 34 = pfullmem(rss=725028864, vms=1127489536, shared=76226560, text=2265088, lib=0, data=676114432, dirty=0, uss=716894208, pss=718706688, swap=0)
Iter: 35 = pfullmem(rss=725000192, vms=1127489536, shared=76226560, text=2265088, lib=0, data=676114432, dirty=0, uss=716902400, pss=718714880, swap=0)
Iter: 36 = pfullmem(rss=724959232, vms=1127489536, shared=76226560, text=2265088, lib=0, data=676114432, dirty=0, uss=716910592, pss=718723072, swap=0)
Iter: 37 = pfullmem(rss=695332864, vms=1097781248, shared=76226560, text=2265088, lib=0, data=646406144, dirty=0, uss=687341568, pss=689154048, swap=0)
Iter: 38 = pfullmem(rss=714858496, vms=1117442048, shared=76226560, text=2265088, lib=0, data=666066944, dirty=0, uss=706879488, pss=708691968, swap=0)
Iter: 39 = pfullmem(rss=695586816, vms=1097781248, shared=76226560, text=2265088, lib=0, data=646406144, dirty=0, uss=687362048, pss=689174528, swap=0)
Iter: 40 = pfullmem(rss=761667584, vms=1164136448, shared=76226560, text=2265088, lib=0, data=712761344, dirty=0, uss=753594368, pss=755406848, swap=0)
Iter: 41 = pfullmem(rss=751833088, vms=1154306048, shared=76226560, text=2265088, lib=0, data=702930944, dirty=0, uss=743772160, pss=745584640, swap=0)
Iter: 42 = pfullmem(rss=751779840, vms=1154306048, shared=76226560, text=2265088, lib=0, data=702930944, dirty=0, uss=743780352, pss=745592832, swap=0)
Iter: 43 = pfullmem(rss=732422144, vms=1134645248, shared=76226560, text=2265088, lib=0, data=683270144, dirty=0, uss=724258816, pss=726071296, swap=0)
Iter: 44 = pfullmem(rss=737099776, vms=1139560448, shared=76226560, text=2265088, lib=0, data=688185344, dirty=0, uss=729055232, pss=730867712, swap=0)
Iter: 45 = pfullmem(rss=751976448, vms=1154306048, shared=76226560, text=2265088, lib=0, data=702930944, dirty=0, uss=743940096, pss=745752576, swap=0)
Iter: 46 = pfullmem(rss=761769984, vms=1164136448, shared=76226560, text=2265088, lib=0, data=712761344, dirty=0, uss=753778688, pss=755591168, swap=0)
Iter: 47 = pfullmem(rss=781651968, vms=1183797248, shared=76226560, text=2265088, lib=0, data=732422144, dirty=0, uss=773447680, pss=775260160, swap=0)
Iter: 48 = pfullmem(rss=789053440, vms=1191170048, shared=76226560, text=2265088, lib=0, data=739794944, dirty=0, uss=780832768, pss=782645248, swap=0)
Iter: 49 = pfullmem(rss=788971520, vms=1191170048, shared=76226560, text=2265088, lib=0, data=739794944, dirty=0, uss=780840960, pss=782653440, swap=0)
Iter: 50 = pfullmem(rss=796258304, vms=1198542848, shared=76226560, text=2265088, lib=0, data=747167744, dirty=0, uss=788221952, pss=790034432, swap=0)
Iter: 51 = pfullmem(rss=811008000, vms=1213288448, shared=76226560, text=2265088, lib=0, data=761913344, dirty=0, uss=802975744, pss=804788224, swap=0)
Iter: 52 = pfullmem(rss=818520064, vms=1220661248, shared=76226560, text=2265088, lib=0, data=769286144, dirty=0, uss=810356736, pss=812169216, swap=0)
Iter: 53 = pfullmem(rss=825761792, vms=1228034048, shared=76226560, text=2265088, lib=0, data=776658944, dirty=0, uss=817741824, pss=819554304, swap=0)
Iter: 54 = pfullmem(rss=833257472, vms=1235406848, shared=76226560, text=2265088, lib=0, data=784031744, dirty=0, uss=825122816, pss=826934272, swap=0)
Iter: 55 = pfullmem(rss=840499200, vms=1242779648, shared=76226560, text=2265088, lib=0, data=791404544, dirty=0, uss=832503808, pss=834316288, swap=0)
Iter: 56 = pfullmem(rss=848011264, vms=1250152448, shared=76226560, text=2265088, lib=0, data=798777344, dirty=0, uss=839884800, pss=841694208, swap=0)
Iter: 57 = pfullmem(rss=855269376, vms=1257525248, shared=76226560, text=2265088, lib=0, data=806150144, dirty=0, uss=847265792, pss=849075200, swap=0)
Iter: 58 = pfullmem(rss=862777344, vms=1264898048, shared=76226560, text=2265088, lib=0, data=813522944, dirty=0, uss=854642688, pss=856452096, swap=0)
Iter: 59 = pfullmem(rss=933801984, vms=1336430592, shared=76226560, text=2265088, lib=0, data=885055488, dirty=0, uss=925798400, pss=927607808, swap=0)
Iter: 60 = pfullmem(rss=933892096, vms=1336430592, shared=76226560, text=2265088, lib=0, data=885055488, dirty=0, uss=925806592, pss=927616000, swap=0)
Iter: 61 = pfullmem(rss=933851136, vms=1336430592, shared=76226560, text=2265088, lib=0, data=885055488, dirty=0, uss=925814784, pss=927624192, swap=0)
Iter: 62 = pfullmem(rss=900866048, vms=1303351296, shared=76226560, text=2265088, lib=0, data=851976192, dirty=0, uss=892878848, pss=894691328, swap=0)
Iter: 63 = pfullmem(rss=905740288, vms=1308266496, shared=76226560, text=2265088, lib=0, data=856891392, dirty=0, uss=897671168, pss=899483648, swap=0)
Iter: 64 = pfullmem(rss=927948800, vms=1330384896, shared=76226560, text=2265088, lib=0, data=879009792, dirty=0, uss=919920640, pss=921733120, swap=0)
Iter: 65 = pfullmem(rss=935456768, vms=1337757696, shared=76226560, text=2265088, lib=0, data=886382592, dirty=0, uss=927301632, pss=929114112, swap=0)
Iter: 66 = pfullmem(rss=942710784, vms=1345130496, shared=76226560, text=2265088, lib=0, data=893755392, dirty=0, uss=934686720, pss=936499200, swap=0)
Iter: 67 = pfullmem(rss=950222848, vms=1352503296, shared=76226560, text=2265088, lib=0, data=901128192, dirty=0, uss=942067712, pss=943880192, swap=0)
Iter: 68 = pfullmem(rss=957468672, vms=1359876096, shared=76226560, text=2265088, lib=0, data=908500992, dirty=0, uss=949448704, pss=951261184, swap=0)
Iter: 69 = pfullmem(rss=972296192, vms=1374621696, shared=76226560, text=2265088, lib=0, data=923246592, dirty=0, uss=964202496, pss=966014976, swap=0)
Iter: 70 = pfullmem(rss=987045888, vms=1389367296, shared=76226560, text=2265088, lib=0, data=937992192, dirty=0, uss=978956288, pss=980768768, swap=0)
Iter: 71 = pfullmem(rss=994557952, vms=1396740096, shared=76226560, text=2265088, lib=0, data=945364992, dirty=0, uss=986341376, pss=988153856, swap=0)
Iter: 72 = pfullmem(rss=1001803776, vms=1404112896, shared=76226560, text=2265088, lib=0, data=952737792, dirty=0, uss=993722368, pss=995534848, swap=0)
Iter: 73 = pfullmem(rss=1033687040, vms=1436061696, shared=76226560, text=2265088, lib=0, data=984686592, dirty=0, uss=1025548288, pss=1027360768, swap=0)
Iter: 74 = pfullmem(rss=1009147904, vms=1411485696, shared=76226560, text=2265088, lib=0, data=960110592, dirty=0, uss=1001111552, pss=1002924032, swap=0)
Iter: 75 = pfullmem(rss=1016659968, vms=1418858496, shared=76226560, text=2265088, lib=0, data=967483392, dirty=0, uss=1008496640, pss=1010309120, swap=0)
Iter: 76 = pfullmem(rss=1023901696, vms=1426231296, shared=76226560, text=2265088, lib=0, data=974856192, dirty=0, uss=1015877632, pss=1017690112, swap=0)
Iter: 77 = pfullmem(rss=1038667776, vms=1440976896, shared=76226560, text=2265088, lib=0, data=989601792, dirty=0, uss=1030631424, pss=1032443904, swap=0)
Iter: 78 = pfullmem(rss=1046183936, vms=1448349696, shared=76226560, text=2265088, lib=0, data=996974592, dirty=0, uss=1038012416, pss=1039824896, swap=0)
Iter: 79 = pfullmem(rss=1085386752, vms=1487671296, shared=76226560, text=2265088, lib=0, data=1036296192, dirty=0, uss=1077211136, pss=1079023616, swap=0)
Iter: 80 = pfullmem(rss=1109991424, vms=1512247296, shared=76226560, text=2265088, lib=0, data=1060872192, dirty=0, uss=1101799424, pss=1103611904, swap=0)
As you can see, it starts from ~300MB and reaches 1.1GB.
If it helps, I've noticed that if I set check_trace=False, the memory usage increases more slowly and it takes more iterations to reach 1GB.
Thanks |
st181458 | I can reproduce it with the latest master build and have created a bug here.
Thanks again for reporting this issue here. |
st181459 | Traced module
torch.manual_seed(25)
module = torch.jit.trace(net, torch.randn(1, 3, 5, 5))
print(module.code)
module(torch.randn(1, 3, 5, 5))[0, 0, 0]
And the output
# module.code
def forward(self,
            input: Tensor) -> Tensor:
    _0 = getattr(self.convs, "1")
    _1 = (getattr(self.convs, "0")).forward(input, )
    _2 = getattr(self.residuals, "0")
    _3 = (getattr(self.convs, "2")).forward((_0).forward(_1, ), )
    _4 = getattr(self.residuals, "2")
    _5 = (getattr(self.residuals, "1")).forward((_2).forward(_3, ), )
    _6 = getattr(self.upsample_convs, "1")
    _7 = (getattr(self.upsample_convs, "0")).forward((_4).forward(_5, ), )
    _8 = (self.final_conv).forward((_6).forward(_7, ), )
    return torch.sigmoid(_8)
# Output
tensor([0.3275, 0.4366, 0.2979, 0.2537, 0.4489, 0.4663, 0.4455, 0.4363],
grad_fn=<SelectBackward>)
Scripted module: I only changed torch.jit.trace() to torch.jit.script()
# module.code
def forward(self,
            x: Tensor) -> Tensor:
    _0 = self.convs
    _1 = getattr(_0, "0")
    _2 = getattr(_0, "1")
    _3 = getattr(_0, "2")
    x0 = (_1).forward(x, )
    x1 = (_2).forward(x0, )
    x2 = (_3).forward(x1, )
    _4 = self.residuals
    _5 = getattr(_4, "0")
    _6 = getattr(_4, "1")
    _7 = getattr(_4, "2")
    x3 = (_5).forward(x2, )
    x4 = (_6).forward(x3, )
    x5 = (_7).forward(x4, )
    _8 = self.upsample_convs
    _9 = getattr(_8, "0")
    _10 = getattr(_8, "1")
    x6 = (_9).forward(x5, )
    x7 = (_10).forward(x6, )
    x8 = (self.final_conv).forward(x7, )
    return torch.sigmoid(x8)
# Output
tensor([0.3275, 0.4366, 0.2979, 0.2537, 0.4489, 0.4663, 0.4455, 0.4363],
grad_fn=<SelectBackward>)
You can see the resulting module.code is totally different, as my forward() contains loops, but the module's output is exactly the same.
Why is this the case? Thanks in advance! |
st181460 | Solved by driazati in post #2
The two don’t necessarily produce the same IR (which is what you’re seeing, it corresponds directly to module.graph) since tracing only records tensor operations as they occur, while scripting will compile the code itself. Tracing will unroll loops based on your example input, so this is probably wh… |
st181461 | The two don’t necessarily produce the same IR (which is what you’re seeing, it corresponds directly to module.graph) since tracing only records tensor operations as they occur, while scripting will compile the code itself. Tracing will unroll loops based on your example input, so this is probably why you’re seeing the same results.
But if your model has no data-dependent control flow (meaning that tracing will generate correct results), the output from the two will be identical (and it will also match the output from eager (no-jit) PyTorch).
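A minimal illustration of that point (my own toy example): with data-dependent control flow, tracing bakes in the branch taken for the example input, while scripting keeps the if:
import torch

def f(x):
    if x.sum() > 0:
        return x * 2
    return x - 1

traced = torch.jit.trace(f, torch.ones(3))  # warns; only the x * 2 branch is recorded
scripted = torch.jit.script(f)              # compiles both branches

print(traced(-2 * torch.ones(3)))    # tensor([-4., -4., -4.]) -- wrong branch baked in
print(scripted(-2 * torch.ones(3)))  # tensor([-3., -3., -3.]) -- matches eager |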
st181462 | You're right, my model has no data-dependent control flow, as I was only looping over an nn.ModuleList |
st181463 | I am trying to create a trace file. The related code is here:
def center_crop(self, layer, target_size):
    _, _, layer_height, layer_width = layer.size()
    diff_y = (layer_height - target_size[0]) // 2
    diff_x = (layer_width - target_size[1]) // 2
    return layer[:, :, diff_y:(diff_y + target_size[0]), diff_x:(diff_x + target_size[1])]

def forward(self, x, bridge):
    up = self.up(x)
    crop1 = self.center_crop(bridge, up.shape[2:])
    out = torch.cat([up, crop1], 1)
    out = self.conv_block(out)
It gives the warning at this line:
return layer[:, :, diff_y:(diff_y + target_size[0]), diff_x:(diff_x + target_size[1])]
Then I loaded this trace file in C++ and made a prediction for a test image, and compared the result with the output in Python. I observed that the results are very different. I guess the reason originates from that warning. |
st181464 | The tracer uses the example inputs you provide and records the operations. If a different input would result in a different operation, then this will not be captured by the tracer. Here it’s recording the arguments to the slice operation as constants (so it will always slice it with the values produced with the example inputs).
To get around this, put any code with data-dependent control flow inside a ScriptModule, and then call that in your traced code. |
st181465 | @driazati I added a helper function and wrote this code:
@torch.jit.script
def center_slice_helper(layer, diff_y, diff_x, h_end, w_end):
    return layer[:, :, diff_y:h_end, diff_x:w_end]

def center_crop(self, layer, target_size):
    # _, _, layer_height, layer_width = layer.size()
    diff_y = (layer.shape[2] - target_size.shape[2]) // 2
    diff_x = (layer.shape[3] - target_size.shape[3]) // 2
    h_end = diff_y + target_size.shape[2]
    w_end = diff_x + target_size.shape[3]
    return center_slice_helper(layer, diff_y, diff_x, h_end, w_end)

def forward(self, x, bridge):
    up = self.up(x)
    crop1 = self.center_crop(bridge, up)
    out = torch.cat([up, crop1], 1)
    out = self.conv_block(out)
    return out
At this time, it gave this exception:
torch.jit.TracingCheckError: Tracing failed sanity checks!
Encountered an exception while running the Python function with test inputs.
Exception:
center_slice_helper() expected value of type Tensor for argument 'diff_y' in position 1, but instead got value of type int.
Value: 0
Declaration: center_slice_helper(Tensor layer, Tensor diff_y, Tensor diff_x, Tensor h_end, Tensor w_end) -> Tensor
I debugged the code and observed that the x parameter of the forward function has this type:
[screenshot: x_parameter.PNG]
How do I handle this? |
st181466 | @driazati I have read the related part in documentation: https://pytorch.org/docs/stable/jit.html#mixing-tracing-and-scripting 100
I understood that I should use MyPy-style type annotation. What is your opinion? |
st181467 | Our support for non-tensor types in tracing is pretty limited (see #14455 62), but your model looks like using script mode would work well. See something like:
@torch.jit.script
def center_slice_helper(layer, diff_y, diff_x, h_end, w_end):
    # type: (Tensor, int, int, int, int) -> Tensor
    return layer[:, :, diff_y:h_end, diff_x:w_end]

class M(torch.jit.ScriptModule):
    @torch.jit.script_method
    def center_crop(self, layer, target_size):
        # _, _, layer_height, layer_width = layer.size()
        diff_y = (layer.shape[2] - target_size.shape[2]) // 2
        diff_x = (layer.shape[3] - target_size.shape[3]) // 2
        h_end = diff_y + target_size.shape[2]
        w_end = diff_x + target_size.shape[3]
        return center_slice_helper(layer, diff_y, diff_x, h_end, w_end)

    @torch.jit.script_method
    def forward(self, x, bridge):
        crop1 = self.center_crop(bridge, x)
        return crop1

m = M() |
st181468 | Hi,
I'm having trouble using TorchScript with my model, which uses spconv. I need spconv since I'm working with large sparse matrices. However,
traced_script_module = torch.jit.trace(model, example)
throws the following error:
RuntimeError: Found an unsupported argument type c10::List<at::Tensor> in the JIT tracer.
I'm also getting a lot of tracer warnings like:
TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect.
Any idea what could cause this?
Thanks. |
st181469 | This is just to clarify my understanding of JIT.
When we talk about the PyTorch JIT: is the translator that generates TorchScript IR a JIT itself? And when we execute TorchScript (using the libtorch C++ library), is there another JIT that interprets the TorchScript IR and generates code that calls libtorch APIs and dispatches the calls? If so, should we expect this online JIT to have a runtime overhead in terms of CPU cycles and memory?
Thanks,
Yetanadur |
st181470 | Hi @yetanadur,
No, IR generation is a fully ahead-of-time process. JIT compilation is used to recover information that is potentially dynamic, such as tensor shapes, that can be used to make better optimization decisions during runtime. This all happens entirely after IR generation, though. Please let us know if you have further questions.
James |
st181471 | Thanks James.
Where can I find the entry point function (in the source code) for the JIT? And which function has the code that jumps to the JIT'd (optimized) code? Basically I want to step through with a debugger.
Thanks a bunch,
Yetanadur |
st181472 | This is true for 1.5, but isn’t true on master, and won’t be true for the upcoming release. |
st181473 | Hi, I'm trying to export my code with torch.jit.script, but I came across the following error:
RuntimeError: inputs_.size() == 1 INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir.h:364, please report a bug to PyTorch. (input at /pytorch/torch/csrc/jit/ir.h:364)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f193351b813 in /job/.local/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x51af79 (0x7f19364baf79 in /job/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #2: + 0x516db5 (0x7f19364b6db5 in /job/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #3: torch::jit::PeepholeOptimizeONNX(std::shared_ptr<torch::jit::Graph>&, int, bool) + 0xcb (0x7f19364baa5b in /job/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x4cfbd5 (0x7f193646fbd5 in /job/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0x210ba4 (0x7f19361b0ba4 in /job/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #32: __libc_start_main + 0xe7 (0x7f193a3dab97 in /lib/x86_64-linux-gnu/libc.so.6)
Is there any clue as to what the problem is? Thanks |
st181474 | Hi,
It looks quite bad, yes.
Can you share a small code sample that reproduces this issue? |
st181475 | https://github.com/kmkurn/pytorch-crf/blob/1b7ced20c01352ccf70b7ae7bb99a58366de8e48/torchcrf/init.py#L259 26
It’s pretty deal with this function, i move out this function as a non-class function and add @torch.jit.script |
st181476 | Hi @shy,
This looks like an issue with some precondition in ONNX export. Do you mind filing an issue on GitHub so we can route this to the right developers?
Thanks,
James |
st181477 | I'm new to the C++ frontend. Here is my current code:
#include <torch/script.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
  torch::jit::script::Module module;
  try {
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }
  std::vector<torch::Tensor> sample_input;
  torch::load(sample_input, argv[2]);
  std::cout << "Loaded Successfully\n";
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(sample_input);
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output << '\n';
}
I want to run the executable as
./example-app ../model.pt ../sample_input.pth
The code doesn’t compile with make.
In the documentation I saw usage of torch::load, but during the build I got the error message: ‘load’ is not a member of ‘torch’.
Also, inputs would not accept the tensor being appended, and I’m not sure what IValue is.
Could you please provide the fix? |
st181478 | You probably need to #include <torch/all.h> for it to pick up torch::load.
This comment 7 has more info about the save/load APIs. |
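For what it’s worth, here is a sketch of a version that should compile, assuming the sample input was saved from C++ with torch::save (torch::load cannot read pickle files produced by Python’s torch.save). The push_back of the whole std::vector is also replaced with pushing a single Tensor, since a typical forward expects a Tensor argument:

#include <torch/script.h>
#include <torch/all.h>  // brings in torch::load / torch::save

#include <iostream>
#include <vector>

int main(int argc, const char* argv[]) {
  torch::jit::script::Module module = torch::jit::load(argv[1]);

  // Archive written by torch::save(std::vector<torch::Tensor>, path) in C++.
  std::vector<torch::Tensor> sample_input;
  torch::load(sample_input, argv[2]);

  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(sample_input[0]);  // push one Tensor, not the vector

  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output << '\n';
}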
st181479 | I have a simple function (LSTM layer) that I’m converting to TorchScript and executing. From some initial experiments, it looks like the JIT version runs slower than the non-JIT version on both CPU and GPU. The relevant code is listed here: https://github.com/lmnt-com/haste/blob/master/frameworks/pytorch/lstm.py#L31-L64 6. Is this expected behavior? |
st181480 | I saved the quantized model to TorchScript successfully, but when I run the TorchScript model, I met the following problem:
Traceback (most recent call last):
File "ocr_quant.py", line 293, in <module>
js_out = ts(x)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
RuntimeError: Didn't find kernel to dispatch to for operator 'aten::max_pool2d_with_indices'. Tried to look up kernel for dispatch key 'QuantizedCPUTensorId'. Registered dispatch keys are: [CUDATensorId, CPUTensorId, VariableTensorId]
The above operation failed in interpreter, with the following stack trace:
at <string>:63:30
return torch.avg_pool2d(self, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override), backward
def max_pool2d(self,
kernel_size: List[int],
stride: List[int],
padding: List[int],
dilation: List[int],
ceil_mode: bool):
output, indices = torch.max_pool2d_with_indices(self, kernel_size, stride, padding, dilation, ceil_mode)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
def backward(grad_output):
grad_self = torch.max_pool2d_with_indices_backward(grad_output, self, kernel_size, stride, padding, dilation, ceil_mode, indices)
return grad_self, None, None, None, None, None
return output, backward
def max_pool2d_with_indices(self,
kernel_size: List[int],
stride: List[int],
padding: List[int],
The above operation failed in interpreter, with the following stack trace:
Here’s my code for quantization:
crnn.cnn.fuse_model()
img = Image.open("3.png").convert("L").resize((300, 32))
x = ToTensor()(img).unsqueeze(0)
out = crnn(x)
crnn.cnn.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(crnn.cnn, inplace=True)
# calibration
crnn(x)
# quantize
torch.quantization.convert(crnn.cnn, inplace=True)
torch.quantization.quantize_dynamic(crnn.rnn, {nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=True)
quantized_out = crnn(x)
ts = torch.jit.script(crnn)
ts_out = ts(x)
The error occurs when the TorchScript model ts is called.
And here’s my code for crnn.cnn:

class ConvBNReLU(nn.Sequential):
    def __init__(self, i, in_channels, out_channels, kernel_size=3, stride=1, padding=1, groups=1, bn=False):
        self.bn = bn
        if bn:
            super(ConvBNReLU, self).__init__(OrderedDict([
                ("conv{0}".format(i), nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, groups=groups, bias=True)),
                ("batchnorm{0}".format(i), nn.BatchNorm2d(out_channels)),
                ("relu{0}".format(i), nn.ReLU(inplace=False))
            ]))
        else:
            super(ConvBNReLU, self).__init__(OrderedDict([
                ("conv{0}".format(i), nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, groups=groups, bias=True)),
                ("relu{0}".format(i), nn.ReLU(inplace=False))
            ]))

class CNN(nn.Module):
    def __init__(self, imgH, nc):
        super(CNN, self).__init__()
        assert imgH % 16 == 0, 'imgH has to be a multiple of 16'
        ks = [3, 3, 3, 3, 3, 3, 2]
        ps = [1, 1, 1, 1, 1, 1, 0]
        ss = [1, 1, 1, 1, 1, 1, 1]
        nm = [64, 128, 256, 256, 512, 512, 512]
        self.cnn = nn.Sequential()
        pn = 0
        for i in range(7):
            nIn = nc if i == 0 else nm[i - 1]
            nOut = nm[i]
            if i in [0, 1, 3, 5]:
                self.cnn.add_module("ConvBNReLU{}".format(i), ConvBNReLU(i, nIn, nOut, ks[i], ss[i], ps[i], bn=False))
                if i in [0, 1]:
                    self.cnn.add_module("pooling{0}".format(pn), nn.MaxPool2d(2, 2))
                else:
                    self.cnn.add_module("pooling{0}".format(pn), nn.MaxPool2d((2, 2), (2, 1), (0, 1)))
                pn += 1
            else:
                self.cnn.add_module("ConvBNReLU{}".format(i), ConvBNReLU(i, nIn, nOut, ks[i], ss[i], ps[i], bn=True))
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.cnn(x)
        x = self.dequant(x)
        return x
My PyTorch version is 1.3.1+cpu.
How can I solve this problem? Looking forward to your reply.
st181481 | The input tensor of CRNN is cropped from an tensor of image, when I convert the tensor to PIL image and convert it back, the code works well.
img_out = ToTensor()(ToPILImage()(img_out.squeeze(0))).unsqueeze(0)
and the following code also helps:
img_out = torch.ones(img_out_.shape)
img_out.data = img_out_.clone()
Could anyone tell me the difference between a cropped tensor and a tensor converted from an image?
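One plausible explanation (an educated guess, not verified against this exact model): cropping via slicing produces a non-contiguous view of the original storage, while the PIL round trip and the clone() both materialize a fresh, contiguous tensor. If that is the cause, a cheaper fix along the same lines would be:

# Materialize the cropped view as a contiguous tensor.
img_out = img_out_.contiguous()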
st181482 | I know this is late but for others visiting this with the same error I found that wrapping the ts(x) in a torch.no_grad() block solved the error |
st181483 | I got “out of memory” when I tried to trace a model with jit.
Interestingly, when I run the model in test mode, it works.
What is the difference between testing and tracing? |
st181484 | When you say “test mode”, do you mean calling eval() and then forward()? There are minor differences; certain Modules have different behavior in eval vs. training mode. In addition, if during testing you aren’t recording gradients (e.g. you set requires_grad to off), you don’t have to build up an autograd graph and keep the bookkeeping structures around.
Keep in mind that tracing will actually execute your model in order to trace it. When you run the model forward on the example inputs manually, does it still OOM?
It would be helpful to have a script that we can use to reproduce the issue, otherwise there isn’t a ton of information to go off of. |
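A quick way to check (a sketch; substitute your own model and example input) is to run the same forward pass that tracing would run, with and without autograd:

model.eval()

# torch.jit.trace executes the model, typically with autograd recording enabled:
out = model(example_input)

# Compare against a no-grad run, which skips the autograd bookkeeping:
with torch.no_grad():
    out = model(example_input)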
st181485 | With respect to “test mode”, I meant what you said. And I feel like there seems to be a memory leak in my code then.
Thanks for your reply. |
st181486 | How can I make a prediction? using traced torch model( the model is suppose to identify image similarity model , not image classification ) I need help please because am about to make an android app |
st181487 | You should be able to call it just like you call the model before you trace it. If you are looking for an Android specific example, check out this repo 7. |
st181488 | I did implement it on the android but the tensor that I am getting after prediction on my android app is different that of on the desktop |
st181489 | I’m trying to wrap two scripted model(from torch.jit.script) into a new torchscript model in my use case, I’ve narrow down the problems as below:
(noted that, cls: classification model, mask: segmentation model)
1. res34(cls) + res34(cls) => Pass (same model arch, same weights (same epoch))
2. res34(cls) + res34(cls) => Pass (same model arch, different weights (different epoch))
3. res18(cls) + res34(cls) => Fail (different model arch, different weights)
4. res34(cls) + res34(mask) => Fail (different model structure)
Before getting into the snippet, note that the two failing cases (3 and 4) produce different error messages.
Looking forward to the answer. Thanks!!
(PyTorch Version: 1.4.0)
class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.model1 = torch.jit.load(SAVE_MODEL_PATH)   # res 34
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH)  # res 34
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH2) # res 18
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH3) # res 34 different weight
        self.model2 = torch.jit.load(SAVE_MODEL_PATH4)  # mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model1(x)[0]

def test_ensemble():
    input640 = torch.rand(1, 3, 640, 640).cuda()
    test_model = TestNet()
    # build script model
    test_model_libtorch = torch.jit.script(test_model)
    test_model_libtorch.save(SAVE_MODEL_PATH)
    test_model_libtorch = torch.jit.load(SAVE_MODEL_PATH).cuda()
    output = test_model_libtorch(input640)
    exit(0)
res34 + res34 => OK
res34 + res34 (different weights) => OK
res34 + res18 => fails at forward:
RuntimeError: input.isTensor() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/argument_spec.h:89, please report a bug to PyTorch. Expected Tensor but found Bool (addTensor at /pytorch/torch/csrc/jit/argument_spec.h:89)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fa6372ff193 in /usr/local/lib/python3.7/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x4059e33 (0x7fa63b9b3e33 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #2: torch::jit::ArgumentSpecCreator::create(bool, std::vector<c10::IValue, std::allocator<c10::IValue> > const&) const + 0x230 (0x7fa63b9ae8d0 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x40848fb (0x7fa63b9de8fb in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x407b8c1 (0x7fa63b9d58c1 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #5: torch::jit::Function::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x60 (0x7fa63bc9a480 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x7b4a8b (0x7fa69551ca8b in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x7b53bf (0x7fa69551d3bf in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x774d76 (0x7fa6954dcd76 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x295a74 (0x7fa694ffda74 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #13: python() [0x530583]
frame #18: python() [0x530547]
frame #25: python() [0x62d082]
frame #28: python() [0x606505]
frame #30: __libc_start_main + 0xf0 (0x7fa6991a9830 in /lib/x86_64-linux-gnu/libc.so.6)
res34 + mask => fails at jit.load:
test_model_libtorch = torch.jit.load(SAVE_MODEL_PATH).cuda()
File "/usr/local/lib/python3.7/dist-packages/torch/jit/__init__.py", line 235, in load
cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
IndexError: Argument passed to at() was not in the map. |
st181490 | Thanks for the report! Since this is a bug report, do you mind filing an issue on GH and we can follow up there? Over there, I will probably ask you for a script that I can run that will reproduce the problem. Thanks |
st181491 | Thanks for ur prompt attention.
Will prepare the script and file an issue on GH! |
st181492 | Following this guide 11 and this code 5 and inspired by this discussion 5, I whipped up my own GRU cell implementation using JIT as follows.
class JitGRUCell(jit.ScriptModule):
    def __init__(self, input_size, hidden_size):
        super(JitGRUCell, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        # All three gates' weights are stacked into single matrices, as in nn.GRU.
        self.weight_ih = Parameter(torch.Tensor(3 * hidden_size, input_size))
        self.weight_hh = Parameter(torch.Tensor(3 * hidden_size, hidden_size))
        self.bias_ih = Parameter(torch.Tensor(3 * hidden_size))
        self.bias_hh = Parameter(torch.Tensor(3 * hidden_size))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.hidden_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, stdv)

    @jit.script_method
    def forward(self, x, hidden):
        # type: (Tensor, Tensor) -> Tensor
        x = x.view(-1, x.size(1))
        # One matmul each for the stacked input and hidden projections.
        x_results = torch.mm(x, self.weight_ih.t()) + self.bias_ih
        h_results = torch.mm(hidden, self.weight_hh.t()) + self.bias_hh
        x_results = x_results.squeeze()
        h_results = h_results.squeeze()
        # Split into reset (r), update (z), and candidate (n) gate components.
        i_r, i_z, i_n = x_results.chunk(3, 1)
        h_r, h_z, h_n = h_results.chunk(3, 1)
        r = torch.sigmoid(i_r + h_r)
        z = torch.sigmoid(i_z + h_z)
        n = torch.tanh(i_n + r * h_n)
        # Interpolate between the candidate state and the previous hidden state.
        return n - torch.mul(n, z) + torch.mul(z, hidden)
The implementation itself looks very straightforward; however, performance is not great.
What are some tricks that I could use to optimize this implementation further? I’m very new to JIT script modules and I appreciate each and every suggestion. |
st181493 | Sorry for the super late response. I just randomly saw this thread again. Nope… Still using the same code. Did you solve this problem? |
st181494 | What do you mean “return h”? IIRC the output of a GRU cell is the hidden state itself (you feed it back to itself as the previous hidden state)
Also, I had some unit tests in place, and could confirm that this code returned close-enough results to PyTorch’s GRUCell implementation |
st181495 | By the way, here’s my full implementation (including support for multiple layers, etc.): https://github.com/Maghoumi/JitGRU 45
Suggestions for improvement are greatly welcome! |
st181496 | If you are comparing jit version to default version, there certainly will be a gap, especially during backward process. I noticed that both the implementation from here 3 and your implementation used +=[i], but I recommend using .append(i). I don’t know how are these two methods translated to C++ but it seems that .append(i) is faster than simply adding, especially when you want to add many elements. |
st181497 | Hello,
I’m trying to convert my model via jit.trace(). I’m currently using the spconv library for my network. Training the network works fine with no errors, but jit.trace() throws a size mismatch error.
Here are the layers of my model:
self.net = spconv.SparseSequential(
    spconv.SparseConv2d(1, 64, 1),
    nn.PReLU(),
    spconv.SparseConv2d(64, 256, 2, padding=(1, 0)),
    nn.PReLU(),
    spconv.SparseConv2d(256, 512, 2, padding=(1, 0)),
    nn.PReLU(),
    spconv.SparseConv2d(512, 256, 2, padding=(0, 1)),
    nn.PReLU(),
    spconv.SparseConv2d(256, 64, 2, padding=(0, 1)),
    nn.PReLU(),
    spconv.SparseConv2d(64, 1, 1),
    # nn.Tanh(),
)
Here is the error message:
RuntimeError: size mismatch, m1: [375 x 375], m2: [1 x 64] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:290
The input consists of 375x375 matrices. The network operates on CUDA.
Maybe someone has an idea why this error comes up.
Thank you |
st181498 | Solved by Jakov_Gaspar in post #3
Hi,
I already solved the problem; you were right. I accidentally passed a float as the batch size, which caused this error.
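For anyone hitting the same thing: Python’s / division returns a float, so a batch size computed that way needs an explicit cast, e.g. (a generic sketch, with made-up variable names):

batch_size = int(num_samples / num_batches)  # or use integer division: num_samples // num_batches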
st181499 | That error looks like it’s coming from the actual operation, not anything to do with tracing. Are you tracing it with the same inputs you’re using to run it successfully in normal PyTorch?
If your model has any control flow that depends on the inputs, tracing may not work correctly. Can you also post the full code you’re using to run/trace your model?
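As a reminder of the control-flow caveat (a generic sketch, unrelated to spconv):

import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Tracing bakes in whichever branch the example input takes;
        # the other branch is silently dropped from the traced graph.
        if x.sum() > 0:
            return x + 1
        return x - 1

traced = torch.jit.trace(Gate(), torch.ones(3))
print(traced(-torch.ones(3)))  # prints zeros: still takes the `x + 1` branch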