st180400 | Hi, I’m having the same error and I reported it here, but I have had no answer so far.
Can anyone help me to solve this issue? |
st180401 | I’m having an issue with the TensorBoard SummaryWriter and torch.nn.utils.rnn.pack_padded_sequence. The pack_padded_sequence docs say that ‘lengths’ can be either a tensor (which MUST be on the CPU) or a list object.
Since I’m using a GPU, I use pack_padded_sequence in my model like so
packed_input = torch.nn.utils.rnn.pack_padded_sequence(embedding_batched_sentences, seq_lengths_clamped.cpu().numpy(), batch_first=True, enforce_sorted=False)
When I run my model without any use of the SummaryWriter it works just fine, as expected. No errors at all.
However, when I try to add_graph of my model to the SummaryWriter, it checks my forward method and I get the following error
→ 249 _VF._pack_padded_sequence(input, lengths, batch_first)
250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
251
RuntimeError: ‘lengths’ argument should be a 1D CPU int64 tensor, but got 0D cpu Long tensor
This error output occurs further down, from
→ 286 trace = torch.jit.trace(model, args)
Anyone know what’s going on here? |
st180402 | The code is as follows:
class DemoTargetObjectFeatureProcessor(torch.nn.Module):
    def __init__(self):
        super(DemoTargetObjectFeatureProcessor, self).__init__()

    def forward(self, target_object_inputs):
        target_object_size = target_object_inputs[:, 0] * target_object_inputs[:, 1]
        target_object_size = target_object_size.unsqueeze(-1)
        return torch.cat((target_object_inputs, target_object_size), dim=1)

    @torch.jit.export
    def forward_1(self, target_object_inputs_dict):
        target_object_inputs = target_object_inputs_dict['a']
        return self.forward(target_object_inputs)

target_object_inputs = torch.tensor([[1,1], [2,2]])
target_object_inputs_dict = {
    "a": torch.tensor([[1,1], [2,2]])
}
fp = DemoTargetObjectFeatureProcessor()
fp.forward(target_object_inputs)
fp.forward_1(target_object_inputs_dict)
module = torch.jit.trace(fp, target_object_inputs)
module = torch.jit.trace(fp.forward, target_object_inputs)
module = torch.jit.trace(fp.forward_1, target_object_inputs_dict)
I’m trying to pass a named dictionary of tensors into the TorchScript module from “module = torch.jit.trace(fp.forward_1, target_object_inputs_dict)”, but I will get the error as follows:
Traceback (most recent call last):
File "test_trace.py", line 50, in <module>
module = torch.jit.trace(fp.forward_1, target_object_inputs_dict)
File "/home/lei.chen/.pyenv/versions/3.6.6/lib/python3.6/site-packages/torch/jit/__init__.py", line 893, in trace
raise AttributeError("trace doesn't support compiling individual module's functions.\n"
AttributeError: trace doesn't support compiling individual module's functions.
Please use trace_module
If I try to use
module = torch.jit.script(fp)
module.forward_1(target_object_inputs_dict)
I get the following error:
RuntimeError:
Unsupported operation: indexing tensor with unsupported index type 'str'. Only ints, slices, and tensors are supported:
File "test_trace.py", line 36
@torch.jit.export
def forward_1(self, target_object_inputs_dict):
target_object_inputs = target_object_inputs_dict['a']
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return self.forward(target_object_inputs)
How can we pass a named dictionary of tensors into TorchScript? I want to run model inference in a C++ backend using the exported TorchScript. From the PyTorch unit tests https://github.com/pytorch/pytorch/blob/5136ed0e44c65cb3747a1f22f77ccf09d54c125c/test/cpp/api/jit.cpp#L73, it seems that we can pass a c10::Dict into a TorchScript module. |
st180403 | Can any TorchScript expert help here? I also found this issue on the PyTorch GitHub page: https://github.com/pytorch/pytorch/issues/16847. It claims “Closing as we support this in the tracer now.”, but many people have experienced the error from dictionaries as input tensors. |
st180404 | Hi, if you read the error, it infers the input as a tensor. You need to explicitly specify that it’s a dict with an annotation:
from typing import Dict
import torch
def forward_1(self, target_object_inputs_dict: Dict[str, torch.Tensor]): |
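As a quick sketch of the full pattern (reusing the names from the question above; a sketch, not tested against that exact setup), annotating the dict parameter lets torch.jit.script compile the method:
from typing import Dict
import torch

class DemoTargetObjectFeatureProcessor(torch.nn.Module):
    def forward(self, target_object_inputs: torch.Tensor) -> torch.Tensor:
        target_object_size = target_object_inputs[:, 0] * target_object_inputs[:, 1]
        return torch.cat((target_object_inputs, target_object_size.unsqueeze(-1)), dim=1)

    @torch.jit.export
    def forward_1(self, target_object_inputs_dict: Dict[str, torch.Tensor]) -> torch.Tensor:
        # with the Dict[str, Tensor] annotation, indexing with a string key is allowed
        return self.forward(target_object_inputs_dict['a'])

module = torch.jit.script(DemoTargetObjectFeatureProcessor())
out = module.forward_1({'a': torch.tensor([[1., 1.], [2., 2.]])})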
st180405 | I am getting the following error on a torch.script.load("file.pth", "cuda"):
RuntimeError:
Unknown type name 'NoneType':
Serialized File "code/__torch__/classification/models/multi_output/model.py", line 9
channels_fc : int
learning_rate : float
scheduler : NoneType
~~~~~~~~ <--- HERE
_torchscript_attributes : Dict[str, List[str]]
features : __torch__.torch.nn.modules.container.Sequential
What does it mean? Is it looking for python code? I thought that torchscript models would not require the original python code to be run. |
st180406 | Could you explain your use case a bit more, i.e. how did you save the model and what is the model definition? Also, where does torch.script.load come from? Are you using torch.jit.load? |
st180407 | My goal is to export a model for fast inference.
When I trace a model with jit.trace I usually do it this way:
model.eval()
with torch.no_grad():
    input = torch.rand(size=(1, 3, 500, 500))
    traced_cell = torch.jit.trace(model.to("cpu"), (input))
It is not clear to me if model.eval() and with torch.no_grad() are required (or still suggested) when exporting a model with jit.script.
Thank you! |
st180408 | model.eval() will change the behavior of some modules (e.g. dropout layers will be disabled and batchnorm layers will use their running stats to normalize the data). torch.jit.trace does not capture any data-dependent control flow, i.e. only the code path taken by the example input will be captured, and other inputs won’t take a different path based on e.g. if statements etc. Given that, it sounds right to use model.eval() before tracing (otherwise the dropout layer would be used with the same mask in each forward pass). I don’t know if disabling the gradient calculation is needed during tracing or could also be added later during the inference. In any case, you might also want to check torch.autograd.inference_mode() for your model deployment. |
st180409 | torch.jit.script is able to capture data-dependent control flow (e.g. the dropout masks would be randomly sampled in each step in case you leave the model in .train() mode). However, the common use case during inference is to use the .eval() mode, so you might call it nevertheless even if scripting the model. The advantage would be that your model could use other data-dependent control flow, such as conditions based on the shape of the input etc. |
st180410 | Thank you. Should I call .eval() before jit.script or after it? What about torch.autograd.inference_mode? I don’t understand if the results of these operations “are saved” by jit.script.
Thanks! |
st180411 | jit.script should not capture training/eval mode or the no_grad() context, so you should be able to script the model and call .eval() as well as inference_mode/no_grad during the deployment.
However, if you are seeing any issue with this, please let us know. |
st180412 | Thank you @ptrblck . My question is actually the following: is there any benefit in calling .eval() and enabling no_grad on a TorchScript model? Is it something I should do after I load a file in order to gain performance, or is it useless? |
st180413 | Yes, if you are deploying call inference_mode/no_grad for performance gains.
model.eval() will change the behavior of some layers, such as disabling dropout and using the running stats of batchnorm layers so it’s not a performance (speed) improvement, but used during the evaluation/test phase of the model. |
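A minimal deployment sketch combining both suggestions (the file name and input shape are placeholders):
import torch

model = torch.jit.load("scripted_model.pt")
model.eval()  # switch dropout/batchnorm layers to evaluation behavior

x = torch.randn(1, 3, 224, 224)
with torch.inference_mode():  # skip autograd bookkeeping during deployment
    out = model(x)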
st180414 | Tensor Comprehensions are discontinued (rather quietly). Triton seems too complicated? Or not, if you have experience to share on it.
So what do we use today like TC? Anything on the horizon, even?
Perhaps some documentation on guiding the JIT towards similar performance to writing fused kernels? |
st180415 | Enamex:
Perhaps some documentation on guiding the JIT towards similar performance to writing fused kernels?
There is indeed ongoing work in the different JIT backends, which use code generation to create (fused) kernels. I’m a bit familiar with the internals of the nvfuser work, but unfortunately cannot link to proper documentation, as it’s still in an early stage.
In any case, I expect to see more code generation approaches in the future, which would make writing custom operations easier in the framework. |
st180416 | So the current plan is JIT all the way? No lower-level script language for kernels specifically (like TCs)?
And if it is going to be JIT-based, are there any plans/work for a static diagnostic that tells you how well certain parts of the source TorchScript have managed to generate fused low-level code?
Something like a function summarize_compilation(script(model, method)) that returns a report on how certain sections fused or didn’t. Or something. You can definitely tell I’ve no idea about this area except that “fused -> probably good” |
st180417 | I have a relatively large network fully exported to TorchScript.
I re-import the TorchScript in Python, and then I need to run the forward pass a number of times on a set of frames.
The weird thing is that while the first forward pass runs at an acceptable speed, the 2nd one (and only the second one, after the first pass has returned the results to the CPU) seems to be taking forever, more than 50 seconds.
I’ve even tried to call
torch.cuda.synchronize(device=self.device)
after the .to('cpu').
The rest of the frames are processed at reasonable speed, it’s really only the 2nd one. What could it be doing in this 2nd pass? |
st180418 | Are you only seeing the slow 2nd iteration when using the scripted model and/or only the GPU?
Note that the JIT performs some optimizations, but 50s sounds rather excessive, so I doubt it’s the case here. |
st180419 | Hi, thanks for your reply
It is a traced model, and it only happens on GPU (on cpu things are even).
I’ve tried this on Pytorch 1.8.1 cuda 10, on both Windows 10 and Ubuntu 20.04 (the former on a Quadro P2000 Mobile with driver 466.11 and the latter has a Titan RTX on 450.102.04) and the results are similarly skewed (the 2nd frame takes 30 seconds on the Titan).
If it is of any assistance in understanding what is going on these are the profiles from the first 3 frames:
[profiler screenshot omitted]
Captured with this block:
with torch.autograd.profiler.profile(use_cuda=True, with_stack=True) as prof:
    inp = [torch.from_numpy(i).float().to(self.device) for i in inputs]
    with torch.no_grad():
        out = self.sess(*inp)
    result = [o.detach().cpu().numpy() for o in out]
Btw is there any way to check that pytorch is loading the correct cuda libraries?
Thank you! |
st180420 | Is the slowdown only happening on scripted models or any GPU workload?
michele-arrival:
Btw is there any way to check that pytorch is loading the correct cuda libraries?
You could check the used CUDA version via torch.version.cuda and depending on the way you’ve installed it (binaries or source build) it can be either statically or dynamically linked. |
st180421 | ptrblck:
Is the slowdown only happening on scripted models or any GPU workload?
Apologies, I didn’t catch the point of your question initially. It is only on scripted models.
Just to check if I understand correctly now:
I have the training function of my model wrapped in some boilerplate code (partly pytorch-lightning, but it should be irrelevant). I have the option to trace this function and then use it for training, or to skip this step and train with all python host code.
My original question was based on what I observed at inference time (the forward function has been exported and re-imported), but indeed even in training I observe a delay on the second iteration only when the forward function is traced, whereas with all-Python host code the times are even.
ptrblck:
michele-arrival:
Btw is there any way to check that pytorch is loading the correct cuda libraries?
You could check the used CUDA version via torch.version.cuda and depending on the way you’ve installed it (binaries or source build) it can be either statically or dynamically linked.
cuda = '10.2' in torch/version.py
It’s installed from anaconda, so it should be all statically linked, I believe?
Thanks! |
st180422 | Hi Dobbie.
The answer is: kind-of. I can give you an update.
Someone opened an issue on GitHub after encountering what I think is a superset of this problem. He saw that with varying input sizes, the first 20 iterations would be slow. On the other hand, with a fixed input size, only the first 2 iterations would be slow (and the second one unacceptably so, much like the reason for my question here).
link: The first 20 loops of inference are Extremely Slow on C++ libtorch v1.8.1 · Issue #56245 · pytorch/pytorch · GitHub
He was referred to a solution to avoid the 20 slow iterations, which was to decrease the optimization depth with torch._C._jit_set_bailout_depth(1)
link: JIT makes model run x14 times slower in pytorch nightly · Issue #52286 · pytorch/pytorch · GitHub
Now, the problem for me was that the latter solution did not fix the 2nd iteration. However, as it turns out, setting the bailout_depth to 0 solves just that.
I’m not sure if this is an acceptable/recommended solution, and/or if there’s a better fix coming in a future version of PyTorch (as I see ticket 52286 is still open), however maybe you want to try this.
So in conclusion, try:
torch._C._jit_set_bailout_depth(0)
or if you are writing C++:
#include <torch/csrc/jit/runtime/graph_executor.h>
torch::jit::getBailoutDepth() = 0;
and see if it works for you without too much penalty
M. |
st180423 | Thank you very much! It works for me without a measurable performance penalty. It seems there is little documentation about the bailout depth, and it would have been hard for me to find the solution without your help. |
st180424 | Yes, that’s what they mention on JIT makes model run x14 times slower in pytorch nightly · Issue #52286 · pytorch/pytorch · GitHub, that _jit_set_bailout_depth is undocumented.
I would expect it to be an option that is not exactly supposed to be accessible by the user, as both the _C module and the function are marked as “private”. That’s why I wonder if it’s the right solution. |
st180425 | Hello,
I have a traced model that runs very slowly on the second inference run on torch 1.7 and greater. From these posts and others, this seems to be the expected behavior.
I have one question about the exact speed of the JIT/graph optimisation run (the second run). Say that my model can take in different-sized tensors as data during the forward pass, say [batch_size, c, h, w] and then [batch_size, c * 10, h, w], both being valid inputs. Would it be expected that two forward passes (including the second, longer one) would be faster with input [batch_size, c, h, w] than with input sized [batch_size, c * 10, h, w]? My experiments show that the second run (that optimises the graph) is the same (slower) speed with both inputs. I just want to confirm that this is the expected behavior.
Thank you! |
st180426 | If I’m not mistaken, the default JIT backend uses up to 3 steps to optimize the graph for each new input shape. I.e. the new and larger batch would trigger the optimization again. The nvfuser backend would relax this slowdown a bit, as it’s more flexible when it comes to different input shapes (in case the generated kernels cannot be reused, they would be regenerated and optimized again). |
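A rough sketch (not from the thread) of how to see and exclude this per-shape optimization cost: warm each input shape up a few times before timing it. The file name and shapes below are placeholders.
import time
import torch

model = torch.jit.load("traced_model.pt").cuda().eval()

def timed(x, n_warmup=3, n_iter=10):
    # the first call(s) for a new input shape include the JIT profiling/optimization passes
    with torch.no_grad():
        for _ in range(n_warmup):
            model(x)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(n_iter):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - t0) / n_iter

small = torch.randn(8, 3, 224, 224, device="cuda")
large = torch.randn(8, 30, 224, 224, device="cuda")
print(timed(small), timed(large))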
st180427 | I need to implement a dynamic tensor split op at work. But when I want to export this split op to ONNX with a dynamic split_size, it does not seem to work.
I am new to ONNX. Can anyone help me? Thanks a lot.
To Reproduce
import torch
dummy_input = (torch.tensor([1, 4, 2, 7, 3]), torch.tensor([1, 2, 2]))
class Split(torch.nn.Module):
    def forward(self, x, l):
        return x.split(l.cpu().numpy().tolist(), dim=-1)

model = Split()
with torch.no_grad():
    torch.onnx.export(
        model, dummy_input, 'split.onnx', verbose=False, opset_version=13,
        input_names=['a', 'b'],
        output_names=['c'],
        dynamic_axes={'a': [0], 'b': [0], 'c': [0]}
    )
When I use the ONNX model, it does not seem to work. I get this error:
[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:b
import onnxruntime as ort
model_path = './split.onnx'
sess = ort.InferenceSession(model_path)
a = torch.tensor([4, 2, 3, 4])
b = torch.tensor([1, 3])
sess.run(['c'], {'a':a.numpy(), 'b':b.numpy()})
Tensor b apparently cannot be used as an input, but I do need a parameter to represent the dynamic split_size.
Environment
PyTorch Version (e.g., 1.0): 1.8.1+cu111
Python version: 3.8 |
st180428 | The Rust community has grown so much over the years; the library ‘tch-rs’ (https://docs.rs/tch/0.2.0/tch/) has had a successful build (https://travis-ci.org/LaurentMazare/tch-rs) binding the C++ API of PyTorch.
I am posting this to make the PyTorch community aware of it, and to hear whether anyone else has experience with Rust or has tried to deploy PyTorch algorithms with tch-rs.
GitHub: LaurentMazare/tch-rs (Rust bindings for the C++ API of PyTorch) |
st180429 | Right now libtorch C++ API deployment is somewhat difficult unless you have an open-source library/software (e.g. TorchServe) that can wrap serialized models for production instead of Python web frameworks (Flask/Django/FastAPI). TorchServe is nice as it has an RFC for a high-performance C++ PyTorch serving platform.
If you look at the TechEmpower benchmarks, you see that 4 of the top 10 libraries are written in Rust. This site measures which web frameworks handle requests fastest.
www.techempower.com: TechEmpower Web Framework Performance Comparison, a performance comparison of a wide spectrum of web application frameworks and platforms using community-contributed test implementations.
Right now two popular web frameworks from Rust, Rocket and Actix, are gaining traction in the web community. Once you finish creating your algorithms with the PyTorch library, TorchScript the model so it can be loaded up using tch-rs. |
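For reference, the PyTorch side of that hand-off is just a scripted (or traced) model saved to disk; a minimal sketch (the model choice and path are placeholders, not specific to tch-rs):
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
scripted = torch.jit.script(model)   # or torch.jit.trace(model, example_input)
scripted.save("model.pt")            # this file can then be loaded by libtorch or tch-rs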
st180430 | I have a GUI application which was written in Python + Java. Both the GUI and the deep learning models are handled by Python, and some core logic was written in Java. It has some performance issues and I’m trying to rewrite it using Rust. Basically, I want to rewrite the GUI using Rust and load the trained models into Rust using tch-rs. |
st180431 | So you’re writing your client side (front end) using Rust, assuming the framework is yew (GitHub - yewstack/yew: Rust / Wasm framework for building client web apps), to load up your PyTorch models using tch-rs? |
st180432 | I’m learning and trying to use Tauri to build the GUI, using Rust to re-implement some core logic which was originally implemented in Java. And yes, use tch-rs to load trained models. |
st180433 | I want to store a small set of useful-to-have constants in my model:
ZERO = torch.tensor( float(0.0) )
ONE = torch.tensor( float(1.0) )
INF = torch.tensor( float('inf') )
NAN = torch.tensor( float('nan') )
TRUE = torch.tensor( bool(True) )
These are often needed when e.g. performing operations like torch.where(mask, tensor, ZERO)
What is the best way to store these kinds of constants in my model? I have the following desiderata:
They are cast to the correct device/dtype when using model.to
They are collected in a namespace model.constants
I do not need to copy-paste the whole code in each model every time
The list is extendable on a per-model basis
It is fully compatible with JIT
Currently, I am doing
class model(nn.Module):
    ZERO: torch.Tensor

    def __init__(self):
        super().__init__()
        self.register_buffer('ZERO', torch.tensor(0.0))
Which satisfies (1, 4, 5) but violates (2, 3). Any better ideas? |
st180434 | I tried to run my model on iOS using Metal. However, the PReLU module does not seem to be supported yet. I get:
Could not run ‘aten::prelu’ with arguments from the ‘Metal’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. ‘aten::prelu’ is only available for these backends: [CPU, BackendSelect, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC].
It runs fine on CPU but no luck with Metal. Am I making some kind of stupid mistake, is there a workaround, or do I have to be patient and wait for Metal support of aten::prelu?
Here is my code example (just a slight modification of the HelloWorldMetal example, which runs fine without the PreLu layer):
import torch
import torchvision
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile
class myModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.MaxPool2d(8, 8),
            nn.PReLU(),
            nn.Flatten(),
            nn.Linear(2352, 10))

    def forward(self, xb):
        return self.network(xb)
model= myModel()
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
torchscript_model_optimized = optimize_for_mobile(traced_script_module, backend='metal')
torchscript_model_optimized._save_for_lite_interpreter("HelloWorld/HelloWorld/model/model2.pt")
Then I use the model in my app with:
#import "TorchModule.h"
#import <LibTorch-Lite-Nightly/LibTorch-Lite.h>
@implementation TorchModule {
 @protected
  torch::jit::mobile::Module _impl;
}

- (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
  self = [super init];
  if (self) {
    try {
      _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
    } catch (const std::exception& exception) {
      NSLog(@"%s", exception.what());
      return nil;
    }
  }
  return self;
}

- (NSArray<NSNumber*>*)predictImage:(void*)imageBuffer {
  try {
    c10::InferenceMode mode;
    at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat).metal();
    auto outputTensor = _impl.forward({tensor}).toTensor().cpu();
    float* floatBuffer = outputTensor.data_ptr<float>();
    if (!floatBuffer) {
      return nil;
    }
    NSMutableArray* results = [[NSMutableArray alloc] init];
    for (int i = 0; i < 1000; i++) {
      [results addObject:@(floatBuffer[i])];
    }
    return [results copy];
  } catch (const std::exception& exception) {
    NSLog(@"%s", exception.what());
  }
  return nil;
}
@end |
Thank you guys so much in advance! |
st180435 | Any clues what might be causing this JIT issue? I would assume it should detect the Generator annotation. This is linked to the following PR:
Adding RNG to model initializers
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 14072, in test_nn_init
cu = torch.jit.CompilationUnit(code)
File "/opt/conda/lib/python3.6/site-packages/torch/jit/_recursive.py", line 838, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/opt/conda/lib/python3.6/site-packages/torch/jit/_script.py", line 1311, in script
qualified_name, ast, _rcb, get_default_args(obj)
RuntimeError:
Unknown type name 'torch.Generator':
File "/opt/conda/lib/python3.6/site-packages/torch/nn/init.py", line 122
def uniform_(tensor: Tensor, a: float = 0., b: float = 1., generator: torch.Generator = None) -> Tensor:
~~~~~~~~~~~~~~~ <--- HERE
r"""Fills the input Tensor with values drawn from the uniform
distribution :math:`\mathcal{U}(a, b)`.
'uniform_' is being compiled since it was called from 'test'
File "<string>", line 4
def test(a):
# type: (Tensor)
return torch.nn.init.uniform_(a)
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE |
st180436 | I think asking in your PR might be a better place to fix potentially related issues or is the error unrelated to your changes? |
st180437 | The error doesn’t really arise from the changes. It is really the type annotation: if I remove it, pylint complains, and if I keep it, the JIT complains. |
st180438 | Hi~ Is there any way to map the nodes in the onnx.graph (exported by torch.onnx.export) to the modules in the PyTorch module (for example, conv)? The reason I am doing this is that I need some additional customized attributes in the module to optimize the execution of ONNX, but this information is not included in ONNX (this information is not used in inference). |
st180439 | I have seen in several places examples using PYTORCH_FUSION_DEBUG=1 to retrieve the source of the fused kernels (for example here: [JIT] Fusion of Dropout without constant is_training parameter is unsuccessful · Issue #24032 · pytorch/pytorch · GitHub). I am assuming this dumps it to stdout, but when run with this variable set I see nothing.
Could anyone give advice on how to get this working? Perhaps I need to compile PyTorch with specific flags set? |
st180440 | Solved by tom in post #2 |
st180441 | PyTorch has 3 fusers (legacy, NNC/TensorExpr fuser, and cuda/nvFuser), the PYTORCH_FUSION_DEBUG only worked for the old (now legacy) fuser.
For the newer fusers, you probably could get some info from the logging facility described here:
github.com/pytorch/pytorch/blob/f9c0a39ad9320795fc8df77b570c317e0c2ab60e/torch/csrc/jit/jit_log.h#L7-L38
// `TorchScript` offers a simple logging facility that can enabled by setting an
// environment variable `PYTORCH_JIT_LOG_LEVEL`.
// Logging is enabled on a per file basis. To enable logging in
// `dead_code_elimination.cpp`, `PYTORCH_JIT_LOG_LEVEL` should be
// set to `dead_code_elimination.cpp` or, simply, to `dead_code_elimination`
// (i.e. `PYTORCH_JIT_LOG_LEVEL=dead_code_elimination`).
// Multiple files can be logged by separating each file name with a colon `:` as
// in the following example,
// `PYTORCH_JIT_LOG_LEVEL=dead_code_elimination:guard_elimination`
// There are 3 logging levels available for your use ordered by the detail level
// from lowest to highest.
// * `GRAPH_DUMP` should be used for printing entire graphs after optimization
// passes
// * `GRAPH_UPDATE` should be used for reporting graph transformations (i.e.
// node deletion, constant folding, etc)
// * `GRAPH_DEBUG` should be used for providing information useful for debugging
Best regards
Thomas |
st180442 | Taking code straight from the documentation on this page:
import torch
import io
class MyModule(torch.nn.Module):
    def forward(self, x):
        return x + 10
m = torch.jit.script(MyModule())
# Save to io.BytesIO buffer
buffer = io.BytesIO()
torch.jit.save(m, buffer)
If we then run:
torch.jit.load(buffer)
We see:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/jit/_serialization.py", line 163, in load
cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive
This is on Mac and Linux, Python3.9, torch 1.9.0. |
st180443 | Solved by Michael_Suo in post #2
you probably want to do a buffer.seek(0) to reset the cursor |
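Putting the fix together with the snippet from the question, as a small self-contained sketch:
import io
import torch

class MyModule(torch.nn.Module):
    def forward(self, x):
        return x + 10

buffer = io.BytesIO()
torch.jit.save(torch.jit.script(MyModule()), buffer)
buffer.seek(0)  # rewind to the start of the in-memory zip archive before loading
loaded = torch.jit.load(buffer)
print(loaded(torch.zeros(1)))  # tensor([10.])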
st180444 | Hello,
I’d like to determine the exact layer execution order (forward() call chain) of a RecursiveScriptModule. I have to do it on this particular data type, as this is the pre-defined interface for model exchange in our flow.
Example: dummy Fashion-MNIST model, with a twist - I have swapped the declaration order of layer1 and layer2, whereas the forward() function contains the correct execution order.
class FashionCNN(nn.Module):
    def __init__(self):
        super(FashionCNN, self).__init__()
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.fc1 = nn.Linear(in_features=64*6*6, out_features=600)
        self.drop = nn.Dropout2d(0.25)
        self.fc2 = nn.Linear(in_features=600, out_features=120)
        self.fc3 = nn.Linear(in_features=120, out_features=10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.drop(out)
        out = self.fc2(out)
        out = self.fc3(out)
        return out
...
model = FashionCNN()
...
scripted_model = torch.jit.script(model.eval())
scripted_model.save('fmnist_scripted.pt')
In another script, after loading back the model, I would like to determine the forward() call chain programmatically somehow:
...
scripted_model = torch.jit.load('fmnist_scripted.pt')
# Iterate over all hierarchical layers ?!
layers = OrderedDict()
for i in scripted_model.named_modules():
    if not list(i[1].named_children()):
        layers[i[0]] = i[1]
# This mechanism gives me the wrong layer ordering:
print(layers)
> OrderedDict([('layer2.0', RecursiveScriptModule(original_name=Conv2d)),
('layer2.1', RecursiveScriptModule(original_name=BatchNorm2d)),
('layer2.2', RecursiveScriptModule(original_name=ReLU)),
('layer2.3', RecursiveScriptModule(original_name=MaxPool2d)),
('layer1.0', RecursiveScriptModule(original_name=Conv2d)),
('layer1.1', RecursiveScriptModule(original_name=BatchNorm2d)),
('layer1.2', RecursiveScriptModule(original_name=ReLU)),
('layer1.3', RecursiveScriptModule(original_name=MaxPool2d)),
('fc1', RecursiveScriptModule(original_name=Linear)),
('drop', RecursiveScriptModule(original_name=Dropout2d)),
('fc2', RecursiveScriptModule(original_name=Linear)),
('fc3', RecursiveScriptModule(original_name=Linear))])
Is there a way to obtain the right order of the forward() function call chain? Can this be done somehow programmatically?
Maybe @ptrblck, could you please give me some hints? I’d really appreciate your help!
Thank you & Best regards,
RB |
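One thing that might help, offered here as an assumption rather than a confirmed answer from this thread: the compiled forward() itself preserves the call order, so inspecting the code or graph of the loaded RecursiveScriptModule shows the submodule calls in execution order.
scripted_model = torch.jit.load('fmnist_scripted.pt')
print(scripted_model.code)  # TorchScript source of forward(); submodule calls appear in execution order

# or walk the graph nodes programmatically; submodule calls show up as prim::CallMethod nodes
for node in scripted_model.graph.nodes():
    if node.kind() == 'prim::CallMethod':
        print(node)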
st180445 | Hi.
I know this seems like an anti-pattern question to ask, but I was wondering if there’s a way to edit a ScriptModule’s parameters and delete references to the old object at the same time?
Currently it is not allowed to delete children of ScriptModule._parameters, which I assume is because the management of this happens in the lower C++ levels of the code.
Are there any thoughts on supporting this in the future?
For now it is at least possible to replace a value in the same ScriptModule._parameters as long as it’s of lesser size than the current value.
Example of something that will still execute (after the model is traced):
class Net(nn.Module):
    def __init__(self, num_emb):
        super(Net, self).__init__()
        self.emb = nn.Embedding(num_emb, 100)

    def forward(self, x):
        return self.emb(x)

m = Net(10000)
*trace model*
subset = torch.randn_like(m.emb.weight[:80])
m.emb.weight.data = subset
Are there any huge drawbacks to doing this, other than being anti-pattern?
My usecase is training a model, shipping it to production, then trimming it down to a subset of our most recent items before accepting queries.
Bonus question: Are there any good ways to convert a traced model back to a “normal” model after doing torch.jit.load()? |
st180446 | Modifying a traced model, then doing a copy() forces a “proper” re-allocate at least.
Any assistance with some of the other questions would still be appreciated! But my immediate needs are at least met.
lm = torch.jit.load('m.pt')
subset = torch.randn_like(lm.emb.weight[:80])
lm.emb.weight.data = subset
tmp = lm.copy() |
st180447 | NegatioN:
convert a traced model back to a “normal” model after doing torch.jit.load()
Hi NegatioN,
I know it’s been a long time, but I’m just wondering whether you have found any answer to your question? I have been struggling with this question for a while and would be very grateful if someone could help me. How can we modify a traced model? How can we finetune them? And what’s the point if we cannot finetune these models? |
st180448 | Hi @ptrblck,
Sorry to also mention you in this post. I’d appreciate it if you could help me. |
st180449 | I don’t think you can modify a scripted model, so would need to create the model architecture before scripting it. Afterwards you can fine-tune it as a plain eager model. |
st180450 | Thanks so much, @ptrblck, for your prompt answer. I am not familiar with the eager model. I just guess you mean I should apply pytorch_to_keras for fine-tuning?
I work with big data, and it takes more than one week to train the model! So if I just want to change the size of a linear layer, like (self.fc = nn.Linear(64,1)) to (self.fc = nn.Linear(4096,1)), is it not possible, and should I modify the model and train it from the beginning?! |
st180451 | sepid_kh:
I am not familiar with the eager model. I just guess you mean I should apply pytorch_to_keras for fine-tuning?
No, by “eager mode” I meant the “normal” PyTorch model without scripting.
sepid_kh:
I work with big data, and it takes more than one week to train the model! so if I just want to change the size of a Linear layer like ( self.fc = nn.Linear(64,1)) to ( self.fc = nn.Linear(4096,1)) it is not possible, and I should modify the model and train that from the beginning?!
You can directly change the layer in the model before scripting it. E.g. something like this would work:
model = MyModel()
model.fc = nn.Linear(...)
model_scripted = torch.jit.script(model) # script afterwards
# manipulating the scripted model might fail
model_scripted.fc = nn.Linear(...)
Would this work for your use case or do you need to load an already scripted model (and somehow cannot manipulate it beforehand)? |
st180452 | Yes @ptrblck, I need to load an already scripted model and then modify that.
I actually didn’t know after loading a scripted model it cannot be fine_tuned or modified! So I guess I have to train my model from scratch and save that as a normal model? Isn’t there any way to convert an already saved script model (after loading) to a normal model? |
st180453 | Small correction: the scripted model can be fine-tuned (i.e. trained), but I don’t believe modified (JIT experts might correct me here).
I’m not sure what restrictions you are working with and why loading a scripted model is necessary, i.e. I guess you might not have access to the model definition? |
st180454 | By fine-tune I mean: if we want to change the last layer size (e.g. the number of classes), is it still possible?
Let’s say my model is something like that:
class Mymodel(nn.Module):
    def __init__(self, num_classes):
        super(Mymodel, self).__init__()
        self.model1 = models.resnet50(pretrained=True)
        self._dropout = nn.Dropout(.4)
        self.sig = nn.Sigmoid()
        self.fc = nn.Linear(2048, num_classes)
    …
then what I did is:
train the model
evaluate model
save model using torch.jit.save(net_trace,‘model.pt’) |
st180455 | Yes, your workload is possible assuming you are not trying to manipulate the model architecture of the scripted model.
The potential issue I’ve mentioned before would arise if you are using this workflow:
save model using torch.jit.save(net_trace,‘model.pt’)
load the model
change the model architecture #!!!
train the model |
st180456 | I tried to trace a modified resnet50 model (taken from the torchray repo). I haven’t observed any average improvement in the forward pass upon doing this. My code is here: Google Colab
Can anyone help with what I might be doing wrong? |
st180457 | I had a look at the fx.fuse_modules write-up, and it suggests that modules to be fused need to be traceable. So I wanted to know: if, say, I script a model, does it do fusing automatically, or are the two operations complementary and I should do them separately to get added performance gains? |
st180458 | I’m aware that TorchScript does not support mixed serialization, but is there any way to easily save a TorchScript file with a preprocessing wrapper? The actual inputs to my models are vectorized features derived from a text file, and the features chosen are model hyperparameters. As such, I wanted to hide that process under the hood, allowing users to pass a raw text file and have the preprocessing steps wrapped around the TorchScript object, but I can’t see how to do this without mixed serialization.
Is there a work around? |
st180459 | Just make a class wrapper. First, make all preprocessing functions torchscript friendly, then you can create a class like:
import torch
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self, preprocess_fn, model):
        super().__init__()
        self.preprocess_fn = preprocess_fn
        self.model = model

    def forward(self, x):  # remember, by default, x is expected to be a torch.Tensor.
        return self.model(self.preprocess_fn(x))

wrapper = Wrapper(your_preprocess_function, your_model)
wrapper_ts = torch.jit.script(wrapper)
And that’s all!
I used the same approach, but to integrate some post-processing steps without needing to deploy extra Python files. |
st180460 | @vferrer Hello! Your reply is very helpful, but I need some clarifications. What do you mean by “functions torchscript friendly”? What if I need to have an audio file as input and the preprocessing function is the one that extracts the feature tensor? |
st180461 | @Sele By “functions torchscript friendly” I mean Python functions that can be scripted: ts_fn = torch.jit.script(fn), as they will be compiled to TorchScript (they are inside the forward pass). |
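As a small illustration of what “scriptable” means here (the normalization below is just a placeholder for your actual feature extraction):
import torch

def preprocess(x: torch.Tensor) -> torch.Tensor:
    # only tensor ops and TorchScript-supported Python, so it can be compiled
    return (x - x.mean()) / (x.std() + 1e-8)

ts_preprocess = torch.jit.script(preprocess)  # compiles without errors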
st180462 | from torch.autograd import Function
class SubMConvFunction(Function):
    @staticmethod
    def forward(ctx, features, filters, indice_pairs, indice_pair_num,
                num_activate_out, algo):
        ctx.save_for_backward(indice_pairs, indice_pair_num, features, filters)
        ctx.algo = algo
        return ops.indice_conv(features,
                               filters,
                               indice_pairs,
                               indice_pair_num,
                               num_activate_out,
                               False,
                               True,
                               algo=algo)

    @staticmethod
    def backward(ctx, grad_output):
        indice_pairs, indice_pair_num, features, filters = ctx.saved_tensors
        input_bp, filters_bp = ops.indice_conv_backward(features,
                                                        filters,
                                                        grad_output,
                                                        indice_pairs,
                                                        indice_pair_num,
                                                        False,
                                                        True,
                                                        algo=ctx.algo)
        return input_bp, filters_bp, None, None, None, None

indice_subm_conv = SubMConvFunction.apply
This gives the error:
Python builtin <built-in method apply of FunctionMeta object at 0x56a2498> is currently not supported in Torchscript:
if self.subm:
out_features = Fsp.indice_subm_conv(features, self.weight,
~~~~~~~~~~~~~~~~~~~~ <--- HERE
indice_pairs.to(device),
indice_pair_num,
I know that custom autograd functions are currently not supported in the torch JIT export. My model training is done, so I’m only interested in inference and exporting. Is there a way to replace this whole thing with just a simple forward function which is supported by the JIT? |
st180463 | Based on this question and answer: https://discuss.pytorch.org/t/creating-a-torchscript-wrapper/93137, I wonder how I can use this Wrapper class if I need to have an audio file as input and the preprocessing function to be the one that extracts the feature tensor. |
st180464 | Hi! I’m trying to use autograd.grad inside a jit-decorated function but am having problems with mismatching type hints that I’m unable to resolve. My problem is similar to Error when using TorchScript with autograd · Issue #46483 · pytorch/pytorch · GitHub, but I also need to use the grad_outputs argument, which is not covered there.
Running the following (with PyTorch 1.9.0)
from typing import Optional
import torch
from torch import Tensor

@torch.jit.script
def gradient(y, x):
    # grad_outputs = [torch.ones_like(y)]
    grad_outputs = torch.jit.annotate(Optional[Tensor], torch.ones_like(y))
    # grad_outputs = torch.jit.annotate(Optional[List[Optional[Tensor]]], [torch.ones_like(y)]) -> "Expected a List type hint but instead got Optional[List[Optional[Tensor]]]"
    grad = torch.autograd.grad(
        [y], [x], [grad_outputs], create_graph=True, retain_graph=True
    )[0]
    return grad
yields the error
RuntimeError:
aten::grad(Tensor[] outputs, Tensor[] inputs, Tensor?[]? grad_outputs=None, bool? retain_graph=None, bool create_graph=False, bool allow_unused=False) -> (Tensor?[]):
Expected a value of type 'Optional[List[Optional[Tensor]]]' for argument 'grad_outputs' but instead found type 'List[Tensor]'.
Empty lists default to List[Tensor]. Add a variable annotation to the assignment to create an empty list of another type (torch.jit.annotate(List[T, []]) where T is the type of elements in the list for Python 2)
Any help would be greatly appreciated!
Cheers
Raphael |
st180465 | Would you mind creating an issue on GitHub as well, so that we could track and fix it, please? |
st180466 | Thanks for the quick reply. Created the issue here: [JIT] autograd.grad with gradient_outputs argument results in type error · Issue #64728 · pytorch/pytorch · GitHub |
st180467 | Dear community,
I am working in the field of deploying PyTorch models. Our customers provide us with models which have been jit.script(...)-ed and saved as *.pt files. This is the common model exchange interface they have defined for us.
The “unholy” thing that I’m trying to achieve now is to concatenate two such scripted models in the form of RecursiveScriptModules.
model1 = torch.jit.load('customer_model1.pt')
model2 = torch.jit.load('customer_model2.pt')
print(type(model1))  # Gives me <class 'torch.jit._script.RecursiveScriptModule'>
print(type(model2))  # Gives me <class 'torch.jit._script.RecursiveScriptModule'>
# ???
The question is whether I can somehow concatenate these two models? Every idea is highly appreciated…
Edit: maybe a side note: we use the models only for inference
Thank you in advance!
Best regards,
RB |
st180468 | Solved by ptrblck in post #4 |
st180469 | Would you like to “concatenate” them such that the output of model1 would be passed to model2 in a sequential way? If so, then you might be able to create a new custom nn.Module, use both of the scripted models there, and script the new “parent model” again (if needed).
Let me know, if I misunderstood your question. |
st180470 | Dear @ptrblck,
Thank you for your answer, yes that is exactly what I would like to achieve. However, I’m not sure how to convert even a single RecursiveScriptModule into an nn.Module without its original class definition available. I’d highly appreciate some hints, please.
Thank you & Best regards,
RB |
st180471 | I had something like this in mind:
# save
model1 = nn.Linear(10, 5)
model2 = nn.Linear(5, 2)
model1 = torch.jit.script(model1)
model2 = torch.jit.script(model2)
torch.jit.save(model1, 'model1.pt')
torch.jit.save(model2, 'model2.pt')
# load
class MyModel(nn.Module):
    def __init__(self, model1, model2):
        super().__init__()
        self.model1 = model1
        self.model2 = model2

    def forward(self, x):
        x = self.model1(x)
        x = self.model2(x)
        return x
model1 = torch.jit.load('model1.pt')
model2 = torch.jit.load('model2.pt')
model = MyModel(model1, model2)
x = torch.randn(1, 10)
out = model(x)
print(out.shape)
> torch.Size([1, 2])
I.e. just loading the models and using them wrapped in another parent model. |
st180472 | Dear @ptrblck,
this is exactly what I wanted to achieve, I didn’t know it was possible this way.
Thank you very much,
RB |
st180473 | Dear community,
I am working on deploying PyTorch models. Our customers provide us with models which have been jit.script(...)-ed and saved as *.pt files. This is the common model exchange interface they have defined for us.
We are now trying to extract subgraphs of the larger full model graph. I could already carve out single layers from a model, e.g.:
scripted_model = torch.jit.load('sc_model.pt')
for i in scripted_model.named_modules():
    # Avoid hierarchical modules - since their modules are represented as single standalone layers as well
    if sum(1 for _ in i[1].named_children()) != 0:
        continue
    # Save layers one-by-one
    i[1].save(f"sc_model_{i[0]}.pt")
The question: is there a way to carve out multiple consecutive layers into a single *.pt file? Is this even possible?
Of course the validity of the sub-network model is important, so that we can call it for inference just as the full model would be invoked.
I’d really appreciate some help here…
Thank you & Best regards
RB |
st180474 | Hi,
Based on the code here
https://github.com/pytorch/pytorch/blob/master/benchmarks/fastrnns/custom_lstms.py
I wrote an example to compare the computation capability of the native LSTM and the custom LSTM.
But I found that the custom LSTM is about 100 times slower than the native LSTM class.
here is my test code:
import torch
import torch.nn as nn
import time
from models.custom_lstm import LSTMLayer, LSTMCell, script_lstm, LSTMState
input_size = 1024
cell_size =2048
batch_size =20
seq_len = 200
native_lstm=nn.LSTM(input_size, cell_size,1).cuda()
custom_lstm=script_lstm(input_size, cell_size,1).cuda()
inp = torch.randn(seq_len,batch_size,input_size).cuda()
hx = inp.new_zeros(batch_size, cell_size, requires_grad=False)
cx = inp.new_zeros(batch_size, cell_size, requires_grad=False)
t1 = time.time()
out, hid = native_lstm(inp)
t2 = time.time()
out2, hid2 = custom_lstm(inp, [(hx, cx)])
t3 = time.time()
print ('lstm:{}\ncustom lstm:{}\n'.format(t2-t1, t3-t2))
And here is the result:
lstm:0.015676498413085938
custom lstm:1.0338680744171143
My torch version is 1.3.1, GPU is TITANV with Cuda10 and cuDNN7.4.1
I also tried on pytorch 1.1, GPU TITAN XP with CUDA9.1, which has same ratio of speed.
Any idea?
Thanks so much |
st180475 | Solved by driazati in post #2 |
st180476 | The TorchScript runtime does some optimizations on the first pass (it assumes you will be running your compiled model’s inference many times), so this is likely why it looks much slower. Could you try running custom_lstm a couple times before you benchmark it and comparing? |
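A sketch of the suggested measurement (warm-up plus CUDA synchronization so the two models are compared on steady-state speed; reuses the variable names from the question above):
import time
import torch

def benchmark(fn, *args, n_warmup=3, n_iter=100):
    for _ in range(n_warmup):      # let the JIT run its first-pass optimizations
        fn(*args)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(n_iter):
        fn(*args)
    torch.cuda.synchronize()
    return (time.time() - t0) / n_iter

# e.g. benchmark(native_lstm, inp) vs. benchmark(custom_lstm, inp, [(hx, cx)])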
st180477 | I re-tested both classes 1000 times, and the result seems more reasonable.
lstm:49.758071184158325
custom lstm:55.80940389633179
Thanks for your answer. |
st180478 | I met the same problem. Is there any way to disable the optimization, choose the optimization level, or save the model after optimization? When I load the TorchScript model in C++, the first pass takes about 20s while the others’ inference time is about 0.5s. |
st180479 | I think there’s a parameter called optimize for scripting with TorchScript. |
st180480 | In the source code it says “optimize is deprecated and has no effect.”
https://pytorch.org/docs/stable/_modules/torch/jit.html#script |
st180481 | The code suggested an alternative: warnings.warn("`optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution() instead") |
st180482 | You could try setting torch._C._jit_set_profiling_executor(False) and torch._C._jit_set_profiling_mode(False).
This was specifically added for speeding up compilation times for inference.
You could also indeed try with torch.jit.optimized_execution() if compilation times are still high for you. The latter runs even fewer optimizations. |
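A minimal sketch of the second suggestion (assuming an already scripted/loaded model; file name and input shape are placeholders):
import torch

scripted = torch.jit.load("model.pt")
x = torch.randn(1, 3, 224, 224)

with torch.jit.optimized_execution(False):  # run with fewer JIT optimizations
    out = scripted(x)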
st180483 | When I use Python, torch.jit.optimized_execution() solves the problem, thanks.
However, how should I solve this problem in C++?
Thanks in advance |
st180484 | you could try this.
#include <torch/csrc/jit/update_graph_executor_opt.h>
//...
setGraphExecutorOptimize(false); |
st180485 | Hi, I still have some questions about the custom RNN:
I am able to reproduce senmao’s results that the LSTM and the custom LSTM have similar performance over 1000 runs, but this is partly because the original LSTM becomes worse. This can be seen in senmao’s results. The first run of the original LSTM is 0.015s. If the performance were consistent, 1000 runs would take 15 seconds instead of the 49.758 reported (I have verified this myself).
Although I have no idea why the original LSTM becomes worse, I got rid of the problem by changing the hyperparameters to:
input_size = 37
cell_size =256
batch_size =128
seq_len = 60
Now the original lstm becomes stable. In this case, the custom lstm is 10 times slower than the original lstm. Here are the results of 1000 runs: lstm:1.54s, custom lstm:19.75s. Can anyone please suggest how the custom lstm can be modified to have comparable performance with the original lstm?
Thanks so much! |
st180486 | Hi senmao! I’m writing to you because I tried to get your results on my own… but I couldn’t get them. I have the same code as you posted, and I’m measuring the time with this code:
t_native = 0.0
t_custom = 0.0
for i in range(1000):
    t1 = time.time()
    out, hid = native_lstm(inp)
    t2 = time.time()
    t_native += t2-t1
    t3 = time.time()
    out2, hid2 = custom_lstm(inp, [(hx, cx)])
    t4 = time.time()
    t_custom += t4-t3
Thanks for all!
Regards,
David. |
st180487 | Hi,
I’m looking for a way to write the graph (preferably torch.jit.script module) forward and backward into a text format file.
A simple way to write the forward:
module = torch.jit.script(model) # JIT for model
print(module.code)
But, is there something similar to write the backward?
The reason I’m looking for that is: I want to evaluate performance of the forward and backward for a theoretical HW.
I tried to use the PyTorch profiler, but it does not give all the information. It does not give the graph connectivity and tensor sizes.
ONNX export supports only the forward pass.
Any other idea? |
st180488 | One thing that you could try is using the torchviz library to visualize what the backward pass looks like!
torchviz allows you to draw the computation graph of your network. This might be able to help you! |
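A sketch with torchviz (assuming torchviz and graphviz are installed; the model below is just a placeholder):
import torch
from torchviz import make_dot

model = torch.nn.Sequential(torch.nn.Linear(10, 5), torch.nn.ReLU(), torch.nn.Linear(5, 1))
x = torch.randn(1, 10, requires_grad=True)
y = model(x)

# renders the autograd (backward) graph that y.backward() would traverse
make_dot(y, params=dict(model.named_parameters())).render("backward_graph", format="pdf")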
st180489 | I would like to load a TorchScript model file (i.e. saved using torch.jit.save) into a regular torch.nn.Module. I.e., rather than have
my_recursive_script_module = torch.jit.load(model_path)
I’d have:
my_nn_module = some_function(model_path)
Is this possible? I would think it would be straightforward, since TorchScript models are essentially a restricted subset of torch models, but it’s not working. |
st180490 | I’m not aware of this “inverse” operation to revert the scripting and create a Python model from the scripted one, but you might be able to manually restore the model based on the graph. |
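If the original class definition is available, or can be re-written by hand from the printed graph, one workaround (a sketch under that assumption; MyModel is a hypothetical hand-written module using the same attribute names as the scripted one) is to copy the parameters over instead of “un-scripting” the module:
import torch

scripted = torch.jit.load("model.pt")
print(scripted.code)  # human-readable forward() to guide a manual re-implementation

eager_model = MyModel()  # hypothetical nn.Module matching the scripted graph
eager_model.load_state_dict(scripted.state_dict())  # scripted modules expose state_dict()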
st180491 | Hi,
I am running in a DistributedDataParallel environment, where each worker imports a JIT module and therefore calls torch.utils.cpp_extension.load(...). As a result, each worker compiles the JIT module from scratch. This takes ages. Is there a way to efficiently cache this loading? |
st180492 | I’m currently reading the C++ implementation of RNNs in ATen (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/RNN.cpp). Based on the following implementation, is my understanding correct that in every layer dropout will be applied with the same mask for every step?
template<typename io_type, typename hidden_type, typename weight_type>
LayerOutput<io_type, std::vector<hidden_type>>
apply_layer_stack(const Layer<io_type, hidden_type, weight_type>& layer, const io_type& input,
                  const std::vector<hidden_type>& hiddens, const std::vector<weight_type>& weights,
                  int64_t num_layers, double dropout_p, bool train) {
  AT_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
  AT_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");

  auto layer_input = input;
  auto hidden_it = hiddens.begin();
  auto weight_it = weights.begin();
  std::vector<hidden_type> final_hiddens;
  for (int64_t l = 0; l < num_layers; ++l) {
    auto layer_output = layer(layer_input, *(hidden_it++), *(weight_it++));
    final_hiddens.push_back(layer_output.final_hidden);
    layer_input = layer_output.outputs;

    if (dropout_p != 0 && train && l < num_layers - 1) {
      layer_input = dropout(layer_input, dropout_p);
    }
  }

  return {layer_input, final_hiddens};
} |
st180493 | Hi Pytorch Team,
We have Custom Deep Learning hardware and a Compiler for the Hardware. For now, we have only Caffe framework support.
Having said that, we want to work on adding support for the Pytorch framework.
Is there any reference for the same?
Thanks,
Darshan C G |
st180494 | You might look into:
1. Convert from PyTorch to ONNX using torch.onnx.export, then compile from the ONNX format to your machine code. This is the approach taken by Intel OpenVINO and a few other hardware vendors, and it will let you easily add support for TensorFlow as well, since there is also a TensorFlow->ONNX converter.
2. I think the Glow compiler can load TorchScript, so you could look into that.
What’s the name of your project / company? |
st180495 | Hi All,
I just have a quick question regarding RecursiveScriptModule. I’ve built a custom optimizer, and within it I cycle through all layers via for module in net.modules() and get the name of each module via module.__class__.__name__. I need this as my optimizer handles different layers differently, and I determine what type of layer it is via module.__class__.__name__.
However, if I jit my network this fails, because the names of all the modules become RecursiveScriptModule. I did notice that when printing out a module it shows RecursiveScriptModule(original_name=Linear). So, I was wondering how I can get the original_name variable here?
Thanks in advance! |
st180496 | Solved by ptrblck in post #2 |
st180497 | module.original_name would return a string containing the original module name, e.g.:
lin = nn.Linear(1, 1)
s = torch.jit.script(lin)
print(s.original_name)
> 'Linear'
so you could try to use this attribute instead. |
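Applied to the optimizer loop from the question, a small sketch that should work for both scripted and plain modules (it reuses net from the question, and the layer names handled are just examples):
for module in net.modules():
    # RecursiveScriptModule stores the original class name in .original_name;
    # fall back to __class__.__name__ for ordinary (non-scripted) modules
    layer_type = getattr(module, 'original_name', module.__class__.__name__)
    if layer_type == 'Linear':
        ...  # handle Linear layers
    elif layer_type == 'Conv2d':
        ...  # handle Conv2d layers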
st180498 | Does TorchScript support custom autograd functions?
I implemented a custom function following this link:
https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
Does TorchScript support this kind of function? I followed this TorchScript tutorial: TorchScript and an example.
However, I got the error “ValueError: Compiled functions can’t take variable number of arguments or use keyword-only arguments with defaults”, and it’s my custom autograd function that caused this error. I assumed that the compiler doesn’t know about the argument type of the custom autograd function. I cannot find how to make a custom autograd function work with TorchScript. |
st180499 | We do not currently support custom autograd functions, but it is something on our radar that we would like to do in the future. You can find more context in this issue.
It is also possible to replicate most of the behavior in custom autograd functions now via custom C++ operators. |