st83569
Hi @shadowhy , You can try this: output = (a == torch.max(a, 1 , keepdim=True)[0]) Let me know if it’s not what you’re looking for.
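For reference, a minimal runnable sketch of that suggestion (the tensor a and its shape are made up for illustration):
import torch

a = torch.randn(4, 5)                              # example 2D input
# boolean mask that is True where each row attains its row-wise maximum
output = (a == torch.max(a, 1, keepdim=True)[0])
print(output)                                      # same shape as a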
st83570
I am trying to calculate the F1-score of a multilabel classification problem using sklearn.metrics.f1_score but I am getting the error UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples. 'precision', 'predicted', average, warn_for). My y_pred was probability values and I converted them using y_pred > 0.5. y_pred[0] is tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=torch.uint8) and y_ground_truth[0] is tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) The error is occuring at: f1_score(y_ground_truth.to('cpu'), y_pred.to('cpu'), average='samples') I am using nn.BCELoss() I went through Calculating Precision, Recall and F1 score in case of multi label classification 32 but it didn’t help.
st83571
Try this target_ = y_ground_truth.data.cpu().numpy() output_ = y_pred.data.cpu().numpy() f1_score(output_ , target_ , average='samples')
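As a hedged, self-contained version of that conversion (the shapes are made up; note that sklearn’s f1_score expects the ground truth first, i.e. f1_score(y_true, y_pred, ...), and it can still warn for samples with no predicted labels):
import torch
from sklearn.metrics import f1_score

y_ground_truth = torch.randint(0, 2, (8, 205))        # fake multilabel targets
y_pred = (torch.rand(8, 205) > 0.5).long()            # fake thresholded predictions

target_ = y_ground_truth.cpu().numpy()
output_ = y_pred.cpu().numpy()
print(f1_score(target_, output_, average='samples'))  # y_true first, y_pred second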
st83572
I’m following the tutorial here 9, and after getting it to load the model I wanted to run it. I had to make some small adjustments to the example code as my input features are indices (and just on the side, I could not find any documentation for torch::ones, expected it to be here 1 but didn’t find it, and trying to use torch::kInt64 resulted in “is not a member of ‘torch’” despite it being listed on that page?). I got it to compile with this #include <torch/script.h> // One-stop header. #include <iostream> #include <memory> #include <vector> int main(int argc, const char* argv[]) { if (argc != 2) { std::cerr << "usage: example-app <path-to-exported-script-module>\n"; return -1; } // Deserialize the ScriptModule from a file using torch::jit::load(). std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]); assert(module != nullptr); std::vector<torch::jit::IValue> inputs; at::TensorOptions tens_opts; tens_opts.dtype(at::kLong); inputs.push_back(torch::ones({1, 2}, tens_opts)); auto output = module->forward(inputs).toTensor(); std::cout << output.slice(0) << std::endl;; } But when running it I get an error: terminate called after throwing an instance of 'at::Error' what(): isTensor() ASSERT FAILED at [...]/libtorch/include/ATen/core/ivalue.h:153, please report a bug to PyTorch. (toTensor at [...]/libtorch/include/ATen/core/ivalue.h:153) I’m doing slicing in the forward definition of my model, could that be an issue? Here’s my model definition class CRNN(nn.Module): def __init__(self, num_inp, num_hid, num_ff, num_layers, num_out): super(CRNN, self).__init__() self.embed = nn.Embedding(num_inp, num_hid) self.rnn = nn.GRU(num_hid, num_hid, num_layers, batch_first=True) self.fc_emb_skip = nn.Linear(num_hid, num_ff) self.fc1 = nn.Linear(num_hid + num_ff, num_ff) self.fc2 = nn.Linear(num_ff, 3) self.hidden_init = nn.Parameter(t.randn(num_layers, 1, num_hid).type(t.FloatTensor), requires_grad=True) self.num_inp = num_inp self.num_hid = num_hid self.num_out = num_out self.num_layers = num_layers def forward(self, x): hidden = self.init_hidden(x.size(0)) emb = self.embed(x) output, hidden = self.rnn(emb, hidden) y_skip = F.elu(self.fc_emb_skip(emb[:, :-1])) joined_out = t.cat((y_skip, output[:, 1:],), dim=2) outview = joined_out.contiguous().view(joined_out.size(0) * joined_out.size(1), joined_out.size(2)) y = F.elu(self.fc1(outview)) logprobs = F.log_softmax(self.fc2(y), 1) return logprobs, hidden def init_hidden(self, batch_size): return self.hidden_init.repeat(1, batch_size, 1)
st83574
is module->forward(inputs) returning a tensor, or a tuple of tensors? You might need to unpack the tuple before calling toTensor() from something it contained.
st83575
Oh yeah, you’re right, it’s a tuple, which makes sense why it fails now… How would you unpack the tuple? It’s not using std::tuple, is it?
st83576
Something like replacing .toTensor() with .toTuple()->elements()[0].toTensor() will attempt to turn the first element in the tuple into a tensor object. Not sure if that’s the recommended method.
st83577
You could also use std::get to get the elements of a tuple. Not sure if that’s better than your current approach.
st83578
How do I get the 2nd output when I use .toTuple()->elements()[1].toTensor() with libtorch-1.1.0? Thanks.
st83579
I have a bunch of values I’d like to predict, and each value belongs to a category that has its own distribution of values. Instead of creating a new model per category, I’d like to incorporate weight sharing, then have the model apply a different fully connected layer based on the category. What’s the most efficient way to implement this?
st83580
Could you explain your use case a bit? How would you select the category? Do you have this information for each sample, i.e. training and test samples?
st83581
Sure. For context, this has to do with the CHAMPS competition on Kaggle. I have a tensor of shape NxF, where N is the number of nodes in a graph and F is the length of features. Per graph, I have a set of attributes that I am regressing against, the scalar coupling constant. Each attribute involves a pair of nodes, and I am given the type of coupling for each attribute I am trying to predict. Ideally, I’d like to have a model that does convolutions on the entire graph, then use different fully connected networks per coupling type. The only weight sharing across coupling types is the convolution/pooling part. Currently, I am doing this by concatenating every combination of 2 nodes possible, forming an NxNxF tensor, then passing that through a fully connected network that outputs 8 values, one for every coupling type (now NxNx8). Then, since I have the indices of each pair of nodes to train on, I can index that tensor with torch.take and get the right values. However, I’m wondering if there’s a more efficient way to do this. Doing a pairwise concatenation of every node is very memory intensive. Do you have any suggestions? Thanks!
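To make the indexing idea concrete, here is a small made-up sketch that gathers the features of the given node pairs directly from the NxF tensor, instead of first materializing the full pairwise tensor (all names and shapes are invented, and this is only one possible approach, not an official answer):
import torch

N, F = 29, 64                                        # example graph size and feature length
x = torch.randn(N, F)                                # node features after the shared convolutions
pair_idx = torch.tensor([[0, 1], [3, 7], [5, 2]])    # (num_pairs, 2) node indices per coupling

# gather only the needed node pairs: (num_pairs, 2F) instead of N x N x 2F
pair_feats = torch.cat((x[pair_idx[:, 0]], x[pair_idx[:, 1]]), dim=1)
print(pair_feats.shape)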
st83582
Let’s say that I have a fixed dataset of length N. I want to apply a distribution over this N samples and then sample from this distribution to create mini-batches. I want to continuously change the distribution over my data. One option is to change it after each epoch. This can be easily done by just generating new probabilities for each sample every epoch and then my getitem will just use this distribution to sample an item. However, what I want is to change the distribution after each mini-batch. Is there a way I can do that so that I don’t lose the nice parallelization and multi-threading DataLoader offers? I am not sure where to put the code that generates the probabilities over each sample. Any ideas?
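For the per-epoch variant described above, a minimal sketch using WeightedRandomSampler could look like this (the dataset and the weights are made up; this does not yet solve the per-mini-batch case):
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

dataset = TensorDataset(torch.randn(100, 3))

for epoch in range(3):
    # new probabilities over the N samples, regenerated every epoch
    weights = torch.rand(len(dataset))
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
    loader = DataLoader(dataset, batch_size=10, sampler=sampler, num_workers=2)
    for (batch,) in loader:
        pass  # training step goes here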
st83583
When I use multiprocessing on a remote server, I got some error messages like: File "/job/.local/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 315, in reduce storage fd, size = storage._share_fd_() RuntimeError: unable to write to file </torch_1_3660435083> What works: 1) run it locally with same # of processes/workers; 2) run it on the server with fewer processes/workers. This seems that shared memory is not enough. Since I use ‘spawn’ method to initialize a process, I actually did not use any model.share_memory(). What might be the problem and solution? Anything else that I should do to make sure I do not use shared memory? (I can not set the shared memory size on this remote server…) Thanks very much in advance!
st83584
batch_size = 3
num_layers = 2
in_channels = 10
out_channels = 20
kernel_size = 1
stride = 1
padding = 0
dilation = 1
sharing_rates = [0, 0]
bias = True

input = torch.randn(batch_size, in_channels, 1, 1).to(device)
o = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, dilation, 1, bias).to(device)
output = o(input)
g = nn.Linear(in_channels, out_channels, bias).to(device)
g._parameters['weight'].data.copy_(o._parameters['weight'].squeeze().data)
g._parameters['bias'].data.copy_(o._parameters['bias'].squeeze().data)
goutput = g(input.squeeze())
print((output.squeeze() - goutput).abs().sum())
print(torch.eq(output.squeeze(), goutput).all())

I get following printout:
tensor(1.9670e-06, grad_fn=<SumBackward0>)
tensor(0, dtype=torch.uint8)
I think Conv with kernel size 1 is the same as Linear operation. Am I wrong?
st83585
A difference of 1e-6 points to the limited floating point precision. You could run your code again with DoubleTensors, which should yield a smaller difference.
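A minimal sketch of that check, casting both modules and the input to double and comparing with a tolerance (the layer sizes are made up):
import torch
import torch.nn as nn

x = torch.randn(3, 10, 1, 1, dtype=torch.float64)
conv = nn.Conv2d(10, 20, kernel_size=1).double()
lin = nn.Linear(10, 20).double()

# copy the conv weights into the linear layer so both compute the same mapping
with torch.no_grad():
    lin.weight.copy_(conv.weight.squeeze())
    lin.bias.copy_(conv.bias)

out_conv = conv(x).squeeze()
out_lin = lin(x.squeeze())
print((out_conv - out_lin).abs().max())               # far smaller than 1e-6 in double
print(torch.allclose(out_conv, out_lin, atol=1e-12))  # compare with a tolerance instead of ==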
st83586
I was following this method (Dynamic parameter declaration in forward function 3) to dynamically assign parameters in forward function. However, my parameter is not just one single weight tensor but it is nn.Sequential . When I implement below: class MyModule(nn.Module): def __init__(self): # you need to register the parameter names earlier self.register_parameter('W_di', None) def forward(self, input): if self.W_di is None: self.W_di = nn.Sequential( nn.Linear(mL_n * 2, 1024), nn.ReLU(), nn.Linear(1024, self.hS)).to(device) I get the following error. TypeError: cannot assign 'torch.nn.modules.container.Sequential' as parameter 'W_di' (torch.nn.Parameter or None expected) Is there any way that I can register nn.Sequential as a whole param? Thanks!
st83587
You could register the nn.Sequential container directly using self.W_di = nn.Sequential(...), which will thus register all internal parameters. I’m not sure, if I understand the use case correctly, but if you need to get only this subset of parameters, you could call: model.W_di.parameters() Would that work for you or do you need to handle these parameters somehow differently?
st83588
Hi ptrblck, thanks for the reply. As you see in the forward function self.W_di = nn.Sequential(...) is how I assigned nn.Sequential to self.W_di. However, when the forward is called, I’m getting an error saying TypeError: cannot assign 'torch.nn.modules.container.Sequential' as parameter 'W_di' (torch.nn.Parameter or None expected) I’m not trying to call parameters nor a subset of params. Just trying to run the training but got this error. pytorch 1.1.0
st83589
In your __init__ you are using: self.register_parameter('W_di', None) which will create this error later. If you don’t need W_di as an nn.Parameter, you could just remove it.
st83590
But what if I need to declare it as a parameter? Maybe I wasn’t explaining it fully here, but I’m trying to assign the parameter in the forward function following this approach: Dynamic parameter declaration in forward function. In that post, the Chief Crazy Person suggested using self.register_parameter.
st83591
Ah OK, I clearly misunderstood the use case. This code should work as the suggested one by Adam class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.submodule = None def forward(self, x): if self.submodule is None: self.submodule = nn.Sequential( nn.Linear(1, 1), nn.ReLU() ) x = self.submodule(x) return x model = MyModel() model(torch.randn(1, 1)) print(dict(model.named_parameters())) > {'submodule.0.weight': Parameter containing: tensor([[-0.1282]], requires_grad=True), 'submodule.0.bias': Parameter containing: tensor([-0.7143], requires_grad=True)}
st83592
My computer is equipped with two RTX 2080ti GPUs. I have installed CUDA 10.1 in the system and the driver version is 418.39.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39       Driver Version: 418.39       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208…    Off  | 00000000:01:00.0  On |                  N/A |
| 41%   36C    P8    36W / 260W |    637MiB / 10986MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208…    Off  | 00000000:02:00.0 Off |                  N/A |
| 40%   31C    P8    13W / 260W |      1MiB / 10989MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2410      G   /usr/lib/xorg/Xorg                            28MiB |
|    0      2593      G   /usr/bin/gnome-shell                          98MiB |
|    0     12630      G   /usr/lib/xorg/Xorg                           252MiB |
|    0     12734      G   /opt/teamviewer/tv_bin/TeamViewer             19MiB |
|    0     12793      G   /usr/bin/gnome-shell                         186MiB |
|    0     13362      G   …uest-channel-token=17060840034489939627      49MiB |
+-----------------------------------------------------------------------------+
I have tried using anaconda to install pytorch, but it gets stuck a few minutes after starting to run, with no response to mouse and keyboard and the screen frozen. Besides, I cannot ssh into this computer either. I cannot figure out what kind of problem it has, because I cannot see the error since everything is frozen, so I have to restart the computer by force. It seems that the RTX 2080ti requires at least CUDA 10 to run efficiently. In anaconda, I can create several different environments to test, and I have tried several combinations of cudatoolkit version and pytorch version, but it still has this problem. On this computer, I can install tensorflow-gpu 1.9 with the cuda 9 toolkit using anaconda, because tensorflow has not yet supported cuda 10. As for pytorch, cuda 10 is only supported for other gpus such as the 1080ti. ref: https://github.com/pytorch/pytorch/issues/12977 Does anyone have an idea?
st83593
I’m having a similar issue when training on a multiple 2080Ti machine using DataParallel. When using only one GPU it seems to run fine but freezes and crashes in the same way as described above when using DataParallel. Using pytorch 1.1 and cuda10.0.
st83594
Hm, do you have an HDMI cable plugged into one of the cards to drive the GUI? If yes, can you try to run the code without DataParallel on just the card where the HDMI cable is plugged in? I am wondering if this really is a DataParallel issue or whether it’s due to exhausting all the GPU memory such that the GUI freezes. PS: I have a machine with 8 2080Ti and when I use DataParallel on those, I don’t have any issues. I am running a headless install of Ubuntu there though, so no GUI. On my second machine I have 4 cards, where I have an HDMI cable plugged into one of them for the GUI, and I notice that the interface is indeed slower when I run model training on this card – but I guess this is expected? (It never crashes/completely freezes though)
st83595
Hmm good question. The machine in question has no display, I ssh into it. I’m glad to hear you have such a set up working though! Do you mind giving me a few details about how you installed pytorch and cuda? Which versions and how they were installed? That would be very helpful.
st83596
My machine can only run for less than 10 minutes. Then it gets stuck and I cannot ssh into it anymore, even though I turned off the display. I was wondering how you managed to assemble 8 GPUs. Could you please share the specs of the motherboard, CPU, and other components? I assembled this machine by myself and I am not sure whether other components caused this issue.
st83597
Prinsphield: even though I turn off the display Not sure, but does that really turn off the GPU use (you can check via Nvidia smi, RAM use should be ~0)? I would just try to simply unplug the cable for the graphics card temporarily. motherboard, cpu and other components. Motherboard: Supermicro X11DPG-OT-CPU, https://www.supermicro.com/products/motherboard/Xeon/C620/X11DPG-OT-CPU.cfm 1 CPUs: 4 8-core Intel® Xeon® Silver 4110 CPU @ 2.10GHz (https://ark.intel.com/content/www/us/en/ark/products/123547/intel-xeon-silver-4110-processor-11m-cache-2-10-ghz.html) Memory: 376GiB how you installed pytorch and cuda? I am using conda – I noticed it’s much faster than the custom-compiled version. Probably because of mmkl in conda since I have intel CPUs. I.e., I am just using conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
st83598
I faced a similar issue. Do monitor the temperature of the GPUs using watch nvidia-smi and check if it exceeds 90 degrees or so. You can also check if the temperature throttle is activated with nvidia-smi -a.
st83599
Very good point regarding the temperature. In the 8-GPU machine, mine are usually around 60-70 Celsius +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.93 Driver Version: 410.93 CUDA Version: 10.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 208... Off | 00000000:1A:00.0 Off | N/A | | 41% 66C P2 259W / 250W | 4429MiB / 10989MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 1 GeForce RTX 208... Off | 00000000:1B:00.0 Off | N/A | | 43% 69C P2 247W / 250W | 4429MiB / 10989MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 2 GeForce RTX 208... Off | 00000000:3D:00.0 Off | N/A | | 30% 49C P2 134W / 250W | 4365MiB / 10989MiB | 84% Default | +-------------------------------+----------------------+----------------------+ | 3 GeForce RTX 208... Off | 00000000:3E:00.0 Off | N/A | | 30% 47C P2 127W / 250W | 4365MiB / 10989MiB | 79% Default | +-------------------------------+----------------------+----------------------+ | 4 GeForce RTX 208... Off | 00000000:88:00.0 Off | N/A | | 36% 59C P2 250W / 250W | 4429MiB / 10989MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 5 GeForce RTX 208... Off | 00000000:89:00.0 Off | N/A | | 35% 27C P0 69W / 250W | 0MiB / 10989MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 6 GeForce RTX 208... Off | 00000000:B1:00.0 Off | N/A | | 39% 62C P2 252W / 250W | 4429MiB / 10989MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 7 GeForce RTX 208... Off | 00000000:B2:00.0 Off | N/A | | 38% 62C P2 242W / 250W | 4429MiB / 10989MiB | 99% Default | +-------------------------------+----------------------+----------------------+ where the throttling is to occur at 89C: GPU Shutdown Temp : 94 C GPU Slowdown Temp : 91 C GPU Max Operating Temp : 89 C Could be that your cooling is not sufficient and you approach the Slowdown/Shutdown temperature when you are running your GPUs, which could be an explanation for My machine can only run for less than 10 minutes. Maybe do a new run and keep an eye on the nvidia-smi reported temperatures and see if there is a correlation
st83600
I use nvidia-smi to watch the status of the GPUs while running programs. There are two possible cases: nvidia-smi fails to obtain the status of the GPUs while the programs are still running (the error says you need to reboot your computer), or, after a while, the computer gets totally stuck and cannot be ssh-ed into. The temperature is normal even at the point when it gets stuck.
st83601
Hi Sebastian, If you don’t mind can you tell me what sort of power supply you’re using on that machine?
st83602
Update: I seem to have gotten my net training on the machine with multiple 2080Ti’s. Part of the issue was with the code itself, where a few tensors were inadvertently allocated on device 0 instead of on another GPU. Once these were tracked down and fixed, the training would run on two GPUs, but the machine would shut down when running on three or four. This was traced to the power supply of the machine, and once that was upgraded the code seems to train fine on all four GPUs.
st83603
Hi David, I’m experiencing similar problems to yours with two 2080Ti’s. Do you mind sharing how you traced the problem to the power supply?
st83604
What happened is that training on all the gpus would begin normally but within a minute or so the machine would simply shut down, i.e., power itself off without any warning. The guy here who built the rig immediately suspected the 1300W power supply wasn’t big enough and replaced it with a 1600W unit. No issues after that. I can ask him for more details if you’d like.
st83605
Hi guys, I have a problem: I would like to pass many distributions, stored as .npy files, as input to a ResNet. This type of input is new for me. How can I do this? Is it possible, or should I transform my input first? The code that I wrote is ready for images…
st83607
You can pass whatever you want; to read numpy files you will have to replace the reading function to adapt it to .npy files. What exactly is the question about?
st83608
The problem is another one: my .npy files are images with 9 channels, and I can’t change them. How can I change the pretrained resnet50 model?
st83609
When you load the model with model.load_state_dict(state_dict), use the arg strict=False: model.load_state_dict(sd, strict=False). Anyway, you will have to fine-tune.
st83610
I modified the WaveNet architecture for binary classification: class CustomConv(nn.Module): def __init__(self): super(CustomConv, self).__init__() self.conv1 = nn.Conv1d(in_channels=1, out_channels=256, kernel_size=3, padding=1) self.blocks = self.build_conv_block(num_blocks, 256) self.conv2 = nn.Conv1d(in_channels=256, out_channels=128, kernel_size=1) self.conv3 = nn.Conv1d(in_channels=128, out_channels=1, kernel_size=1) self.act = nn.ReLU() self.linear = nn.Linear(2000, 1) self.sigmoid = nn.Sigmoid() def build_conv_block(self, num_layers, num_channels): block = [] for _ in range(num_layers): for i in range(12): block.append(ConvBlock(num_channels, 2**i)) return nn.Sequential(*block) def forward(self, x): x = self.conv1(x) x = self.act(x) _,x = self.blocks((x,0)) x = self.act(x) x = self.conv2(x) x = self.act(x) x = self.conv3(x) x = torch.squeeze(x) x = self.linear(x) x = self.sigmoid(x) return x My model hasn’t been training very well, and I was wondering if torch.squeeze() could have any (negative) effect on the gradients produced during backprop, or any effect on training in general. Is there better practice for something like this? Any ideas?
st83611
Usually you would flatten the activation via x = x.view(x.size(0), -1) to create a 2-dimensional tensor out of the 4-dimensional conv output. If you are using torch.squeeze(x), this will remove a variable size of dimensions. Assuming that you are using a batch_size>1, at least the channel dimension will be removed. This would yield an activation of [batch_size, h, w], which is then passed to the linear layer. nn.Linear takes an input as [batch_size, *, in_features], where * denotes additional dimensions. The linear layer will be applied on each of these additional dimensions and I’m not sure, if that’s what you really want in this case. Could you replace the squeeze with the view operation and check the results again?
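A small made-up example of the shape difference (and of how nn.Linear then treats the extra dimension):
import torch
import torch.nn as nn

x = torch.randn(4, 1, 5, 5)            # e.g. a conv output with a single channel

flat = x.view(x.size(0), -1)           # shape (4, 25): 2-dimensional
squeezed = torch.squeeze(x)            # shape (4, 5, 5): only the size-1 dim is removed
print(flat.shape, squeezed.shape)

fc = nn.Linear(5, 1)
print(fc(squeezed).shape)              # (4, 5, 1): the linear layer is applied per extra dim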
st83612
Thanks for your response! I think you’re right – view is better than squeeze in this case as my batch_size>1. So thank you for pointing this out. When I retrained the model with view, interestingly enough, I got the same accuracy as what I observed with squeeze. I’m not sure why this was the case (any ideas?), but nonetheless thank you for helping me make my code better reflect my intentions.
st83613
Are you using nn.BCELoss as the criterion? I would assume the model to work quite differently if you pass a 2-dimensional tensor vs. a 3-dimensional one into nn.Linear.
st83614
Yes, I am using nn.BCELoss as the criterion. Yeah, I’m not sure why the model gives me the same accuracy. Perhaps I made a mistake – let me double check and retrain both models and get back to you if the accuracies vary.
st83615
Hello Everybody, I used Keras but started recently with PyTorch 0.4.1 under Windows10 on a laptop without GPU. Everything works fine, but I’m surprised that the TaskManager shows me that all the CPU consumption is by python.exe itself. Keras compiles behind the scenes C++ code, which is then executed for speed. How does that work for PyTorch? Surely, all the number crunching can not be done in pure Python ??
st83616
Hi, The C++ code is compiled as Python extensions. So from the task manager point of view, it is python that uses these resources. Note that in PyTorch, all the compilation is done ahead of time, not at runtime.
st83617
I have a 1D tensor, a, to which I want to apply multiple 1D convolutions. Each of these convolutions has to be applied to a diminishing slice of the same vector, a[:-i], i.e. the next convolution is applied after dropping the last element of a. Slicing is very inefficient in this case because it reallocs the whole remainder of a, so the overall procedure does not fit in memory. Is there any way to .resize_() a without affecting autograd or, alternatively, to specify the convolution limits in each convolution layer? (Each convolution is done with a different layer.) Thanks,
st83618
ezorita: Slicing is very inefficient in this case because it reallocs the whole remainder of a, so the overall procedure does not fit in memory. That shouldn’t be the case, if I understand your use case correctly. The output tensor will of course allocate new memory, but since the input is just sliced, it should not trigger a copy. Do you have a code snippet so that we can have a look?
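A quick way to convince yourself that a basic slice is a view and does not copy (made-up shapes):
import torch

m = torch.randn(1, 8, 100)                         # (batch, channels, length)
s = m[:, :, :50]                                   # basic slicing returns a view

print(s.data_ptr() == m.data_ptr())                # True: same underlying storage
print(s.contiguous().data_ptr() == m.data_ptr())   # False: .contiguous() makes the copy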
st83619
Hi, thanks for your answer. Here is the relevant code: for i in np.arange(self.resolution-1,-1,-1): # Dilated convolutions z = self.sliconv[i](m[:,:,:(i+1)]) where m is a 1D vector and self.sliconv is defined as: # Filters for sliding convolutions self.sliconv = nn.ModuleList() self.sliconv.append( nn.Sequential( nn.Conv1d(in_channels=kernel_size*in_channels, out_channels=out_channels, kernel_size=1, stride=1), nn.LeakyReLU(LR_ALPHA) ).to(self.device) ) for d in np.arange(self.resolution-1)+1: self.sliconv.append( nn.Sequential( nn.Conv1d(in_channels=kernel_size*in_channels, out_channels=out_channels, kernel_size=2, stride=1, dilation=d), nn.LeakyReLU(LR_ALPHA) ).to(self.device) )
st83620
[Screenshots A, B, and C of profiler output; the third column shows the time consumption (unit: μs).] The three images are from three slightly different code snippets. Anyway, there will always be a line that takes up 60 ms. I would like to know why this happens and how to shorten the time?
st83621
All “slow” lines contain a cpu() call, which will create a synchronization if your script runs on the GPU. To properly time CUDA code, you should synchronize before starting and stopping the timer (if you are manually profiling). torch.cuda.synchronize() t0 = time.time() ... torch.cuda.synchronize() t1 = time.time() You could also use the profiler 3 to measure the execution of your code.
st83622
The second picture doesn’t contain cpu().data; it is max_ids = ids.max(), where ids is a torch.cuda.Tensor.
st83623
[Screenshot of the updated profiler output.] I added torch.cuda.synchronize(), and this line takes up 60 ms. Is there a way to remove the synchronization time?
st83624
Hello to everyone! I am struggling to understand what the torch.nn.Parameters in torch.nn.LSTM are. I have built a toy model and it seems to have 4 torch.nn.Parameters. What are they in the following formulas? Andrei
st83625
Have a look at the docs under the Variables paragraph. Posting it here will mess up the format, but you’ll find some information about which internal LSTM parameters contain which part of the weight and bias matrices: LSTM.weight_ih_l[k] – the learnable input-hidden weights … You can access these parameters directly via:
lstm = nn.LSTM(1, 1)
for name, _ in lstm.named_parameters():
    print(name)
> weight_ih_l0
weight_hh_l0
bias_ih_l0
bias_hh_l0
lstm.weight_ih_l0
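As a small addition, the shapes also show how the per-gate matrices from the formulas are stacked into these four parameters (the sizes below are made up):
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20)
for name, p in lstm.named_parameters():
    print(name, tuple(p.shape))
# weight_ih_l0 (80, 10)  -> W_ii|W_if|W_ig|W_io stacked along dim 0 (4 * hidden_size rows)
# weight_hh_l0 (80, 20)  -> W_hi|W_hf|W_hg|W_ho
# bias_ih_l0   (80,)     -> b_ii|b_if|b_ig|b_io
# bias_hh_l0   (80,)     -> b_hi|b_hf|b_hg|b_ho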
st83626
I want to use the inception_v3 framework that comes with torchversion ,but it nake a mistake. model.fc = nn.Sequential(nn.Linear(2048,512), nn.ReLU(), nn.Dropout(0.2), nn.Linear(512,5), nn.LogSoftmax(dim=1)) for epoch in range(epochs): for inputs,labels in train_loader: inputs,labels = inputs.to(device),labels.to(device) optimizer.zero_grad() out = model(inputs) loss = criterion(out,labels) loss.backward() optimizer.step() running_loss +=loss.item() steps +=1 if (steps+1)%5 == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs,labels in test_loader: inputs, labels = inputs.to(device), labels.to(device) out2 = model(inputs) batch_loss = criterion(out2,labels) test_loss +=batch_loss.item() ps = torch.exp(out2) top_pred,top_class = ps.topk(1,dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() train_losses.append(running_loss/len(train_loader)) test_losses.append(test_loss/len(test_loader)) print(f"Epoch {epoch+1}/{epochs}" f"Train loss: {running_loss/5:.3f}", f"Test loss: {test_loss/len(test_loader):.3f} " f"Test accuracy: {accuracy/len(test_loader):.3f}") running_loss = 0 model.train() Traceback (most recent call last): File “F:/csdncollection/train.py”, line 81, in loss = criterion(out,labels) File “D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 493, in call result = self.forward(*input, **kwargs) File “D:\Anaconda3\lib\site-packages\torch\nn\modules\loss.py”, line 209, in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) File “D:\Anaconda3\lib\site-packages\torch\nn\functional.py”, line 1863, in nll_loss dim = input.dim() AttributeError: ‘tuple’ object has no attribute ‘dim’ I hope someone could help me .
st83627
Solved by ptrblck in post #2 By default the inception model returns two outputs, the output of the last linear layer and the aux_logits. If you don’t want to use the aux_logits for your training, just index out at 0: loss = criterion(out[0], labels) If you are training from scratch, you might just disable it the aux_logits: m…
st83628
By default the inception model returns two outputs, the output of the last linear layer and the aux_logits. If you don’t want to use the aux_logits for your training, just index out at 0: loss = criterion(out[0], labels) If you are training from scratch, you might just disable the aux_logits: model = models.inception_v3(aux_logits=False).
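A hedged sketch of the first option (random weights instead of pretrained ones, made-up data, and CrossEntropyLoss instead of the LogSoftmax/NLLLoss head used above):
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(aux_logits=True)   # default setting
model.fc = nn.Linear(2048, 5)                  # new head for 5 classes
criterion = nn.CrossEntropyLoss()

model.train()
x = torch.randn(2, 3, 299, 299)                # inception expects 299x299 inputs
labels = torch.randint(0, 5, (2,))

out = model(x)                                 # in training mode: (logits, aux_logits)
loss = criterion(out[0], labels)               # use only the main logits
loss.backward()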
st83629
Thank you, I just read the answer you gave to help others on this question. It now runs successfully. You are really a helpful person, thank you.
st83630
hello,when I use model = models.inception_v3(aux_logits=False) It has error. Traceback (most recent call last): File “F:/csdncollection/train.py”, line 49, in model = models.inception_v3(pretrained=True,aux_logits=False) File “D:\Anaconda3\lib\site-packages\torchvision\models\inception.py”, line 31, in inception_v3 model.load_state_dict(model_zoo.load_url(model_urls[‘inception_v3_google’])) File “D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 777, in load_state_dict self.class.name, “\n\t”.join(error_msgs))) RuntimeError: Error(s) in loading state_dict for Inception3: Unexpected key(s) in state_dict: “AuxLogits.conv0.conv.weight”, “AuxLogits.conv0.bn.weight”, “AuxLogits.conv0.bn.bias”, “AuxLogits.conv0.bn.running_mean”, “AuxLogits.conv0.bn.running_var”, “AuxLogits.conv1.conv.weight”, “AuxLogits.conv1.bn.weight”, “AuxLogits.conv1.bn.bias”, “AuxLogits.conv1.bn.running_mean”, “AuxLogits.conv1.bn.running_var”, “AuxLogits.fc.weight”, “AuxLogits.fc.bias”. when I use model = models.inception_v3(pretrained=True,aux_logits=True) this model can run,but it has another error. Epoch 1/200Train loss: 1.629 Test loss: 1.306 Test accuracy: 0.620 Epoch 1/200Train loss: 1.686 Test loss: 1.883 Test accuracy: 0.377 Epoch 1/200Train loss: 1.320 Test loss: 1.058 Test accuracy: 0.611
st83631
If you load the pretrained parameters, you need to specify aux_logits=True (or just leave the default). In that case, slice the output as suggested in the other post. Which kind of error are you getting then?
st83632
I’ve added an GeForce GTX 1080 Ti into my machine (Running Ubuntu 18.04 and Anaconda with Python 3.7) to utilize the GPU when using PyTorch. Both cards a correctly identified: $ lspci | grep VGA 03:00.0 VGA compatible controller: NVIDIA Corporation GF119 [NVS 310] (reva1) 04:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) The NVS 310 handles my 2-monitor setup, I only want to utilize the 1080 for PyTorch. I also installed the latest NVIDIA drivers that are currently in the repository and that seems to be fine: $ nvidia-smi Sat Jan 19 12:42:18 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 390.87 Driver Version: 390.87 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 NVS 310 Off | 00000000:03:00.0 N/A | N/A | | 30% 60C P0 N/A / N/A | 461MiB / 963MiB | N/A Default | +-------------------------------+----------------------+----------------------+ | 1 GeForce GTX 108... Off | 00000000:04:00.0 Off | N/A | | 0% 41C P8 10W / 250W | 2MiB / 11178MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 Not Supported | +-----------------------------------------------------------------------------+ Driver version 390.xx allows to run CUDA 9.1 (9.1.85) according the the NVIDIA docs 46. Since this is also the version in the Ubuntu repositories, I simple installed the CUDA Toolkit with: $ sudo apt-get-installed nvidia-cuda-toolkit And again, this seems be alright: $ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2017 NVIDIA Corporation Built on Fri_Nov__3_21:07:56_CDT_2017 Cuda compilation tools, release 9.1, V9.1.85 and $ apt-cache policy nvidia-cuda-toolkit nvidia-cuda-toolkit: Installed: 9.1.85-3ubuntu1 Candidate: 9.1.85-3ubuntu1 Version table: *** 9.1.85-3ubuntu1 500 500 http://sg.archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages 100 /var/lib/dpkg/status Lastly, I’ve installed PyTorch from scratch with conda conda install pytorch torchvision -c pytorch Also error as far as I can tell: $ conda list ... pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch ... However, PyTorch doesn’t seem to find CUDA: $ python -c 'import torch; print(torch.cuda.is_available())' False In more detail, if I force PyTorch to convert a tensor x to CUDA with x.cuda() I get the error: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from 82 http://... What am I’m missing here? I’m new to this, but I think I’ve checked the Web already quite a bit to find any caveats like NVIDIA driver and CUDA toolkit versions? EDIT: Some more outputs from PyTorch: print(torch.cuda.device_count()) # --> 0 print(torch.cuda.is_available()) # --> False print(torch.version.cuda) # --> 9.0.176
st83633
There is a version mismatch. Your installed CUDA version is 9.1. However, your PyTorch package is compiled with CUDA 9.0. Possible solutions: install CUDA 9.0 instead of 9.1, or install PyTorch compiled with CUDA 9.1; if there isn’t one yet, you may need to download the PyTorch source code and compile it yourself. BTW, I myself failed to install PyTorch with Anaconda Python 3.7, but succeeded with 3.6 (on Ubuntu 18.04). Hope this helps.
st83634
As I know now, PyTorch comes with all the required CUDA, cuDNN, etc. code bundled in the binaries – I don’t need anything special that would require to compile the sources myself :). Hence, there’s no need to install the CUDA Toolkit at all, and in fact I removed it completely. Only the Nvidia drivers and PyTorch are needed. I got it working at the moment, simply switching to a single-card setup, i.e., I removed the “small” NVS 310 – initially, I wanted to keep that card to drive all graphical output and use the 1080 solely for number crunching. However, no combination of drivers or PyTorch version (incl. different CUDA version) worked. With only the 1080 it was smooth sailing and worked immediately. The main difference is, is that I can now a newer Nvidia driver, 415 instead of 390, the limit of the NVS 310. Your comment that you had to downgrade to Python 3.6 sounds very interesting, though. I actually also tried what happens when using only the NVS 310. I knew that its compute capability was to low, but when I tested it a couple of months ago, I got the respective error messages (“Your card is too old” or something like this). But now, it could even find the Nvidia driver. PyTorch installs quite alright with Anaconda + Python 3.7, it just won’t run in CUDA mode. I will probably give a clean Anaconda + Python 3.6 a shot, just to see if it makes a difference for me. Otherwise, I will leave the NVS 310 out of the machine. I don’t even know, if there’s a serious advantage if the 1080 wouldn’t need to handle the graphics output. Thanks a lot for your feedback!
st83635
I faced the same problem as you, and I think this is because the system cannot find the NVIDIA driver. Adding the NVIDIA path to my system path may help.
st83636
Thanks, but I probably will stick with my working solution, i.e., only using the 1080 without the NVS 310 in parallel. With this setup, everything went through without any issues.
st83637
In my case (on a cluster with gpus), adding CUDA_VISIBLE_DEVICES=GPU_ID before the python command solved the problem. Instead of adding CUDA_VISIBLE_DEVICES in the command line, you can probably just add a line os.environ["CUDA_VISIBLE_DEVICES"] = "GPU_ID" before import torch.
st83638
same problem here: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from
st83639
Hi there, I am running pytorch on our 8-card cloud machine. So I use docker to run it with only required resources. Here is the docker parameters I used to launch it: docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES 0 --cpus=4 --shm-size=1G --memory=10240m *image_name* /bin/bash Based on the running command above, I find the training speed is dramatically slower when pin_memory setting True in DataLoader. I write a test benchmark code based on horovod benchmark code. I have post it to my github repo 1. The benchmark result shows, without data loading, the card could process 300 images/second. Adding data loading process, with pin_memory=False, the card could process 292 images/second, which is reasonable. However, with pin_memory=True, the card could only process 46 images/second. I know pin memory is used to accelerate tensor async transfer between host memory and device memory. But the performance here is very strange. Could anyone illustrate me why it is and how to fix it? Thanks.
st83641
Are you seeing the swap being used a lot? As far as I know, this might cause the slow down.
st83642
@ptrblck Thanks for the reply. I have found the trick. I’m not a professional in the CPU field, but it seems that something related to swap does slow down the tensor transfer speed. Our cloud machine has two NUMA nodes. The docker parameter --cpus=4 doesn’t allocate 4 CPU cores to the job, but balances the load over every core at a CPU rate of 4/cpu_core_nums. To run the training job correctly, the docker parameter --cpuset-cpus should be used.
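For example (with hypothetical core IDs), the original command could pin the job to four specific cores like this:
docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 --cpuset-cpus="0-3" --shm-size=1G --memory=10240m *image_name* /bin/bash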
st83643
I import a function from a .so (compiled from .cu and .pyx files), and an error is generated. Without this function, there is no error. Can someone help me?
st83644
Solved by Ran in post #2 Some wrong code in .cu generates this error.
st83645
Hi, I am working with depth estimation from stereo images and am trying to use a custom loss function. In the network, images are cropped into size [256, 512] and the maximum disparity is set to 192. This will then give me an output. The loss function includes a cross product ([bs*256*512, 192] × [192, bs*256*512]), which will generate a massive matrix and requires a lot of memory. Is there a way to get around this?
st83646
Failed to build pytorch from the source. My ENV: Ubuntu 19.04 NVIDIA-SMI 430.40 Driver Version: 430.40 CUDA Version: 10.1 gcc (Ubuntu 8.3.0-6ubuntu1) 8.3.0 ^ [501/2210] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/operators/torch_generated_pack_segments.cu.o FAILED: caffe2/CMakeFiles/torch.dir/operators/torch_generated_pack_segments.cu.o cd ....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators && /usr/bin/cmake -E make_directory ....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators/. && /usr/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators/./torch_generated_pack_segments.cu.o -D generated_cubin_file:STRING=....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators/./torch_generated_pack_segments.cu.o.cubin.txt -P ....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators/torch_generated_pack_segments.cu.o.Release.cmake /usr/local/cuda/include/cub/device/dispatch/dispatch_reduce.cuh(362): error: use the "typename" keyword to treat nontype "std::iterator_traits<_Iterator>::value_type [with _Iterator=InputIteratorT]" as a type in a dependent context /usr/local/cuda/include/cub/device/dispatch/dispatch_reduce.cuh(363): error: use the "typename" keyword to treat nontype "std::iterator_traits<_Iterator>::value_type [with _Iterator=OutputIteratorT]" as a type in a dependent context /usr/local/cuda/include/cub/device/dispatch/dispatch_reduce.cuh(683): error: use the "typename" keyword to treat nontype "std::iterator_traits<_Iterator>::value_type [with _Iterator=InputIteratorT]" as a type in a dependent context /usr/local/cuda/include/cub/device/dispatch/dispatch_reduce.cuh(684): error: use the "typename" keyword to treat nontype "std::iterator_traits<_Iterator>::value_type [with _Iterator=OutputIteratorT]" as a type in a dependent context 4 errors detected in the compilation of "/tmp/tmpxft_0000562a_00000000-6_pack_segments.cpp1.ii". CMake Error at torch_generated_pack_segments.cu.o.Release.cmake:279 (message): Error generating file ....../pytorch/build/caffe2/CMakeFiles/torch.dir/operators/./torch_generated_pack_segments.cu.o [502/2210] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/operators/torch_generated_logit_op.cu.o ....../pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h(149): warning: missing return statement at end of non-void function "Eigen::internal::ptrue(const Packet &) [with Packet=half2]" [508/2210] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/__/aten/src/THCUNN/torch_generated_LeakyReLU.cu.o ninja: build stopped: subcommand failed. Traceback (most recent call last): File "setup.py", line 748, in <module> build_deps() File "setup.py", line 321, in build_deps cmake=cmake) File "....../pytorch/tools/build_pytorch_libs.py", line 64, in build_caffe2 cmake.build(my_env) File "....../pytorch/tools/setup_helpers/cmake.py", line 328, in build self.run(build_args, my_env) File "....../pytorch/tools/setup_helpers/cmake.py", line 133, in run check_call(command, cwd=self.build_dir, env=env) File "/usr/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1. Any suggestions? Cheers Pei
st83647
I have a multitask model which uses ModuleDict. Its forward(self, a, a_len, b, b_len, task_id) has argument task_id. After converting the pytorch model checkpoint into C++ by tracing, the C++ prediction accuracy is 5% lower than pytorch prediction accuracy. If I train each task individually, i.e., into a separate model and checkpoint file, then convert each of them into C++, then the accuracy matches.
st83648
Tracing only covers the specific code path executed during tracing; all other paths and control-flow information cannot be recorded. You could trace the various submodules and combine them using scripting to get a single exportable model encompassing everything. Best regards Thomas
st83649
@tom Thank you so much. I haven’t fully understand your answer yet. The link is nice, however, I don’t know how to apply it into for-loops, i.e. iterate all possible task_id as input. Would you please provide some example? Below is the code snippet of the Model class. It contains ModuleDict whose key is task_id. The forward function has task_id as argument. This is a control flow, similar to if-else, since ModuleDict internally uses for-loops? So, when tracing the model. Shall we trace ALL task_id or tracing only one task_id (variable channel in code) is enough as shown below? channel = torch.ones(1, dtype=torch.int64) traced_script_module = torch.jit.trace(model, (premise, premise_length, hypotheses, hypotheses_length, channel)) output = traced_script_module(premise, premise_length, hypotheses, hypotheses_length, channel) traced_script_module.save('deploy-trace-multitask.pt') Code snippet for Model class’s definition self._word_embedding = nn.Embedding(self.vocab_size, self.embedding_dim, padding_idx=padding_idx, _weight=embeddings) if self.dropout: self._rnn_dropout = RNNDropout(p=self.dropout) #shared by all tasks # self._rnn_dropout = nn.Dropout(p=self.dropout) self._encoding = Seq2SeqEncoder(nn.LSTM, self.embedding_dim, self.hidden_size, bidirectional=True) #multi-task self._attention = nn.ModuleDict({}) self._projection = nn.ModuleDict({}) self._classification = nn.ModuleDict({}) for channel in channels_list: self.update(channel) # Initialize all weights and biases in the model. self.apply(_init_esim_weights) def update(self, channel): channel = str(channel) self._attention.update({channel : SoftmaxAttention()}) self._projection.update({channel : nn.Sequential(nn.Linear(4*2*self.hidden_size, self.hidden_size), nn.ReLU())}) self._classification.update({channel : nn.Sequential(nn.Dropout(p=self.dropout), nn.Linear(4*self.hidden_size, self.hidden_size), nn.Tanh(), nn.Dropout(p=self.dropout), nn.Linear(self.hidden_size, self.num_classes))}) def forward(self, premises, premises_lengths, hypotheses, hypotheses_lengths, channel_tensor): #must be a tensor """ Args: premises: A batch of varaible length sequences of word indices representing premises. The batch is assumed to be of size (batch, premises_length). premises_lengths: A 1D tensor containing the lengths of the premises in 'premises'. hypothesis: A batch of varaible length sequences of word indices representing hypotheses. The batch is assumed to be of size (batch, hypotheses_length). hypotheses_lengths: A 1D tensor containing the lengths of the hypotheses in 'hypotheses'. Returns: logits: A tensor of size (batch, num_classes) containing the logits for each output class of the model. probabilities: A tensor of size (batch, num_classes) containing the probabilities of each output class in the model. 
""" channel_id = channel_tensor.item() channel = str(channel_id) premises_mask = get_mask(premises, premises_lengths).to(self.device) hypotheses_mask = get_mask(hypotheses, hypotheses_lengths)\ .to(self.device) embedded_premises = self._word_embedding(premises) embedded_hypotheses = self._word_embedding(hypotheses) if self.dropout: embedded_premises = self._rnn_dropout(embedded_premises) embedded_hypotheses = self._rnn_dropout(embedded_hypotheses) encoded_premises = self._encoding(embedded_premises, premises_lengths) encoded_hypotheses = self._encoding(embedded_hypotheses, hypotheses_lengths) attended_premises, attended_hypotheses =\ self._attention[channel](encoded_premises, premises_mask, encoded_hypotheses, hypotheses_mask) """ rest of the code are omitted """
st83650
You need to trace the tasks separately and then write the bit combining them in torch.script, calling the traced modules. The link has a small recipe for that. Best regards Thomas
st83651
@tom Thank you for your patience, sir. Would you please clarify the step of “combining them”? I indeed checked the for-loop example in the link which accumulates result in each for-loop iteration. However, here, each channel’s result is final, and excludes each other. When using multitask model, it expose argument for each individual channel. For example, passing in channel “games” will produce a final score for itself (no need to combine with other channels), while passing in channel “fashion” will produce another final score for itself. What shall be returned in loop_in_traced_fn? Returning any channel’s result will discard other channels’. @torch.jit.script def loop_in_traced_fn(premise, premise_length, hypotheses, hypotheses_length, channel_list): for channel in channel_list: result = model(premise, premise_length, hypotheses, hypotheses_length, channel) #what shall be returned here? since each channel's result exclude each other. #channel_list = ["games", "fashion", "news", "food"] traced = torch.jit.trace(loop_in_traced_fn, premise, premise_length, hypotheses, hypotheses_length, channel_list)
st83652
You can accumulate them in a list and return the list mylist = [] and mylist.append(...) should work… (Or add them, torch.cat, whatever…). Best regards Thomas
st83653
Thank you, Thomas. would you please point me to the exact example you are referring to? I tried several but no luck yet. If we write a wrapperclass Composition, trace each subtask separately and combine them by calling forward(self, inputs) whose inputs is ["games", "fashion", "news", "food"]. This means, the final saved trace C++ model won’t be able to accept single channel like games, right? For my code in previous post, It seems that torch.jit.trace(loop_in_traced_fn, inputs) can ONLY accepts the inputs as tuple, so that loop_in_traced_fn will get the tensor inside the inputs tuple. Am I correct? It still not working yet, with error message shown below. I have also tried all kinds of input such as list of tuples, list of tensors, etc. RuntimeError: Tensor cannot be used as a tuple: @torch.jit.script def loop_in_traced_fn(channel_tuple): result = [] for channel in channel_tuple: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE print(channel) channel = torch.tensor([channel], dtype=torch.int64) output = model(premise, premise_length, hypotheses, hypotheses_length, channel) result.append(output) return result
st83654
Could you do the following, please? Trace the individual models as you did before. Store them in a dictionary with string keys. Write as simple a Python method that you can think of that does what you want the final, combined model to do, using only the dictionary with the traced models (and any inputs). Best regards Thomas
st83655
@tom Thank you very much, Sir. Yes, I followed what you said. Below is the error message. What am I missing? TypeError: 'dict' object for attribute 'module_dict' is not a valid constant. Valid constants are: 1. a nn.ModuleList 2. a value of type {bool, float, int, str, NoneType, function, device, layout, dtype} 3. a list or tuple of (2) Below is a snippet of combining them #trace each channel's model separately channel_tensors = [] module_dict = dict() for channel_id in channels_list: channel = torch.tensor([channel_id], dtype=torch.int64) channel_tensors.append(channel) traced_script_module = torch.jit.trace(model, (premise, premise_length, hypotheses, hypotheses_length, channel)) module_dict[channel_id] = traced_script_module #combine each channel's model together class MyScriptModule(torch.jit.ScriptModule): __constants__ = ['module_dict'] def __init__(self, module_dict): super(MyScriptModule, self).__init__() self.module_dict = module_dict @torch.jit.script_method def forward(self, premises, premises_lengths, hypotheses, hypotheses_lengths, channel): #channel must be a tensor channel_id = channel.item() return self.module_dict[channel_id](premise, premise_length, hypotheses, hypotheses_length, channel) my_script_module = MyScriptModule(module_dict) my_script_module.save("deploy-trace-multitask.channel_all.pt")
st83656
@tom I also tried another aproach by defining a simple python function to trace, and get errors as below. _jit_script_compile(mod, ast, _rcb, get_default_args(fn)) RuntimeError: python value of type 'dict' cannot be used as a value: @torch.jit.script def loop_in_traced_fn(premise, premise_length, hypotheses, hypotheses_length, channel): channel_id = channel.item() result = module_dict[channel_id](premise, premise_length, hypotheses, hypotheses_length, channel) ~~~~~~~~~~~ <--- HERE return result Below is the code snipeet #trace each channel's model separately channel_tensors = [] module_dict = dict() for channel_id in channels_list: channel = torch.tensor([channel_id], dtype=torch.int64) channel_tensors.append(channel) traced_script_module = torch.jit.trace(model, (premise, premise_length, hypotheses, hypotheses_length, channel)) module_dict[channel_id] = traced_script_module @torch.jit.script def loop_in_traced_fn(premise, premise_length, hypotheses, hypotheses_length, channel): channel_id = channel.item() result = module_dict[channel_id](premise, premise_length, hypotheses, hypotheses_length, channel) return result channel = channel_tensors[0] traced = torch.jit.trace(loop_in_traced_fn, (premise, premise_length, hypotheses, hypotheses_length, channel)) traced.save("deploy-trace-multitask.channel_all.pt")
st83657
tom: Write as simple a Python method that you can think of that does what you want the final, combined model to do, using only the dictionary with the traced models (and any inputs). Python method in both approaches report error message like “python dict type is NOT supported in Script”. The dict is used to map each channel into its traced model. This is the confusing part. I guess this python method also requires to be traced and saved as the big wrapper model for all channel’s individually traced models.
st83658
Thank you so much for your kind help. The main issue is how to trace and save the python method which combines all channels’ traced module.
st83659
We know that in Keras, Bidirectional(GRU(128, activation='linear', return_sequences=True))(a1) # (240,256) lets us choose the activation. But in torch there’s no parameter to choose it: nn.GRU(n_in, n_hidden, bidirectional=True, dropout=droupout, batch_first=True, num_layers=num_layers). I want to know how to modify the activation in GRU in torch.
st83660
Seeking help on “install from source”: I’m trying to install from source by building with the CUDA 10 library. My CUDA and cuDNN installations are working fine and the “python setup.py install” step seems to have finished correctly (no failure message, and it correctly identified the anaconda python / CUDA paths). But I did not find the torch package installed anywhere (i.e. a pytorch folder was not created in python/lib/site-packages). I wonder if I missed any steps after running setup.py, please? Thank you!
st83661
Just a heads-up: I am not sure if PyTorch is already supported with CUDA 10. I tried it once recently and some things did not work. https://discuss.pytorch.org/t/solved-weird-behavior-with-cuda-toolkit-version-10-0-gcc-6-4-0/26870/3 Why not try with CUDA 9 / CUDA 9.2?
st83662
Thanks Arul for the reply. I have got an RTX card, so I’m not sure if it works with CUDA 9.2.
st83663
We are building PyTorch from source with CUDA10 pretty often. Are you running into some issues? If so, could you post the error message etc.?
st83664
I’ve used vast.ai’s docker image, which includes PyTorch 1.0 with CUDA 10 support, without any issues so far.
st83665
I’ve tried building pytorch from source with the instructions from the link https://pytorch.org/get-started/locally/ as well as the github repo https://github.com/pytorch/pytorch#from-source However, I don’t think these were the right instructions, since I got the following error when importing torchvision: ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory I’m currently running on Ubuntu 16.04 & Python 3.5, without conda.
st83666
If you’ve installed torchvision via pip or conda, it might have accidentally downgraded PyTorch in this process (and expects CUDA9 to be present). Could you print the current PyTorch version (print(torch.__version__)) and check, if my assumption is correct? I would recommend to build torchvision from source after your PyTorch build.
st83667
Yes, previously I had installed both packages via pip. Also, I got the following after inserting print(torch.__version__): 1.2.0a0+2aaeccd Is the following link still relevant for building torchvision from source? https://discuss.pytorch.org/t/installing-torchvision-from-source/4677