PyTorch's grid_sample conversion to CoreML (via coremltools) | torch.nn.functional.grid_sample (source here, click on docs for documentation) is currently an unsupported operation in CoreML (and its conversion utilities library, coremltools).
What I'm looking for is a way to export the layer shown below from PyTorch's TorchScript (docs here) to CoreML (either using a custom op created via Swift or via an efficient PyTorch rewrite of grid_sample).
For details and tips to get you started, see the Tips section.
Minimal verifiable example
import coremltools as ct
import torch
class GridSample(torch.nn.Module):
    def forward(self, inputs, grid):
        # Rest could be the default behaviour, e.g. bilinear
        return torch.nn.functional.grid_sample(inputs, grid, align_corners=True)

# Image could also have more in_channels, different dimension etc.,
# for example (2, 32, 64, 64)
image = torch.randn(2, 3, 32, 32)  # (batch, in_channels, height, width)
grid = torch.randint(low=-1, high=2, size=(2, 64, 64, 2)).float()

layer = GridSample()
# You could use `torch.jit.script` if preferable
scripted = torch.jit.trace(layer, (image, grid))

# Sanity check
print(scripted(image, grid).shape)

# Error during conversion
coreml_layer = ct.converters.convert(
    scripted,
    source="pytorch",
    inputs=[
        ct.TensorType(name="image", shape=image.shape),
        ct.TensorType(name="grid", shape=grid.shape),
    ],
)
which raises the following error:
Traceback (most recent call last):
  File "/home/REDACTED/Downloads/sample.py", line 23, in <module>
    coreml_layer = ct.converters.convert(
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 175, in convert
    mlmodel = mil_convert(
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 128, in mil_convert
    proto = mil_convert_to_proto(, convert_from, convert_to,
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
    prog = frontend_converter(, **kwargs)
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 85, in __call__
    return load(*args, **kwargs)
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 81, in load
    raise e
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 73, in load
    prog = converter.convert()
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 227, in convert
    convert_nodes(self.context, self.graph)
  File "/home/REDACTED/.conda/envs/REDACTED/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 54, in convert_nodes
    raise RuntimeError(
RuntimeError: PyTorch convert function for op 'grid_sampler' not implemented.
Dependencies
Python (conda):
coremltools==4.1
torch==1.8.0
You could also use nightly/master builds (at least for the day of writing: 2021-03-20)
Tips
Tips are split into the two possible solutions I currently see:
PyTorch only
Rewrite torch.nn.functional.grid_sample from scratch.
This would require sticking only to PyTorch operations on tensors, as loops (e.g. triple nested) would hang the converter and be too inefficient.
You cannot use __getitem__ on lists or related types. It seems to work with torch.Tensor, but I had problems with it, so keep this in mind if you get RuntimeError: PyTorch convert function for op '__getitem__' not implemented.
Pros:
No need for two languages & sticking to single technology
Cons:
Limited with loops and would require sticking to vectorized operations (most/all of the time)
Swift & CoreML
Register custom layer which is responsible for running grid_sample. CPU only implementation would be fine (although using Apple's Metal for GPU speedups would be great).
As I'm not into Swift, I've gathered a few resources which might help you:
https://coremltools.readme.io/docs/custom-operators - starting point, Python only, quite easy, only registering layer for conversion
https://developer.apple.com/documentation/coreml/mlcustomlayer - API of the layer which one would have to code in Swift
https://developer.apple.com/documentation/coreml/core_ml_api/creating_a_custom_layer - more about aforementioned (but not much)
https://machinethink.net/blog/coreml-custom-layers/ - blog post with an example and dispatching a layer to devices (GPU, CPU). Needs Swift (CPU version) and Metal (GPU implementation). An eventual Metal implementation might be based on PyTorch's CUDA implementation; the CPU/Swift version might be derived similarly.
The post is three years old, so be aware of that; it implements a swish activation layer and seems to be a good starting point (and other posts from the same author cast some light on CoreML itself).
https://github.com/hollance/CoreML-Custom-Layers - repo for the above
Pros:
Possibility to use loops and finer control over the algorithm
Might be easier as we're not limited to operations which CoreML can currently read
Cons:
Two languages
Sparse documentation
| Apparently some good soul saw our struggles and provided a custom op using MIL (CoreML's intermediate representation language).
Blog post where I found the solution and gist with grid sample
I am not sure why OP did not post it here, but please do respond with a comment if you want to take some SO points for your solution!
Full operation conversion code below:
from coremltools.converters.mil import register_torch_op, register_op
from coremltools.converters.mil.mil.ops.defs._op_reqs import *
# These two are also needed by the code below (module paths as of coremltools 4.x)
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.mil import Builder as mb
# Custom operator for `torch.nn.functional.grid_sample`
@register_op(doc_str="Custom Grid Sampler", is_custom_op=True)
class custom_grid_sample(Operation):
    input_spec = InputSpec(
        x=TensorInputType(),
        grid=TensorInputType(),
        mode=StringInputType(const=True, optional=True),
        padding_mode=StringInputType(const=True, optional=True),
        align_corners=BoolInputType(const=True, optional=True),
    )

    bindings = {
        "class_name": "CustomGridSampler",
        "input_order": ["x", "grid"],
        "parameters": ["mode", "padding_mode", "align_corners"],
        "description": "Custom Grid Sampler",
    }

    def __init__(self, **kwargs):
        super(custom_grid_sample, self).__init__(**kwargs)

    def type_inference(self):
        x_type = self.x.dtype
        x_shape = self.x.shape
        grid_type = self.grid.dtype
        grid_shape = self.grid.shape

        assert len(x_shape) == len(grid_shape) == 4
        assert grid_shape[-1] == 2

        shape = list(x_shape)
        shape[-2] = grid_shape[1]
        shape[-1] = grid_shape[2]
        return types.tensor(x_type, tuple(shape))

@register_torch_op
def grid_sampler(context, node):
    inputs = _get_inputs(context, node)
    x = inputs[0]
    grid = inputs[1]
    mode = node.attr.get("mode", "bilinear")
    padding_mode = node.attr.get("padding_mode", "zeros")
    align_corners = node.attr.get("align_corners", False)
    x = mb.custom_grid_sample(
        x=x,
        grid=grid,
        mode=mode,
        padding_mode=padding_mode,
        align_corners=align_corners,
        name=node.name,
    )
    context.add(x)
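With both registrations executed before the conversion, the original ct.converters.convert call from the question should then go through. Note that the produced model contains a custom layer, so a Swift-side CustomGridSampler (an MLCustomLayer implementation) still has to be supplied before the model can actually run on-device. The call itself is unchanged:
coreml_layer = ct.converters.convert(
    scripted,
    source="pytorch",
    inputs=[
        ct.TensorType(name="image", shape=image.shape),
        ct.TensorType(name="grid", shape=grid.shape),
    ],
)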
| https://stackoverflow.com/questions/66725654/ |
pytorch loss accumulated when using mini-batch | I am new to pytorch. May I ask what the difference is between adding loss.item() or not? Consider the following two pieces of code:
for epoch in range(epochs):
    trainingloss = 0
    for i in range(0, X.size()[1], batch_size):
        indices = permutation[i:i+batch_size]
        F = model.forward(X[n])
        optimizer.zero_grad()
        criterion = loss(X, n)
        criterion.backward()
        optimizer.step()
        trainingloss += criterion.item()
and this
for epoch in range(epochs):
    for i in range(0, X.size()[1], batch_size):
        indices = permutation[i:i+batch_size]
        F = model.forward(X[n])
        optimizer.zero_grad()
        criterion = loss(X, n)
        criterion.backward()
        optimizer.step()
If anyone has any idea please help. Thank you very much.
| Calling loss.item() gives you the loss as a plain Python number, detached from the computation graph that PyTorch creates (this is what .item() does for PyTorch tensors).
If you add the line trainingloss += criterion.item() at the end of each "batch loop", this will keep track of the batch loss throughout the iteration by incrementally adding the loss for each minibatch in your training set. This is necessary since you are using minibatches - the loss for each minibatch will not be equal to the loss over all the batches.
Note: If you use PyTorch variables outside the optimization loop, e.g. in a different scope, which could happen if you call something like return loss, it is crucial that you call .item() on any PyTorch variables that are part of the computation graph (as a general rule of thumb, any outputs/loss/models that interact with PyTorch methods will likely be part of your computation graph). If not, this can cause the computation graph to not be de-allocated/deleted from Python memory, and can lead to CPU/GPU memory leaks. What you have above looks correct though!
Also, in the future, PyTorch's DataLoader class can help you with minibatches with less boilerplate code - it can loop over your dataset such that each item you loop over is a training batch - i.e. you don't require two for loops in your optimization.
I hope you enjoy learning/using PyTorch!
| https://stackoverflow.com/questions/66726886/ |
How gradients are accumulated in real | "Gradient will not be updated but be accumulated, and updated every N rounds." I have a question about how the gradients are accumulated in the code snippet below: in every round of the loop I can see that a new gradient is computed by loss.backward() and should be stored internally, but would this internally stored gradient be refreshed in the next round? How is the gradient summed up, and later applied every N rounds?
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                 # Forward pass
    loss = loss_function(predictions, labels)   # Compute loss function
    loss = loss / accumulation_steps            # Normalize our loss (if averaged)
    loss.backward()                             # Backward pass
    if (i+1) % accumulation_steps == 0:         # Wait for several backward steps
        optimizer.step()                        # Now we can do an optimizer step
        model.zero_grad()
| The first time you call backward, the .grad attribute of the parameters of your model will be updated from None, to the gradients. If you do not reset the gradients to zero, future calls to .backward() will accumulate (i.e. add) gradients into the attribute (see the docs).
When you call model.zero_grad() you are doing the reset.
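A minimal, self-contained sketch of that accumulation behaviour (no model needed; the numbers are illustrative):
import torch

w = torch.tensor([1.0], requires_grad=True)

(2 * w).sum().backward()
print(w.grad)   # tensor([2.])

(3 * w).sum().backward()
print(w.grad)   # tensor([5.]) -> the new gradient (3) was added to the stored one (2)

w.grad.zero_()  # this reset is what model.zero_grad() does for every parameter
print(w.grad)   # tensor([0.])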
| https://stackoverflow.com/questions/66727017/ |
Predictions doesn't equal number of images | My validation data are 150 images, but when I try to use my model to predict them, my predictions have length 22. I don't understand why.
total_v = 0
correct_v = 0
with torch.no_grad():
    model.eval()
    for data_v, target_v in validloader:
        if SK:
            target_v = torch.tensor(np.where(target_v.numpy() == 2, 1, 0).astype(np.longlong))
        else:
            target_v = torch.tensor(np.where(target_v.numpy() == 0, 1, 0).astype(np.longlong))
        data_v, target_v = data_v.to(device), target_v.to(device)
        outputs_v = model(data_v)
        loss_v = criterion(outputs_v, target_v)
        batch_loss += loss_v.item()
        _, pred_v = torch.max(outputs_v, dim=1)
        correct_v += torch.sum(pred_v == target_v).item()
        total_v += target_v.size(0)
    val_acc.append(100 * correct_v / total_v)
    val_loss.append(batch_loss / len(validloader))
    network_learned = batch_loss < valid_loss_min
    print(f'validation loss: {np.mean(val_loss):.4f}, validation acc: {(100 * correct_v/total_v):.4f}\n')
This is my model:
model = models.resnet50(pretrained = True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adagrad(model.parameters())
| If you want to have all the predictions, you should store the predictions of each individual batch and concatenate them at the end of the iterations. pred_v only ever holds the predictions of the current batch, so after the loop you are left with just the last batch (150 = 2 × 64 + 22, hence length 22, assuming a batch size of 64):
...
all_preds = []
for data_v, target_v in validloader:
    ....
    _, pred_v = torch.max(outputs_v, dim=1)
    all_preds.append(pred_v)
    ....
all_preds = torch.cat(all_preds).cpu().numpy()
print(len(all_preds))
| https://stackoverflow.com/questions/66732843/ |
len() vs .size(0) when looping through DataLoader samples | I came across this on github (snippet from here):
(...)
for epoch in range(round):
    for i, data in enumerate(dataloader, 0):
        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        # train with real
        netD.zero_grad()
        real_cpu = data[0].to(device)
        batch_size = real_cpu.size(0)
        label = torch.full((batch_size,), real_label, device=device)
        (...)
Would replacing batch_size = real_cpu.size(0) with batch_size = len(data[0]) give the same effect? (or maybe at least with batch_size = len(real_cpu)?) Reason why I'm asking is that iirc the official PyTorch tutorial incorporated len(X) when displaying training progress during the loop for (X, y) in dataloader: etc. so I was wondering if the two methods are equivalent for displaying the number of 'samples' in the 'current' batch.
| If working with data where batch size is the first dimension then you can interchange real_cpu.size(0) with len(real_cpu) or with len(data[0]).
However, when working with some models like LSTMs, the batch size can be in the second dimension; in that case you couldn't go with len, but would use real_cpu.size(1), for example.
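A small sketch of that batch-second case (this layout is the PyTorch default for nn.LSTM with batch_first=False):
import torch

x = torch.randn(5, 32, 10)  # (seq_len, batch, features)
print(len(x))     # 5  -> the sequence length, not the batch size
print(x.size(1))  # 32 -> the actual batch size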
| https://stackoverflow.com/questions/66732881/ |
Predicting a Chess Board Position using Pytorch | I want to predict the current chess board using pytorch/keras. (Let's not worry about the input for now.)
How would I got about that?
A chess board has 8x8 positions (64); on each position there could be a black or white piece (12 types) or no piece at all (1). I am planning on using this representation for the chess board (other suggestions are welcome!):
https://en.wikipedia.org/wiki/Board_representation_(computer_chess)#Square_list
For example:
 2  3  4  5  6  4  3  2
 1  1  1  1  1  1  1  1
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
-1 -1 -1 -1 -1 -1 -1 -1
-2 -3 -4 -5 -6 -4 -3 -2
As far as I know it is not possible to predict something like this, because the number of classes my final layer would have to predict is 448 (64x7), and I don't feel like a NN could do that. Additionally there is the problem that softmax wouldn't work (imo). The loss function might become a problem as well.
Does someone have an idea on how to do that? Or could point me in the right direction, because multi-class classification isn't really the right term for this task. I was thinking about creating 6 networks that create a classification for each piece. So a 8x8 array that looks like this (for rooks):
 1  0  0  0  0  0  0  1
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0
-1  0  0  0  0  0  0 -1
But the problem is still quite similar.
I think creating 64 NNs that take care of one position each would simplify the problem a bit. But that would be a pain to train.
Looking forward to hearing your suggestions!
| For anyone wondering how to do this, I think I figured it out:
You build a softmax over the third dimension of an 8x8x13 array and get an 8x8 matrix (via an argmax over the class dimension) with all the chess pieces.
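As a minimal sketch of that idea (assuming 13 = 6 white piece types + 6 black piece types + empty; the logits here are random placeholders for a network's output):
import torch

logits = torch.randn(8, 8, 13)        # per-square scores for the 13 classes
probs = torch.softmax(logits, dim=2)  # softmax over the class dimension
board = probs.argmax(dim=2)           # 8x8 matrix of predicted piece classes
print(board.shape)                    # torch.Size([8, 8])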
Thanks to @Prune. I will adapt my questions in the future.
| https://stackoverflow.com/questions/66734214/ |
Image classification model works with 32x32 images but not 64x64 | I'm trying to build a deep learning food classification algorithm using the food 101 dataset. I was able to successfully implement it by using the following model which works when all my images are sized with dimensions 32x32. However, I realised some of the images were almost incomprehensible so I increased the size to 64x64 for all images. However, when I run my code with these larger image sizes it no longer works.
I believe the error is to do with how I've defined the model. I'm new to the area of deep learning and would appreciate any help. If you need any further info pls comment below without taking down the post.
Model definition (uses convolutional layers and a residual block):
class SimpleResidualBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()

    def forward(self, x):
        out = self.conv1(x)
        out = self.relu1(out)
        out = self.conv2(out)
        return self.relu2(out) + x  # ReLU can be applied before or after adding the input

def accuracy(outputs, labels):
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

class ImageClassificationBase(nn.Module):
    def training_step(self, batch):
        images, labels = batch
        out = self(images)                   # Generate predictions
        loss = F.cross_entropy(out, labels)  # Calculate loss
        return loss

    def validation_step(self, batch):
        images, labels = batch
        out = self(images)                   # Generate predictions
        loss = F.cross_entropy(out, labels)  # Calculate loss
        acc = accuracy(out, labels)          # Calculate accuracy
        return {'val_loss': loss.detach(), 'val_acc': acc}

    def validation_epoch_end(self, outputs):
        batch_losses = [x['val_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()  # Combine losses
        batch_accs = [x['val_acc'] for x in outputs]
        epoch_acc = torch.stack(batch_accs).mean()     # Combine accuracies
        return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}

    def epoch_end(self, epoch, result):
        print("Epoch [{}], last_lr: {:.5f}, train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
            epoch, result['lrs'][-1], result['train_loss'], result['val_loss'], result['val_acc']))

def conv_block(in_channels, out_channels, pool=False):
    layers = [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_channels),
              nn.ReLU(inplace=True)]
    if pool: layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class ResNet9(ImageClassificationBase):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.conv1 = conv_block(in_channels, 64)
        self.conv2 = conv_block(64, 128, pool=True)
        self.res1 = nn.Sequential(conv_block(128, 128), conv_block(128, 128))
        self.conv3 = conv_block(128, 256, pool=True)
        self.conv4 = conv_block(256, 512, pool=True)
        self.res2 = nn.Sequential(conv_block(512, 512), conv_block(512, 512))
        self.classifier = nn.Sequential(nn.MaxPool2d(4),
                                        nn.Flatten(),
                                        nn.Dropout(0.2),
                                        nn.Linear(512, num_classes))

    def forward(self, xb):
        out = self.conv1(xb)
        out = self.conv2(out)
        out = self.res1(out) + out
        out = self.conv3(out)
        out = self.conv4(out)
        out = self.res2(out) + out
        out = self.classifier(out)
        return out
Error I get when executing it:
RuntimeError: mat1 dim 1 must match mat2 dim 0
| Welcome to stackoverflow. In general, when you give your code and the error you are receiving, it is better to at least provide a test case scenario where others can reproduce the error you are getting. In this case, I was able to identify the problem and create a test case example.
"RuntimeError: mat1 dim 1 must match mat2 dim 0" this error sounded like a matrix multiplication error to me, where you multiply two matrices and dimensions don't match for multiplication. When I look at your code, only place I see that uses a matrix multiplication is that part:
self.classifier = nn.Sequential(nn.MaxPool2d(4),
                                nn.Flatten(),
                                nn.Dropout(0.2),
                                nn.Linear(512, num_classes))
Linear layer is just a basic matrix multiplication: out = input * weight + bias. So it looks like input dimension of linear layer and weight matrix don't match when you change the input image size:
model_resnet = ResNet9(3, 10)
img = torch.rand(10, 3, 64, 64)
out = model_resnet(img)
The reason this happens is that you use MaxPool2d(4), which applies a 4x4 max pooling filter over the input. If the input to max pooling is 4x4, this filter will produce a 1x1 result; if it is 8x8, it will produce a 2x2 output. So when you increase your input dimensions from 32x32 to 64x64, the output of max pooling is doubled along each axis, making nn.Linear(512, num_classes) unsuitable for your new dimension.
Solution is simple, use Adaptive pooling operations. For example:
self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d((1,1)),
                                nn.Flatten(),
                                nn.Dropout(0.2),
                                nn.Linear(512, num_classes))
AdaptiveAvgPool2d will apply a average filter over the received input, and produce 1x1 result all the time no matter the input dimension. Essentially if the input is 8x8 it will apply 8x8 averaging filter, if the input is 4x4 it will apply 4x4 averaging filter. So with this simple change, you can use 32x32 and 64x64 and even higher dimensional images.
| https://stackoverflow.com/questions/66739317/ |
How to concatenate tensor to another list of tensor in pytorch? | I have a tensor of shape "torch.Size([2, 2, 3])" and another tensor of shape "torch.Size([2, 1, 3])". I want a concatenated tensor of shape "torch.Size([2, 2, 6])".
For example :
a=torch.tensor([[[2,3,5],[12,13,15]],[[20,30,50],[120,130,150]]])
b=torch.tensor([[[99,99,99]],[[999,999,999]]])
I want the output as : [[[99,99,99,2,3,5],[99,99,99,12,13,15]],[[999,999,999,20,30,50],[999,999,999,120,130,150]]]
I have written an O(n^2) solution using two for loops, but
this is taking a lot of time with millions of calculations. Can anyone help me do this efficiently? Maybe some matrix-calculation trick for tensors?
| To exactly match the example you have provided:
c = torch.cat([b.repeat([1,a.shape[1]//b.shape[1],1]),a],2)
The reasoning behind this is that the concatenate operation in pytorch (and numpy and other libraries) will complain if the dimensions of the two tensors in the non-specified axes (in this case 0 and 1) do not match. Therefore, you have to repeat the tensor along the non-matching axis (the first axis, therefore the second element of the repeat list) in order to make the dimensions align. Note that the solution here will only work if the middle dimension of a is evenly divisible by the middle dimension of b.
In newer versions of pytorch, this can also be done using the torch.tile() function.
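For example, a sketch of the same concatenation written with torch.tile (assuming PyTorch 1.8+, where tile was introduced):
c = torch.cat([b.tile((1, a.shape[1] // b.shape[1], 1)), a], 2)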
| https://stackoverflow.com/questions/66739326/ |
Running deep-daze on nvidia jetson - "Could not find a version that satisfies the requirement torchvision>=0.8.2" | I have an nvidia Jetson Nano (the 4gb version). I am attempting to run this project on it: https://github.com/lucidrains/deep-daze
I am attempting to run the command pip install deep-daze. However, I do not have pip so I am running pip3 install deep-daze. When I run that I get
chris@chris-desktop:~$ pip3 install deep-daze
Collecting deep-daze
Using cached https://files.pythonhosted.org/packages/f1/ed/b3f3d9d92f5a48932b3807f683642b28da75722ae93da2f9bdc6af5f1768/deep_daze-0.7.2-py3-none-any.whl
Collecting tqdm (from deep-daze)
Downloading https://files.pythonhosted.org/packages/f8/3e/2730d0effc282960dbff3cf91599ad0d8f3faedc8e75720fdf224b31ab24/tqdm-4.59.0-py2.py3-none-any.whl (74kB)
100% |████████████████████████████████| 81kB 2.4MB/s
Collecting torchvision>=0.8.2 (from deep-daze)
Could not find a version that satisfies the requirement torchvision>=0.8.2 (from deep-daze) (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3)
No matching distribution found for torchvision>=0.8.2 (from deep-daze)
I am pretty unfamiliar with several of the moving parts here and not sure how to fix this. I thought these version numbers may be useful in answering this question:
chris@chris-desktop:~$ python3 --version
Python 3.6.9
chris@chris-desktop:~$ pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
chris@chris-desktop:~$ python2 --version
Python 2.7.17
chris@chris-desktop:~$ pip2 --version
bash: pip2: command not found
chris@chris-desktop:~$ pip --version
bash: pip: command not found
chris@chris-desktop:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
| Thanks to the comment from Elbek I got this working! I was able to follow the guide here: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048
Unfortunately after I got everything installed I ran into an issue with not having enough memory, but, it all got installed.
| https://stackoverflow.com/questions/66740533/ |
Dockerfile: pip install fails with requirements.txt (but succeeds with individual packages) | I'm trying to install some packages in a docker container, and there is a problem when installing from a requirements.txt file. This line:
RUN python3.8 -m pip install -r requirements.txt
fails with the error:
...
Collecting torch
Downloading torch-1.8.0-cp38-cp38-manylinux1_x86_64.whl (735.5 MB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torch from https://files.pythonhosted.org/packages/89/c1/72e9050d3e31e4df983f6e06799a1a4c896427c1e5645a6d810940944b60/torch-1.8.0-cp38-cp38-manylinux1_x86_64.whl#sha256=fa1e391cca3937d5dea31f31a1a80a01bd4a8062c039448c254bbf5a58eb0787 (from -r requirements.txt (line 3)):
Expected sha256 fa1e391cca3937d5dea31f31a1a80a01bd4a8062c039448c254bbf5a58eb0787
Got d5466637c17c3ae0c81c00d93a0b7c8d8428cfd216f54953a11d0788ea7b74fb
The requirements.txt file is the following:
numpy
opencv-python
torch
However, when installing these packages one at a time everything works fine:
RUN python3.8 -m pip install numpy
RUN python3.8 -m pip install opencv-python
RUN python3.8 -m pip install torch
Any ideas how to solve this?
*** EDIT ***
Dockerfile up to that point:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt ./
| You could try a couple of things. Depending on your base image, you could run pip install in this way:
RUN pip install -r requirements.txt
Another option would be to change your requirements.txt so that package versions are pinned. Then you can be sure you have compatible versions, which is good practice in general. E.g.:
torch==1.8.0
Try to run Docker again without the cache:
docker build --no-cache .
Or you could check this answer:
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. when updating Django
| https://stackoverflow.com/questions/66741901/ |
ImportError: cannot import name 'download_url_to_file' | I tried to run a Python script that uses the download_url_to_file method inside the torch hub, but I got the following error:
Traceback (most recent call last):
  File "crop-video.py", line 1, in <module>
    import face_alignment
  File "$PROGRAMS_PATH$\Python\Python36\lib\site-packages\face_alignment-1.3.3-py3.6.egg\face_alignment\__init__.py", line 7, in <module>
  File "$PROGRAMS_PATH$\Python\Python36\lib\site-packages\face_alignment-1.3.3-py3.6.egg\face_alignment\api.py", line 7, in <module>
  File "$PROGRAMS_PATH$\Python\Python36\lib\site-packages\face_alignment-1.3.3-py3.6.egg\face_alignment\utils.py", line 13, in <module>
ImportError: cannot import name 'download_url_to_file'
I am using Python version 3.6.2. I also have the packages installed with the following versions:
certifi (2020.12.5)
chardet (4.0.0)
cycler (0.10.0)
dataclasses (0.8)
decorator (4.4.2)
face-alignment (1.3.3)
future (0.18.2)
idna (2.10)
imageio (2.9.0)
imageio-ffmpeg (0.4.3)
kiwisolver (1.3.1)
llvmlite (0.36.0)
matplotlib (3.3.4)
networkx (2.5)
numba (0.53.0)
numpy (1.19.5)
opencv-python (4.5.1.48)
Pillow (8.1.2)
pip (9.0.1)
pyparsing (2.4.7)
python-dateutil (2.8.1)
PyWavelets (1.1.1)
requests (2.25.1)
rocketchat-API (0.6.9)
scikit-image (0.17.2)
scipy (1.5.4)
setuptools (28.8.0)
six (1.15.0)
tifffile (2020.9.3)
torch (1.0.0)
torchvision (0.2.1)
tqdm (4.24.0)
typing-extensions (3.7.4.3)
unknown (0.0.0)
urllib3 (1.26.4)
The download_url_to_file method does not appear to be found inside the hub file, but I checked the torch hub, and made sure it was defined! How can this error be fixed?
| This worked on my PC:
use torch == 1.6
(Your environment has torch 1.0.0, whose torch.hub does not yet expose download_url_to_file; upgrading fixes the import.)
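For instance, pinning the versions with pip (torchvision 0.7.0 is the release paired with torch 1.6.0):
pip3 install torch==1.6.0 torchvision==0.7.0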
| https://stackoverflow.com/questions/66746626/ |
PyTorch gather 3D source with 2D index | Given a source tensor of shape [B,N,F] and an index tensor of shape [B,k], where index[i][j] is an index to a specific feature inside source[i][j] is there a way to extract an output tensor such that:
output[i][j] = source[i][j][index[i][j]]
torch.gather specifies that index.shape == source.shape, while here the shape of the source is one dimension bigger.
source = [
[[0.1,0.2],[0.2,0.3]],
[[0.4,0.5],[0.6,0.7]],
[[0.7,0.6],[0.8,0.9]]
]
index = [
[1,0],
[0,0],
[1,1]
]
desired_output = [
[0.2,0.2],
[0.4,0.6],
[0.6,0.9]
]
| For future reference, the solution is:
source.gather(2,index.unsqueeze(2)).squeeze(2)
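A quick check with the tensors from the question; gather along dim 2 needs the index to carry a trailing singleton dimension, hence the unsqueeze/squeeze pair:
import torch

source = torch.tensor([[[0.1, 0.2], [0.2, 0.3]],
                       [[0.4, 0.5], [0.6, 0.7]],
                       [[0.7, 0.6], [0.8, 0.9]]])
index = torch.tensor([[1, 0], [0, 0], [1, 1]])

output = source.gather(2, index.unsqueeze(2)).squeeze(2)
print(output)
# tensor([[0.2000, 0.2000],
#         [0.4000, 0.6000],
#         [0.6000, 0.9000]])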
| https://stackoverflow.com/questions/66746822/ |
Tokenizing & encoding dataset uses too much RAM | Trying to tokenize and encode data to feed to a neural network.
I only have 25GB RAM, and every time I try to run the code below my Google Colab crashes with "Your session crashed after using all available RAM". Any idea how to prevent this from happening?
I thought tokenizing/encoding chunks of 50000 sentences would work, but unfortunately not.
The code works on a dataset with length 1.3 million. The current dataset has a length of 5 million.
max_q_len = 128
max_a_len = 64
trainq_list = train_q.tolist()
batch_size = 50000

def batch_encode(text, max_seq_len):
    for i in range(0, len(trainq_list), batch_size):
        encoded_sent = tokenizer.batch_encode_plus(
            text,
            max_length=max_seq_len,
            pad_to_max_length=True,
            truncation=True,
            return_token_type_ids=False
        )
    return encoded_sent

# tokenize and encode sequences in the training set
tokensq_train = batch_encode(trainq_list, max_q_len)
The tokenizer comes from HuggingFace:
tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased')
| You should use generators and pass data to tokenizer.batch_encode_plus, no matter the size.
Conceptually, something like this:
Training list
This one probably holds a list of sentences, which is read from some file(s). If this is a single large file, you could follow this answer to lazily read parts of the input (preferably of batch_size lines at once):
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data
Otherwise open a single file (much smaller than memory, because it will be way larger after encoding using BERT), something like this:
import pathlib

def read_in_chunks(directory: pathlib.Path):
    # Use "*.txt" or any other extension your file might have
    for file in directory.glob("*"):
        with open(file, "r") as f:
            yield f.readlines()
Encoding
Encoder should take this generator and yield back encoded parts, something like this:
# Generator should create lists useful for encoding
def batch_encode(generator, max_seq_len):
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
    for text in generator:
        yield tokenizer.batch_encode_plus(
            text,
            max_length=max_seq_len,
            pad_to_max_length=True,
            truncation=True,
            return_token_type_ids=False,
        )
Saving encoded files
As the files will be too large to fit in RAM memory, you should save them to disk (or use somehow as they are generated).
Something along those lines:
import numpy as np

# I assume np.arrays are created, adjust to PyTorch Tensors or anything if needed
def save(encoding_generator):
    for i, encoded in enumerate(encoding_generator):
        np.save(str(i), encoded)
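Putting the pieces together might look like this (names taken from the snippets above; the directory path is a placeholder):
generator = read_in_chunks(pathlib.Path("data/"))  # or the single-file variant
encodings = batch_encode(generator, max_seq_len=128)
save(encodings)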
| https://stackoverflow.com/questions/66747954/ |
Can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first | I am trying to show results of a GAN network at some specified epochs. The function for printing the current result was previously used with TF. I need to change it to PyTorch.
def show_result(G_net, z_, num_epoch, show=False, save=False, path='result.png'):
    #test_images = sess.run(G_z, {z: z_, drop_out: 0.0})
    test_images = G_net(z_)
    size_figure_grid = 5
    fig, ax = plt.subplots(size_figure_grid, size_figure_grid, figsize=(5, 5))
    for i, j in itertools.product(range(size_figure_grid), range(size_figure_grid)):
        ax[i, j].get_xaxis().set_visible(False)
        ax[i, j].get_yaxis().set_visible(False)
    for k in range(5*5):
        i = k // 5
        j = k % 5
        ax[i, j].cla()
        ax[i, j].imshow(np.reshape(test_images[k], (28, 28)), cmap='gray')
    label = 'Epoch {0}'.format(num_epoch)
    fig.text(0.5, 0.04, label, ha='center')
    plt.savefig(name)
    file = drive.CreateFile({'title': label, "parents": [{"kind": "https://drive.google.com/drive/u/0/folders/", "id": folder_id}]})
    file.SetContentFile(name)
    file.Upload()
    if num_epoch == 10 or num_epoch == 20 or num_epoch == 50 or num_epoch == 100:
        plt.show()
    plt.close()
The results I need to obtain should look like this:
result img
I am getting this error; however, I am not sure what I did incorrectly:
Can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
| I am assuming G_net is your network. It looks like you store the network on the GPU, hence the returned result test_images will also be on the GPU. You will need to move it to the CPU and convert it to numpy:
#test_images = G_net(z_)
test_images = G_net(z_).detach().cpu().numpy()
This will detach the tensor from the graph, move to cpu and then convert to numpy.
| https://stackoverflow.com/questions/66752458/ |
CUDA out of memory.Tried to allocate 14.00 MiB (GPU 0;4.00 GiB total capacity;2 GiB already allocated;6.20 MiB free;2GiB reserved intotal by PyTorch) | I am trying to run this code from fastai
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), num_workers=0)
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
I get the following error
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0;
4.00 GiB total capacity; 2.20 GiB already allocated; 6.20 MiB free; 2.23 GiB reserved in total by PyTorch)
I also tried running
import torch
torch.cuda.empty_cache()
and restarting the kernel which was of no use
Any help would be appreciated
| The default batch_size used in ImageDataLoaders.from_name_func is 64, according to the documentation here. Reducing that should solve your problem. Pass another parameter to ImageDataLoaders.from_name_func, like bs=32 or any other smaller value, until the error is no longer thrown.
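Applied to the question's code, that would look like the following (bs=32 is just an example; halve it again if the error persists):
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), num_workers=0, bs=32)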
| https://stackoverflow.com/questions/66752975/ |
RuntimeError: shape '[4, 98304]' is invalid for input of size 113216 | I am learning to train a basic nn model for image classification; the error happened when I was trying to feed image data into the model. I understand that I should input the correct size of image data. My image data is 128*256 with 3 channels, 4 classes, and the batch size is 4. What I don't understand is where the size 113216 comes from. I checked all related parameters and image metadata, but didn't find a clue. Here is my code:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(3*128*256, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(4, 3*128*256)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
for epoch in range(2):  # loop over the dataset multiple times
    print('round start')
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        print(inputs.shape)
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
Thanks for your help!
| Shapes
Conv2d changes width and height of image without padding. Rule of thumb (if you want to keep the same image size with stride=1 (default)): padding = kernel_size // 2
You are changing the number of channels, while your linear layer has 3 for some reason?
Use print(x.shape) after each step if you want to know how your tensor data is transformed!
Commented code
Fixed code with comments about the shapes after each step (the 113216 in the error message is simply 4 * 16 * 29 * 61, i.e. your batch of final feature maps flattened):
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(3, 6, 5)
        self.pool = torch.nn.MaxPool2d(2, 2)
        self.conv2 = torch.nn.Conv2d(6, 16, 5)
        # Output shape from convolution is input shape to fc
        self.fc1 = torch.nn.Linear(16 * 29 * 61, 120)
        self.fc2 = torch.nn.Linear(120, 84)
        self.fc3 = torch.nn.Linear(84, 10)

    def forward(self, x):
        # In: (4, 3, 128, 256)
        x = F.relu(self.conv1(x))
        # (4, 6, 124, 252) because kernel_size=5 takes 2 pixels off each side
        x = self.pool(x)
        # (4, 6, 62, 126) because pooling halves the size
        x = F.relu(self.conv2(x))
        # (4, 16, 58, 122) same reason as above
        x = self.pool(x)
        # (4, 16, 29, 61) because pooling halves the size
        # Better use torch.flatten(x, start_dim=1) so you don't have to input the size here
        x = x.view(-1, 16 * 29 * 61)  # Use -1 to be batch size independent
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
Other things that might help
Try torch.nn.AdaptiveMaxPool2d(1) before ReLU; it will make your network independent of the input width and height (see the sketch after this list)
Use flatten (or a torch.nn.Flatten() layer) after this pooling
If so, pass the num_channels set in the last convolution as in_features for nn.Linear
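A minimal sketch combining those three tips (the layer sizes are illustrative, not taken from the question):
import torch

head = torch.nn.Sequential(
    torch.nn.Conv2d(6, 16, 5),
    torch.nn.AdaptiveMaxPool2d(1),  # -> (N, 16, 1, 1) for any input H and W
    torch.nn.Flatten(),             # -> (N, 16)
    torch.nn.Linear(16, 10),        # in_features = out_channels of the conv
)
print(head(torch.randn(4, 6, 64, 64)).shape)    # torch.Size([4, 10])
print(head(torch.randn(4, 6, 128, 256)).shape)  # torch.Size([4, 10])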
| https://stackoverflow.com/questions/66755165/ |
Pytorch random tensor generation with one fix value and rest of random value | In my testing dataset, I have to always include one specific image (the image at position 0) in each batch, but the other values can be randomly selected. So I am making a tensor whose first value is 0 (for the 1st image) while the rest can be anything other than 0. My code snippet is below.
a= torch.randperm(len(l-1)) #where l is total no of testing image in dataset, code output->tensor([10, 0, 1, 2, 4, 5])
b=torch.tensor([0]) # code output-> tensor([0])
c=torch.cat((b.view(1),a))# gives output as -> tensor([0, 10, 0, 1, 2, 4, 5]) and 0 is used twice so repeated test image
However, the above approach can include 0 twice, as torch.randperm often includes 0. Is there a way in torch to generate random numbers skipping one specific value? Or, if you think another approach would be better, please comment.
| You could just remove these 0s using conditional indexing (also assumed you meant len(l) - 1):
a= torch.randperm(len(l)-1) #where l is total no of testing image in dataset, code output->tensor([10, 0, 1, 2, 4, 5])
a=a[a!=0]
b=torch.tensor([0]) # code output-> tensor([0])
c=torch.cat((b,a))# gives output as -> tensor([0, 10, 0, 1, 2, 4, 5]) and 0 is used twice so repeated test image
Or if you want to make sure it's never put in:
a=torch.arange(1,len(l))
a=a[torch.randperm(a.shape[0])]
b=torch.tensor([0])
c=torch.cat((b,a))
The second approach is a bit more versatile as you can have whatever values you'd like in your initial a declaration as well as replacement.
| https://stackoverflow.com/questions/66756131/ |
How to replace specific values in PyTorch tensor along diagonal? | For example, there is a PyTorch matrix A:
A = tensor([[3,2,1],[1,0,2],[2,2,0]])
I need to replace 0 with 1 on the diagonal, so the result should be:
tensor([[3,2,1],[1,1,2],[2,2,1]])
| You can use torch's inbuilt diagonal functions to replace diagonal elements like so:
mask = A.diagonal() == 0
A += torch.diag(mask)
>>> A
tensor([[3, 2, 1],
[1, 1, 2],
[2, 2, 1]])
If you want to replace 0's with another value, change mask to mask * replace_value.
| https://stackoverflow.com/questions/66756790/ |
Confusion about Pytorch `torch.split` documentation | When I view the explanation of the function torch.split in PyTorch, I find it difficult for me to read as a non-English-speaker:
torch.split(tensor, split_size_or_sections, dim=0)
[...]
If split_size_or_sections is a list, then tensor will be split
into len(split_size_or_sections) chunks with sizes in dim according
to split_size_or_sections.
Does "with sizes in dim" mean "with sizes in split_size_or_sections along the dimension dim"?
| Don't worry - your English is fine, that line is a bit confusing.
Yes you're correct. It means if you pass a list e.g. split_size_or_sections=[1,2,4,5] it will split the tensor into len([1,2,4,5]) chunks (with the splits happening across dim), and each chunk will be of length 1, 2, 4, 5 respectively.
This implicitly assumes that sum([1,2,4,5]) equals the size of dim, and will return an error if not.
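A short worked example of the list form (the section sizes must sum to the size of dim):
import torch

x = torch.arange(12)
chunks = torch.split(x, [1, 2, 4, 5], dim=0)  # len([1,2,4,5]) == 4 chunks
for c in chunks:
    print(c)
# tensor([0])
# tensor([1, 2])
# tensor([3, 4, 5, 6])
# tensor([ 7,  8,  9, 10, 11])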
| https://stackoverflow.com/questions/66759378/ |
Two tensors are on the same device, but I get the error: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu | I have:
def loss_fn(self, pred, truth):
    truth_flat = torch.reshape(truth, (truth.size(0), -1)).to(truth.device)
    pred_flat = torch.reshape(pred, (pred.size(0), -1)).to(pred.device)
    stoi_loss = NegSTOILoss(sample_rate=16000)(pred_flat, truth_flat)
    print('truth', truth.size(), truth_flat.size(), stoi_loss)
    return torch.nn.MSELoss()(pred, truth)
As you can see, I'm making sure that it's on the same device, but I still get the error:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
Any ideas?
| You are sending the two tensors to two potentially different devices: truth.device and pred.device. If truth lives on the CPU while pred lives on the GPU (or vice versa), the loss computation mixes devices.
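A sketch of one possible fix is to pick a single device and move everything onto it (whether NegSTOILoss itself holds CPU-side buffers depends on the torch_stoi implementation, so moving the module is an assumption):
def loss_fn(self, pred, truth):
    device = pred.device  # keep everything where the prediction lives
    truth = truth.to(device)
    truth_flat = torch.reshape(truth, (truth.size(0), -1))
    pred_flat = torch.reshape(pred, (pred.size(0), -1))
    stoi_loss = NegSTOILoss(sample_rate=16000).to(device)(pred_flat, truth_flat)
    return torch.nn.MSELoss()(pred, truth)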
| https://stackoverflow.com/questions/66764585/ |
How can I set the diagonal of an N-dim tensor to 0 along given dims? | I'm trying to figure out a way to set the diagonal of a 3-dimensional tensor (along 2 given dims) equal to 0. For example, let's say I have a tensor of shape [N,N,N] and I want to set the diagonal along dim=1,2 equal to 0. How exactly could that be done?
>>> data = torch.ones(3,4,4)
>>> data.fill_diagonal_(0)
tensor([[[0, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[1, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 1]]])
whereas I would want the entire diagonal for each sub-matrix to be equal to 0 here. So, the desired outcome would be,
tensor([[[0, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 0]],
[[0, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 0]],
[[0, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 0]]])
Secondly, the reason I ask for a given pair of dimensions is that I need to repeat this 'zeroing' along 2 different pairs of dimensions (e.g. dim=(1,2) then dim=(0,1)) to get the required masking.
Is there a way to mask a given diagonal over 2 arbitrary dimensions for a 3D-tensor?
| You can do this with a for loop over the sub-tensors:
# across dim0
for i in range(data.size(0)):
    data[i].fill_diagonal_(0)
If you need to perform this over an arbitrary two dimensions of a 3d tensor, simply apply the fill to the appropriate slices:
# across dim1
for i in range(data.size(1)):
    data[:,i].fill_diagonal_(0)
# across dim2
for i in range(data.size(2)):
    data[:,:,i].fill_diagonal_(0)
| https://stackoverflow.com/questions/66765173/ |
How to convert a full string output into torch tensor | Having defined an API based on a transformer model which I send requests to, I got as output a string object that is supposed to represent a torch tensor.
import requests
import json
import torch
response = requests.post("http://127.0.0.1:5000/predict", json="This is a test.")
data = response.json()
print(data)
tensor([[-5.5169e-01, 5.1988e-01, -1.4731e-01, 4.6096e-02, -5.5753e-02,
-6.9530e-01, 7.3424e-01, 3.0014e-01, 4.0528e-01, -1.7587e-01,
1.2586e-01, 5.6712e-01, -4.0757e-01, 1.5796e-01, 9.4700e-01,
6.2967e-01, 3.1027e-01, 1.0169e-02, 3.1380e-02, 1.2585e-01,
2.3633e-01, 6.0813e-01, -1.0548e+00, -1.8581e-01, -5.9870e-02,
1.2952e-01, -3.8818e-01, 2.3425e-01]])
I checked the requests module documentation, but couldn't find any way to convert this string back into a torch tensor.
| You can use eval to evaluate the code contained in the string:
from torch import tensor
my_tensor = eval(data)
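Note that eval executes arbitrary code from the response, so only use it with an API you trust. A safer pattern, assuming you can change the server to return prediction.tolist() as JSON, is to rebuild the tensor from a plain list:
import torch

# server side (assumption): return jsonify(prediction.tolist())
my_tensor = torch.tensor(data)  # data is then a nested Python list of floats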
| https://stackoverflow.com/questions/66765540/ |
How to return a k-dim pytorch tensor truncated based on 1D mask | So suppose I have a k-dim tensor and a 1-dim mask, that is used a lot in pytorch for variable length sequences, and I want to return a tensor that represents the elements up to the first false value in mask. Here is an example:
import torch
a = torch.tensor([[1,2],[3,4],[5,6],[0,0],[0,0],[0,0]])
b = torch.tensor([True,True,True,False,False,False])
# magic goes here, result of c should be:
print(c)
>>> [[1,2],[3,4],[5,6]]
In this example, the input tensor is 2D, but it could be k-d, with any number of values across those dimensions. Only the first dimension needs to match the mask dimension. So doing torch.masked_select doesn't work since the tensor to be truncated isn't 1D like the mask, and since you don't know the dimensionality, squeeze and unsqueeze is not a solution either.
The mask is always going to be true for the first k elements and false for the remaining elements, though if your solution does not "depend" on this, that is fine.
This seems like people would be doing it all the time, yet I cannot find anywhere where this question has been answered.
| You can simply pass the mask as a slice index to the tensor:
c = a[b]
>>> c
tensor([[1, 2],
[3, 4],
[5, 6]])
| https://stackoverflow.com/questions/66766423/ |
Rearrange torch 2D tensors ("Tiles") to be in a particular order | I have something that looks like this...
import torch
X = torch.cat([torch.ones((2,4)).unsqueeze(0), torch.ones((2,4)).unsqueeze(0)*2, torch.ones((2,4)).unsqueeze(0)*3, torch.ones((2,4)).unsqueeze(0)*4]).unsqueeze(0)
which is
tensor([[[[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[2., 2., 2., 2.],
[2., 2., 2., 2.]],
[[3., 3., 3., 3.],
[3., 3., 3., 3.]],
[[4., 4., 4., 4.],
[4., 4., 4., 4.]]]])
And I want to rearrange them to look like this
tensor([[[1., 1., 1., 1., 2., 2., 2., 2.],
[1., 1., 1., 1., 2., 2., 2., 2.],
[3., 3., 3., 3., 4., 4., 4., 4.],
[3., 3., 3., 3., 4., 4., 4., 4.]]])
However the solution I was hoping would work obviously doesn't and while I understand why it doesn't work I'll share it.
b, t, x, y = X.shape
num_x_splits = num_y_splits = 2
X = X.reshape(-1,x*num_x_splits,y*num_y_splits)
tensor([[[1., 1., 1., 1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3., 3., 3., 3.],
[4., 4., 4., 4., 4., 4., 4., 4.]]])
Is there a computationally efficient way I can accomplish what I want?
I did some more research and it looks like the ideal solution might actually be a combination of the accepted answer and the einops package, which provides a high-level, easy-to-read, efficient solution.
| I will start with the simple case where we know the number of tensors and which tensors go where:
ones = 1 * torch.ones(2,4)
twos = 2 * torch.ones(2,4)
thrs = 3 * torch.ones(2,4)
fors = 4 * torch.ones(2,4)
fivs = 5 * torch.ones(2,4)
sixs = 6 * torch.ones(2,4)
row1 = torch.cat([ones, twos], axis = 1)
row2 = torch.cat([thrs, fors], axis = 1)
row3 = torch.cat([fivs, sixs], axis = 1)
comb = torch.cat([row1, row2, row3], axis = 0)
print(comb)
tensor([[1., 1., 1., 1., 2., 2., 2., 2.],
[1., 1., 1., 1., 2., 2., 2., 2.],
[3., 3., 3., 3., 4., 4., 4., 4.],
[3., 3., 3., 3., 4., 4., 4., 4.],
[5., 5., 5., 5., 6., 6., 6., 6.],
[5., 5., 5., 5., 6., 6., 6., 6.]])
So you just need to concatenate within rows first, then combine all rows along axis=0, which gives the positioning you want if you know which tensors to combine.
Another way to solve this is to permute the dimensions so that you get the placement you want which should work for more complex cases as well:
comb2 = torch.cat([ones, twos, thrs, fors, fivs, sixs]).view(3, 2, 2, 4)
print(comb2)
print(comb2.permute(0, 2, 1, 3). reshape(6, 8))
tensor([[[[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[2., 2., 2., 2.],
[2., 2., 2., 2.]]],
[[[3., 3., 3., 3.],
[3., 3., 3., 3.]],
[[4., 4., 4., 4.],
[4., 4., 4., 4.]]],
[[[5., 5., 5., 5.],
[5., 5., 5., 5.]],
[[6., 6., 6., 6.],
[6., 6., 6., 6.]]]])
tensor([[1., 1., 1., 1., 2., 2., 2., 2.],
[1., 1., 1., 1., 2., 2., 2., 2.],
[3., 3., 3., 3., 4., 4., 4., 4.],
[3., 3., 3., 3., 4., 4., 4., 4.],
[5., 5., 5., 5., 6., 6., 6., 6.],
[5., 5., 5., 5., 6., 6., 6., 6.]])
In the first view of the tensor, we view it such that we know we have 3 sets of tensors with 2x4 dimensions and each of these 2x4 tensors stored in additional dimension (2x2x4) to make it easier to manipulate instead of storing 2x8 or 4x4. Then we rearrange the dimensions to get the placement you want. I try to go backwards from the desired dimension to find the permutation I want usually. First dimension should then be 6 = 3x2 where we get 3 sets and 2 rows of tensor so we keep the first axis in place, move rows dimension next to set dimension: permute(0, 2, 1, 3). Permutation of 4 dimensional tensors can be a bit tricky to imagine, just play with it and you will get the right combination :)
| https://stackoverflow.com/questions/66770797/ |
Wall damage detection | I'm planning to create a real-time wall damage detector [scratches, cracks] using YOLOv5 and my custom dataset of images (125).
Do you think I can do transfer learning, or won't it be possible since the COCO dataset classes are not similar?
Do you think I need to increase the dataset size?
For now I'm just trying to do a proof of concept. Wanted to have my steps planned in advance.
| If your dataset is small (like in your case), transfer learning will almost always give better results compared to training from scratch. As for your second question, yes. The more data you get, the better your model will be able to learn and perform. Considering that it's a relatively different task from the one YOLOv5 was originally trained on, try to get as many images as you can.
| https://stackoverflow.com/questions/66774293/ |
Training accuracy decrease and loss increase when using pack_padded_sequence - pad_packed_sequence | I'm trying to train a bidirectional LSTM with pack_padded_sequence and pad_packed_sequence, but the accuracy keeps decreasing while the loss keeps increasing.
This is my data loader:
X1 (X[0]): tensor([[1408, 1413, 43, ..., 0, 0, 0],
[1452, 1415, 2443, ..., 0, 0, 0],
[1434, 1432, 2012, ..., 0, 0, 0],
...,
[1408, 3593, 1431, ..., 0, 0, 0],
[1408, 1413, 1402, ..., 0, 0, 0],
[1420, 1474, 2645, ..., 0, 0, 0]]), shape: torch.Size([64, 31])
len_X1 (X[3]): [9, 19, 12, 7, 7, 15, 4, 13, 9, 8, 14, 19, 7, 23, 7, 13, 7, 12, 10, 12, 13, 11, 31, 8, 20, 17, 8, 9, 9, 29, 8, 5, 5, 13, 9, 8, 10, 17, 13, 8, 8, 11, 7, 29, 15, 10, 6, 7, 10, 9, 10, 10, 4, 16, 11, 10, 16, 8, 13, 8, 8, 20, 7, 12]
X2 (X[1]): tensor([[1420, 1415, 51, ..., 0, 0, 0],
[1452, 1415, 2376, ..., 1523, 2770, 35],
[1420, 1415, 51, ..., 0, 0, 0],
...,
[1408, 3593, 1474, ..., 0, 0, 0],
[1408, 1428, 2950, ..., 0, 0, 0],
[1474, 1402, 3464, ..., 0, 0, 0]]), shape: torch.Size([64, 42])
len_X2 (X[4]): [14, 42, 13, 18, 12, 31, 8, 19, 5, 7, 15, 19, 7, 17, 6, 11, 12, 16, 8, 8, 19, 8, 12, 10, 11, 9, 9, 9, 9, 21, 7, 5, 8, 13, 14, 8, 15, 8, 8, 8, 12, 13, 7, 14, 4, 10, 6, 11, 12, 7, 8, 11, 9, 13, 30, 10, 15, 9, 9, 7, 9, 8, 7, 20]
t (X[2]): tensor([0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1,
0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0,
0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1]), shape: torch.Size([64])
This is my model class:
class BiLSTM(nn.Module):
    def __init__(self, n_vocabs, embed_dims, n_lstm_units, n_lstm_layers, n_output_classes):
        super(BiLSTM, self).__init__()
        self.v = n_vocabs
        self.e = embed_dims
        self.u = n_lstm_units
        self.l = n_lstm_layers
        self.o = n_output_classes
        self.padd_idx = tokenizer.get_vocab()['[PAD]']
        self.embed = nn.Embedding(
            self.v,
            self.e,
            self.padd_idx
        )
        self.bilstm = nn.LSTM(
            self.e,
            self.u,
            self.l,
            batch_first=True,
            bidirectional=True,
            dropout=0.5
        )
        self.linear = nn.Linear(
            self.u * 4,
            self.o
        )

    def forward(self, X):
        # initial hidden state
        h0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device)
        c0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device)
        # embedding
        out1 = self.embed(X[0].to(device))
        out2 = self.embed(X[1].to(device))
        # pack_padded_sequence
        out1 = nn.utils.rnn.pack_padded_sequence(out1, X[3], batch_first=True, enforce_sorted=False)
        out2 = nn.utils.rnn.pack_padded_sequence(out2, X[4], batch_first=True, enforce_sorted=False)
        # NxTxh, lxNxh
        out1, _ = self.bilstm(out1, (h0, c0))
        out2, _ = self.bilstm(out2, (h0, c0))
        # pad_packed_sequence
        out1, _ = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)
        out2, _ = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True)
        # take only the final time step
        out1 = out1[:, -1, :]
        out2 = out2[:, -1, :]
        # concatenate out1 & out2
        out = torch.cat((out1, out2), 1)
        # linear layer
        out = self.linear(out)
        iout = torch.max(out, 1)[1]
        return iout, out
And if I remove pack_padded_sequence - pad_packed_sequence, the model training works just fine:
class BiLSTM(nn.Module):
    def __init__(self, n_vocabs, embed_dims, n_lstm_units, n_lstm_layers, n_output_classes):
        super(BiLSTM, self).__init__()
        self.v = n_vocabs
        self.e = embed_dims
        self.u = n_lstm_units
        self.l = n_lstm_layers
        self.o = n_output_classes
        self.padd_idx = tokenizer.get_vocab()['[PAD]']
        self.embed = nn.Embedding(
            self.v,
            self.e,
            self.padd_idx
        )
        self.bilstm = nn.LSTM(
            self.e,
            self.u,
            self.l,
            batch_first=True,
            bidirectional=True,
            dropout=0.5
        )
        self.linear = nn.Linear(
            self.u * 4,
            self.o
        )

    def forward(self, X):
        # initial hidden state
        h0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device)
        c0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device)
        # embedding
        out1 = self.embed(X[0].to(device))
        out2 = self.embed(X[1].to(device))
        # pack_padded_sequence
        # out1 = nn.utils.rnn.pack_padded_sequence(out1, X[3], batch_first=True, enforce_sorted=False)
        # out2 = nn.utils.rnn.pack_padded_sequence(out2, X[4], batch_first=True, enforce_sorted=False)
        # NxTxh, lxNxh
        out1, _ = self.bilstm(out1, (h0, c0))
        out2, _ = self.bilstm(out2, (h0, c0))
        # pad_packed_sequence
        # out1, _ = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)
        # out2, _ = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True)
        # take only the final time step
        out1 = out1[:, -1, :]
        out2 = out2[:, -1, :]
        # concatenate out1 & out2
        out = torch.cat((out1, out2), 1)
        # linear layer
        out = self.linear(out)
        iout = torch.max(out, 1)[1]
        return iout, out
| These lines of your code are wrong.
# take only the final time step
out1 = out1[:, -1, :]
out2 = out2[:, -1, :]
You say you are taking the final time step, but you are forgetting that each sequence has a different length.
nn.utils.rnn.pad_packed_sequence will pad the output of each sequence until its length equals that of the longest, so that they all have the same length.
In other words, you are slicing out vectors of zeros (the padding) for most sequences.
This should do what you want.
# take only the final time step
out1 = out1[range(out1.shape[0]), X3 - 1, :]
out2 = out2[range(out2.shape[0]), X4 - 1, :]
This is assuming X3 and X4 are tensors.
| https://stackoverflow.com/questions/66775321/ |
Check if each element of a tensor is contained in a list | Say I have a tensor A and a container of values vals. Is there a clean way of returning a Boolean tensor of the same shape as A with each element being whether that element of A is contained within vals? e.g:
A = torch.tensor([[1,2,3],
[4,5,6]])
vals = [1,5]
# Desired output
torch.tensor([[True,False,False],
[False,True,False]])
| You can simply do it like this:
result = A.apply_(lambda x: x in vals).bool()
Then result will contain this tensor:
tensor([[ True, False, False],
[False, True, False]])
Here I simply used a lambda function and the apply_ method that you can find in the official documentation.
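Note that apply_ modifies A in place, only works on CPU tensors, and calls the Python lambda once per element, which is slow. A vectorized sketch using broadcasting, which also runs on the GPU (newer PyTorch releases offer torch.isin as a built-in for the same thing):
vals_t = torch.tensor(vals)
result = (A.unsqueeze(-1) == vals_t).any(-1)
# tensor([[ True, False, False],
#         [False,  True, False]])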
| https://stackoverflow.com/questions/66779647/ |
What does the * sign mean in this NN built by Pytorch? | I was reading the code for Generative Adversarial Nets at https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/gan/gan.py, and I would like to know what the * sign means here. I searched on Google and Stack Overflow but could not find a clear explanation.
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()

        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(opt.latent_dim, 128, normalize=False),
            *block(128, 256),
            *block(256, 512),
            *block(512, 1024),
            nn.Linear(1024, int(np.prod(img_shape))),
            nn.Tanh()
        )
| *x is iterable unpacking notation in Python. See this related answer.
def block returns a list of layers, and *block(...) unpacks the returned list into positional arguments to the nn.Sequential call.
Here's a simpler example:
def block(in_feat, out_feat):
return (nn.Linear(in_feat, out_feat), nn.LeakyReLU(0.2, inplace=True))
self.model = nn.Sequential(
*block(128, 256),
)
# Equivalent to:
# layers = block(128, 256)
# self.model = nn.Sequential(layers[0], layers[1])
# Also equivalent to:
# layers = block(128, 256)
# self.model = nn.Sequential(*layers)
| https://stackoverflow.com/questions/66780615/ |
torch : Dijkstra's algorithm | I am working on 3D point clouds. I have the SPARSE MATRIX representation of the graph structure of the point cloud (like csr_matrix in scipy.sparse). I want to club together the points that are within certain threshold of the Geodesic distance (approximated by the path length in the graph) and process them together. TO FIND such points, I need to run some shortest path finding algorithm like Dijkstra's. In a nutshell, my idea is like this
Sample K points out of N points (that I could do using Furthest Point Sampling)
Find the nearest Geodesic neighbours (using BackProp supported algorithm) for each of K points
Process the neighbours for each point using some Neural Network
This will go in my forward function.
Is there a way to implement Dijkstra's in my functionality?
Or any other idea that I can implement?
Thank you very much!
| I created a custom implementation of Dijkstra's algorithm using priority queues, as discussed here.
For the same, I created a custom PriorityQ class using torch function as below
import numpy as np
import torch

class priorityQ_torch(object):
"""Priority Q implelmentation in PyTorch
Args:
object ([torch.Tensor]): [The Queue to work on]
"""
def __init__(self, val):
self.q = torch.tensor([[val, 0]])
# self.top = self.q[0]
# self.isEmpty = self.q.shape[0] == 0
def push(self, x):
"""Pushes x to q based on weightvalue in x. Maintains ascending order
Args:
q ([torch.Tensor]): [The tensor queue arranged in ascending order of weight value]
x ([torch.Tensor]): [[index, weight] tensor to be inserted]
Returns:
[torch.Tensor]: [The queue tensor after correct insertion]
"""
if type(x) == np.ndarray:
x = torch.tensor(x)
if self.isEmpty():
self.q = x
self.q = torch.unsqueeze(self.q, dim=0)
return
idx = torch.searchsorted(self.q.T[1], x[1])
self.q = torch.vstack([self.q[0:idx], x, self.q[idx:]]).contiguous()
def top(self):
"""Returns the top element from the queue
Returns:
[torch.Tensor]: [top element]
"""
return self.q[0]
def pop(self):
"""pops(without return) the highest priority element with the minimum weight
Args:
q ([torch.Tensor]): [The tensor queue arranged in ascending order of weight value]
Returns:
[torch.Tensor]: [highest priority element]
"""
if self.isEmpty():
print("Can Not Pop")
self.q = self.q[1:]
def isEmpty(self):
"""Checks is the priority queue is empty
Args:
q ([torch.Tensor]): [The tensor queue arranged in ascending order of weight value]
Returns:
[Bool] : [Returns True is empty]
"""
return self.q.shape[0] == 0
Now Dijkstra, taking the adjacency matrix (with graph weights) as input:
def dijkstra(adj):
n = adj.shape[0]
distance_matrix = torch.zeros([n, n])
for i in range(n):
u = torch.zeros(n, dtype=torch.bool)
d = np.inf * torch.ones(n)
d[i] = 0
q = priorityQ_torch(i)
while not q.isEmpty():
v, d_v = q.top() # point and distance
v = v.int()
q.pop()
if d_v != d[v]:
continue
for j, py in enumerate(adj[v]):
if py == 0 and j != v:
continue
else:
to = j
weight = py
if d[v] + py < d[to]:
d[to] = d[v] + py
q.push(torch.Tensor([to, d[to]]))
distance_matrix[i] = d
return distance_matrix
It returns the shortest-path distance matrix for the graph points!
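A minimal usage sketch (assumes the classes above; the small graph is made up, with 0 meaning "no edge"):
    adj = torch.tensor([[0., 1., 4.],
                        [1., 0., 2.],
                        [4., 2., 0.]])
    dist = dijkstra(adj)
    # dist should come out as:
    # tensor([[0., 1., 3.],
    #         [1., 0., 2.],
    #         [3., 2., 0.]])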
| https://stackoverflow.com/questions/66782954/ |
Why a.storage() is b.storage() returns false when a and b reference the same data? | >>> a = torch.arange(12).reshape(2, 6)
>>> a
tensor([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11]])
>>> b = a[1:, :]
>>> b.storage() is a.storage()
False
But
>>> b[0, 0] = 999
>>> b, a # both tensors are changed
(tensor([[999, 7, 8, 9, 10, 11]]),
tensor([[ 0, 1, 2, 3, 4, 5],
[999, 7, 8, 9, 10, 11]]))
What is exactly the objects that stores tensor data? How can I make check if 2 tensors share memory?
| torch.Tensor.storage() returns a new instance of torch.Storage on every invocation. You can see this in the following
a.storage() is a.storage()
# False
To compare the pointers to the underlying data, you can use the following:
a.storage().data_ptr() == b.storage().data_ptr()
# True
There is a discussion of how to determine whether pytorch tensors share memory in this pytorch forum post.
Note the difference between a.data_ptr() and a.storage().data_ptr(). The first returns the pointer to the first element of the tensor, whereas the second seems to point to the memory address of the underlying data (not the sliced view), though this is not documented.
Knowing the above, we can understand why a.data_ptr() is different from b.data_ptr(). Consider the following code:
import torch
a = torch.arange(4, dtype=torch.int64)
b = a[1:]
b.data_ptr() - a.data_ptr()
# 8
The address of the first element of b is 8 more than the first element of a because we sliced to remove the first element, and each element is 8 bytes (the dtype is 64-bit integer).
If we use the same code as above but use an 8-bit integer data type, the memory address will be different by one.
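A quick check of that last claim (int8 elements take one byte each):
    a = torch.arange(4, dtype=torch.int8)
    b = a[1:]
    print(b.data_ptr() - a.data_ptr())  # 1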
| https://stackoverflow.com/questions/66783542/ |
pytorch multiple branches of a model |
Hi I'm trying to make this model using pytorch.
Each input is consisted of 20 images of size 28 X 28, which is C1 ~ Cp in the image.
Each image goes to CNN of same structure, but their outputs are concatenated eventually.
I'm currently struggling with feeding multiple inputs to each of its respective CNN model.
Each model in the first box with three convolutional layers will look like this as a code, but I'm not quite sure how I can put 20 different input to separate models of same structure to eventually concatenate.
self.features = nn.Sequential(
nn.Conv2d(1,10, kernel_size = 3, padding = 1),
nn.ReLU(),
nn.Conv2d(10, 14, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(14, 18, kernel_size=3, padding=1),
nn.ReLU(),
nn.Flatten(),
nn.Linear(28*28*18, 256)
)
I've tried out giving a list of inputs as an input to forward function, but it ended up with an error and won't go through.
I'll be more than happy to explain further if anything is unclear.
| Simply define forward as taking a list of tensors as input, then process each input with the corresponding CNN (in the example snippet, the CNNs share the same structure but don't share parameters, which is what I assume you need). You'll need to fill in the dots ... according to your specifications.
class MyModel(torch.nn.Module):
def __init__(self, ...):
...
self.cnns = torch.nn.ModuleList([torch.nn.Sequential(...) for _ in range(20)])
def forward(self, xs: list[Tensor]):
return torch.cat([cnn(x) for x, cnn in zip(xs, self.cnns)], dim=...)
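A concrete sketch with made-up layer sizes (20 branches, 28x28 grayscale inputs; everything beyond the structure above is an assumption):
    import torch
    import torch.nn as nn

    class MyModel(nn.Module):
        def __init__(self, n_branches=20):
            super().__init__()
            def branch():
                return nn.Sequential(
                    nn.Conv2d(1, 10, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(28 * 28 * 10, 256),
                )
            self.cnns = nn.ModuleList([branch() for _ in range(n_branches)])

        def forward(self, xs):  # xs: list of 20 tensors, each (batch, 1, 28, 28)
            return torch.cat([cnn(x) for x, cnn in zip(xs, self.cnns)], dim=1)

    model = MyModel()
    xs = [torch.randn(4, 1, 28, 28) for _ in range(20)]
    print(model(xs).shape)  # torch.Size([4, 5120]) -> 20 branches x 256 features each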
| https://stackoverflow.com/questions/66786787/ |
How to feed a image to Generator in GAN Pytorch | So, I'm training a DCGAN model in pytorch on celeba dataset (people). And here is the architecture of the generator:
Generator(
(main): Sequential(
(0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace=True)
(9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace=True)
(12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(13): Tanh()
)
)
So after training, I want to check what generator outputs if I feed an occluded image like this:
(size: 64X64)
But as u might have guessed that the image has 3 channels and my generator accepts a latent vector of 100 channels at the starting, so what is the correct way to feed this image to the generator and check the output. (I'm expecting that the generator tries to generate only the occluded part of the image). If you want a reference code then try this demo file of pytorch. I have modified this file according to my own needs, so for referring, this will do the trick.
| You just can't do that. As you said, your network expects a 100-dimensional input, which is normally sampled from a standard normal distribution:
So the generator's job is to take this random vector and generate a 3x64x64 image that is indistinguishable from real images. The input is a random 100-dimensional vector sampled from a standard normal distribution. I don't see any way to feed your image into the current network without modifying the architecture and retraining a new model. If you want to try a new model, you can change the input to occluded images, apply some convolutional/linear layers to reduce the dimensions to 100, then keep the rest of the network the same. This way the network will try to learn to generate images not from a latent vector but from a feature vector extracted from occluded images. It may or may not work.
EDIT: I've decided to give it a go and see if the network can learn with this type of conditioned input vector instead of a latent vector. I've used the tutorial example you've linked and added a couple of changes. First, a new network for receiving the input and reducing it to 100 dimensions:
class ImageTransformer(nn.Module):
def __init__(self):
super(ImageTransformer, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(3, 1, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True)
)
self.linear = nn.Linear(32*32, 100)
def forward(self, input):
out = self.main(input).view(input.shape[0], -1)
return self.linear(out).view(-1, 100, 1, 1)
Just a simple convolution layer + relu + linear layer to map to 100 dimensions at the output. Note that you can try a much better network here as a better feature extractor, I just wanted to make a simple test.
fixed_input = next(iter(dataloader))[0][0:64, :, : ,:]
fixed_input[:, :, 20:44, 20:44] = torch.tensor(np.zeros((24,24), dtype = np.float32))
fixed_input = fixed_input.to(device)
This is how I modify the tensor to add a black patch over the input. Just sampled a batch to create a fixed input to track the process as it was done in the tutorial with a random vector.
# Create the generator
netG = Generator().to(device)
netD = Discriminator().to(device)
netT = ImageTransformer().to(device)
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.2.
netG.apply(weights_init)
netD.apply(weights_init)
netT.apply(weights_init)
# Print the model
print(netG)
print(netD)
print(netT)
Most of the steps are the same; I just created an instance of the new transformer network. Finally, the training loop is slightly modified: the generator is not fed random vectors but is instead given the outputs of the new transformer network.
img_list = []
G_losses = []
D_losses = []
iters = 0
for epoch in range(num_epochs):
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
transformed = data[0].detach().clone()
transformed[:, :, 20:44, 20:44] = torch.tensor(np.zeros((24,24), dtype = np.float32))
transformed = transformed.to(device)
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
output = netD(real_cpu).view(-1)
errD_real = criterion(output, label)
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
fake = netT(transformed)
fake = netG(fake)
label.fill_(fake_label)
output = netD(fake.detach()).view(-1)
errD_fake = criterion(output, label)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label)
output = netD(fake).view(-1)
errG = criterion(output, label)
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netT(fixed_input)
fake = netG(fake).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
Training was somewhat okay in terms of loss reductions etc. Finally this is what I got after 5 epochs of training:
So what does this result tell us? Since the generator's inputs were not randomly taken from a normal distribution, the generator wasn't able to learn the distribution of faces to create a varying range of output faces. And since the input is a conditioned feature vector, the range of output images is limited. So in summary, random inputs are required for the generator even though it learned to remove patches :)
| https://stackoverflow.com/questions/66789569/ |
Issue while using transformers package inside the docker image | I am using transformers pipeline to perform sentiment analysis on sample texts from 6 different languages. I tested the code in my local Jupyterhub and it worked fine. But when I wrap it in a flask application and create a docker image out of it, the execution is hanging at the pipeline inference line and its taking forever to return the sentiment scores.
mac os catalina 10.15.7 (no GPU)
Python version : 3.8
Transformers package : 4.4.2
torch version : 1.6.0
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
results = classifier(["We are very happy to show you the Transformers library.", "We hope you don't hate it."])
print([i['score'] for i in results])
The above code works fine in Jupyter notebook and it has provided me the expected result
[0.7495927810668945,0.2365245819091797]
So now if I create a docker image with flask wrapper its getting stuck at the results = classifier([input_data]) line and the execution is running forever.
My folder structure is as follows:
- src
|-- app
|--main.py
|-- Dockerfile
|-- requirements.txt
I used the below Dockerfile to create the image
FROM tiangolo/uwsgi-nginx-flask:python3.8
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
RUN pip install -r /requirements.txt
RUN echo "uwsgi_read_timeout 1200s;" > /etc/nginx/conf.d/custom_timeout.conf
And my requirements.txt file is as follows:
pandas==1.1.5
transformers==4.4.2
torch==1.6.0
My main.py script look like this :
from flask import Flask, json, request, jsonify
import traceback
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
app = Flask(__name__)
app.config["JSON_SORT_KEYS"] = False
model_name = 'nlptown/bert-base-multilingual-uncased-sentiment'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
@app.route("/")
def hello():
return "Model: Sentiment pipeline test"
@app.route("/predict", methods=['POST'])
def predict():
json_request = request.get_json(silent=True)
input_list = [i['text'] for i in json_request["input_data"]]
results = nlp(input_list) ########## Getting stuck here
for result in results:
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
score_list = [round(i['score'], 4) for i in results]
return jsonify(score_list)
if __name__ == "__main__":
app.run(host='0.0.0.0', debug=False, port=80)
My input payload is of the form
{"input_data" : [{"text" : "We are very happy to show you the Transformers library."},
{"text" : "We hope you don't hate it."}]}
I tried looking into the transformers GitHub issues but couldn't find a matching one. The execution works fine even when using the Flask development server, but it runs forever when I wrap it up and create a docker image. I am not sure if I am missing any additional dependency to be included while creating the docker image.
Thanks.
| I was having a similar issue. It seems that starting the app somehow pollutes the memory of transformers models. Probably something to do with how Flask does threading, but no idea why. What fixed it for me was doing the things that are causing trouble (loading the models) in a different thread.
import threading
def preload_models():
"LOAD MODELS"
return 0
def start_app():
app = create_app()
register_handlers(app)
preloading = threading.Thread(target=preload_models)
preloading.start()
preloading.join()
return app
First reply here. I would be really glad if this helps.
| https://stackoverflow.com/questions/66797173/ |
PyTorch not working when using Pytorch with cuda 11.1: Dataloader | So, I'm running this deep neural network. And normally, when i train it with the cpu using the pytorch library WITHOUT cuda, it runs fine.
However, I noticed when I installed pytorch+cuda 11.1, whenever I try to enumerate over the Dataloader, the following error gets thrown out:
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\Name\Desktop\Folder\lib\site-packages\torch\lib\cudnn_adv_train64_8.dll" or one of its dependencies.
But, when i use pip to uninstall pytorch+cuda and install pytorch without cuda, my script runs fine.
both versions of pytorch are 1.8.0
Does anyone know how to go about fixing this? Would love to be able to use this gpu.
Note: This is how i set up the dataloader:
train_dataloader = DataLoader(
train_dataset,
batch_size=args.batch_size,
num_workers=args.num_workers,
shuffle=True,
pin_memory=(device == "cuda"),
)
| See this issue on the PyTorch github page: https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial/issues/10
Basically you need to turn off the automatic paging file management in Windows.
Windows Key
Search for: advanced system settings
Advanced tab
Performance - "Settings" button
Advanced tab - "Change" button
Uncheck the "Automatically manage paging file size for all drives" checkbox
Select the "System managed size" radio button.
Reboot
| https://stackoverflow.com/questions/66800258/ |
GPU memory increasing at each batch (PyTorch) | I am trying to build a convolutionnal network using ConvLSTM layer (LSTM cell but with convolutions instead of matrix multiplications), but the problem is that my GPU memory increases at each batch, even if I'm deleting variables, and getting the true value for the loss (and not the graph) for each iteration. I may be doing something wrong but that exact same script ran without issues with another model (with more parameters and also using ConvLSTM layer).
Each batch is composed of num_batch x 3 images (grayscale) and I'm trying to predict the difference |Im(t+1)-Im(t)| with the input Im(t)
def main():
config = Config()
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=config.batch_size, num_workers=0, shuffle=True, drop_last=True)
nb_img = len(train_dataset)
util.clear_progress_dir()
step_tensorboard = 0
###################################
# Model Setup #
###################################
model = fully_convLSTM()
if torch.cuda.is_available():
model = model.float().cuda()
lr = 0.001
optimizer = torch.optim.Adam(model.parameters(),lr=lr)
util.enumerate_params([model])
###################################
# Training Loop #
###################################
model.train() #Put model in training mode
train_loss_recon = []
train_loss_recon2 = []
for epoch in tqdm(range(config.num_epochs)):
running_loss1 = 0.0
running_loss2 = 0.0
for i, (inputs, outputs) in enumerate(train_dataloader, 0):
print(i)
torch.cuda.empty_cache()
gc.collect()
# if torch.cuda.is_available():
inputs = autograd.Variable(inputs.float()).cuda()
outputs = autograd.Variable(outputs.float()).cuda()
im1 = inputs[:,0,:,:,:]
im2 = inputs[:,1,:,:,:]
im3 = inputs[:,2,:,:,:]
diff1 = torch.abs(im2 - im1).cuda().float()
diff2 = torch.abs(im3 - im2).cuda().float()
model.initialize_hidden()
optimizer.zero_grad()
pred1 = model.forward(im1)
loss = reconstruction_loss(diff1, pred1)
loss.backward()
# optimizer.step()
model.update_hidden()
optimizer.zero_grad()
pred2 = model.forward(im2)
loss2 = reconstruction_loss(diff2, pred2)
loss2.backward()
optimizer.step()
model.update_hidden()
## print statistics
running_loss1 += loss.detach().data
running_loss2 += loss2.detach().data
if i==0:
with torch.no_grad():
img_grid_diff_true = (diff2).cpu()
img_grid_diff_pred = (pred2).cpu()
f, axes = plt.subplots(2, 4, figsize=(48,48))
for l in range(4):
axes[0, l].imshow(img_grid_diff_true[l].squeeze(0).squeeze(0), cmap='gray')
axes[1, l].imshow(img_grid_diff_pred[l].squeeze(0).squeeze(0), cmap='gray')
plt.show()
plt.close()
writer_recon_loss.add_scalar('Reconstruction loss', running_loss1, step_tensorboard)
writer_recon_loss2.add_scalar('Reconstruction loss2', running_loss2, step_tensorboard)
step_tensorboard += 1
del pred1
del pred2
del im1
del im2
del im3
del diff1
del diff2#, im1_noised, im2_noised
del inputs
del outputs
del loss
del loss2
for obj in gc.get_objects():
if torch.is_tensor(obj) :
del obj
torch.cuda.empty_cache()
gc.collect()
epoch_loss = running_loss1 / len(train_dataloader.dataset)
epoch_loss2 = running_loss2/ len(train_dataloader.dataset)
print(f"Epoch {epoch} loss reconstruction1: {epoch_loss:.6f}")
print(f"Epoch {epoch} loss reconstruction2: {epoch_loss2:.6f}")
train_loss_recon.append(epoch_loss)
train_loss_recon2.append(epoch_loss2)
del running_loss1, running_loss2, epoch_loss, epoch_loss2
Here is the model used :
class ConvLSTMCell(nn.Module):
def __init__(self, input_channels, hidden_channels, kernel_size):
super(ConvLSTMCell, self).__init__()
# assert hidden_channels % 2 == 0
self.input_channels = input_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
# self.num_features = 4
self.padding = 1
self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whi = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxf = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whf = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxc = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whc = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxo = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Who = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wci = None
self.Wcf = None
self.Wco = None
def forward(self, x, h, c): ## Equation (3) from "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting"
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)
cf = torch.sigmoid(self.Wxf(x) + self.Whf(h) + c * self.Wcf)
cc = cf * c + ci * torch.tanh(self.Wxc(x) + self.Whc(h)) ###gt= tanh(cc)
co = torch.sigmoid(self.Wxo(x) + self.Who(h) + cc * self.Wco) ##channel out = hidden channel
ch = co * torch.tanh(cc)
return ch, cc #short memory, long memory
def init_hidden(self, batch_size, hidden, shape):
if self.Wci is None:
self.Wci = nn.Parameter(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
self.Wcf = nn.Parameter(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
self.Wco = nn.Parameter(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
else:
assert shape[0] == self.Wci.size()[2], 'Input Height Mismatched!'
assert shape[1] == self.Wci.size()[3], 'Input Width Mismatched!'
return (autograd.Variable(torch.zeros(batch_size, hidden, shape[0], shape[1])).cuda(),
autograd.Variable(torch.zeros(batch_size, hidden, shape[0], shape[1])).cuda())
class fully_convLSTM(nn.Module):
def __init__(self):
super(fully_convLSTM, self).__init__()
layers = []
self.hidden_list = [1,32,32,1]#,32,64,32,
for k in range(len(self.hidden_list)-1): # Define blocks of [ConvLSTM,BatchNorm,Relu]
name_conv = "self.convLSTM" +str(k)
cell_conv = ConvLSTMCell(self.hidden_list[k],self.hidden_list[k+1],3)
setattr(self, name_conv, cell_conv)
name_batchnorm = "self.batchnorm"+str(k)
batchnorm=nn.BatchNorm2d(self.hidden_list[k+1])
setattr(self, name_batchnorm, batchnorm)
name_relu =" self.relu"+str(k)
relu=nn.ReLU()
setattr(self, name_relu, relu)
self.sigmoid = nn.Sigmoid()
self.internal_state=[]
def initialize_hidden(self):
for k in range(len(self.hidden_list)-1):
name_conv = "self.convLSTM" +str(k)
(h,c) = getattr(self,name_conv).init_hidden(config.batch_size, self.hidden_list[k+1],(256,256))
self.internal_state.append((h,c))
self.internal_state_new=[]
def update_hidden(self):
for i, hidden in enumerate(self.internal_state_new):
self.internal_state[i] = (hidden[0].detach(), hidden[1].detach())
self.internal_state_new = []
def forward(self, input):
x = input
for k in range(len(self.hidden_list)-1):
name_conv = "self.convLSTM" +str(k)
name_batchnorm = "self.batchnorm"+str(k)
name_relu =" self.relu"+str(k)
x, c = getattr(self,name_conv)(x, self.internal_state[k][1], self.internal_state[k][0])
self.internal_state_new.append((x.detach(),c.detach()))
x = getattr(self,name_batchnorm)(x)
if k!= len(self.hidden_list)-2:
x = getattr(self,name_relu)(x)
else :
x = self.sigmoid(x)
return x
So my question is, what in my code is causing memory to accumulate during the training phase?
| A few quick notes about the training code:
torch.autograd.Variable has been deprecated for at least 8 minor versions (see here), don't use it
gc.collect() has no point, Python's garbage collector handles this on its own
Don't use torch.cuda.empty_cache() for each batch, as PyTorch reserves some GPU memory (it doesn't give it back to the OS) so it doesn't have to allocate it again for every batch. It will make your code slow; don't use this function at all tbh, PyTorch handles this.
Don't spam random memory cleaning, that's most probably not where the error is
Model
Yes, this is probably the case (although it's hard to read this model's code).
Take notice of self.internal_state list and self.internal_state_new list also.
Each time you call model.initialize_hidden(), a new set of tensors is added to this list (and never cleaned up, as far as I can tell)
self.internal_state_new seems to be cleaned in update_hidden, maybe self.internal_state should be also?
In essence, check out the self.internal_state attribute of your model: the list grows indefinitely from what I see. Initializing with zeros everywhere is also quite strange; there is probably no need to do that (e.g. PyTorch's RNN is initialized with zeros by default, this is probably similar).
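A possible fix is to reset the list instead of appending to it on every call (a sketch; everything else in the model stays as in the question):
    def initialize_hidden(self):
        self.internal_state = []  # reset instead of growing forever
        for k in range(len(self.hidden_list) - 1):
            name_conv = "self.convLSTM" + str(k)
            (h, c) = getattr(self, name_conv).init_hidden(
                config.batch_size, self.hidden_list[k + 1], (256, 256))
            self.internal_state.append((h, c))
        self.internal_state_new = []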
| https://stackoverflow.com/questions/66801280/ |
Confusion about the shape of the output logits from Resnet | I am trying to understand why the shape of the output logits from the Resnet18 model I am working with are (27, 19). The shape of 19 I understand, that is the number of classes I have set the model to predict, but the shape of 27 is the part that I am confused about. I have a batch size of 64 so I would have thought the shape of the logits would be (64, 19), because that would give me 1 prediction vector for each image in the batch...
| Turns out I was looking at the logits from the last batch in my epoch, and there weren't enough images left to fill the full batch size of 64, so that batch only had 27 images to train on.
| https://stackoverflow.com/questions/66805716/ |
Pytorch1.6 What is the actual learning rate during training? | I'd like to know the actual learning rate during training, here is my code.
learning_rate = 0.001
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[1, 2], gamma=0.1)
def train(epoch):
train_loss = 0
for batch_idx, (input, target) in enumerate(train_loader):
predict_label = net(input)
loss = criterion(predict_label, target)
train_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(optimizer.param_groups[0]['lr'])
scheduler.step()
print(scheduler.state_dict()['_last_lr'])
print(optimizer.param_groups[0]['lr'])
the output is 0.001, 0.0001, 0.0001. So what is the actual lr during optimizer.step()? 0.001 or 0.0001? Thanks.
| The important part is here:
for batch_idx, (input, target) in enumerate(train_loader):
...
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(optimizer.param_groups[0]['lr']) #### CURRENT LEARNING RATE
scheduler.step() #step through learning rate
print(scheduler.state_dict()['_last_lr']) #### NEW LEARNING RATE
print(optimizer.param_groups[0]['lr']) #### NEW LEARNING RATE
Because you step your scheduler after each epoch, the first epoch will use your initial value, which is set to 0.001. If you run for multiple epochs, the learning rate will continue to be annealed.
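If you just want to read the current rate programmatically, a quick sketch (get_last_lr() exists on recent PyTorch versions):
    current_lr = optimizer.param_groups[0]['lr']  # the rate optimizer.step() will use
    print(current_lr, scheduler.get_last_lr())    # get_last_lr() returns a list, one entry per param group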
| https://stackoverflow.com/questions/66810555/ |
Trouble implementing "concurrent" softmax function from paper (PyTorch) | I am trying to implement the so called 'concurrent' softmax function given in the paper "Large-Scale Object Detection in the Wild from Imbalanced Multi-Labels". Below is the definition of the concurrent softmax:
NOTE: I have left the (1-rij) term out for the time being because I don't think it applies to my problem given that my training dataset has a different type of labeling compared to the paper.
To keep it simple for myself I am starting off by implementing it in a very inefficient, but easy to follow, way using for loops. However, the output I get seems wrong to me. Below is the code I am using:
# here is a one-hot encoded vector for the multi-label classification
# the image thus has 2 correct labels out of a possible 3 classes
y = [0, 1, 1]
# these are some made up logits that might come from the network.
vec = torch.tensor([0.2, 0.9, 0.7])
def concurrent_softmax(vec, y):
for i in range(len(vec)):
zi = torch.exp(vec[i])
sum_over_j = 0
for j in range(len(y)):
sum_over_j += (1-y[j])*torch.exp(vec[j])
out = zi / (sum_over_j + zi)
yield out
for result in concurrent_softmax(vec, y):
print(result)
From this implementation I have realized that, no matter what value I give to the first logit in 'vec' I will always get an output of 0.5 (because it essentially always calculates zi / (zi+zi)). This seems like a major problem, because I would expect the value of the logits to have some influence on the resulting concurrent-softmax value. Is there a problem in my implementation then, or is this behaviour of the function correct and there is something theoretically that I am not understanding?
| This is the expected behaviour: since y[j] = 1 for every j except i = 0, the sum over j keeps only the exp(vec[0]) term, so the output for the first logit is exp(z_0) / (exp(z_0) + exp(z_0)) = 0.5 no matter what z_0 is.
Note you can simplify the summation with a dot product:
y = torch.tensor(y)
def concurrent_softmax(z, y):
sum_over_j = torch.dot((torch.ones(len(y)) - y), torch.exp(z))
for zi in z:
numerator = torch.exp(zi)
denominator = sum_over_j + numerator
yield numerator / denominator
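To see the logits actually influencing the result, try a label vector with more than one zero (a quick sketch reusing concurrent_softmax from above):
    y = torch.tensor([0., 0., 1.])
    vec = torch.tensor([0.2, 0.9, 0.7])
    print([v.item() for v in concurrent_softmax(vec, y)])
    # the first two outputs now depend on both of the unlabelled logits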
| https://stackoverflow.com/questions/66818548/ |
Build a pytorch model wrap around another pytorch model | Is it possible to wrap a pytorch model inside another pytorch module? I could not do it the normal way like in transfer learning (simply concatenating some more layers) because in order to get the intended value for the next 'layer', I need to wait the last layer of the first module to generate multiple outputs (say 100) and to use all those outputs to get the value for the next 'layer' (say taking the max of those outputs). I tried to define the integrated model as something like the following:
class integrated(nn.Module):
def __init__(self):
super(integrated, self).__init__()
def forward(self, x):
model = VAE(
encoder_layer_sizes=args.encoder_layer_sizes,
latent_size=args.latent_size,
decoder_layer_sizes=args.decoder_layer_sizes,
conditional=args.conditional,
num_labels=10 if args.conditional else 0).to(device)
device = torch.device('cpu')
model.load_state_dict(torch.load(r'...')) # the first model is saved somewhere else beforehand
model.eval()
temp = []
for j in range(100):
x = model(x)
temp.append(x)
y=max(temp)
return y
The reason I would like to do that is the library I need to use requires the input itself to be a pytorch module. Otherwise I could simply leave the last part outside of the module.
| Yes you can definitely use a Pytorch module inside another Pytorch module. The way you are doing this in your example code is a bit unusual though, as external modules (VAE, in your case) are more often initialized in the __init__ function and then saved as attributes of the main module (integrated). Among other things, this avoids having to reload the sub-module every time you call forward.
One other thing that looks a bit funny is your for loop over repeated invocations of model(x). If there is no randomness involved in model's evaluation, then you would only need a single call to model(x), since all 100 calls will give the same value. So assuming there is some randomness, you should consider whether you can get the desired effect by batching together 100 copies of x and using a single call to model with this batched input. This ultimately depends on additional information about why you are calling this function multiple times on the same input, but either way, using a single batched evaluation will be a lot faster than using many unbatched evaluations.
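For the first point, a sketch of the usual pattern (reusing the names from your snippet; the element-wise max is one possible reading of max(temp)):
    class Integrated(nn.Module):
        def __init__(self):
            super().__init__()
            self.vae = VAE(
                encoder_layer_sizes=args.encoder_layer_sizes,
                latent_size=args.latent_size,
                decoder_layer_sizes=args.decoder_layer_sizes,
                conditional=args.conditional,
                num_labels=10 if args.conditional else 0)
            self.vae.load_state_dict(torch.load(r'...'))  # load the weights once, not per call
            self.vae.eval()

        def forward(self, x):
            outs = [self.vae(x) for _ in range(100)]  # only useful if the VAE is stochastic
            return torch.stack(outs).max(dim=0).values  # element-wise max over the 100 outputs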
| https://stackoverflow.com/questions/66819359/ |
PyTorch ToTensor scaling to [0,1] discrepancy | If I do this
mnist_train = MNIST('../data/MNIST', download = True,
transform = transforms.Compose([
transforms.ToTensor(),
]), train = True)
and
mnist_train.data.max()
why do I get 255? I should get 1, because ToTensor() scales to [0,1], right?
If I do:
for i in range(0, len(mnist_train)):
print(mnist_train[i][0].max())
then, I get almost 1?
Could someone please help me understand this?
| When you do
mnist_train.data
PyTorch gives you the data attribute of mnist_train, which is defined on this line (when you make a MNIST instance). And if you look at the code before it in the __init__, no transformation happens!
OTOH, when you do
mnist_train[i]
the __getitem__ method of the object is triggered which you can find here. There is an if statement for transform in this method and therefore you get the transformed version now.
Since a common usage is using this MNIST dataset (or any other one) through torch.utils.data.DataLoader and it calls this __getitem__, we get the normalized values.
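You can see both behaviours side by side (sketch):
    print(mnist_train.data[0].max())  # tensor(255, dtype=torch.uint8): raw data, no transform
    print(mnist_train[0][0].max())    # tensor close to 1.0: ToTensor applied in __getitem__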
| https://stackoverflow.com/questions/66821250/ |
BERT: Weights of input embeddings as part of the Masked Language Model | I looked through different implementations of BERT's Masked Language Model.
For pre-training there are two common versions:
The decoder would simply take the final embedding of the [MASK]ed token and pass it through a linear layer (without any modifications):
class LMPrediction(nn.Module):
def __init__(self, hidden_size, vocab_size):
super().__init__()
self.decoder = nn.Linear(hidden_size, vocab_size, bias = False)
self.bias = nn.Parameter(torch.zeros(vocab_size))
self.decoder.bias = self.bias
def forward(self, x):
return self.decoder(x)
Some implementations would use the weights of the input embeddings as weights of the decoder-linear-layer:
class LMPrediction(nn.Module):
def __init__(self, hidden_size, vocab_size, embeddings):
super().__init__()
self.decoder = nn.Linear(hidden_size, vocab_size, bias = False)
self.bias = nn.Parameter(torch.zeros(vocab_size))
self.decoder.weight = embeddings.weight ## <- THIS LINE
self.decoder.bias = self.bias
def forward(self, x):
return self.decoder(x)
Which one is correct? Mostly, I see the first implementation. However, the second one makes sense as well - but I cannot find it mentioned in any papers (I would like to see if the second version is somehow superior to the first one)
| For those who are interested, it is called weight tying or joint input-output embedding. There are two papers that argue for the benefit of this approach:
Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation
Using the Output Embedding to Improve Language Models
| https://stackoverflow.com/questions/66821321/ |
Pytorch gather question (3D Computer Vision) | I have N groups of C-dimension points. In each groups there are M points. So, there is a tensor of (N, M, C). Let's call it features.
I calculated the maximum element and the index through M dimension, to find the maximum points for each C dimension (a max pooling operation), resulting max tensor (N, 1, C) and index tensor (N, 1, C).
I have another tensor of shape (N, M, 3) storing the geometric coordinates of those N*M high-dimensional points. Now, I want to use the index from the maximum points in each C dimension to get the coordinates of all those maximum points.
For example, N=2, M=4, C=6.
The coordinate tensor, whose shape is (2, 4, 3):
[[[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
[8, 7, 6]]
[11, 12, 13]
[14, 15, 16]
[17, 18, 19]
[18, 17, 16]]]
The indices tensor, whose shape is (2, 1, 6):
[[[0, 1, 2, 1, 2, 3]]
[[1, 2, 3, 2, 1, 0]]]
For example, the first element in indices is 0, I want to grab [1, 2, 3] from the coordinate tensor out. For the second element (1), I want to grab [4, 5, 6] out. For the third element in the next dimension (3), I want to grab [18, 17, 16] out.
The result tensor will look like:
[[[1, 2, 3] # 0
[4, 5, 6] # 1
[7, 8, 9] # 2
[4, 5, 6] # 1
[7, 8, 9] # 2
[8, 7, 6]] # 3
[[14, 15, 16] # 1
[17, 18, 19] # 2
[18, 17, 16] # 3
[17, 18, 19] # 2
[14, 15, 16] # 1
[11, 12, 13]]]# 0
whose shape is (2, 6, 3).
I tried to use torch.gather but I cannot get it to work. I wrote a naive algorithm enumerating all N groups, but it is indeed slow, even using TorchScript's JIT. So, how can I write this efficiently in PyTorch?
| You can use integer array indexing combined with broadcasting semantics to get your result.
import torch
x = torch.tensor([
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[8, 7, 6]],
[[11, 12, 13],
[14, 15, 16],
[17, 18, 19],
[18, 17, 16]],
])
i = torch.tensor([[[0, 1, 2, 1, 2, 3]],
[[1, 2, 3, 2, 1, 0]]])
# rows is shape [2, 1], cols is shape [2, 6]
rows = torch.arange(x.shape[0]).type_as(i).unsqueeze(1)
cols = i.squeeze(1)
# y is [2, 6, ...]
y = x[rows, cols]
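A quick sanity check against the expected result from the question:
    print(y.shape)  # torch.Size([2, 6, 3])
    print(y[0, 0])  # tensor([1, 2, 3]), i.e. index 0 picked from the first group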
| https://stackoverflow.com/questions/66821436/ |
Huggingface error: AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id' | I am trying to tokenize some numerical strings using a WordLevel/BPE tokenizer, create a data collator and eventually use it in a PyTorch DataLoader to train a new model from scratch.
However, I am getting an error
AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'
when running the following code
from transformers import DataCollatorForLanguageModeling
from tokenizers import ByteLevelBPETokenizer
from tokenizers.pre_tokenizers import Whitespace
from torch.utils.data import DataLoader, TensorDataset
data = ['4814 4832 4761 4523 4999 4860 4699 5024 4788 <unk>']
# Tokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train_from_iterator(data, vocab_size=1000, min_frequency=1,
special_tokens=[
"<s>",
"</s>",
"<unk>",
"<mask>",
])
# Data Collator
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False
)
train_dataset = TensorDataset(torch.tensor(tokenizer(data, ......)))
# DataLoader
train_dataloader = DataLoader(
train_dataset,
collate_fn=data_collator
)
Is this error due to not having configured the pad_token_id for the tokenizer? If so, how can we do this?
Thanks!
Error trace:
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/opt/anaconda3/envs/x/lib/python3.8/site-packages/transformers/data/data_collator.py", line 351, in __call__
if self.tokenizer.pad_token_id is not None:
AttributeError: 'ByteLevelBPETokenizer' object has no attribute 'pad_token_id'
Conda packages
pytorch 1.7.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch-lightning 1.2.5 pyhd8ed1ab_0 conda-forge
tokenizers 0.10.1 pypi_0 pypi
transformers 4.4.2 pypi_0 pypi
| The error tells you that the tokenizer needs an attribute called pad_token_id. You can either wrap the ByteLevelBPETokenizer into a class with such an attribute (... and meet other missing attributes down the road) or use the wrapper class from the transformers library:
from transformers import PreTrainedTokenizerFast
#your code
tokenizer.save(tokenizer_path)
tokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_path)
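Depending on your use case you may also need to re-register the special tokens on the wrapper, e.g. (the token strings below are assumptions, pick the ones you trained with):
    tokenizer.add_special_tokens({'pad_token': '<pad>',
                                  'mask_token': '<mask>',
                                  'unk_token': '<unk>'})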
| https://stackoverflow.com/questions/66824985/ |
where should I change the code to tell PyTorch not to use the GPU? | As mentioned in How to tell PyTorch to not use the GPU?, in order to tell PyTorch not to use the GPU you should change a few lines inside PyTorch code.
Where should I make the change?
Where is the line of code that needs to be modified?
I tried to find it but couldn't...
| Calling the method .cpu() on any tensor or PyTorch module transfers that component to the CPU, so that computations involving it are performed there.
Another direction is to use the method .to("cpu"). Alternatively, you can replace "cpu" with the name of another device such as "cuda".
Example:
a)
model = MyModel().cpu() # move the model to the cpu
x = data.cpu() # move the input to the cpu
y = model(x)
b)
model = MyModel().to('cpu') # move the model to the cpu
x = data.to('cpu') # move the input to the cpu
y = model(x)
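A common device-agnostic pattern combines both ideas (sketch; MyModel and data are the placeholders from the examples above):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = MyModel().to(device)
    x = data.to(device)
    y = model(x)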
| https://stackoverflow.com/questions/66830732/ |
Pytorch Model Summary | I am trying to load a pytorch model using:
model = torch.load('/content/gdrive/model.pth.tar', map_location='cpu')
I want to check the model summary. When I try:
print(model)
I get the following output:
{'state_dict': {'model.conv1.weight': tensor([[[[ 2.0076e-02, 1.5264e-02, -1.2309e-02, ..., -4.0222e-02,
-4.0527e-02, -6.4458e-02],
[ 6.3291e-03, 3.8393e-03, 1.2400e-02, ..., -3.3926e-03,
-2.1063e-02, -3.4743e-02],
[ 1.9969e-02, 2.0064e-02, 1.4004e-02, ..., 8.7359e-02,
5.4801e-02, 4.8791e-02],
...,
[ 2.5362e-02, 1.1433e-02, -7.6776e-02, ..., -3.4798e-01,
-2.7198e-01, -1.2066e-01],
[ 8.0373e-02, 1.3095e-01, 1.4240e-01, ..., -2.2933e-03,
-1.0469e-01, -1.0922e-01],
[-1.1147e-03, 7.4572e-02, 1.2814e-01, ..., 1.6903e-01,
1.0619e-01, 2.4744e-02]],
'model.layer4.1.bn2.running_var': tensor([0.0271, 0.0155, 0.0199, 0.0198, 0.0132, 0.0148, 0.0182, 0.0170, 0.0134,
.
.
.
What does it even mean?
I also tried to use:
from torchsummary import summary
summary(model, input_size=(3, 224, 224))
But it gives me the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-31-ca828d30bd38> in <module>()
1 from torchsummary import summary
----> 2 summary(model, input_size=(3, 224, 224))
/usr/local/lib/python3.7/dist-packages/torchsummary/torchsummary.py in summary(model, input_size, batch_size, device)
66
67 # register hook
---> 68 model.apply(register_hook)
69
70 # make a forward pass
AttributeError: 'dict' object has no attribute 'apply'
Please note that model is a custom model that I am trying to load.
How can I get a model summary in Pytorch?
| You loaded the "*.pt" file, which is just a dictionary of the weights (depending on what you saved), without feeding it into a model. This is why you get the following output:
{'state_dict': {'model.conv1.weight': tensor([[[[ 2.0076e-02, 1.5264e-02, -1.2309e-02, ..., -4.0222e-02,
-4.0527e-02, -6.4458e-02],
[ 6.3291e-03, 3.8393e-03, 1.2400e-02, ..., -3.3926e-03,
-2.1063e-02, -3.4743e-02],
[ 1.9969e-02, 2.0064e-02, 1.4004e-02, ..., 8.7359e-02,
5.4801e-02, 4.8791e-02],
...,
[ 2.5362e-02, 1.1433e-02, -7.6776e-02, ..., -3.4798e-01,
-2.7198e-01, -1.2066e-01],
[ 8.0373e-02, 1.3095e-01, 1.4240e-01, ..., -2.2933e-03,
-1.0469e-01, -1.0922e-01],
[-1.1147e-03, 7.4572e-02, 1.2814e-01, ..., 1.6903e-01,
1.0619e-01, 2.4744e-02]],
'model.layer4.1.bn2.running_var': tensor([0.0271, 0.0155, 0.0199, 0.0198, 0.0132, 0.0148, 0.0182, 0.0170, 0.0134,
.
.
.
What you should do is:
model = TheModelClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['state_dict'])  # your checkpoint nests the weights under the 'state_dict' key
print(model)
You can refer to the pytorch doc
Regarding your second attempt, the same issue is causing the problem: summary expects a model, not a dictionary of the weights.
| https://stackoverflow.com/questions/66830798/ |
Pytorch embedding too big for GPU but fits in CPU | I am using PyTorch lightning, so lightning control GPU/CPU assignments and in
return I get easy multi GPU support for training.
I would like to create an embedding that does not fit in the GPU memory.
fit_in_cpu = torch.nn.Embedding(too_big_for_GPU, embedding_dim)
Then when I select the subset for a batch, send it to the GPU
GPU_tensor = embedding(idx)
How do I do this in Pytorch Lightning?
| Lightning will send anything that is registered as a model parameter to the GPU, i.e. the weights of layers (anything in torch.nn.*) and variables registered using torch.nn.parameter.Parameter.
However, if you want to declare something on the CPU and then move it to the GPU at runtime, there are two ways:
Create the too_big_for_GPU inside the __init__ without registering it as a model parameter (using torch.zeros or torch.randn or any other init function). Then move it to the GPU on the forward pass
class MyModule(pl.LightningModule):
def __init__(self):
super().__init__()
self.too_big_for_GPU = torch.zeros(4, 1000, 1000, 1000)
def forward(self, x):
# Move tensor to same GPU as x and operate with it
y = self.too_big_for_GPU.to(x.device) * x**2
return y
Create the too_big_for_GPU which will be created by default in CPU and then you would need to move it to GPU
class MyModule(pl.LightningModule):
def forward(self, x):
# Create the tensor on the fly and move it to x GPU
too_big_for_GPU = torch.zeros(4, 1000, 1000, 1000).to(x.device)
# Operate with it
y = too_big_for_GPU * x**2
return y
| https://stackoverflow.com/questions/66832708/ |
How to quickly inverse a permutation by using PyTorch? | I am confused on how to quickly restore an array shuffled by a permutation.
Example #1:
[x, y, z] shuffled by P: [2, 0, 1], we will obtain [z, x, y]
the corresponding inverse should be P^-1: [1, 2, 0]
Example #2:
[a, b, c, d, e, f] shuffled by P: [5, 2, 0, 1, 4, 3], then we will get [f, c, a, b, e, d]
the corresponding inverse should be P^-1: [2, 3, 1, 5, 4, 0]
I wrote the following code based on matrix multiplication (the transpose of a permutation matrix is its inverse), but this approach is too slow when I use it during my model training. Does a faster implementation exist?
import torch
n = 10
x = torch.Tensor(list(range(n)))
print('Original array', x)
random_perm_indices = torch.randperm(n).long()
perm_matrix = torch.eye(n)[random_perm_indices].t()
x = x[random_perm_indices]
print('Shuffled', x)
restore_indices = torch.Tensor(list(range(n))).view(n, 1)
restore_indices = perm_matrix.mm(restore_indices).view(n).long()
x = x[restore_indices]
print('Restored', x)
| I obtained the solution from the PyTorch Forum.
>>> import torch
>>> torch.__version__
'1.7.1'
>>> p1 = torch.tensor ([2, 0, 1])
>>> torch.argsort (p1)
tensor([1, 2, 0])
>>> p2 = torch.tensor ([5, 2, 0, 1, 4, 3])
>>> torch.argsort (p2)
tensor([2, 3, 1, 5, 4, 0])
Update:
The following solution is more efficient due to its linear time complexity:
def inverse_permutation(perm):
inv = torch.empty_like(perm)
inv[perm] = torch.arange(perm.size(0), device=perm.device)
return inv
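A quick sanity check for either approach (sketch):
    p = torch.randperm(10)
    inv = inverse_permutation(p)
    assert torch.equal(p[inv], torch.arange(10))  # applying p and then its inverse restores the order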
| https://stackoverflow.com/questions/66832716/ |
How can torch.cat only have one tensor? | I think there is an error in line 53 of the following code:
CycleGan, Buffer for Images
It says:
return_images = torch.cat(return_images, 0) # collect all the images and return
What do you think it should correctly read? After having gone through the code, I am unfortunately not sure what this specific line is supposed to do, but I think I understand the rest up to line 52.
| torch.cat's first argument is expected to be a sequence of tensors rather than a single tensor. So you pass in like:
torch.cat([tensor_1, tensor_2, tensor_3]) # the right way
instead of
torch.cat(tensor_1, tensor_2, tensor_3) # not the right way
In the code you linked, they are forming a list called return_images which contains many tensors in it.
np.concatenate has the same behaviour too and PyTorch designers probably mimicked the choice from there. ("cat" is short for "concatenate" which is hard to write!).
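A quick illustration:
    import torch

    a, b = torch.ones(2), torch.zeros(2)
    print(torch.cat([a, b]))     # tensor([1., 1., 0., 0.])
    print(torch.cat((a, b), 0))  # a tuple works just as well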
| https://stackoverflow.com/questions/66833911/ |
Initialize weight in pytorch neural net | I have created this neural net:
class _netD(nn.Module):
def __init__(self, num_classes=1, nc=1, ndf=64):
super(_netD, self).__init__()
self.num_classes = num_classes
# nc is number of channels
# num_classes is number of classes
# ndf is the number of output channel at the first layer
self.main = nn.Sequential(
# input is (nc) x 28 x 28
# conv2D(in_channels, out_channels, kernelsize, stride, padding)
nn.Conv2d(nc, ndf , 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 14 x 14
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 7 x 7
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 3 x 3
nn.Conv2d(ndf * 4, ndf * 8, 3, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 2 x 2
nn.Conv2d(ndf * 8, num_classes, 2, 1, 0, bias=False),
# out size = batch x num_classes x 1 x 1
)
if self.num_classes == 1:
self.main.add_module('prob', nn.Sigmoid())
# output = probability
else:
pass
# output = scores
def forward(self, input):
output = self.main(input)
return output.view(input.size(0), self.num_classes).squeeze(1)
I want to loop through the different layers and apply a weight initialization depending on the type of layer. I am trying to do the following:
D = _netD()
for name, param in D.named_parameters():
if type(param) == nn.Conv2d:
param.weight.normal_(...)
But that is not working. Can you please help me?
Thanks
| type(param) here will always be torch.nn.parameter.Parameter, for any kind of weight or bias in the model, so the comparison with nn.Conv2d can never succeed. Because named_parameters() doesn't return anything useful in the name either when used on an nn.Sequential-based model, you need to look at the modules to see which layers are specifically instances of the nn.Conv2d class, using isinstance as such:
for layer in D.modules():
if isinstance(layer, nn.Conv2d):
layer.weight.data.normal_(...)
Or, the way that is recommended by Soumith Chintala himself, actually just loop through your main module itself:
for layer in D.main:
if isinstance(layer, nn.Conv2d):
layer.weight.data.normal_(...)
I actually prefer the first because you don't have to specify the exact nn.sequential module itself, and will search all possible modules in the model, but either one should do the job for you.
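Equivalently, a common idiom is to write an init function and let nn.Module.apply recurse over all submodules for you (sketch; the mean/std values are assumptions):
    def weights_init(m):
        if isinstance(m, nn.Conv2d):
            m.weight.data.normal_(0.0, 0.02)

    D.apply(weights_init)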
| https://stackoverflow.com/questions/66846599/ |
How to quickly generate a random permutation moving each element with a distance less than K? | I know the following PyTorch API can perform a global random shuffle for 1D array [0, ... , n-1]:
torch.randperm(n)
but I'm confused about how to quickly generate a random permutation such that each element of the shuffled array satisfies:
K = 10 # should be positive
shuffled_array = rand_perm_func(n, K) # a mysterious function
for i in range(n):
print(abs(shuffled_array[i] - i) < K) # should be True for each i
which means that each element is moved a distance of less than K. Do fast implementations exist for 1D arrays and 2D arrays?
Thanks to @PM 2Ring, I wrote the following code:
import torch
# randomly produces a 1-D permutation index array,
# such that each element of the shuffled array has
# a distance less than K from its original location
def rand_perm(n, K):
o = torch.arange(0, n)
if K <= 1:
return o
while True:
p = torch.randperm(n)
d = abs(p - o) < K
if bool(d.all()):
return p
if __name__ == '__main__':
for i in range(10):
print(rand_perm(10, 2))
but it seems that when n is large and K is small, the generation takes a very long time. Does a more efficient implementation exist?
| Here's a recursive generator in plain Python (i.e. not using PyTorch or Numpy) that produces permutations of range(n) satisfying the given constraint.
First, we create a list out to contain the output sequence, setting each slot in out to -1 to indicate that slot is unused. Then, for each value i we create a list avail of indices in the permitted range that aren't already occupied. For each j in avail, we set out[j] = i and recurse to place the next i. When i == n, all the i have been placed, so we've reached the end of the recursion, and out contains a valid solution, which gets propagated back up the recursion tree.
from random import shuffle
def rand_perm_gen(n, k):
out = [-1] * n
def f(i):
if i == n:
yield out
return
lo, hi = max(0, i-k+1), min(n, i+k)
avail = [j for j in range(lo, hi) if out[j] == -1]
if not avail:
return
shuffle(avail)
for j in avail:
out[j] = i
yield from f(i+1)
out[j] = -1
yield from f(0)
def test(n=10, k=3, numtests=10):
for j, a in enumerate(rand_perm_gen(n, k), 1):
print("\n", j, a)
for i, u in enumerate(a):
print(f"{i}: {u} -> {(u - i)}")
if j == numtests:
break
test()
Typical output
1 [1, 0, 3, 2, 6, 5, 4, 8, 9, 7]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 8 -> 1
8: 9 -> 1
9: 7 -> -2
2 [1, 0, 3, 2, 6, 5, 4, 9, 8, 7]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 9 -> 2
8: 8 -> 0
9: 7 -> -2
3 [1, 0, 3, 2, 6, 5, 4, 9, 7, 8]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 9 -> 2
8: 7 -> -1
9: 8 -> -1
4 [1, 0, 3, 2, 6, 5, 4, 8, 7, 9]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 8 -> 1
8: 7 -> -1
9: 9 -> 0
5 [1, 0, 3, 2, 6, 5, 4, 7, 8, 9]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 7 -> 0
8: 8 -> 0
9: 9 -> 0
6 [1, 0, 3, 2, 6, 5, 4, 7, 9, 8]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 5 -> 0
6: 4 -> -2
7: 7 -> 0
8: 9 -> 1
9: 8 -> -1
7 [1, 0, 3, 2, 6, 7, 4, 5, 9, 8]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 7 -> 2
6: 4 -> -2
7: 5 -> -2
8: 9 -> 1
9: 8 -> -1
8 [1, 0, 3, 2, 6, 7, 4, 5, 8, 9]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 6 -> 2
5: 7 -> 2
6: 4 -> -2
7: 5 -> -2
8: 8 -> 0
9: 9 -> 0
9 [1, 0, 3, 2, 5, 6, 4, 7, 9, 8]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 5 -> 1
5: 6 -> 1
6: 4 -> -2
7: 7 -> 0
8: 9 -> 1
9: 8 -> -1
10 [1, 0, 3, 2, 5, 6, 4, 7, 8, 9]
0: 1 -> 1
1: 0 -> -1
2: 3 -> 1
3: 2 -> -1
4: 5 -> 1
5: 6 -> 1
6: 4 -> -2
7: 7 -> 0
8: 8 -> 0
9: 9 -> 0
Here's a live version running on SageMathCell.
This approach is faster than generating all permutations and filtering them, but it is still slow for large n. You can improve the speed by removing the shuffle call, in which case the yielded permutations are in lexicographic order.
If you just want a single solution, use next, eg
perm = next(rand_perm_gen(10, 3))
Note that all solutions share the same out list. So if you need to save those solutions in a list you have to copy them, eg
perms = [seq.copy() for seq in rand_perm_gen(5, 2)]
| https://stackoverflow.com/questions/66847769/ |
How to pad zeros on Batch, PyTorch | Is there a better way to do this? How do I pad a tensor with zeros, without creating a new tensor object? I need inputs to be of the same batch size all the time, so I want to pad inputs that are smaller than the batch size with zeros. It is like padding zeros in NLP when the sequence length is shorter, but here the padding is along the batch dimension.
Currently, I create a new tensor, but because of that, my GPU will go out of memory. I don't want to reduce batchsize by half to handle this operation.
import torch
from torch import nn
class MyModel(nn.Module):
def __init__(self, batchsize=16):
super().__init__()
self.batchsize = batchsize
def forward(self, x):
b, d = x.shape
print(x.shape) # torch.Size([7, 32])
if b != self.batchsize: # 2. I need batches to be of size 16, if batch isn't 16, I want to pad the rest to zero
new_x = torch.zeros(self.batchsize,d) # 3. so I create a new tensor, but this is bad as it increase the GPU memory required greatly
new_x[0:b,:] = x
x = new_x
b = self.batchsize
print(x.shape) # torch.Size([16, 32])
return x
model = MyModel()
x = torch.randn((7, 32)) # 1. shape's batch is 7, because this is last batch, and I dont want to "drop_last"
y = model(x)
print(y.shape)
| You can pad extra elements like so:
import torch.nn.functional as F
n = self.batchsize - b
new_x = F.pad(x, (0,0,n,0)) # pad the start of 2d tensors
new_x = F.pad(x, (0,0,0,n)) # pad the end of 2d tensors
new_x = F.pad(x, (0,0,0,0,0,n)) # pad the end of 3d tensors
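For a self-contained check of the second variant above (padding the end of a 2D tensor up to the batch size), here is a minimal sketch using the question's shapes:
import torch
import torch.nn.functional as F

batchsize = 16
x = torch.randn(7, 32)           # the smaller, final batch from the question
n = batchsize - x.shape[0]       # number of zero rows to add
padded = F.pad(x, (0, 0, 0, n))  # the pad spec runs from the last dimension backwards
print(padded.shape)              # torch.Size([16, 32])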
| https://stackoverflow.com/questions/66848738/ |
Interpreting the output tokenization of BERT for a given word | from bert_embedding import BertEmbedding
bert_embedding = BertEmbedding(model='bert_12_768_12', dataset_name='wiki_multilingual_cased')
output = bert_embedding("any")
I need clarification on the output of mBERT embeddings. I'm aware that WordPiece tokenization is used to break up the input text. Also I observed that on providing a single word (say "any") as input, the output has length equal to the number of characters in the input (in our case, 3). output[i] is a tuple of lists where the first list contains the character at ith position with the 'unknown' token preceding and following it as different elements in the array. Following this are three (= length of the input word) arrays (embeddings) of size 768 each. Why does the output seem to be tokenized character-wise (rather than wordpiece tokenized)?
I also found out that the output form changes when the input is given as a list: bert_embedding(["any"]). The output now is a single tuple with ['[UNK]', 'state', '[UNK]'] as the first element, followed by three different embeddings conceivably corresponding to the three tokens listed above.
If I need the embedding of the last subword (not simply of the last character or the whole word) for a given input word, how do I access it?
| Checked their GitHub page. About the input format: yes, it is expected as a list (of strings). Also, this particular implementation provides token (= word) level embeddings, so subword-level embeddings can't be retrieved directly, although it does give you a choice of how the word embeddings are derived from their subword components (taking the average, which is the default, taking the sum, or taking just the last subword embedding). Refer to the Huggingface interface for BERT for finer control over how the embeddings are taken, e.g. from which layers and using which operations.
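If I read the bert-embedding package correctly, that subword-combination choice is exposed through an oov_way argument on the call; treat the parameter name and its values ('avg', 'sum', 'last') as assumptions to verify against the package source:
from bert_embedding import BertEmbedding

bert_embedding = BertEmbedding(model='bert_12_768_12', dataset_name='wiki_multilingual_cased')
# 'oov_way' is an assumed parameter name; 'last' would keep only the final subword's vector
tokens, vectors = bert_embedding(["any"], oov_way='last')[0]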
| https://stackoverflow.com/questions/66849433/ |
Understanding input and output size for Conv2d | I'm learning image classification using PyTorch (using CIFAR-10 dataset) following this link.
I'm trying to understand the input & output parameters for the given Conv2d code:
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
My conv2d() understanding (Please correct if I am wrong/missing anything):
since image has 3 channels that's why first parameter is 3.
6 is no of filters (randomly chosen)
5 is kernel size (5, 5) (randomly chosen)
likewise we create next layer (previous layer output is input of this layer)
Now creating a fully connected layer using linear function:
self.fc1 = nn.Linear(16 * 5 * 5, 120)
16 * 5 * 5: here 16 is the number of output channels of the last conv2d layer, but what is the 5 * 5?
Is this the kernel size, or something else? How do we know whether to multiply by 5*5, 4*4, 3*3, and so on?
I researched and found that since the image size is 32*32 and max pool(2) is applied 2 times, the image size would go 32 -> 16 -> 8, so we should multiply by last_output_size * 8 * 8. But in this link it's 5*5.
Could anyone please explain?
| These are the dimensions of the image size itself (i.e. Height x Width).
Unpadded convolutions
Unless you pad your image with zeros, a convolutional filter will shrink the size of your output image by filter_size - 1 across the height and width:
a 3x3 filter takes a 5x5 image to a (5-(3-1)) x (5-(3-1)) = 3x3 image
Zero padding preserves image dimensions
You can add padding in Pytorch by setting Conv2d(padding=...).
Chain of transformations
Since it has gone through:
Layer                                  Shape Transformation
one conv layer (5x5, without padding)  (h, w) -> (h-4, w-4)
a MaxPool                              -> ((h-4)//2, (w-4)//2)
another conv layer (without padding)   -> ((h-12)//2, (w-12)//2)
another MaxPool                        -> ((h-12)//4, (w-12)//4)
a Flatten                              -> (16 * (h-12)//4 * (w-12)//4)
(The second conv subtracts another 4 from the already-pooled size: (h-4)//2 - 4 = (h-12)//2.)
We go from the original image size of (32,32) to (28,28) to (14,14) to (10,10) to (5,5), which the Flatten turns into 16 * 5 * 5 = 400 features.
To visualise this you can use the torchsummary package:
from torchsummary import summary
input_shape = (3,32,32)
summary(Net(), input_shape)
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 6, 28, 28] 456
MaxPool2d-2 [-1, 6, 14, 14] 0
Conv2d-3 [-1, 16, 10, 10] 2,416
MaxPool2d-4 [-1, 16, 5, 5] 0
Linear-5 [-1, 120] 48,120
Linear-6 [-1, 84] 10,164
Linear-7 [-1, 10] 850
================================================================
| https://stackoverflow.com/questions/66849867/ |
PyTorch: concat and flatten inputs of different shape | I have multiple inputs of different shapes: (7,), (), (6,). How can I concatenate and flatten them into a single flattened input? My desired output shape is (14,).
For example: arr1 = [1, 2, 3], arr2 = 6, arr3 = [6, 7], output = [1, 2, 3, 6, 6, 7]. I could use multiple numpy.append calls, but that would be ugly.
| You can use torch.cat:
import torch
arr1 = torch.tensor([1, 2, 3])
arr2 = torch.tensor([6])
arr3 = torch.tensor([6,7])
torch.cat((arr1,arr2,arr3))
>>> tensor([1, 2, 3, 6, 6, 7])
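One caveat beyond the original answer: if arr2 is a true 0-d tensor (shape (), as in the question), torch.cat will refuse it, so reshape it to 1-d first:
arr2 = torch.tensor(6)  # 0-d tensor
torch.cat((arr1, arr2.reshape(1), arr3))
>>> tensor([1, 2, 3, 6, 6, 7])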
| https://stackoverflow.com/questions/66850431/ |
Confusion when displaying an image from matplotlib.pyplot to tensorflow | I have this error: TypeError: Invalid shape (28, 28, 1) for image data
Here is my code:
import torch
import torchvision
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
from torchvision.utils import make_grid
from torch.utils.data.dataloader import DataLoader
from torch.utils.data import random_split
%matplotlib inline
# Load dataset
!wget www.di.ens.fr/~lelarge/MNIST.tar.gz
!tar -zxvf MNIST.tar.gz
from torchvision.datasets import MNIST
dataset = MNIST(root = './', train=True, download=True, transform=ToTensor())
#val_data = MNIST(root = './', train=False, download=True, transform=transform)
image, label = dataset[0]
print('image.shape:', image.shape)
plt.imshow(image.permute(1, 2, 0), cmap='gray') # HELP WITH THIS LINE
print('Label:', label)
I know that PyTorch processes images as C x H x W,
and that matplotlib expects H x W x C, yet when I change it to matplotlib's way, it gives me an error here. Am I missing something? Why does this happen?
| plt.imshow() expects 2D or 3D arrays. If the array has 3 dimensions then the last dimension should be 3 or 4. In your case the array has shape (28,28,1) and this is considered as a 3D array.
So the last dimension should be squeezed out in order to match imshow()'s requirements.
plt.imshow(np.squeeze(image.permute(1, 2, 0), axis = 2), cmap='gray')
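An equivalent one-liner (my addition, not part of the original answer) drops the leading channel dimension on the tensor itself:
plt.imshow(image.squeeze(0), cmap='gray')  # (1, 28, 28) -> (28, 28)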
| https://stackoverflow.com/questions/66851811/ |
Gradients vanishing despite using Kaiming initialization | I was implementing a conv block in PyTorch with an activation function (PReLU). I used Kaiming initialization to initialize all my weights and set all the biases to zero. However, as I tested these blocks (by stacking 100 such conv and activation blocks on top of each other), I noticed that I am getting output values of the order of 10^(-10). Is this normal, considering I am stacking up to 100 layers? Adding a small bias to each layer fixes the problem. But in Kaiming initialization the biases are supposed to be zero.
Here is the conv block code
from collections.abc import Iterable  # 'from collections import Iterable' is deprecated since Python 3.3
def convBlock(
input_channels, output_channels, kernel_size=3, padding=None, activation="prelu"
):
"""
Initializes a conv block using Kaiming Initialization
"""
padding_par = 0
if padding == "same":
padding_par = same_padding(kernel_size)
conv = nn.Conv2d(input_channels, output_channels, kernel_size, padding=padding_par)
relu_negative_slope = 0.25
act = None
if activation == "prelu" or activation == "leaky_relu":
nn.init.kaiming_normal_(conv.weight, a=relu_negative_slope, mode="fan_in")
if activation == "prelu":
act = nn.PReLU(init=relu_negative_slope)
else:
act = nn.LeakyReLU(negative_slope=relu_negative_slope)
if activation == "relu":
nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")
act = nn.ReLU()
nn.init.constant_(conv.bias.data, 0)
block = nn.Sequential(conv, act)
return block
def flatten(lis):
for item in lis:
if isinstance(item, Iterable) and not isinstance(item, str):
for x in flatten(item):
yield x
else:
yield item
def Sequential(args):
flattened_args = list(flatten(args))
return nn.Sequential(*flattened_args)
This is the test Code
ls=[]
for i in range(100):
ls.append(convBlock(3,3,3,"same"))
model=Sequential(ls)
test=np.ones((1,3,5,5))
model(torch.Tensor(test))
And the output I am getting is
tensor([[[[-1.7771e-10, -3.5088e-10, 5.9369e-09, 4.2668e-09, 9.8803e-10],
[ 1.8657e-09, -4.0271e-10, 3.1189e-09, 1.5117e-09, 6.6546e-09],
[ 2.4237e-09, -6.2249e-10, -5.7327e-10, 4.2867e-09, 6.0034e-09],
[-1.8757e-10, 5.5446e-09, 1.7641e-09, 5.7018e-09, 6.4347e-09],
[ 1.2352e-09, -3.4732e-10, 4.1553e-10, -1.2996e-09, 3.8971e-09]],
[[ 2.6607e-09, 1.7756e-09, -1.0923e-09, -1.4272e-09, -1.1840e-09],
[ 2.0668e-10, -1.8130e-09, -2.3864e-09, -1.7061e-09, -1.7147e-10],
[-6.7161e-10, -1.3440e-09, -6.3196e-10, -8.7677e-10, -1.4851e-09],
[ 3.1475e-09, -1.6574e-09, -3.4180e-09, -3.5224e-09, -2.6642e-09],
[-1.9703e-09, -3.2277e-09, -2.4733e-09, -2.3707e-09, -8.7598e-10]],
[[ 3.5573e-09, 7.8113e-09, 6.8232e-09, 1.2285e-09, -9.3973e-10],
[ 6.6368e-09, 8.2877e-09, 9.2108e-10, 9.7531e-10, 7.0011e-10],
[ 6.6954e-09, 9.1019e-09, 1.5128e-08, 3.3151e-09, 2.1899e-10],
[ 1.2152e-08, 7.7002e-09, 1.6406e-08, 1.4948e-08, -6.0882e-10],
[ 6.9930e-09, 7.3222e-09, -7.4308e-10, 5.2505e-09, 3.4365e-09]]]],
grad_fn=<PreluBackward>)
| Amazing question (and welcome to StackOverflow)! Research paper for quick reference.
TLDR
Try wider networks (64 channels)
Add Batch Normalization after activation (or even before, shouldn't make much difference)
Add residual connections (shouldn't improve much over batch norm, last resort)
Please check this out in this order and give a comment what (and if) any of that worked in your case (as I'm also curious).
Things you do differently
Your neural network is very deep, yet very narrow (81 parameters per layer only!)
Due to above, one cannot reliably create those weights from normal distribution as the sample is just too small.
Try wider networks, 64 channels or more
You are trying much deeper network than they did
Section: Comparison Experiments
We conducted comparisons on a deep but efficient model with 14 weight
layers (actually 22 was also tested in comparison with Xavier)
That was due to date of release of this paper (2015) and hardware limitations "back in the days" (let's say)
Is this normal?
Approach itself is quite strange with layers of this depth, at least currently;
each conv block is usually followed by activation like ReLU and Batch Normalization (which normalizes signal and helps with exploding/vanishing signals)
usually networks of this depth (even of depth half of what you've got) use also residual connections (though this is not directly linked to vanishing/small signal, more connected to degradation problem of even deep networks, like 1000 layers)
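To make suggestion 2 from the TLDR concrete, here is a minimal sketch of a block with Batch Normalization added after the activation (hyper-parameters mirror the question's code; an illustration, not a drop-in replacement):
import torch.nn as nn

def conv_bn_block(in_channels, out_channels, kernel_size=3):
    conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=kernel_size // 2)
    nn.init.kaiming_normal_(conv.weight, a=0.25, mode="fan_in")
    nn.init.zeros_(conv.bias)
    # BatchNorm re-normalizes the signal so it cannot shrink layer after layer
    return nn.Sequential(conv, nn.PReLU(init=0.25), nn.BatchNorm2d(out_channels))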
| https://stackoverflow.com/questions/66853692/ |
Legacy torchtext 0.9.0 | In the latest release of torchtext they moved a lot of features to torchtext.legacy. I want to do the same things, like using torchtext.legacy.data.Field and other features, without using legacy. Can that be done, and how?
| EDIT:
here is a release note about 0.9.0 version
here is the migration guide
Also, in the first link, there are counterparts for legacy Datasets.
Old answer (might be useful)
You could go for an alias, namely:
import torchtext.legacy as torchtext
But this is a bad idea for multiple reasons:
It became legacy for a reason (you can always change your existing code to torchtext.legacy.data.Field)
Very confusing - torchtext should be torchtext, not torchtext.legacy
Unable to import torchtext as... torchtext - because this alias is already taken.
You could also do some workarounds like assigning torchtext.legacy.data.Field to torchtext.data.Field and whatnot, but it is a horrible idea.
If you want to keep working with this legacy stuff, you can always stay with an older version of torchtext, like 0.8.0, and this would make sense
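For example (an illustration; make sure the pinned torchtext release is compatible with your installed torch version):
pip install torchtext==0.8.0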
| https://stackoverflow.com/questions/66854921/ |
Serialization of pytorch state_dict changes after loading into new model instance | Why do the bytes obtained from serializing pytorch state_dicts change after loading a state_dict into a new instance of the same model architecture?
Have a look:
import binascii
import torch.nn as nn
import pickle
lin1 = nn.Linear(1, 1, bias=False)
lin1s = pickle.dumps(lin1.state_dict())
print("--- original model ---")
print(f"hash of state dict: {hex(binascii.crc32(lin1s))}")
print(f"weight: {lin1.state_dict()['weight'].item()}")
lin2 = nn.Linear(1, 1, bias=False)
lin2.load_state_dict(pickle.loads(lin1s))
lin2s = pickle.dumps(lin2.state_dict())
print("\n--- model from deserialized state dict ---")
print(f"hash of state dict: {hex(binascii.crc32(lin2s))}")
print(f"weight: {lin2.state_dict()['weight'].item()}")
prints
--- original model ---
hash of state dict: 0x4806e6b6
weight: -0.30337071418762207
--- model from deserialized state dict ---
hash of state dict: 0xe2881422
weight: -0.30337071418762207
As you can see, the hashes of the (pickles of the) state_dicts are different whereas the weight is copied over correctly. I would assume that a state_dict from the new model equals the old one in every aspect. Seemingly, it does not, hence the different hashes.
| This might be because pickle is not expected to produce a repr suitable for hashing (See Using pickle.dumps to hash mutable objects). It might be a better idea to compare keys, and then compare tensors stored in the dict-keys for equality/closeness.
Below is a rough implementation of that idea.
def compare_state_dict(dict1, dict2):
# compare keys
for key in dict1:
if key not in dict2:
return False
for key in dict2:
if key not in dict1:
return False
for (k,v) in dict1.items():
        if not torch.all(torch.isclose(v, dict2[k])):
return False
return True
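Used with the question's two models, this check confirms the state dicts match even though their pickles differ:
print(compare_state_dict(lin1.state_dict(), lin2.state_dict()))  # True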
However, if you would still like to hash a state-dict and avoid using comparisons like isclose above, you can use a function like below.
def dict_hash(dictionary):
for (k,v) in dictionary.items():
# it did not work without hashing the tensor
dictionary[k] = hash(v)
# dictionaries are not hashable and need to be converted to frozenset.
return hash(frozenset(sorted(dictionary.items(), key=lambda x: x[0])))
| https://stackoverflow.com/questions/66860645/ |
Flask app serving GPT2 on Google Cloud Run not persisting downloaded files? | I have a Flask app running on Google Cloud Run, which needs to download a large model (GPT-2 from huggingface). This takes a while to download, so I am trying to set up so that it only downloads on deployment and then just serves this up for subsequent visits. That is I have the following code in a script that is imported by my main flask app app.py:
import torch
# from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import AutoTokenizer, AutoModelWithLMHead
# Disable gradient calculation - Useful for inference
torch.set_grad_enabled(False)
# Check if gpu or cpu
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load tokenizer and model
try:
tokenizer = AutoTokenizer.from_pretrained("./gpt2-xl")
model = AutoModelWithLMHead.from_pretrained("./gpt2-xl")
except Exception as e:
print('no model found! Downloading....')
AutoTokenizer.from_pretrained('gpt2').save_pretrained('./gpt2-xl')
AutoModelWithLMHead.from_pretrained('gpt2').save_pretrained('./gpt2-xl')
tokenizer = AutoTokenizer.from_pretrained("./gpt2-xl")
model = AutoModelWithLMHead.from_pretrained("./gpt2-xl")
model = model.to(device)
This basically tries to load the the downloaded model, and if that fails it downloads a new copy of the model. I have autoscaling set to a minimum of 1 which I thought would mean something would always be running and therefore the downloaded file would persist even after activity. But it keeps having to redownload the model which freezes up the app when some people try to use it. I am trying to recreate something like this app https://text-generator-gpt2-app-6q7gvhilqq-lz.a.run.app/ which does not appear to have the same load time issue . In the flask app itself I have the following:
@app.route('/')
@cross_origin()
def index():
prompt = wp[random.randint(0, len(wp)-1)]
res = generate(prompt, size=75)
generated = res.split(prompt)[-1] + '\n \n...TO BE CONTINUED'
#generated = prompt
return flask.render_template('main.html', prompt = prompt, output = generated)
if __name__ == "__main__":
app.run(host='0.0.0.0',
debug=True,
port=PORT)
But it seems to redownload the models every few hours...how can I avoid having the app re-downloading the models and the app freezing for those who want to try it?
| Data written to the filesystem does not persist when the container instance is stopped.
Cloud Run lifetime is the time between an HTTP Request and the HTTP response. Overlapped requests extend this lifetime. Once the final HTTP response is sent your container can be stopped.
Cloud Run instances can run on different hardware (clusters). One instance will not have the same temporary data as another instance. Instances can be moved. Your strategy of downloading a large file and saving it to the in-memory file system will not work consistently.
Filesystem access
Also note that the file system is in-memory, which means you need to have additional memory to store files.
| https://stackoverflow.com/questions/66873983/ |
How to extract individual JPEG images from a HDF5 file | I have a big HDF5 file with the images and its corresponding ground truth density map.
I want to put them into the network CRSNet and it requires the images in separate files.
How can I achieve that? Thank you very much.
-- Basic info I have a HDF5 file with two keys "images" and "density_maps". Their shapes are (300, 380, 676, 1).
300 stands for the number of images, 380 and 676 refer to the height and width respectively.
-- What I need to put into the CRSNet network are the images (jpg) with their corresponding HDF5 files. The shape of them would be (572, 945).
Thanks a lot for any comment and discussion!
| For starters, a quick clarification on h5py and HDF5. h5py is a Python package to read HDF5 files. You can also read HDF5 files with the PyTables package (and with other languages: C, C++, FORTRAN).
I'm not entirely sure what you mean by "the images (jpg) with their corresponding h5py (HDF5) files" As I understand all of your data is in 1 HDF5 file. Also, I don't understand what you mean by: "The shape of them would be (572, 945)." This is different from the image data, right? Please update your post to clarify these items.
It's relatively easy to extract data from a dataset. This is how you can get the "images" as NumPy arrays and use cv2 to write them as individual jpg files. See code below:
import h5py
import cv2

with h5py.File('yourfile.h5','r') as h5f:
    for i in range(h5f['images'].shape[0]):
        img_arr = h5f['images'][i,:] # slice notation gets [i,:,:,:]
        cv2.imwrite(f'test_img_{i:03}.jpg',img_arr)
Before you start coding, are you sure you need the images as individual image files, or individual image data (usually NumPy arrays)? I ask because the first step in most CNN processes is reading the images and converting them to arrays for downstream processing. You already have the arrays in the HDF5 file. All you may need to do is read each array and save to the appropriate data structure for CRSNet to process them. For example, here is the code to create a list of arrays (used by TensorFlow and Keras):
image_list = []
with h5py.File('yourfile.h5','r') as h5f:
for i in range(h5f['images'].shape[0]):
image_list.append( h5f['images'][i,:] ) # gets slice [i,:,:,:]
| https://stackoverflow.com/questions/66874324/ |
AllenNLP Multi-Task Model: Keep encoder weights for new heads | I have trained a (AllenNLP) multi-task model. I would like to keep the encoder/backbone weights and continue training with new heads on new datasets. How can I do that with AllenNLP?
I have two basic ideas for how to do that:
I followed this AllenNLP tutorial to load the trained model and then instead of just making predictions I wanted to change the configuration and the model-heads to continue training on the new datasets...but I am kinda lost in how to do that.
I guess it should be possible to (a) save the state-dict of the previously trained encoder in a file and then (b) point to those weights in the configuration file for the new model (instead of pointing to "bert-base-cased"-weights for example). But looking at the PretrainedTransformerEmbedder-class I don't see how I could pass my own model-weights to that class.
As an additional question: Is it also possible to save the weights of the heads separately and initialize new heads with those weights?
Any help is appreciated :)
| Your second idea is the preferred way, which you can accomplish by using a PretrainedModelInitializer. See the CopyNet model for an example of how to add this to your model.
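As a rough sketch, the relevant bit of a training config might look like the following (the regex and path are placeholders, and the exact shape is an assumption; check the CopyNet example for the canonical form):
"initializer": {
    "regexes": [
        ["_encoder.*", {"type": "pretrained", "weights_file_path": "/path/to/previous/model/best.th"}]
    ]
}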
| https://stackoverflow.com/questions/66874397/ |
Receiving coordinates from inference Pytorch | I'm trying to get the coordinates of the pixels inside of a mask that is generated by Pytorches DefaultPredictor, to later on get the polygon corners and use this in my application.
However, DefaultPredictor produced a tensor of pred_masks, in the following format: [False, False ... False], ... [False, False, .. False]
Where the length of each individual list is length of the image, and the number of total lists is the height of the image.
Now, as I need to get the pixel coordinates that are inside of the mask, the simple solution seemed to be looping through the pred_masks, checking the value and if == "True" creating tuples of these and adding them to a list. However, as we are talking about images with width x height of about 3200 x 1600, this is a relatively slow process (~4 seconds to loop through a single 3200x1600, yet as there are quite some objects for which I need to get the inference in the end - this will end up being incredibly slow).
What would be a smarter way to get the coordinates (mask) of the detected object using the PyTorch (detectron2) model?
Please find my code below for reference:
from __future__ import print_function
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.datasets import register_coco_instances
import cv2
import time
# get image
start = time.time()
im = cv2.imread("inputImage.jpg")
# Create config
cfg = get_cfg()
cfg.merge_from_file("detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # Set threshold for this model
cfg.MODEL.WEIGHTS = "model_final.pth" # Set path model .pth
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
cfg.MODEL.DEVICE='cpu'
register_coco_instances("dataset_test",{},"testval.json","Images_path")
test_metadata = MetadataCatalog.get("dataset_test")
# Create predictor
predictor = DefaultPredictor(cfg)
# Make prediction
outputs = predictor(im)
#Loop through the pred_masks and check which ones are equal to TRUE, if equal, add the pixel values to the true_cords_list
outputnump = outputs["instances"].pred_masks.numpy()
true_cords_list = []
x_length = range(len(outputnump[0][0]))
#y kordinaat on range number
for y_cord in range(len(outputnump[0])):
#x cord
for x_cord in x_length:
if str(outputnump[0][y_cord][x_cord]) == "True":
inputcoords = (x_cord,y_cord)
true_cords_list.append(inputcoords)
print(str(true_cords_list))
end = time.time()
print(f"Runtime of the program is {end - start}") # 14.29468035697937
//
EDIT:
After partially changing the for loop to use itertools.compress, I've managed to reduce the runtime of the loop by ~3x - however, ideally I would like to get this from the predictor itself if possible.
from itertools import compress

y_length = len(outputnump[0])
x_length = len(outputnump[0][0])
true_cords_list = []
for y_cord in range(y_length):
x_cords = list(compress(range(x_length), outputnump[0][y_cord]))
if x_cords:
for x_cord in x_cords:
inputcoords = (x_cord,y_cord)
true_cords_list.append(inputcoords)
| The problem is easily solvable with sufficient knowledge about NumPy or PyTorch native array handling, which allows 100x speedups compared to Python loops. You can study the NumPy library, and PyTorch tensors are similar to NumPy in behaviour.
How to get indices of values in NumPy:
import numpy as np
arr = np.random.rand(3,4) > 0.5
ind = np.argwhere(arr)[:, ::-1]
print(arr)
print(ind)
In your particular case this will be
ind = np.argwhere(outputnump[0])[:, ::-1]
How to get indices of values in PyTorch:
import torch
arr = torch.rand(3, 4) > 0.5
ind = arr.nonzero()
ind = torch.flip(ind, [1])
print(arr)
print(ind)
[::-1] and .flip are used to inverse the order of coordinates from (y, x) to (x, y).
NumPy and PyTorch even allow checking simple conditions and getting the indices of values that meet these conditions, for further understanding see the according NumPy docs article
When asking, you should provide links for your problem context. This question is actually about Facebook object detector, where they provide a nice demo Colab notebook.
| https://stackoverflow.com/questions/66880163/ |
How to get the partial derivative of probability to input in pyTorch? | I want to generate attack samples via the following steps:
Find a pre-trained CNN classification model, whose input is X and output is P(y|X), and the most possible result of X is y.
I want to input X' and get y_fool, where X' is not far away from X and y_fool is not equal to y
The steps for getting X' are described in an image (omitted here).
How can I get the partial derivative described in the image?
Here is my code but I got None: (The model is Vgg16)
x = torch.autograd.Variable(image, requires_grad=True)
output = model(image)
prob = nn.functional.softmax(output[0], dim=0)
prob.backward(torch.ones(prob.size()))
print(x.grad)
How should I modify my codes? Could someone help me? I would be absolutely grateful.
| Here, the point is to backpropagate a "false" example through the network, in other words you need to maximize one particular coordinate of your output which does not correspond to the actual label of x.
Let's say for example that your model outputs N-dimensional vectors, that x label should be [1, 0, 0, ...] and that we will try to make the model actually predict [0, 1, 0, 0, ...] (so y_fool actually has its second coordinate set to 1, instead of the first one).
Quick note on the side : Variable is deprecated, just set the requires_grad flag to True. So you get :
x = torch.tensor(image, requires_grad=True)
output = model(x)
# If the model is well trained, prob_vector[1] should be almost 0 at the beginning
prob_vector = nn.functional.softmax(output, dim=0)
# We want to fool the model and maximize this coordinate instead of prob_vector[0]
fool_prob = prob_vector[1]
# fool_prob is a scalar tensor, so we can backward it easy
fool_prob.backward()
# and you should have your gradients :
print(x.grad)
After that, if you want to use an optimizer in your loop to modify x, remember that pytorch optimizer.step method tries to minimize the loss, whereas you want to maximize it. So either you use a negative learning rate or you change the backprop sign :
# Maximizing a scalar is minimizing its opposite
(-fool_prob).backward()
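From there, a single illustrative gradient step on the input could look like this (epsilon is a hypothetical step size, and this assumes fool_prob.backward() rather than the negated variant was called):
epsilon = 0.01
with torch.no_grad():
    x += epsilon * x.grad.sign()  # ascend: increase the fooling probability
    x.grad.zero_()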
| https://stackoverflow.com/questions/66882814/ |
torchvision.io cannot find reference read_image() in __init.py__ | I am trying to import read_image from torchvision.io
and when i hover the error it says
torchvision.io cannot find reference 'read_image' in '__init.py__'
from torchvision.io import read_image
I am following this example
https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
from torchvision.io import read_image ImportError: cannot import name 'read_image' from 'torchvision.io' (C:\Users\X\.conda\envs\Pytorch37\lib\site-packages\torchvision\io\__init__.py)
This is the error I'm getting.
| You need to upgrade your torchvision (and the matching PyTorch version); read_image was only added in a more recent torchvision release
# This command is for my system/CUDA setup; adjust it to your environment
pip install torch torchvision torchaudio
| https://stackoverflow.com/questions/66883963/ |
PyTorch tensors: new tensor based on old tensor and indices | I'm new to tensors and having a headache over this problem:
I have an index tensor of size k with values between 0 and k-1:
tensor([0,1,2,0])
and the following matrix:
tensor([[[0, 9],
[1, 8],
[2, 3],
[4, 9]]])
I want to create a new tensor which contains the rows specified in index, in that order. So I want:
tensor([[[0, 9],
[1, 8],
[2, 3],
[0, 9]]])
Outside tensors I'd do this operation more or less like this:
new_matrix = [matrix[i] for i in index]
How do I do something similar in PyTorch on tensors?
| You use fancy indexing:
from torch import tensor
index = tensor([0,1,2,0])
t = tensor([[[0, 9],
[1, 8],
[2, 3],
[0, 9]]])
result = t[:, index, :]
to get
tensor([[[0, 9],
[1, 8],
[2, 3],
[0, 9]]])
Note that t.shape == (1, 4, 2) and you want to index on the second axis; so we apply it in the second argument and keep the rest the same via :s i.e. [:, index, :].
| https://stackoverflow.com/questions/66885064/ |
create representation of questions using LSTM via a pre-trained word embedding such as GloVe | I am new to LSTMs and Python. My goal is to represent a sentence using an LSTM.
Could you tell me whether I am doing this right, and how to fix the error I get when running the following code?
"TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not str"
import torch
import torch.nn as nn
import numpy as np
from torch import optim
from torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence
import torchvision.datasets as datasets # Standard datasets
import torchvision.transforms as transforms
import json
class RNN_LSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes, vocab_size,
lstm_dropout, device, word_emb_file):
super(RNN_LSTM, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm_dropout = lstm_dropout
self.lstm_drop = nn.Dropout(p=self.lstm_dropout)
self.word_emb_file = word_emb_file
self.device = device
# initialize text embeddings
self.word_embeddings = nn.Embedding(vocab_size, input_size)
self.word_embeddings.weight = nn.Parameter(
torch.from_numpy(
np.pad(np.load(self.word_emb_file), ((0, 1), (0, 0)), 'constant')).type(
'torch.FloatTensor'))
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
def forward(self, sentence, question_len):
embeds = self.word_embeddings(sentence)
packed_output = pack_padded_sequence(embeds, question_len, batch_first=True)
outputs, (hidden, cell_state) = self.lstm(packed_output)
outputs, outputs_length = pad_packed_sequence(outputs, batch_first=True)
outputs = torch.cat([hidden[0,:,:], hidden[1,:,:]], dim=-1)
return outputs
lstm_dropout = 0.3
input_size = 300
hidden_size = 256
num_classes = 10
num_layers = 1
device = 'cpu'
vocab_size = 2000
word_emb_file = "/home/project/word_emb_300d.npy"
model = RNN_LSTM(input_size, hidden_size, num_layers, num_classes, vocab_size, lstm_dropout, device, word_emb_file)
model.word_embeddings('Blue Skye')
| Please see torch embedding tutorial and use embedding with keras for knowledge about word embeddings.
Basically nn.Embedding(vocab_size, embed_size) is a matrix vocab_size*embed_size, where each row corresponds to a word's representation.
In order to know which word correspond to which row you should define a vocabulary that can transform a word into an index (for example a python dictionary {'hello': 0, 'word': 1}).
And before that to transform a sentence into word (in order to compute the vocabulary) you need to tokenize the sentence (using nltk.word_tokenize or str.split() for example).
Note that torch.nn.Embedding expects a Tensor, so if you want to process multiple sentences in batches and the sentences have different lengths, you will need to pad them with a padding token in order to fit them into a Tensor.
# This is pseudo code (model is assumed to be the question's RNN_LSTM)
# Preprocess data
documents = ['first sentence', 'the second sentence']
tokenized_documents = [d.split(' ') for d in documents]
# Create vocabulary and add a special token for padding
words = [w for d in tokenized_documents for w in d]
vocabulary = {w: i+1 for i, w in enumerate(set(words))}
vocabulary['PAD'] = 0
indexed_documents = [torch.tensor([vocabulary[w] for w in d]) for d in tokenized_documents]
# indexed_documents will look like: [tensor([1, 2]), tensor([3, 4, 2])] (exact indices depend on set order)
padded_documents = torch.nn.utils.rnn.pad_sequence(
    indexed_documents,
    batch_first=True,
    padding_value=vocabulary['PAD'])
# Data can be fed to the neural network
model.word_embeddings(padded_documents)
| https://stackoverflow.com/questions/66888011/ |
New tensor based on old tensor and 2d indices | I previously asked:
PyTorch tensors: new tensor based on old tensor and indices
I have the same problem now but need to use a 2d index tensor.
I have a tensor col of size [batch_size, k] with values between 0 and k-1:
idx = tensor([[0,1,2,0],
[0,3,2,2],
...])
and the following matrix:
x = tensor([[[0, 9],
[1, 8],
[2, 3],
[4, 9]],
[[0, 0],
[1, 2],
[3, 4],
[5, 6]]])
I want to create a new tensor which contains the rows specified in index, in that order. So I want:
tensor([[[0, 9],
[1, 8],
[2, 3],
[0, 9]],
[[0, 0],
[5, 6],
[3, 4],
[3, 4]]])
Currently I'm doing it like this:
for i, batch in enumerate(t):
t[i] = batch[col[i]]
How can I do it more efficiently?
| You should use torch.gather to achieve this. It would actually also work for the other question you linked, but this is left as an exercise to the reader :p
Let us call idx your first tensor and source the second one. Their respective dimensions are (B,N) and (B, K, p) (with p=2 in your example), and all values of idx are between 0 and K-1.
So to use torch gather, we first need to express your operation as a nested for loop. In your case, what you actually want to achieve is :
for b in range(B):
for i in range(N):
for j in range(p):
# This kind of nested for loops is what torch.gether actually does
target[b,i,j] = source[b, idx[b,i,j], j]
But that does not work because idx is a 2D tensor, not a 3D one. Well, no big deal, let's make it a 3D tensor. We want it to have shape (B, N, p) and be actually constant along the last dimension. Then we can replace the for loop with a call to gather:
reshaped_idx = idx.unsqueeze(-1).repeat(1,1,2)
target = source.gather(1, reshaped_idx)
# or : target = torch.gather(source, 1, reshaped_idx)
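Checked against the question's tensors (with source being the question's x), this produces exactly the desired output:
reshaped_idx = idx.unsqueeze(-1).repeat(1, 1, 2)  # (B, N) -> (B, N, p) with p=2
target = x.gather(1, reshaped_idx)
# tensor([[[ 0,  9], [ 1,  8], [ 2,  3], [ 0,  9]],
#         [[ 0,  0], [ 5,  6], [ 3,  4], [ 3,  4]]])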
| https://stackoverflow.com/questions/66888836/ |
Size Mismatch for Functional Linear Layer | I apologize that this is probably a simple question that has been answered before, but I could not find the answer. I'm attempting to use a CNN to extract features and then input that into a FC network that outputs 2 variables. I'm attempting to use the functional linear layer as a way to dynamically handle the flattened features. The self.cnn is a Sequential container whose last layer is the nn.Flatten(). When I print the size of x after the CNN I see it is 15x152064, so I'm unclear why the F.linear layer is failing to run with the error below. Any help would be appreciated.
RuntimeError: size mismatch, get 15, 15x152064,2
x = self.cnn(x)
batch_size, channels = x.size()
x = F.linear(x, torch.Tensor([256,channels]))
y_hat = self.FC(x)
| torch.Tensor([256, channels]) does not create a tensor of size (256, channels) but the 1D tensor containing the values 256 and channels instead. I don't know how you want to initialize your weights, but there are a couple options :
# Identity transform:
x = F.linear(x, torch.ones(256,channels))
# Random transform :
x = F.linear(x, torch.randn(256,channels))
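Note also (an addition beyond the original answer): if the goal is a trainable layer whose input width is inferred at first use, recent PyTorch versions offer nn.LazyLinear, which avoids building the weight by hand:
self.fc_in = torch.nn.LazyLinear(256)  # in_features is inferred on the first forward pass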
| https://stackoverflow.com/questions/66891161/ |
PyTorch shows the error " 'NoneType' object has no attribute 'zero_' " when calling the zero_ method | Can someone please answer why my code is showing an error.
Thanks in advance.
Code:
import torch
torch.manual_seed(0)
a = torch.rand((1, 3), requires_grad = True)
w1 = torch.rand((3, 3), requires_grad = True)
w2 = torch.rand((3, 1), requires_grad = True)
d = torch.matmul(torch.matmul(a, w1), w2)
L = (10 - d)
L.backward()
w1 = w1 - w1.grad*0.001
w1.grad.zero_()
Error:
AttributeError
'NoneType' object has no attribute 'zero_'
| The line
w1 = w1 - w1.grad*0.001
is reassigning w1, so afterwards w1 no longer refers to the same tensor it did before. To maintain all the internal state of w1 (e.g. the .grad member) you must update w1 in place. Since this is a leaf tensor we also need to disable construction of the computation graph. (After this in-place update, w1.grad remains attached, so the original w1.grad.zero_() call will then work as intended.)
with torch.no_grad():
w1.sub_(w1.grad * 0.001)
| https://stackoverflow.com/questions/66891401/ |
Torch is not saving my freezed and optimized model | When I start my script it runs fine until it hits the traced_model.save(args.save_path) statement; after that, the script just stops running.
Could someone please help me out with this?
import argparse
import torch
from model import SpeechRecognition
from collections import OrderedDict
def trace(model):
model.eval()
x = torch.rand(1, 81, 300)
hidden = model._init_hidden(1)
traced = torch.jit.trace(model, (x, hidden))
return traced
def main(args):
print("loading model from", args.model_checkpoint)
checkpoint = torch.load(args.model_checkpoint, map_location=torch.device('cpu'))
h_params = SpeechRecognition.hyper_parameters
model = SpeechRecognition(**h_params)
model_state_dict = checkpoint['state_dict']
new_state_dict = OrderedDict()
for k, v in model_state_dict.items():
name = k.replace("model.", "") # remove `model.`
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
print("tracing model...")
traced_model = trace(model)
print("saving to", args.save_path)
traced_model.save(args.save_path)
print("Done!")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="testing the wakeword engine")
parser.add_argument('--model_checkpoint', type=str, default='your/checkpoint_file', required=False,
help='Checkpoint of model to optimize')
parser.add_argument('--save_path', type=str, default='path/where/you/want/to/save/the/model', required=False,
help='path to save optmized model')
args = parser.parse_args()
main(args)
If you start the script you can even see where it stops working because print("Done!") is not executed.
Here is what it looks in the terminal when I run the script:
loading model from C:/Users/supre/Documents/Python Programs/epoch=0-step=11999.ckpt
tracing model...
saving to C:/Users/supre/Documents/Python Programs
| According to the PyTorch documentation, a common PyTorch convention is to save models using either a .pt or .pth file extension.
To save model checkpoints or multiple components, organize them in a dictionary and use torch.save() to serialize the dictionary.
For example,
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
A common PyTorch convention is to save these checkpoints using the .tar file extension.
Hope this answers your question.
| https://stackoverflow.com/questions/66892593/ |
Neural Network on GPU using system RAM instead of GPU memory | I built a basic chatbot using PyTorch, and in the training code, I moved both the neural network as well as the training data to the gpu. However, when I run the program, it uses up to 2GB of my ram. There is a little gpu memory that is used, but not that much. When i run the same program, but this time on the cpu, it takes only about 900mb of my ram.
Can anyone tell me why this is happening? I have attached my code, as well as a couple of screenshots.
Sorry if the answer is pretty obvious, I'm new to deep learning.
My code:
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import numpy as np
from nltk_utils import tokenizer, stem, bag_of_words
import pandas as pd
device = torch.device("cuda")
#Neural Network
class NeuralNetwork(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
self.l1 = nn.Linear(input_size, hidden_size)
self.l2 = nn.Linear(hidden_size, hidden_size)
self.l3 = nn.Linear(hidden_size, hidden_size)
self.l4 = nn.Linear(hidden_size, num_classes)
self.relu = nn.ReLU()
def forward(self, x):
out = self.relu(self.l1(x))
out = self.relu(self.l2(out))
out = self.relu(self.l3(out))
out = self.l4(out)
return out
#data initialization
data = pd.read_csv("path_to_train_data")
data = data.dropna(axis=0)
allwords = []
taglist = []
xy = []
ignorewords = ["?", "!", "'", ","]
taglist.extend(x for x in data["tag"] if x not in taglist)
#developing vocabulary
for x in data["pattern"]:
w = tokenizer(x)
allwords.extend(stem(y) for y in w if y not in ignorewords)
#making training data
for indx, x in enumerate(data["pattern"]):
w = tokenizer(x)
bag = bag_of_words(w, allwords)
tag = taglist.index(data["tag"].iloc[indx])
xy.append((bag, tag))
xtrain = np.array([x[0] for x in xy])
ytrain = np.array([x[1] for x in xy])
class TestDataset(Dataset):
def __init__(self):
self.num_classes = len(xtrain)
self.xdata = torch.from_numpy(xtrain.astype(np.float32))
self.ydata = torch.from_numpy(ytrain.astype(np.float32))
def __getitem__(self, index):
return self.xdata[index], self.ydata[index]
def __len__(self):
return self.num_classes
dataset = TestDataset()
train_data = DataLoader(dataset=dataset,
batch_size=8,
shuffle=True,
num_workers=0,
pin_memory=True)
inputSize = len(xtrain[0])
hiddenSize = 32
outputSize = len(taglist)
model = NeuralNetwork(inputSize, hiddenSize, outputSize).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
epochs = 5000
for epoch in range(epochs):
for (words, labels) in train_data:
words = words.to(device)
labels = labels.type(torch.LongTensor)
labels = labels.to(device)
y_pred = model(words)
loss = criterion(y_pred, labels)
optimizer.zero_grad(set_to_none=True)
loss.backward()
optimizer.step()
with torch.no_grad():
if (epoch + 1) % 10 == 0:
print(f"epoch: {epoch + 1}, loss: {loss:.30f}")
While running on GPU: (screenshots of GPU and RAM utilization omitted)
While running on CPU: (screenshot of RAM utilization omitted)
| RAM is where your data gets staged before being processed; batches are then transferred to SRAM, otherwise known as cache, the memory nearest to the CPU (referred to as the host). The general rule for choosing the amount of RAM for a system is that it should be equal to or greater than the GPU memory.
For example: if the GPU memory is 8GB, you need 8GB or more of RAM to ensure optimal performance.
So, when running on a GPU (referred to as the device), your data is staged in host RAM and then transferred to the GPU to perform the tensor operations, which is why RAM usage grows; on the CPU there is no device to feed, so less RAM is used.
You can try it out with different batch_sizes and observe the RAM usage. Note that if your batch of data does not fit in GPU memory, you will see a CUDA out of memory error.
| https://stackoverflow.com/questions/66899430/ |
Select specific indexes of 3D Pytorch Tensor using a 1D long tensor that represents indexes | So I have a tensor that is M x B x C, where M is the number of models, B is the batch and C is the classes and each cell is the probability of a class for a given model and batch. Then I have a tensor of the correct answers which is just a 1D of size B we'll call "t". How do I use the 1D of size B to just return a M x B x 1, where the returned tensor is just the value at the correct class? Say the M x B x C tensor is called "blah" I've tried
blah[:, :, C]
for i in range(M):
blah[i, :, C]
blah[:, C, :]
The top 2 just return the values of indexes t in the 3rd dimension of every slice. The last one returns the values at t indexes in the 2nd dimension. How do I do this?
| We can get the desired result by combining advanced and basic indexing
import torch
# shape [2, 3, 4]
blah = torch.tensor([
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
# shape [3]
t = torch.tensor([2, 1, 0])
b = torch.arange(blah.shape[1]).type_as(t)
# shape [2, 3, 1]
result = blah[:, b, t].unsqueeze(-1)
which results in
>>> result
tensor([[[ 2],
[ 5],
[ 8]],
[[14],
[17],
[20]]])
| https://stackoverflow.com/questions/66900676/ |
How to run LSTM on very long sequence using Truncated Backpropagation in Pytorch (lightning)? | I have a very long time series I want to feed into an LSTM for classification per-frame.
My data is labeled per frame, and I know some rare events happen that influence the classification heavily ever since they occur.
Thus, I have to feed the entire sequence to get meaningful predictions.
It is known that just feeding very long sequences into LSTM is sub-optimal, since the gradients vanish or explode just like normal RNNs.
I wanted to use a simple technique of cutting the sequence to shorter (say, 100-long) sequences, and run the LSTM on each, then pass the final LSTM hidden and cell states as the start hidden and cell state of the next forward pass.
Here is an example I found of someone who did just that. There it is called "Truncated Back propagation through time". I was not able to make the same work for me.
My attempt in Pytorch lightning (stripped of irrelevant parts):
def __init__(self, config, n_classes, datamodule):
...
self._criterion = nn.CrossEntropyLoss(
reduction='mean',
)
num_layers = 1
hidden_size = 50
batch_size=1
self._lstm1 = nn.LSTM(input_size=len(self._in_features), hidden_size=hidden_size, num_layers=num_layers, batch_first=True)
self._log_probs = nn.Linear(hidden_size, self._n_predicted_classes)
self._last_h_n = torch.zeros((num_layers, batch_size, hidden_size), device='cuda', dtype=torch.double, requires_grad=False)
self._last_c_n = torch.zeros((num_layers, batch_size, hidden_size), device='cuda', dtype=torch.double, requires_grad=False)
def training_step(self, batch, batch_index):
orig_batch, label_batch = batch
n_labels_in_batch = np.prod(label_batch.shape)
lstm_out, (self._last_h_n, self._last_c_n) = self._lstm1(orig_batch, (self._last_h_n, self._last_c_n))
log_probs = self._log_probs(lstm_out)
loss = self._criterion(log_probs.view(n_labels_in_batch, -1), label_batch.view(n_labels_in_batch))
return loss
Running this code gives the following error:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
The same happens if I add
def on_after_backward(self) -> None:
self._last_h_n.detach()
self._last_c_n.detach()
The error does not happen if I use
lstm_out, (self._last_h_n, self._last_c_n) = self._lstm1(orig_batch,)
But obviously this is useless, as the output from the current frame-batch is not forwarded to the next one.
What is causing this error? I thought detaching the output h_n and c_n should be enough.
How do I pass the output of a previous frame-batch to the next one and have torch back propagate each frame batch separately?
| Apparently, I missed the trailing _ for detach():
Using
def on_after_backward(self) -> None:
self._last_h_n.detach_()
self._last_c_n.detach_()
works.
The problem was that self._last_h_n.detach() does not rebind the attribute to the new tensor returned by detach(), so the graph still references the old variable that backprop went through.
The reference answer solved that by H = H.detach().
Cleaner (and probably faster) is self._last_h_n.detach_() which does the operation in place.
| https://stackoverflow.com/questions/66902573/ |
NumPy: pass subset of multi-dimensional indices to array | Suppose I have an array, and a (sub)set of indices:
arr = np.array([[[0,1,2],
[3,4,5],
[6,7,8]],
[[9, 10,11],
[12,13,14],
[15,16,17]]])
idx = [1,2]
And I wish to get, for each element of dim=0 of the array, the respective idx slice, i.e.:
>>> arr[:, idx[0], idx[1]]
array([ 5, 14])
Is there a way to do such slicing without hard-coding each index? Something like:
ar[:, *idx]
Note, the following is my current workaround:
idx = [slice(a.shape[0]), *idx]
a[idx]
but I was wondering if NumPy (or PyTorch, or a related library) supported a more 'elegant'/natural multi-dimensional indexing syntax for such cases.
| I think your workaround is pretty much as close as you can get.
You can try this to make it slightly more concise:
import numpy as np
arr = np.array([[[0,1,2],
[3,4,5],
[6,7,8]],
[[9, 10,11],
[12,13,14],
[15,16,17]]])
idx = [1,2]
print(
arr[(slice(None), *idx)]
)
This works because : is the same as slice(None)
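Equivalently, np.s_ can stand in for the bare slice, which some find more readable:
print(arr[(np.s_[:], *idx)])  # np.s_[:] is just slice(None)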
| https://stackoverflow.com/questions/66903916/ |
PyTorch ROCm is out - How to select Radeon GPU as device | since Pytorch released the ROCm version, which enables me to use other gpus than nvidias, how can I select my radeon gpu as device in python?
Obviously, code like device = torch.cuda.is_available or device = torch.device("cuda") is not working. Thanks for any help. :)
| This took me forever to figure out. I'm still having some configuration issues with my AMD GPU, so I haven't been able to test that this works, but, according to this github pytorch thread, the Rocm integration is written so you can just call torch.device('cuda') and no actual porting is required!
See the thread here: https://github.com/pytorch/pytorch/issues/10670
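In other words, a quick sanity check on a ROCm build might look like this (assuming the install is working):
import torch
print(torch.cuda.is_available())  # expected True on a working ROCm install
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")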
| https://stackoverflow.com/questions/66909581/ |
How to get layer execution time on an AI model saved as .pth file? | I'm trying to run a Resnet-like image classification model on a CPU, and want to know the breakdown of time it takes to run each layer of the model.
The issue I'm facing is that the github link https://github.com/facebookresearch/semi-supervised-ImageNet1K-models has the model saved as a .pth file. It is very large (hundreds of MB), and I don't know exactly what it contains beyond being a binary PyTorch file.
I load the model from this file using the following script. But I don't see a way to modify the model or insert the t = time.time() variables/statements in between model layers to break down the time in each layer.
Questions:
Would running the model in the following script give a correct estimate of end-to-end time (t2-t1) it takes to run the model on the CPU, or would it also include pytorch compilation time?
How to insert time statements between consecutive layers to get a breakdown?
There is no inference/training script at the github link and only has the .pth file. So how exactly is one supposed to run inference or training? How to insert additional layers between consecutive layers of the .pth model and save them?
#!/usr/bin/env python
import time
import torch
import torchvision

model = torch.hub.load('facebookresearch/semi-supervised-ImageNet1K-models', 'resnext50_32x4d_swsl', force_reload=False)
x = torch.randn(1, 3, 224, 224)  # renamed from 'in', which is a reserved keyword in Python
t1 = time.time()
out = model.forward(x)
t2 = time.time()
| A simple way to implement such a requirement is by registering a forward hook on each module of the model; each hook updates a global variable storing the current time and records the time difference between the last and current computations.
For example:
import torch
import torchvision
import time
global_time = None
exec_times = []
def store_time(self, input, output):
global global_time, exec_times
exec_times.append(time.time() - global_time)
global_time = time.time()
model = torch.hub.load('facebookresearch/semi-supervised-ImageNet1K-models', 'resnext50_32x4d_swsl', force_reload=False)
x = torch.randn(1, 3, 224, 224)
# Register a hook for each module for computing the time difference
for module in model.modules():
module.register_forward_hook(store_time)
global_time = time.time()
out = model(x)
t2 = time.time()
for module, t in zip(model.modules(), exec_times):
print(f"{module.__class__}: {t}")
The output I get is:
<class 'torchvision.models.resnet.ResNet'>: 0.004999876022338867
<class 'torch.nn.modules.conv.Conv2d'>: 0.002006053924560547
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009946823120117188
<class 'torch.nn.modules.activation.ReLU'>: 0.007998466491699219
<class 'torch.nn.modules.pooling.MaxPool2d'>: 0.0010004043579101562
<class 'torch.nn.modules.container.Sequential'>: 0.0020003318786621094
<class 'torchvision.models.resnet.Bottleneck'>: 0.0010023117065429688
<class 'torch.nn.modules.conv.Conv2d'>: 0.017997026443481445
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010018348693847656
<class 'torch.nn.modules.conv.Conv2d'>: 0.0009999275207519531
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.003000497817993164
<class 'torch.nn.modules.conv.Conv2d'>: 0.003999948501586914
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.001997232437133789
<class 'torch.nn.modules.activation.ReLU'>: 0.004001140594482422
<class 'torch.nn.modules.container.Sequential'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.001999378204345703
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torchvision.models.resnet.Bottleneck'>: 0.003001689910888672
<class 'torch.nn.modules.conv.Conv2d'>: 0.0020008087158203125
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009992122650146484
<class 'torch.nn.modules.conv.Conv2d'>: 0.0019991397857666016
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010001659393310547
<class 'torch.nn.modules.conv.Conv2d'>: 0.0009999275207519531
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002998828887939453
<class 'torch.nn.modules.activation.ReLU'>: 0.0010013580322265625
<class 'torchvision.models.resnet.Bottleneck'>: 0.0029997825622558594
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002999544143676758
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010006427764892578
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.001001119613647461
<class 'torch.nn.modules.conv.Conv2d'>: 0.0019979476928710938
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010018348693847656
<class 'torch.nn.modules.activation.ReLU'>: 0.0010001659393310547
<class 'torch.nn.modules.container.Sequential'>: 0.00299835205078125
<class 'torchvision.models.resnet.Bottleneck'>: 0.002004384994506836
<class 'torch.nn.modules.conv.Conv2d'>: 0.0009975433349609375
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.005999088287353516
<class 'torch.nn.modules.conv.Conv2d'>: 0.0020003318786621094
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010001659393310547
<class 'torch.nn.modules.activation.ReLU'>: 0.0020017623901367188
<class 'torch.nn.modules.container.Sequential'>: 0.0009970664978027344
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0029997825622558594
<class 'torchvision.models.resnet.Bottleneck'>: 0.0010008811950683594
<class 'torch.nn.modules.conv.Conv2d'>: 0.00500035285949707
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009984970092773438
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0020020008087158203
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019979476928710938
<class 'torch.nn.modules.activation.ReLU'>: 0.0010018348693847656
<class 'torchvision.models.resnet.Bottleneck'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.00099945068359375
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.001001119613647461
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002997875213623047
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010013580322265625
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002000570297241211
<class 'torch.nn.modules.activation.ReLU'>: 0.0
<class 'torchvision.models.resnet.Bottleneck'>: 0.001997232437133789
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010008811950683594
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.001001596450805664
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.00099945068359375
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002998828887939453
<class 'torch.nn.modules.activation.ReLU'>: 0.0010020732879638672
<class 'torch.nn.modules.container.Sequential'>: 0.0010020732879638672
<class 'torchvision.models.resnet.Bottleneck'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.001995563507080078
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.002001523971557617
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010001659393310547
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010008811950683594
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.activation.ReLU'>: 0.0029985904693603516
<class 'torch.nn.modules.container.Sequential'>: 0.0009989738464355469
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010068416595458984
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torchvision.models.resnet.Bottleneck'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.004993438720703125
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010013580322265625
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010001659393310547
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010018348693847656
<class 'torch.nn.modules.conv.Conv2d'>: 0.001997709274291992
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.activation.ReLU'>: 0.0019991397857666016
<class 'torchvision.models.resnet.Bottleneck'>: 0.0029990673065185547
<class 'torch.nn.modules.conv.Conv2d'>: 0.0030128955841064453
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019872188568115234
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0029993057250976562
<class 'torch.nn.modules.activation.ReLU'>: 0.0010008811950683594
<class 'torchvision.models.resnet.Bottleneck'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010006427764892578
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009992122650146484
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.003001689910888672
<class 'torch.nn.modules.conv.Conv2d'>: 0.0019986629486083984
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0010008811950683594
<class 'torch.nn.modules.activation.ReLU'>: 0.0
<class 'torchvision.models.resnet.Bottleneck'>: 0.002000093460083008
<class 'torch.nn.modules.conv.Conv2d'>: 0.0019986629486083984
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0020012855529785156
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019981861114501953
<class 'torch.nn.modules.activation.ReLU'>: 0.0030014514923095703
<class 'torchvision.models.resnet.Bottleneck'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0029985904693603516
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010008811950683594
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0010013580322265625
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009989738464355469
<class 'torch.nn.modules.activation.ReLU'>: 0.0
<class 'torch.nn.modules.container.Sequential'>: 0.002998828887939453
<class 'torchvision.models.resnet.Bottleneck'>: 0.002000570297241211
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.003000497817993164
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0020020008087158203
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0009982585906982422
<class 'torch.nn.modules.activation.ReLU'>: 0.0009996891021728516
<class 'torch.nn.modules.container.Sequential'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0029990673065185547
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0020003318786621094
<class 'torchvision.models.resnet.Bottleneck'>: 0.0010025501251220703
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019981861114501953
<class 'torch.nn.modules.conv.Conv2d'>: 0.0019996166229248047
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019996166229248047
<class 'torch.nn.modules.activation.ReLU'>: 0.0
<class 'torchvision.models.resnet.Bottleneck'>: 0.0030002593994140625
<class 'torch.nn.modules.conv.Conv2d'>: 0.0020012855529785156
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.0
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0
<class 'torch.nn.modules.conv.Conv2d'>: 0.006000518798828125
<class 'torch.nn.modules.batchnorm.BatchNorm2d'>: 0.0019979476928710938
<class 'torch.nn.modules.activation.ReLU'>: 0.0
<class 'torch.nn.modules.pooling.AdaptiveAvgPool2d'>: 0.002003192901611328
<class 'torch.nn.modules.linear.Linear'>: 0.0019965171813964844
Process finished with exit code 0
| https://stackoverflow.com/questions/66910479/ |
Positive and negative edges in train_test_split_edges from Pytorch geometric package | I am trying to find the explanation for negative and positive edges in a graph, as mentioned in the documentation of the function train_test_split_edges in PyTorch Geometric. According to the docs, the function is supposed to split a graph into "positive and negative train/val/test edges". What is the meaning of a positive edge or a negative edge? According to the code, the positive edges seem to be the edges in the upper triangle of the adjacency matrix of the graph, and the negative edges the edges in the lower triangle. So if (1,0) is considered a positive edge, then in an undirected graph (0,1) is a negative edge. Am I correct? I cannot find anything about the meaning of positive/negative edges when it comes to graphs.
| In link prediction tasks, it is usual to treat existing edges in the graph as positive examples, and non-existent edges as negative examples.
i.e. in training/prediction you feed the network a subset of all edges in the complete graph, and the associated targets are "this is a real edge" (positive) and "this is not a real edge" (negative).
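A minimal sketch of what the split produces (assuming torch_geometric is installed; the random graph below is made up, and the attribute names are as of PyG 1.x, where train_test_split_edges lives in torch_geometric.utils):
import torch
from torch_geometric.data import Data
from torch_geometric.utils import train_test_split_edges

edge_index = torch.randint(0, 50, (2, 200))  # a random graph: 50 nodes, 200 edges
data = Data(edge_index=edge_index, num_nodes=50)
data = train_test_split_edges(data, val_ratio=0.1, test_ratio=0.1)
print(data.train_pos_edge_index.shape)  # existing (positive) train edges
print(data.val_neg_edge_index.shape)    # sampled non-existent (negative) val edges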
| https://stackoverflow.com/questions/66913083/ |
Using pytorch cuda in AWS sagemaker notebook instance | In colab, whenever we need GPU, we simply click change runtime type and change hardware accelarator to GPU
and cuda becomes available, torch.cuda.is_available() is True
How to do this is AWS sagemaker, i.e. turning on cuda.
I am new to AWS and trying to train model using pytorch in aws sagemaker, where Pytorch code is first tested in colab environment.
my sagemaker notebook insatnce is ml.t2.medium
| Using AWS SageMaker you don't need to worry about the GPU: you simply select an instance type with a GPU and SageMaker will use it. Specifically, ml.t2.medium doesn't have a GPU, but a notebook instance is in any case not the right place to train a model.
Basically you have 2 canonical ways to use SageMaker (please look at the documentation and examples). The first is to use a notebook with limited computing resources to spin up a training job using a prebuilt image; in that case, when you call the estimator you simply specify which instance type you want (choose one with a GPU, keeping an eye on the costs). The second way is to use your own container, push it to ECR, and launch a training job from the console, where you again specify the instance type.
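For the first way, a minimal sketch with the SageMaker Python SDK might look like this (the script name, role, and instance type are placeholders/assumptions to adapt):
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",             # your training script
    role="arn:aws:iam::...:role/...",   # your SageMaker execution role
    framework_version="1.8.0",
    py_version="py3",
    instance_count=1,
    instance_type="ml.p3.2xlarge",      # a GPU instance; inside the job torch.cuda.is_available() is True
)
estimator.fit({"training": "s3://my-bucket/train-data"})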
| https://stackoverflow.com/questions/66915920/ |
PyTorch: Dynamic Programming as Tensor Operation | Is it possible to get the following loop with a Tensor operation?
a = torch.Tensor([1, 0, 0, 0])
b = torch.Tensor([1, 2, 3, 4])
for i in range(1, a.shape[0]):
a[i] = b[i] + a[i-1]
print(a) # [1, 3, 6, 10]
The operation depends on the previous values in a and the values that are computed on the way (in a dynamic programming fashion).
Is it possible to get this type of sequential computation with a tensor operation?
| You can achieve this with a cumulative sum:
b.cumsum(0)
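A quick check against the loop version:
b = torch.Tensor([1, 2, 3, 4])
print(b.cumsum(0))  # tensor([ 1.,  3.,  6., 10.])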
| https://stackoverflow.com/questions/66919743/ |
I wonder why the corresponding storage of pytorch.tensor changes every time it is resized? | import torch
a = torch.tensor([3,2,3,4])
b = a.view(2,2)
c = a.resize(2,2)
d = a.resize_(2,2)
print(id(a.storage()))
print(id(b.storage()))
print(id(c.storage()))
print(id(d.storage()))
run at first time
2356950450056
2356950450056
2356950450056
2356950450056
run at second time
2206021857352
2206301638600
2206021857352
2206301638600
Why does the id sometimes change and sometimes not? I have been searching the net for a long time, but to no avail. Please help, or give some ideas about what is happening. (Apologies for my poor English.)
Thanks in advance.
| You don't need to view or resize the object to observe this behaviour, calling storage on the same object can return different id's:
a = torch.tensor([3,2,3,4])
print(id(a.storage()))
print(id(a.storage()))
>>> 2308579152
>>> 2308422224
This is because a new Python wrapper object for the storage is constructed each time you call .storage(), so id() compares these transient wrappers rather than the underlying memory.
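If you want a reliable way to check whether two tensors share memory, compare the raw data pointers instead of the ids of these transient wrapper objects:
a = torch.tensor([3, 2, 3, 4])
b = a.view(2, 2)
print(a.data_ptr() == b.data_ptr())  # True: the view shares the same underlying memory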
| https://stackoverflow.com/questions/66920930/ |
PyTorch Tensor Operation for adding the maximum of the previous row to the next | Follow-Up question to PyTorch: Dynamic Programming as Tensor Operation.
Could the following be written as a tensor operation instead of a loop?
a = torch.Tensor([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
print(a.shape)
# (3, 4)
for i in range(1, a.shape[0]):
a[i] = a[i-1].max(dim=0)[0] + a[i]
print(a)
# tensor([[ 1, 2, 3, 4],
# [ 9, 10, 11, 12],
# [21, 22, 23, 24]])
Basically adding the maximum of the previous row to all elements of the next.
The interesting part is that you can't compute the maximum for each row beforehand and then add that to the respective row, because adding the first maximum influences what the maximum of the next row is.
| Not entirely sure why you're trying to do this, but yes, this is possible. It's basically the same as your last question:
max_vals, _ = a.max(axis=1, keepdim=True)
additions = max_vals.cumsum(0)[:-1]
a[1:, :] += additions
This works because adding a constant to a row shifts that row's maximum by the same constant, so the total amount added to each row is simply the cumulative sum of the original row maxima; you can therefore take the maxima first, cumulatively sum them, and add them to the original tensor.
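A quick verification against the loop version:
a = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
max_vals, _ = a.max(dim=1, keepdim=True)
a[1:, :] += max_vals.cumsum(0)[:-1]
print(a)  # rows: [1, 2, 3, 4], [9, 10, 11, 12], [21, 22, 23, 24]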
| https://stackoverflow.com/questions/66921943/ |
Can I apply softmax only on specific output neurons? | I am building an Actor-Critic neural network model in pytorch in order to train an agent to play the game of Quoridor (hopefully). For this reason, I have a neural network with two heads, one for the actor output which does a softmax on all the possible moves and one for the critic output which is just one neuron (for regressing the value of the input state).
Now, in Quoridor, most of the time not all moves are legal, and as such I am wondering whether I can exclude output neurons on the actor's head that correspond to illegal moves for the input state, e.g. by passing a list of indices of the neurons that correspond to legal moves. That is, I want these outputs not to be summed in the softmax denominator.
Is there functionality like this in PyTorch (I cannot find any)? Should I attempt to implement such a softmax myself? (I'm kinda scared to; PyTorch probably knows best. I've been advised to use LogSoftmax as well.)
Furthermore, do you think this approach of dealing with illegal moves is good? Or should I just let it guess illegal moves and penalize it (negative reward), in the hope that eventually it will not pick illegal moves?
Or should I let the softmax be over all the outputs and then just set illegal ones to zero? The rest won't sum to 1 but maybe I can solve that by plain normalization (i.e. dividing by the L2 norm)?
| An easy solution would be to mask out illegal moves with a large negative value, this will practically force very low (log)softmax values (example below).
# 3 dummy actions for a batch size of 2
>>> actions = torch.rand(2, 3)
>>> actions
tensor([[0.9357, 0.2386, 0.3264],
[0.0179, 0.8989, 0.9156]])
# dummy mask assigning 0 to valid actions and 1 to invalid ones
>>> mask = torch.randint(low=0, high=2, size=(2, 3))
>>> mask
tensor([[1, 0, 0],
[0, 0, 0]])
# set actions marked as invalid to very large negative value
>>> actions = actions.masked_fill_(mask.eq(1), value=-1e10)
>>> actions
tensor([[-1.0000e+10, 2.3862e-01, 3.2636e-01],
[ 1.7921e-02, 8.9890e-01, 9.1564e-01]])
# softmax assigns no probability mass to illegal actions
>>> actions.softmax(dim=-1)
tensor([[0.0000, 0.4781, 0.5219],
[0.1704, 0.4113, 0.4183]])
| https://stackoverflow.com/questions/66930752/ |
Understanding the output of LSTM predictions |
It's a 15-class classification model, OUTPUT_DIM = 15. I'm trying to input a frequency vector like this one 'hi my name is' => [1,43,2,56].
When I call predictions = model(x_train[0]) I get an array of size torch.Size([100, 15]), instead of just a 1D array with 15 classes like this: torch.Size([15]). What's happening? Why is this the output? How can I fix it? Thank you in advance, more info below.
The model (from main docs) is the following:
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
self.hidden2tag = nn.Linear(hidden_dim, output_dim)
def forward(self, text):
embeds = self.word_embeddings(text)
lstm_out, _ = self.lstm(embeds.view(len(text), 1, -1))
tag_space = self.hidden2tag(lstm_out.view(len(text), -1))
tag_scores = F.log_softmax(tag_space, dim=1)
return tag_scores
Parameters:
INPUT_DIM = 62288
EMBEDDING_DIM = 64
HIDDEN_DIM = 128
OUTPUT_DIM = 15
| The LSTM module in PyTorch returns not just the output of the last timestep but the outputs of all timesteps (this is useful in some cases). So in your example you seem to have exactly 100 timesteps (the number of timesteps is just your sequence length).
But since you are doing classification, you only care about the last output. Assuming a batch-first layout, you can get it like this:
outputs, _ = self.lstm(embeddings)
# shape: batch_size x 100 x hidden_dim
output = outputs[:, -1]
# shape: batch_size x hidden_dim
| https://stackoverflow.com/questions/66935911/ |
Subclass of PyTorch dataset class cannot find dataset files | I'm trying to create a subclass of the PyTorch MNIST dataset class, which I call CustomMNISTDataset, as follows:
import torchvision.datasets as datasets
class CustomMNISTDataset(datasets.MNIST):
def __init__(self, root='/home/psando'):
super().__init__(root=root,
download=False)
but when I execute:
dataset = CustomMNISTDataset()
it fails with error: "RuntimeError: Dataset not found. You can use download=True to download it".
However, when I run the following in the same file:
dataset = datasets.MNIST(root='/home/psando', download=False)
print(len(dataset))
it succeeds and prints "60000", as expected.
Since CustomMNISTDataset subclasses datasets.MNIST why is the behavior different? I've verified that the path '/home/psando' contains the MNIST directory with raw and processed subdirectories (otherwise, explicitly calling the constructor for datasets.MNIST() would have failed). The current behavior implies that the call to super().__init__() within CustomMNISTDataset is not calling the constructor for datasets.MNIST which is very strange!
Other details: I'm using Python 3.6.8 with torch==1.6.0 and
torchvision==0.7.0. Any help would be appreciated!
| This requires some source-diving, but your problem is this function. The path to the dataset is dependent on the name of the class, so when you subclass MNIST the root folder changes to /home/psando/CustomMNISTDataset
So if you rename /home/psando/MNIST to /home/psando/CustomMNISTDataset it works.
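Alternatively, a sketch based on how torchvision 0.7 builds these paths: keep your class name but override the folder properties so they still point at the original MNIST directory.
import os
import torchvision.datasets as datasets

class CustomMNISTDataset(datasets.MNIST):
    def __init__(self, root='/home/psando'):
        super().__init__(root=root, download=False)

    @property
    def raw_folder(self):
        return os.path.join(self.root, 'MNIST', 'raw')

    @property
    def processed_folder(self):
        return os.path.join(self.root, 'MNIST', 'processed')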
| https://stackoverflow.com/questions/66936111/ |
NumPy/PyTorch slicing using an array of indices from an argmax call | I have the following tensors / ndarrays I am operating with.
a_intents of shape (b, n_i) - Scalar at position ij is the activation of intent j for the i-th element in the batch.
u_intents of shape (b, n_i, d_m) - Vector of dimension d_m at index ij is the pose vector of intent j for the i-th element in the batch.
I want to get the index of the intent with the biggest activation scalar, and I do:
max_activations = argmax(a_intents, dim=-1, keepdim=False)
Now, using those indices, I want to get the corresponding vectors in u_intents.
max_activation_poses = u_intents[?, ?,:]
How do I use max_activations to indicate only those indices on dim 1? My intuition tells me I will end with an incorrect shape if I do
[:, max_activations, :]
The shape I am trying to get is (b, d_m) - The vector with the same index as the highest activation, for each element in the batch.
Thanks
| It's easy if you treat the u_intents vector as 2-dimensional, and offset each argmax index by its batch index times the number of elements.
# dummy values for demonstration
>>> b, n_i, d_m = 2, 3, 5
>>> a_intents = torch.rand(b, n_i)
>>> a_intents
tensor([[0.1733, 0.9965, 0.4790],
[0.6056, 0.4040, 0.0236]])
>>> u_intents = torch.rand(b, n_i, d_m)
>>> u_intents
tensor([[[0.3301, 0.8153, 0.1356, 0.6623, 0.4385],
[0.1902, 0.1748, 0.4131, 0.3887, 0.5363],
[0.1211, 0.5773, 0.2405, 0.6313, 0.2064]],
[[0.2592, 0.5127, 0.7301, 0.8883, 0.5665],
[0.2767, 0.6545, 0.7595, 0.2677, 0.5163],
[0.8158, 0.4940, 0.0492, 0.0911, 0.0465]]])
# add to each index the batch start
>>> max_activations = a_intents.argmax(dim=-1) + torch.arange(0, b*n_i, step=n_i)
# elements 1 and 3 of 0..5
>>> max_activations
tensor([1, 3])
>>> poses = u_intents.view(b*n_i, d_m).index_select(0, max_activations)
# tensor of shape (b, d_m) correctly indexing the maxima.
>>> poses
tensor([[0.1902, 0.1748, 0.4131, 0.3887, 0.5363],
[0.2592, 0.5127, 0.7301, 0.8883, 0.5665]])
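An equivalent formulation, sketched with gather, avoids flattening the batch dimension:
idx = a_intents.argmax(dim=-1)  # shape (b,)
poses = u_intents.gather(1, idx[:, None, None].expand(-1, 1, d_m)).squeeze(1)  # shape (b, d_m)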
| https://stackoverflow.com/questions/66943355/ |
Pytorch model training without using forward | I'm working on training a CLIP model.
Here's the source code of the model https://github.com/openai/CLIP/blob/main/clip/model.py
Basically the CLIP object is constructed like this :
class CLIP(nn.module):
...
def encode_image(self, image):
return self.visual(image.type(self.dtype))
def encode_text(self, text):
x = ...
...
return x
def forward(self, image, text):
image_features = self.encode_image(image)
text_features = self.encode_text(text)
...
return logits_per_image, logits_per_text
The forward method expects a pair of image and text. Since I want to repurpose CLIP for another task (text-text pairs), I'm not using forward from CLIP, but other methods defined inside CLIP. My training code looks like this:
for k in range(epoch):
for batch in dataloader :
x,y = batch
y1 = model.encode_text(x[first_text_part])
y2 = model.encode_text(x[second_text_part])
<calculate loss, backward, step, etc>
The problem is, after 1 epoch, all the gradients turn out to be nan even though the loss is not nan.
My suspicion is that PyTorch is only able to propagate gradients through the forward method.
One source says that forward is not that special (https://discuss.pytorch.org/t/must-use-forward-function-in-nn-module/50943/3), but another says that PyTorch code must use forward (https://stackoverflow.com/a/58660175/12082666).
The question is: can we train a PyTorch network without using the forward method?
| The forward() in PyTorch is nothing special. It just builds the computational graph of your network when called. Backpropagation doesn't rely on forward() itself, because gradients are propagated through the graph, which is built by the tensor operations regardless of which method they run in.
The only difference is that, in the PyTorch source, calling the module (its __call__ method) invokes forward() together with all the hooks registered on nn.Module.
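A minimal sketch demonstrating this (the toy module below is made up): gradients flow even when you call a method other than forward, because autograd tracks the tensor operations themselves.
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 2)
    def encode_text(self, x):  # deliberately not forward()
        return self.lin(x)

m = Toy()
out = m.encode_text(torch.randn(3, 4))
out.sum().backward()
print(m.lin.weight.grad is not None)  # True: the graph was built anyway
So the NaN gradients most likely come from something else (e.g., the learning rate, the loss, or mixed precision), not from bypassing forward().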
| https://stackoverflow.com/questions/66948189/ |
Getting predict.proba from BERT classifier | I have a classifier on top of BERT, and I would like to see the predicted probabilities for creating the ROC curve. How do I get the predicted probabilities? They will be used to calculate the TPR, FPR, and thresholds for the ROC curve.
here is the code
class BertBinaryClassifier(nn.Module):
def __init__(self, dropout=0.1):
super(BertBinaryClassifier, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-uncased')
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(768, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, tokens, masks=None):
_, pooled_output = self.bert(tokens, attention_mask=masks, output_all_encoded_layers=False)
dropout_output = self.dropout(pooled_output)
linear_output = self.linear(dropout_output)
prediction = self.sigmoid(linear_output)
return prediction
# Config setting
BATCH_SIZE = 4
EPOCHS = 5
# Making dataloaders
train_dataset = torch.utils.data.TensorDataset(train_tokens_tensor, train_masks_tensor, train_y_tensor)
train_sampler = torch.utils.data.RandomSampler(train_dataset)
train_dataloader = torch.utils.data.DataLoader(train_dataset, sampler=train_sampler, batch_size=BATCH_SIZE)
test_dataset = torch.utils.data.TensorDataset(test_tokens_tensor, test_masks_tensor, test_y_tensor)
test_sampler = torch.utils.data.SequentialSampler(test_dataset)
test_dataloader = torch.utils.data.DataLoader(test_dataset, sampler=test_sampler, batch_size=BATCH_SIZE)
bert_clf = BertBinaryClassifier()
bert_clf = bert_clf.cuda()
#wandb.watch(bert_clf)
optimizer = torch.optim.Adam(bert_clf.parameters(), lr=3e-6)
# training
for epoch_num in range(EPOCHS):
bert_clf.train()
train_loss = 0
for step_num, batch_data in enumerate(train_dataloader):
token_ids, masks, labels = tuple(t for t in batch_data)
token_ids, masks, labels = token_ids.to(device), masks.to(device), labels.to(device)
preds = bert_clf(token_ids, masks)
loss_func = nn.BCELoss()
batch_loss = loss_func(preds, labels)
train_loss += batch_loss.item()
bert_clf.zero_grad()
batch_loss.backward()
optimizer.step()
#wandb.log({"Training loss": train_loss})
print('Epoch: ', epoch_num + 1)
print("\r" + "{0}/{1} loss: {2} ".format(step_num, len(train_data) / BATCH_SIZE, train_loss / (step_num + 1)))
# evaluating on test
bert_clf.eval()
bert_predicted = []
all_logits = []
probs=[]
with torch.no_grad():
test_loss = 0
for step_num, batch_data in enumerate(test_dataloader):
token_ids, masks, labels = tuple(t for t in batch_data)
token_ids, masks, labels = token_ids.to(device), masks.to(device), labels.to(device)
logits = bert_clf(token_ids, masks)
pr=logits.ravel()
probs+=pr
loss_func = nn.BCELoss()
loss = loss_func(logits, labels)
test_loss += loss.item()
numpy_logits = logits.cpu().detach().numpy()
#print(numpy_logits)
#wandb.log({"Testing loss": test_loss})
bert_predicted += list(numpy_logits[:, 0] > 0.5)
all_logits += list(numpy_logits[:, 0])
I am able to get the prediction scores to calculate the accuracy or F1 score, but not the probabilities for creating the ROC curve.
Thanks
| In your forward, you:
def forward(self, tokens, masks=None):
_, pooled_output = self.bert(...) # Get output of BERT
dropout_output = self.dropout(pooled_output)
linear_output = self.linear(dropout_output) # Take linear combination of outputs
# (unconstrained score - "logits")
prediction = self.sigmoid(linear_output) # Normalise scores
# (constrained between [0,1] - "probabilities")
return prediction
Hence the result of calling your model is already a probability in [0, 1]; you only need to detach it and convert it to NumPy before handing it to scikit-learn to calculate the False Positive and True Positive rates, e.g.:
from sklearn import metrics
...
test_probs = bert_clf(token_ids, masks).detach().cpu().numpy().ravel()
fpr, tpr, thresholds = metrics.roc_curve(labels.cpu().numpy(), test_probs)
roc_auc = metrics.auc(fpr, tpr)
| https://stackoverflow.com/questions/66950157/ |
I am trying to implement VGGNet, but it does not train well | I am new to CNNs.
I am trying to train VGGNet.
class Net(nn.Module) :
def __init__(self) :
super(Net, self).__init__()
self.conv = nn.Sequential (
#1
#####
nn.Conv2d(3,64,3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(64,64,3, padding=1),nn.ReLU(inplace=True),
nn.MaxPool2d(2,2),
#2
#####
nn.Conv2d(64,128,3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(128,128,3, padding=1),nn.ReLU(inplace=True),
nn.MaxPool2d(2,2),
#####
#3
#####
nn.Conv2d(128,256,3, padding=1),nn.ReLU(inplace=True),
nn.Conv2d(256,256,3, padding=1),nn.ReLU(inplace=True),
nn.Conv2d(256,256,3, padding=1),nn.ReLU(inplace=True),
nn.MaxPool2d(2,2),
#4
#####
nn.Conv2d(256,512,3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(512,512,3, padding=1),nn.ReLU(inplace=True),
nn.Conv2d(512,512,3, padding=1), nn.ReLU(inplace=True),
nn.MaxPool2d(2,2),
#5
#####
nn.Conv2d(512,512,3, padding=1),nn.ReLU(inplace=True),
nn.Conv2d(512,512,3, padding=1),nn.ReLU(inplace=True),
nn.Conv2d(512,512,3, padding=1),nn.ReLU(inplace=True),
nn.MaxPool2d(2,2),
#####
)
self.fc = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
nn.Linear(4096, 1000)
)
def forward(self, x):
# diffrent depending on the model
x = self.conv(x)
# change dimension
x = torch.flatten(x, 1)
# same
x = self.fc(x)
return x
But the loss stays near 6.9077.
After epoch 0, it barely changes.
Even if I set the weight decay to 0 (i.e., no L2 regularization), the loss barely changes.
My optimizer and scheduler is
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=5e-4)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, factor = 0.1 , patience= 2 ,mode='min')
What is the problem?
Sometimes it prints a warning about reading bytes but only getting 0:
warnings.warn(str(msg))
Is that related to my problem?
| Your loss value 6.9077 is equal to -log(1/1000), which basically means your network produces random outputs out of all possible 1000 classes.
It is a bit tricky to train VGG nets from scratch, especially if you do not include batch-norm layers.
Try to reduce the learning rate to 0.01, and add momentum to your SGD.
Add more input augmentations (e.g., flips, color jittering, etc.).
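For the optimizer change, a minimal sketch (these are common starting values, not guarantees):
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)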
| https://stackoverflow.com/questions/66950375/ |
Is there any alternate for tfagents in pytorch | We are using tf-agents in TensorFlow for reinforcement learning. Because of the limitations of static computation graphs, we are planning to migrate our code to PyTorch.
tf-agents is great, has very good documentation, and saves a lot of time by not having to reimplement the same things.
We are wondering whether the PyTorch community has something similar?
| RLlib (part of the Ray project) is an alternative that supports PyTorch.
| https://stackoverflow.com/questions/66951936/ |
How do I predict using a PyTorch model? | I created a PyTorch model to classify images.
I saved it once via state_dict and once as the entire model, like this:
torch.save(model.state_dict(), "model1_statedict")
torch.save(model, "model1_complete")
How can I use these models?
I'd like to check them with some images to see if they're good.
I am loading the model with:
model = torch.load(path_model)
model.eval()
This works all right, but I have no idea how to use it to predict on a new picture.
| def predict(self, test_images):
      # `self` is the trained model (an instance of the VGG class below)
      self.eval()
      count = test_images.shape[0]
      result_np = []
      for idx in range(count):
          img = test_images[idx, :, :, :]
          img = np.expand_dims(img, axis=0)                       # add a batch dimension
          img = torch.Tensor(img).permute(0, 3, 1, 2).to(device)  # NHWC -> NCHW
          pred = self(img)
          pred_np = pred.cpu().detach().numpy()
          for elem in pred_np:
              result_np.append(elem)
      return result_np
The network is VGG-19; refer to my source code.
The architecture looks like this:
class VGG(object):
def __init__(self):
...
def train(self, train_images, valid_images):
train_dataset = torch.utils.data.Dataset(train_images)
valid_dataset = torch.utils.data.Dataset(valid_images)
trainloader = torch.utils.data.DataLoader(train_dataset)
validloader = torch.utils.data.DataLoader(valid_dataset)
self.optimizer = Adam(...)
self.criterion = CrossEntropyLoss(...)
for epoch in range(0, epochs):
...
self.evaluate(validloader, model=self, criterion=self.criterion)
...
def evaluate(self, dataloader, model, criterion):
model.eval()
for i, sample in enumerate(dataloader):
...
def predict(self, test_images):
...
if __name__ == "__main__":
network = VGG()
trainset, validset = get_dataset() # abstract function for showing
testset = get_test_dataset()
network.train(trainset, validset)
result = network.predict(testset)
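For a single new picture, a more minimal sketch looks like this (the resize size and file name are assumptions; reuse whatever preprocessing you trained with):
import torch
from PIL import Image
from torchvision import transforms

model = torch.load(path_model)
model.eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
img = preprocess(Image.open("test.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    logits = model(img)
print(logits.argmax(dim=1).item())  # predicted class index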
| https://stackoverflow.com/questions/66952664/ |
How to convert a CNN LSTM from Keras to PyTorch | I am trying to convert a CNN LSTM from Keras to PyTorch, but I am having trouble.
ConvNN_model = models.Sequential()
ConvNN_model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)))
ConvNN_model.add(layers.MaxPooling2D((2, 2)))
ConvNN_model.add(layers.Conv2D(64, (3, 3), activation='relu'))
ConvNN_model.add(TimeDistributed(LSTM(128, activation='relu')))
ConvNN_model.add(Dropout(0.2))
ConvNN_model.add(LSTM(128, activation='relu'))
ConvNN_model.add(layers.Dense(64, activation='relu'))
ConvNN_model.add(layers.Dropout(0.25))
ConvNN_model.add(layers.Dense(15, activation='softmax'))
How to convert the above code from Keras to Pytorch?
| This is your CNN in Keras:
ConvNN_model = models.Sequential()
ConvNN_model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)))
ConvNN_model.add(layers.MaxPooling2D((2, 2)))
ConvNN_model.add(layers.Conv2D(64, (3, 3), activation='relu'))
ConvNN_model.add(TimeDistributed(LSTM(128, activation='relu')))
ConvNN_model.add(Dropout(0.2))
ConvNN_model.add(LSTM(128, activation='relu'))
ConvNN_model.add(layers.Dense(64, activation='relu'))
ConvNN_model.add(layers.Dropout(0.25))
ConvNN_model.add(layers.Dense(15, activation='softmax'))
This is the equivalent code in PyTorch:
class ConvNN_model(nn.Module):
    def __init__(self):
        super(ConvNN_model, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(32, 64, kernel_size=3),
            nn.ReLU(),
            # Caveat: nn.LSTM returns an (output, (h_n, c_n)) tuple, so it cannot
            # sit directly inside nn.Sequential; in practice wrap it in a small
            # module that keeps only the output tensor. The input/hidden sizes
            # below also need to be adapted to your actual feature-map shapes.
            TimeDistributed(nn.LSTM(128, 128)),
            nn.Dropout(0.2),
            nn.LSTM(128, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Dropout(0.25),
            nn.Linear(64, 15),
            nn.Softmax(dim=-1)
        )
    def forward(self, x):
        return self.layers(x)
Keep in mind that there is no equivalent module for the TimeDistributed class in PyTorch, so you have to build it yourself. Here is one that you can use (from here):
class TimeDistributed(nn.Module):
def __init__(self, module, batch_first=False):
super(TimeDistributed, self).__init__()
self.module = module
self.batch_first = batch_first
def forward(self, x):
if len(x.size()) <= 2:
return self.module(x)
# Squash samples and timesteps into a single axis
x_reshape = x.contiguous().view(-1, x.size(-1)) # (samples * timesteps, input_size)
y = self.module(x_reshape)
# We have to reshape Y
if self.batch_first:
y = y.contiguous().view(x.size(0), -1, y.size(-1)) # (samples, timesteps, output_size)
else:
y = y.view(-1, x.size(1), y.size(-1)) # (timesteps, samples, output_size)
return y
There are a million-and-one ways to skin a cat; you do not necessarily have to create the entire network in one nn.Sequential block as I did. Alternatively, if you want to stay close to the Keras style, you can build the model with nn.Sequential alone and skip subclassing nn.Module altogether.
| https://stackoverflow.com/questions/66954249/ |
Shuffle patches in image batch | I am trying to create a transform that shuffles the patches of each image in a batch.
I aim to use it in the same manner as the rest of the transformations in torchvision:
trans = transforms.Compose([
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
ShufflePatches(patch_size=(16,16)) # our new transform
])
More specifically, the input is a BxCxHxW tensor. I want to split each image in the batch into non-overlapping patches of size patch_size, shuffle them, and regroup into a single image.
Given the image (of size 224x224):
Using ShufflePatches(patch_size=(112,112)) I would like to produce the output image:
I think the solution has to do with torch.unfold and torch.fold, but didn't manage to get any further.
Any help would be appreciated!
| Indeed unfold and fold seem appropriate in this case.
import torch
import torch.nn.functional as nnf
class ShufflePatches(object):
def __init__(self, patch_size):
self.ps = patch_size
def __call__(self, x):
# divide the batch of images into non-overlapping patches
u = nnf.unfold(x, kernel_size=self.ps, stride=self.ps, padding=0)
# permute the patches of each image in the batch
pu = torch.cat([b_[:, torch.randperm(b_.shape[-1])][None,...] for b_ in u], dim=0)
# fold the permuted patches back together
f = nnf.fold(pu, x.shape[-2:], kernel_size=self.ps, stride=self.ps, padding=0)
return f
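A usage sketch on a dummy batch; note that unfold expects a 4-D (batched) tensor, so if you use this inside transforms.Compose on single images you would need to add and remove a batch dimension first:
x = torch.rand(4, 3, 224, 224)
out = ShufflePatches(patch_size=(112, 112))(x)
print(out.shape)  # torch.Size([4, 3, 224, 224])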
Here's an example with patch size=16:
| https://stackoverflow.com/questions/66962837/ |
Understanding Loss Functions | I was following this pytorch tutorial on how to set up a neural network, but I don't understand the loss function, loss = (y_pred - y).pow(2).sum().item(). Why is this being used rather than the derivative of the function used to calculate the predicted y value? I also don't understand what that function returns.
| That function is the squared Euclidean (L2) distance: it returns the sum of the squared errors between the network output and the expected output.
As for the derivative of the function (or rather its gradient), it is computed internally by the deep learning framework you are using (PyTorch, I assume) and is what is needed to update the network parameters. For most use cases you do not need to think about it; its computation is fully automatic.
One note: if you call .item() on a tensor, you are extracting its raw value, i.e. what you get is no longer a tensor but just a number. This means that you cannot compute gradients from it (i.e. call .backward()).
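A tiny illustration of both points:
import torch
y_pred = torch.randn(4, requires_grad=True)
y = torch.randn(4)
loss = (y_pred - y).pow(2).sum()
loss.backward()     # autograd computes d(loss)/d(y_pred) = 2 * (y_pred - y)
print(y_pred.grad)  # the gradient tensor
print(loss.item())  # .item() extracts the raw Python number (no gradient info)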
| https://stackoverflow.com/questions/66964167/ |
CNN in pytorch "Expected 4-dimensional input for 4-dimensional weight [32, 1, 5, 5], but got 3-dimensional input of size [16, 64, 64] instead" | I am new to pytorch. I am trying to use chinese mnist dataset to train the neural network that shows in below code. Is that a problem of the neural network input or something else goes wrong in my code. I have tried many ways to fix it but instead it shows me other errors
train_df = chin_mnist_df.groupby('value').apply(lambda x: x.sample(700, random_state=SEED)).reset_index(drop=True)
x_train, y_train = train_df.iloc[:, :-2], train_df.iloc[:, -2]
valid_df = chin_mnist_df.groupby('value').apply(lambda x: x.sample(200, random_state=SEED)).reset_index(drop=True)
x_valid, y_valid = valid_df.iloc[:, :-2], valid_df.iloc[:, -2]
test_df = chin_mnist_df.groupby('value').apply(lambda x: x.sample(100, random_state=SEED)).reset_index(drop=True)
x_test, y_test = test_df.iloc[:, :-2], test_df.iloc[:, -2]
train_ds = Dataset(x_train, y_train)
train_dataloader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)
valid_ds = Dataset(x_valid, y_valid)
valid_dataloader = torch.utils.data.DataLoader(valid_ds, batch_size=16, shuffle=True)
test_ds = Dataset(x_test, y_test)
test_dataloader = torch.utils.data.DataLoader(test_ds, batch_size=16, shuffle=True)
# Convolutional neural network (two convolutional layers)
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.drop_out = nn.Dropout()
self.fc1 = nn.Linear(7 * 7 * 64, 1000)
self.fc2 = nn.Linear(1000, 15)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.drop_out(out)
out = self.fc1(out)
out = self.fc2(out)
return out
model = ConvNet()
klisi=[]
apoklisi=[]
apoklisi2=[]
klisi2=[]
olatalr=[]
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
total_step = len(train_dataloader)
loss_list = []
acc_list = []
for epoch in range(num_epochs):
for i,data in enumerate(train_dataloader):#(images, labels)
batch_inputs, batch_labels = data[0][:].to(device).type(torch.float), data[1][:].to(device)
# Run the forward pass
outputs = model(batch_inputs)
loss = criterion(outputs, batch_labels)
| Your training images are greyscale images. That is, they only have one channel (as opposed to the three RGB color channels in color images).
It seems like your Dataset (implicitly) "squeezes" this singleton dimension, and instead of having a batch of shape BxCxHxW = 16x1x64x64, you end up with a batch of shape 16x64x64.
Try:
# ...
batch_inputs, batch_labels = data[0][:].to(device).type(torch.float), data[1][:].to(device)
batch_inputs = batch_inputs[:, None, ...] # explicitly add the singleton channel dimension
# Run the forward pass
# ...
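As an aside (an assumption based on the 64x64 inputs shown in the question): once the channel dimension is fixed, the next failure will likely be a shape mismatch at fc1, since two 2x2 max-pools turn 64x64 feature maps into 16x16, not 7x7:
self.fc1 = nn.Linear(16 * 16 * 64, 1000)  # for 64x64 inputs, instead of 7 * 7 * 64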
| https://stackoverflow.com/questions/66969259/ |
How to differentiate a gradient in Pytorch | I'm trying to differentiate a gradient in PyTorch. I found this link but can't get it to work.
My code looks as follows:
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim
class net_x(nn.Module):
def __init__(self):
super(net_x, self).__init__()
self.fc1=nn.Linear(2, 20)
self.fc2=nn.Linear(20, 20)
self.out=nn.Linear(20, 4)
def forward(self, x):
x=self.fc1(x)
x=self.fc2(x)
x=self.out(x)
return x
nx = net_x()
r = torch.tensor([1.0,2.0])
nx(r)
>>>tensor([-0.2356, -0.7315, -0.2100, -0.6741], grad_fn=<AddBackward0>)
But when I try to differentiate the function with respect to the first parameter
grad(nx, r[0])
I get the error
TypeError: 'net_x' object is not iterable
Update
Trying to extend this to tensors:
For some reason the gradient is the same for all inputs.
a = torch.rand((8,2), requires_grad=True)
s = []
s_t = []
for input_tensor in a:
output_tensor = nx(input_tensor)
s.append(output_tensor[0])
s_t_value = grad(output_tensor[0], input_tensor)[0][0]
s_t.append(s_t_value)
print(s_t)
But the output is:
[tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326)]
| The first thing to change, if you want gradients with respect to r, is to set the requires_grad flag to True for this tensor:
nx = net_x()
r = torch.tensor([1.0,2.0], requires_grad=True)
Then, as explained in the autograd documentation, grad computes the gradients of outputs with respect to inputs, so you need to save the output of the model:
y = nx(r)
Now you can compute the gradients with respect to r. But there is one last issue: grad only knows how to propagate gradients from a scalar tensor, which y is not. So you need to compute the gradients of each coordinate separately:
for x in y:
print(grad(x, r, retain_graph=True))
or equivalently:
for i in range(y.shape[0]):
# prints the vector (dy_i/dr_0, dy_i/dr_1, ... dy_i/dr_n)
print(grad(y[i], r, retain_graph=True))
You need retain_graph because, without this flag, the computational graph is cleared after the first gradient propagation. And there you have it: the derivative of each coordinate of nx(r) with respect to r!
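As an aside, a sketch assuming a reasonably recent PyTorch version: the functional jacobian helper computes all these coordinate-wise gradients in a single call.
from torch.autograd.functional import jacobian
J = jacobian(nx, r)  # shape (4, 2): J[i, j] = d(nx(r)[i]) / d(r[j])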
To answer your question in the comments:
Not an error, it's normal. You have a batched input of size (B, 2), with B = 8, and you get a batched output of shape (B, 4). Now, for each vector of the batched output, and for each coordinate of this vector, you can compute the derivative with respect to the batched input, which will yield a gradient of size (B, 2), like this:
for b in y: # There a B vectors b of shape (4)
for x in b: # There are 4 coordinates
# This prints a tensor of shape (B, 2)
print(grad(x, r, retain_graph=True))
Now remember how batches work: all samples are computed together to harness the power of the GPU, but they are actually completely independent. So all the b vectors are results of the network on different inputs, which means the gradient of the i-th output vector b with respect to the j-th input vector must be 0 if i != j. Does that make sense? It's like computing f(x, y) = (x^2, y^2): the derivative of y^2 with respect to x is obviously 0. Consider x and y to be two samples from one batch, and you have your explanation for why there are so many zeros in your results.
A last code sample to make it even clearer:
inputs = [torch.randn(1, 2, requires_grad=True) for i in range(8)]
r = torch.cat(inputs) # shape : (8, 2)
y = nx(r) # shape : (8, 4)
for i in range(len(y)):
print(f"Gradients of y[{i}] wrt r[{i}]")
for x in y[i]:
# prints a tensor of size (2)
print(grad(x, inputs[i], retain_graph=True))
On to why all the gradients are the same: your neural network is completely linear. You have 3 nn.Linear layers and no non-linear activation function (as a consequence, this is literally equivalent to a network with only one layer). One property of linear layers is that their gradient is constant: d(alpha*x)/dx = alpha (independent of x). Therefore the gradients are identical for all inputs. Just add non-linear activation layers like sigmoids and this behavior will not happen again.
| https://stackoverflow.com/questions/66976313/ |
PyTorch ValueError: Target size (torch.Size([64])) must be the same as input size (torch.Size([15])) | I'm currently using this repo to perform NLP and learn more about CNNs using my own dataset, and I keep running into an error regarding a shape mismatch:
ValueError: Target size (torch.Size([64])) must be the same as input size (torch.Size([15]))
10 }
11 for epoch in tqdm(range(params['epochs'])):
---> 12 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
13 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
14 epoch_mins, epoch_secs = epoch_time(start_time, end_time)
57 print("PredictionShapeAfter:")
58 print(predictions.shape)
---> 59 loss = criterion(predictions, batch.l)
60
61 acc = binary_accuracy(predictions, batch.l)
Doing some digging, I found that my CNN's prediction is a different size compared to the training data truth it's being compared to:
Input Shape:
torch.Size([15, 64])
Truth Shape:
torch.Size([64])
embedded unsqueezed: torch.Size([15, 1, 64, 100])
cat shape: torch.Size([15, 300])
Prediction Shape Before Squeeze:
torch.Size([15, 1])
PredictionShapeAfter:
torch.Size([15])
The model is making the prediction shape (the last value in this list) as the first dimension of the inputs. Is this a common problem and is there a way to rectify this issue?
My Model:
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
embedded = self.embedding(text)
embedded = embedded.unsqueeze(1)
print(f"embedded unsqueezed: {embedded.shape}")
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
cat = self.dropout(torch.cat(pooled, dim = 1))
print(f"cat shape: {cat.shape}")
return self.fc(cat)
My Training function:
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
print("InputShape:")
print(batch.t.shape)
print("Truth Shape:")
print(batch.l.shape)
predictions = model(batch.t)
print("Prediction Shape Before Squeeze:")
print(predictions.shape)
predictions = predictions.squeeze(1)
print("PredictionShapeAfter:")
print(predictions.shape)
loss = criterion(predictions, batch.l)
acc = binary_accuracy(predictions, batch.l)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
My full code can be found at this link.
| Your issue is here:
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
You are inputting data of shape [15, 1, 64, 100], which the convolutions are interpreting as batches of size 15, of 1-channel images of HxW 64x100.
What it appears you want is a batch of size 64, so swap those dimensions first:
...
embedded = embedded.swapdims(0, 2)  # (15, 1, 64, 100) -> (64, 1, 15, 100): batch dimension first
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
...
| https://stackoverflow.com/questions/66979328/ |
The best way to implement stateful LSTM/ConvLSTM in Pytorch? | I am trying to boost the performance of an object detection task with sequential information, using ConvLSTM.
A typical ConvLSTM model takes a 5D tensor with shape (samples, time_steps, channels, rows, cols) as input.
As stated in this post, a long sequence of 500 images needs to be split into smaller fragments in the PyTorch ConvLSTM layer. For example, it could be split into 10 fragments, each having 50 time steps.
I have two goals:
I want the network to remember the state across the 10 fragment sequences, i.e., how do I pass the hidden state between the fragments?
I want to feed in the images (of the video) one by one, i.e., the long sequence of 500 images is split into 500 fragments, each having only one image. The input should then look like (all_samples, channels, rows, cols). This only makes sense if the first goal can be achieved.
I found some good answers for Tensorflow, but I am using Pytorch.
TensorFlow: Remember LSTM state for next batch (stateful LSTM)
The best way to pass the LSTM state between batches
What is the best way to implement stateful LSTM/ConvLSTM in Pytorch?
| I found that this post has a good example:
model = nn.LSTM(input_size = 20, hidden_size = h_size)
out1, (h1,c1) = model(x1)
out2, (h2,c2) = model(x2, (h1,c1))
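To make the state persist across fragments (your first goal), a common sketch is to carry the (h, c) tuple forward and detach it between fragments so the autograd graph does not grow without bound (fragments, model, criterion, and optimizer are placeholders):
state = None
for fragment in fragments:  # e.g. 10 fragments of 50 time steps each
    out, state = model(fragment, state)
    state = tuple(s.detach() for s in state)  # keep the values, cut the autograd graph
    loss = criterion(out, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()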
| https://stackoverflow.com/questions/66979507/ |
filter class/subfolder with pytorch ImageFolder | Here's my folder structure
image-folders/
├── class_0/
│   ├── 001.jpg
│   └── 002.jpg
├── class_1/
│   ├── 001.jpg
│   └── 002.jpg
└── class_2/
    ├── 001.jpg
    └── 002.jpg
By using ImageFolder from torchvision, I can create a dataset with this syntax:
dataset = ImageFolder("image-folders",...)
But this will read all the subfolders and create 3 target classes. I don't want to include the class_2 folder; I want my dataset to contain only class_0 and class_1. Is there any way to achieve this besides deleting/moving the class_2 folder?
| You can do this by using torch.utils.data.Subset of the original full ImageFolder dataset:
from torchvision.datasets import ImageFolder
from torch.utils.data import Subset
# construct the full dataset
dataset = ImageFolder("image-folders",...)
# select the indices of all samples that do not belong to class_2
idx = [i for i in range(len(dataset)) if dataset.imgs[i][1] != dataset.class_to_idx['class_2']]
# build the appropriate subset
subset = Subset(dataset, idx)
| https://stackoverflow.com/questions/66979537/ |
Target value with torch.nn.MultiLabelSoftMarginLoss should be 0 or -1? | I have a multi-label classification problem (A single sample can be classified as several classes at the same time).
I want to use torch.nn.MultiLabelSoftMarginLoss, but I got confused by the documentation, where the targets are described like this:
Target: (N, C), label targets padded by -1 ensuring same shape as the input.
Does that mean the target is in multi-hot form, but with the zeros replaced by -1?
Let's say I want to classify several attributes for object detection, such as: Man, Tall, Long hair.
My first image is a tall woman with long hair; does my target become 0 1 1 or -1 1 1? I can't fathom why -1 would be used instead of 0.
It's quite hard to find examples on the internet, since a lot of people mistake multi-label tasks for multi-class classification and keep using BCELoss.
| Look closer at the doc:
The targets are expected to be {0, 1} and not -1.
I'm not sure what this -1 is doing, it might be for "ignore", but you are correct that the doc there is not very clear.
There is an open issue on pytorch's github about this. Feel free to contribute.
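So for the attributes (Man, Tall, Long hair) and a tall woman with long hair, the target row is a multi-hot {0, 1} vector; a quick sketch:
import torch
import torch.nn as nn
target = torch.tensor([[0., 1., 1.]])  # not Man, Tall, Long hair
logits = torch.randn(1, 3)             # raw model outputs for the 3 attributes
loss = nn.MultiLabelSoftMarginLoss()(logits, target)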
| https://stackoverflow.com/questions/66979824/ |
How to add two separate layers on the top of one layer using pytorch? | I want to add two separate layers on top of one layer (or a pre-trained model). Is that possible to do using PyTorch?
| Yes, when defining your model's forward function, you can specify how the inputs should be passed through the layers.
For example:
def forward(self, X):
X = self.common_layer(X)
X = self.activation_fn(X)
Xa = self.layer_a(X)
Xb = self.layer_b(X)
# now combine outputs of the parallel layers however you please
    return self.combining_layer(torch.cat([Xa, Xb], dim=1))  # dim=1: concatenate along the feature dimension for batched inputs
Where forward is a member of MyNet:
class MyNet(nn.Module):
def __init__(self):
# define common_layer, activation_fn, layer_a, layer_b, and combining_layer
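For concreteness, a sketch of that constructor (all layer sizes here are made-up assumptions):
class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.common_layer = nn.Linear(10, 32)
        self.activation_fn = nn.ReLU()
        self.layer_a = nn.Linear(32, 16)
        self.layer_b = nn.Linear(32, 16)
        self.combining_layer = nn.Linear(16 + 16, 4)  # takes the concatenated outputs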
Training
The model should be trained just like any other PyTorch model.
Because the combined output goes through both layer_a and layer_b, computing the gradient of the loss will optimize the parameters of both layers. The layers are optimized separately, as their parameters are independent.
For example:
model = MyNet()
...
optimizer.zero_grad()
predictions = model(input_batch)
loss = my_loss_fn(predictions, ground_truth)
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/66983088/ |