instruction | input | output | source
---|---|---|---|
How can I convert a pytorch tensor of an image of the shape (batch, channels, height, width) to (batch, height, width)? | I want to turn a colored image into a grayscale image by removing the color channels. Is this even possible?
| You can use the Grayscale transform
https://pytorch.org/vision/stable/generated/torchvision.transforms.Grayscale.html
from torchvision.transforms import Grayscale
grayscale_batch = Grayscale()(color_batch)
This results in a (batch_size, 1, H, W) tensor.
To remove the singleton channel dimension, apply torch.squeeze()
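A minimal sketch of the full round trip (color_batch is assumed to be your (batch, 3, H, W) image tensor):
from torchvision.transforms import Grayscale

gray = Grayscale()(color_batch)  # (batch, 1, H, W)
gray = gray.squeeze(1)           # (batch, H, W)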
| https://stackoverflow.com/questions/74032115/ |
How to add a "model as a layer" in pytorch | Is there any way to use a "pre-trained model as a layer" in a custom net?
Pseudocode:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.model_layer = # ?
self.fc1 = nn.Linear(num_classes_of_model_layer, 320)
self.fc2 = nn.Linear(320, 160)
self.fc3 = nn.Linear(160, num_classes)
def forward(self, x):
x = # ?
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| Yes, you can absolutely use another model as part of your Module, since the other model is also a Module.
Do:
self.model_layer = pretrained_model
and run inference as usual with x = self.model_layer(x)
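A minimal sketch of the question's Net with a pretrained model plugged in (the torchvision ResNet-18 backbone and its 1000-feature output are illustrative assumptions, not part of the original answer):
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class Net(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.model_layer = models.resnet18(pretrained=True)  # any pretrained nn.Module works here
        self.fc1 = nn.Linear(1000, 320)  # 1000 = resnet18's default output size
        self.fc2 = nn.Linear(320, 160)
        self.fc3 = nn.Linear(160, num_classes)

    def forward(self, x):
        x = self.model_layer(x)  # the pretrained model runs like any other layer
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)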
| https://stackoverflow.com/questions/74032367/ |
How to plot a list of torch.tensors? "RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead." | I am learning PyTorch and was following a tutorial when I came across this error:
"RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead."
I am adding my losses to a list called final_losses
for i in range(epochs):
y_pred=model.forward(X_train)
loss=loss_function(y_pred,y_train)
final_losses.append(loss)
This is a simple ANN module having 2 fully connected layers and I use Relu function in them.
I am trying to print a graph of epochs vs loss:
plt.plot(range(epochs),final_losses)
plt.show()
When I execute this I am getting the above error.("RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.")
I have printed these variables for your reference:
Epochs is 150, the length of final_losses is 150, and final_losses is [tensor(1.5851, grad_fn=<NllLossBackward0>),...
I also tried doing this :
plt.plot(range(epochs),torch.detach(final_losses).numpy())
plt.show()
I am getting the following error:
TypeError: detach(): argument 'input' (position 1) must be Tensor, not list
Please let me know how to solve this.
Thank you!
| You have a list of tensors, rather than a tensor. Modify your initial code to store your losses as numpy arrays (or singular floats by taking the mean):
for i in range(epochs):
y_pred=model.forward(X_train)
loss=loss_function(y_pred,y_train)
final_losses.append(loss.detach().numpy())
If you are on GPU you will also need to move the tensor to the CPU first in that last line.
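A sketch of the GPU-safe variant of that append line:
final_losses.append(loss.detach().cpu().numpy())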
| https://stackoverflow.com/questions/74034144/ |
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0 | I cannot understand why this error keeps popping up.
I start by specifying the device variable:
if torch.cuda.is_available():
device = torch.device("cuda")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
else:
device = torch.device("cpu")
The model is the following
class CNN(nn.Module):
def __init__(self, initial_num_channels, num_channels):
'''
Args:
initial_num_channels (int): size of the incoming feature vector
num_classes (int): size of the output prediction vector
num_channels (int): constant channel size to use throughout network
'''
super(CNN, self).__init__()
self.convnet = nn.Sequential(
nn.Conv1d(in_channels=initial_num_channels,
out_channels=num_channels, kernel_size=3),
nn.ELU(),
nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
kernel_size=3, stride=2),
nn.ELU(),
nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
kernel_size=3, stride=2),
nn.ELU(),
nn.Conv1d(in_channels=num_channels, out_channels=num_channels,
kernel_size=3),
nn.ELU() )
def forward(self, x, apply_softmax=False):
"""The forward pass of the classifier
Args:
x (torch.Tensor): an input data tensor.
x.shape should be (batch, dataset._max_seq_length)
apply_softmax (bool): a flag for the softmax activation
should be false if used with the Cross Entropy losses
Returns:
the resulting tensor. tensor.shape should be (batch, num_classes)
"""
# input tensor: batch_size x channels x signal_length
x = self.convnet(x)
# average through maxpooling
x = F.avg_pool1d(x, kernel_size=3)
x = F.dropout(x, p=0.1)
# go back to 2 dimensions: batch_size x features
x = x.view(x.size(dim=0), -1)
# compute the input size of linear combination layer
num_features = x.size(dim=1)
# final linear comb layers
self.fc1 = nn.Linear(num_features, 100)
self.fc2 = nn.Linear(100, 2)
# mlp classifier
x = F.relu(F.dropout(self.fc1(x), p=0.1))
x = self.fc2(x)
if apply_softmax:
x = F.softmax(x, dim=1)
return x
Here I move it onto the GPU
epochs = 10
model = CNN(initial_num_channels=1, num_channels=256)
# model = MultilayerPerceptron(input_dim=MAX_LEN, hidden_dim=100, output_dim=2)
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
Finally I run the training loop
train_loss_set = []
executed = False
for epoch in range(1, epochs + 1):
train_loop = tqdm(train_dataloader)
model.train()
train_steps = 0
train_loss = 0
for (idx,(train_input, train_label)) in enumerate(train_loop):
# add dimension=1 in position 1 to have channels=1
# ONLY FOR CNN
train_input = torch.unsqueeze(train_input, 1)
train_input = train_input.clone().detach().requires_grad_(True).to(device)
train_label = train_label.clone().detach().to(device)
train_output = model(train_input)
loss = criterion(train_output, train_label)
loss.backward()
train_loss_set.append(loss.item())
optimizer.step()
model.zero_grad()
train_steps += 1
train_loss += loss.item()
train_loss_set.append(loss.item())
print("Train loss: {}".format(train_loss/train_steps))
The error is triggered during the forward pass of the model, hence I assume the problem is related to the input. I tried both the classic assignment input = input.to(device) and the "fancier" one (which you can see above).
| The problem is with the fc1 and fc2 layers which you create in your forward function. These are only created when the model is called on an input, and are not moved to the GPU. To fix this, you can use:
self.fc1 = nn.Linear(num_features, 100, device=x.device)
self.fc2 = nn.Linear(100, 2, device=x.device)
The definition of fc2 (whose input size is fixed) could also be moved to the __init__ function to avoid recreating it on every forward pass.
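As an alternative sketch (assuming a reasonably recent PyTorch), nn.LazyLinear infers its input size on the first forward pass, so both layers can be defined once in __init__ and get moved to the GPU together with the rest of the model:
def __init__(self, initial_num_channels, num_channels):
    super().__init__()
    # ... same self.convnet as in the question ...
    self.fc1 = nn.LazyLinear(100)  # in_features inferred on the first forward pass
    self.fc2 = nn.Linear(100, 2)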
| https://stackoverflow.com/questions/74038119/ |
Unwrapping the module did not yield a `LightningModule` | I am getting an error running the validate() function of PyTorch Lightning using the following code.
error:
ValueError: An invalid dataloader was passed to `Trainer.validate(dataloaders=...)`. Either pass the dataloader to the `.validate()` method OR implement `def val_dataloader(self):` in your LightningModule/LightningDataModule.
code:
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader
...
mnist_val = MNIST(root='data',train=False, download=True, transform=transform)
mnist_val_loader = DataLoader(mnist_val, batch_size=256, num_workers=4)
...
trainer.validate()
I used the data loader into the validate() function but I get the following error:
Unwrapping the module did not yield a `LightningModule`
| I solved it with newer versions of PyTorch Lightning by passing both the model and the data loader to trainer.validate():
trainer.validate(model, mnist_val_loader)
| https://stackoverflow.com/questions/74039692/ |
convert pytorch model to ONNX | How to convert a pytorch model to ONNX? I am trying to use this method on Python 3.7:
import torch
model = torch.load("./yolov7x.pt")
#torch.onnx.export(model, "yolo_v7x.onnx")
Even with the last of the three lines commented out, just loading the model errors out:
Traceback (most recent call last):
File "C:\Users\convert_onx.py", line 5, in <module>
model = torch.load("./yolov7x.pt")
File "C:\Users\Python37\lib\site-packages\torch\serialization.py", line 594, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "C:\Users\Python37\lib\site-packages\torch\serialization.py", line 853, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'
This is the git repo I am working with for the YOLOv7x model: https://github.com/WongKinYiu/yolov7
The ultimate use case is to run this model with Intel's OpenVINO toolkit, which requires PyTorch models to be converted to ONNX.
| When you are loading the pickled model, the source tree must match the one that was used when the model was saved. So
ModuleNotFoundError: No module named 'models'
expects this directory to be in your python path: https://github.com/WongKinYiu/yolov7/tree/main/models
To export to ONNX:
Clone the repo https://github.com/WongKinYiu/yolov7
git clone https://github.com/WongKinYiu/yolov7
Set the correct path to it.
import sys
sys.path.insert(0, './yolov7')
or you can set PYTHONPATH environment variable
Also, you may need a specific torch version; I've checked and torch==1.8.0 seems to work fine.
Example:
import torch
import sys
sys.path.insert(0, './yolov7')
device = torch.device('cpu')
model = torch.load('yolov7x.pt', map_location=device)['model'].float()
torch.onnx.export(model, torch.zeros((1, 3, 640, 640)), 'yolov7.onnx', opset_version=12)
After that the model was exported to ONNX (it can be visualized with netron).
Usually it is better to save weights as state_dict and keep the source code that can reconstruct the torch.nn.Module so then you can safely use:
model.load_state_dict(torch.load('weights.pt'))
| https://stackoverflow.com/questions/74041752/ |
Torch - How to calculate average of tensors with the same indexes | Suppose having two matrices: X(m, n) and index matrix I(m, 1). Every item in index matrix I_k represents the index of the kth element X_k in X.
And suppose the index is in the range of [0, 1, 2, ..., j-1]
I would like to calculate the average of tensors in X with the same index i and return a result matrix R(j, n).
For example,
X = [[1, 1, 1],
[2, 2, 2],
[3, 3, 3]]
I = [0, 0, 1]
The result matrix should be:
R = [[torch.mean(([1, 1, 1], [2, 2, 2]))],
[torch.mean(([3, 3, 3]))]]
which equals to:
R = [[1.5, 1.5, 1.5],
[3, 3, 3]]
My current solution is to traverse through m, stack the tensors with the same index and perform torch.mean.
Is there a way avoiding traversing through m? It seems not elegant and rather time-consuming.
| ret = torch.empty_like(X)
ret.scatter_reduce_(0, I.unsqueeze(-1).expand_as(X), X, "mean", include_self=False)
should do what you want.
Now, note that this is a fairly new method, so it may not be particularly performant. If you bump into an issue with it, you may be better off running scatter_add_ on the tensor X and on a tensor of ones, then dividing the two results.
If you also want a smaller (j, n) tensor as output, you need to figure out how many distinct indices there are and infer the size of the output from that.
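A sketch of that scatter_add_ fallback (assuming, as the question states, the indices cover [0, j-1]); it also yields the smaller (j, n) result directly:
I = I.long().view(-1)  # 1-D index vector of length m
j = int(I.max()) + 1
sums = torch.zeros(j, X.shape[1]).scatter_add_(0, I.unsqueeze(-1).expand_as(X), X.float())
counts = torch.zeros(j).scatter_add_(0, I, torch.ones_like(I, dtype=torch.float))
R = sums / counts.unsqueeze(-1)  # per-index means, shape (j, n)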
| https://stackoverflow.com/questions/74051369/ |
How do I finetune a model while preserving layer names | When I fine-tune a pretrained resnet152 model, I seem to lose all the named layers I'd like access to. I've included the simple fine-tuned model code, and the printout of the named layers of both the pretrained and fine-tuned models. I'd like to maintain the layer names so I can visualize their output in a Class Activation Map.
Code
class ConvNet3(nn.Module):
def __init__(self):
super().__init__()
model = models.resnet152(pretrained=True)
model.fc = nn.Linear(2048, 10)
self.model = model
def forward(self, x):
return self.model(x) # [batch_size, 10]
import torchvision.models as models
model = ConvNet3().eval()
print([n for n, _ in model.named_children()])
model = models.resnet152(pretrained=True).eval()
print([n for n, _ in model.named_children()])
Output
['model']
['conv1', 'bn1', 'relu', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'avgpool', 'fc']
| The layers are not lost; you are encapsulating the original ResNet model in your own class. You can reach them with:
print([n for n, _ in model.model.named_children()])
since the Resnet model is stored under the model attribute of the ConvNet3 class.
Unless you need it for another reason, the wrapper class seems unnecessary, a simpler approach would be to do something as follows:
model = models.resnet152(pretrained=True)
model.fc = nn.Linear(2048,10)
model.eval()
print([n for n, _ in model.named_children()])
| https://stackoverflow.com/questions/74058751/ |
torchaudio.io not properly using ffmpeg | I am following this tutorial about hardware-accelerated GPU encoding/decoding for PyTorch [https://pytorch.org/audio/main/hw_acceleration_tutorial.html], and I am encountering an error with the following code:
import torch
import torchaudio
print(torch.__version__) # 1.14.0.dev20221013+cu116
print(torchaudio.__version__) # 0.13.0.dev20221013+cu116
print(torchaudio._extension._FFMPEG_INITIALIZED) # True
from torchaudio.io import StreamReader
local_src = "vid.mp4"
cuda_conf = {
"decoder": "h264_cuvid", # Use CUDA HW decoder
"hw_accel": "cuda:0", # Then keep the memory on CUDA:0
}
def decode_vid(src, config):
frames = []
s = StreamReader(src)
s.add_video_stream(5, **config)
for i, (chunk,) in enumerate(s.stream()):
frames.append(chunk[0])
if __name__ == "__main__":
vid = decode_vid(local_src, cuda_conf)
The error message (somewhat truncated) is:
File
"/home/james/PycharmProjects/AlphaPose/Spectronix/Early_Experiments/vid_gpu_decode.py",
line 23, in decode_vid
s.add_video_stream(5, **config) File "/home/james/anaconda3/envs/alphapose/lib/python3.7/site-packages/torchaudio/io/_stream_reader.py",
line 624, in add_video_stream
hw_accel, RuntimeError: Unsupported codec: "h264_cuvid".
I have an RTX 3090 ti as my GPU, which does support the h264_cuvid decoder, and I have been able to decode a video on the command line running (taken from the tutorial linked above)
sudo ffmpeg -hide_banner -y -vsync 0 -hwaccel cuvid -hwaccel_output_format cuda -c:v h264_cuvid -i "https://download.pytorch.org/torchaudio/tutorial-assets/stream-api/NASAs_Most_Scientifically_Complex_Space_Observatory_Requires_Precision-MP4_small.mp4" -c:a copy -c:v h264_nvenc -b:v 5M test.mp4
So it seems torchaudio.io is not properly using ffmpeg. Any insights on how to fix this problem are much appreciated. I'm using Ubuntu 22.04.
| RuntimeError: Unsupported codec: "h264_cuvid".
The error happens here, and the StreamReader has not gotten to the point where it executes NVDEC-specific code, so this is a generic issue with FFmpeg compatibility.
This suggests that the libavcodec found at runtime is not configured with h264_cuvid.
A possible explanation is that there are multiple installations of FFmpeg in your system and torchaudio is picking up the one without NVDEC support, while when you invoke ffmpeg command, the one with NVDEC support is loaded.
Perhaps you can check your system and see if there are multiple FFmpeg installations and remove the ones without NVDEC support?
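A quick way to check from a shell (assuming Linux; these are standard commands, independent of torchaudio):
# list every ffmpeg binary on the PATH
which -a ffmpeg
# check whether the first one found supports the codec
ffmpeg -decoders 2>/dev/null | grep h264_cuvid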
| https://stackoverflow.com/questions/74063722/ |
How to accelerate array_t construction in pybind11 | I use C++ to call Python with PyTorch.
C++ generates a vector and sends it to Python for neural network inference.
But sending the vector is a time-consuming process.
A vector containing 500,000 floats takes 0.5 seconds to convert to array_t.
Is there a faster way to transfer a vector to array_t? Any help will be appreciated!
Here is the part of the code:
#include <pybind11/embed.h>
#include <pybind11/numpy.h>
#include <pybind11/stl.h> // needed for py::cast on std::vector
namespace py = pybind11;

int main(){
const int length = 500000;
float list[500000];
std::vector<float> v(list, list+length);
py::array_t<float> args = py::cast(v); // consumes 0.5 seconds
py::module_ nd_to_tensor = py::module_::import("inference");
py::object result = nd_to_tensor.attr("inference")(args);
}
I also tried the second way as below, but it takes 1.4 seconds in Python to turn the vector into a tensor:
PYBIND11_MAKE_OPAQUE(std::vector<float>);
PYBIND11_EMBEDDED_MODULE(vectorbind, m) {
m.doc() = "C++ type bindings created by py11bind";
py::bind_vector<std::vector<float>>(m, "Vector");
}
int main(){
std::vector<float> v(list, list+length);
py::module_ nd_to_tensor = py::module_::import("inference");
py::object result = nd_to_tensor.attr("inference")(&v);
}
Here is Python code:
def inference(vector):
tensor = torch.Tensor(vector)
| Problem solved:
py::array_t<float> args = py::array_t<float>({length}, {4}, &list[0]);
Directly initializing the array_t from the raw buffer is the best way; here {length} is the shape, {4} is the stride in bytes (sizeof(float)), and &list[0] is the data pointer.
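If even that copy matters, a zero-copy variant is possible (my addition, not part of the original answer): pass a base handle so pybind11 wraps the existing buffer instead of copying it; the C++ data must then outlive the Python array:
// a capsule with a no-op destructor; the caller keeps owning the memory
py::capsule no_free(&list[0], [](void *) {});
py::array_t<float> args({length}, {sizeof(float)}, &list[0], no_free);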
| https://stackoverflow.com/questions/74064332/ |
CUDA 11.6 not compatible with PyTorch 1.12.1 | The PyTorch website says that PyTorch 1.12.1 is compatible with CUDA 11.6, but I get the following error:
NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
I am using a laptop RTX 3060 and Poetry as my package manager in Python.
>>> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
>>> poetry show
certifi 2022.9.24 Python package for providing Mozilla's CA Bundle.
charset-normalizer 2.1.1 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.
idna 3.4 Internationalized Domain Names in Applications (IDNA)
numpy 1.23.4 NumPy is the fundamental package for array computing with Python.
opencv-contrib-python 4.6.0.66 Wrapper package for OpenCV python bindings.
opencv-python 4.6.0.66 Wrapper package for OpenCV python bindings.
pillow 9.2.0 Python Imaging Library (Fork)
requests 2.28.1 Python HTTP for Humans.
torch 1.12.1 Tensors and Dynamic neural networks in Python with strong GPU acceleration
torchvision 0.13.1 image and video datasets and models for torch deep learning
typing-extensions 4.4.0 Backported and Experimental Type Hints for Python 3.7+
urllib3 1.26.12 HTTP library with thread-safe connection pooling, file post, and more.
What am I missing here? Is this a PyTorch <> CUDA issue or a CUDA <> GPU issue?
|
NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not
compatible with the current PyTorch installation. The current PyTorch
install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
The build of PyTorch which you have installed doesn't have binary support for your GPU. This is because whoever built the PyTorch you are using chose to build it like that. This isn't a question of CUDA versions or PyTorch versions. It's just that many frameworks are built with a limited range of binary architectures in order to keep the size of the packages they distribute small.
NVIDIA provide a method to support forward compatible architectures running older code through JIT recompilation at runtime. Unfortunately the standard PyTorch build system doesn't use it in order to save space in their build distributions, so that cannot help you in this situation.
Your only solution is to either source another build with the appropriate binary support for your GPU included, or build PyTorch from source yourself.
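For reference, the official CUDA 11.6 wheels do include sm_86 binaries; a sketch of the install command from pytorch.org at the time (adapt the source configuration for Poetry as needed):
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116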
| https://stackoverflow.com/questions/74065922/ |
Nearest Neighbour difference in Numpy/PyTorch | I need to write a custom loss function in PyTorch, but owing to PyTorch's similarity to NumPy, a NumPy-based solution will also work.
I have two tensors (NumPy arrays) p and q of shape (b, ...). For each batch element of p I wish to compute the minimum absolute difference w.r.t. any batch element of q. Sample code is given below:
loss = 0
for outer in range(b):
tmp_min = 1e+5
for inner in range(b):
tmp_loss = torch.abs(p[outer,...] - q[inner,...]).sum() # np.abs(...).sum() in NumPy; summed so the comparison below is scalar
if tmp_loss<tmp_min:
tmp_min = tmp_loss
loss = loss + tmp_min
Since I will have to compute this loss many many times, is there a way to do it without FOR and IF statements?
Finally, I wish to compute this loss in both directions. So, is there any alternative other than repeating the above code with p and q swapped?
| You can add a singleton dimension to p to leverage broadcasting when doing p-q: (p[:, None, ...] - q[:, ...]).shape == (B, B, ...).
It would work as follows:
def loss_vec(p, q):
B = p.shape[0]
assert q.shape[0] == B
p = p.reshape(B, -1)
q = q.reshape(B, -1)
return (p[:, None, :] - q).abs().sum(axis=-1).min(axis=-1).values.sum()
def loss_op(p, q):
"""OP solution as oneliner"""
return torch.tensor([min([torch.abs(x - y).sum() for y in q]) for x in p]).sum()
B, K, M, N = 11, 3, 5, 7
p = torch.rand(B, K, M, N)
q = torch.rand(B, K, M, N)
value_op_pq = loss_op(p, q)
value_vec_pq = loss_vec(p, q)
assert value_op_pq == value_vec_pq
To compute in both directions, just change the axis=... when computing the min:
def loss_vec_bi(p, q):
"""Returns the loss in both directions"""
B = p.shape[0]
assert q.shape[0] == B
p = p.reshape(B, -1)
q = q.reshape(B, -1)
losses = (p[:, None, :] - q).abs().sum(axis=-1)
return losses.min(axis=-1).values.sum(), losses.min(axis=0).values.sum()
value_op_qp = loss_op(q, p)
value_vec_pq, value_vec_qp = loss_vec_bi(p, q)
assert value_op_pq == value_vec_pq
assert value_op_qp == value_vec_qp
| https://stackoverflow.com/questions/74076877/ |
How can I make torch.cat faster? | Below is a simplified version of what I want to do:
import torch
import time
# Create dummy tensors and save them in my_list
my_list = [[]] * 100
for i in range(len(my_list)):
my_list[i] = torch.randint(0, 1000000000, (100000, 256))
concat_list = torch.tensor([])
# I want to concat two consecutive tensors in my_list
tic = time.time()
for i in range(0, len(my_list), 2):
concat_list = torch.cat((concat_list, my_list[i]))
concat_list = torch.cat((concat_list, my_list[i+1]))
# Do some work at CPU with concat_list
concat_list = torch.tensor([]) # Empty concat_list
print('time: ', time.time() - tic) # It takes 3.5 seconds in my environment
Is there any way to make above tensor concatenation faster?
I tried to send my_list[i], my_list[i+1], and concat_list to GPU and do the torch.cat function in the device, but I then have to send concat_list back to CPU to do "some work" that I've written above. This takes more time due to frequent GPU-CPU data transfer.
I've also tested converting tensors to lists to do the concatenation with basic Python lists, but this approach was way slower than a simple torch.cat approach.
I've heard that using DataLoader with customized collate_fn can enable concatenation, but I don't know how to implement it.
Is there any faster method possible?
| Your code takes around 11 seconds on my PC. The following takes 4.1 seconds:
# Create dummy tensors and save them in my_list
my_list = [[]] * 100
for i in range(len(my_list)):
my_list[i] = torch.randint(0, 1000000000, (100000, 256))
tic = time.time()
my_list = torch.stack(my_list)
# I want to concat two consecutive tensors in my_list
for i in range(0, len(my_list), 2):
concat_list = my_list[i:i+2]
# Do some work at CPU with concat_list
print('time: ', time.time() - tic)  # ~4.1 seconds on my PC
This is faster because my_list[i:i+2] is just a view into the stacked tensor; nothing is copied, whereas torch.cat allocates a new, ever-growing tensor and copies all data on every call.
| https://stackoverflow.com/questions/74084524/ |
How can I launch chenrocks/uniter with NVIDIA GeForce RTX 3090 GPU? | docker run --gpus '"'device=$CUDA_VISIBLE_DEVICES'"' --ipc=host --rm -it \
--mount src=$(pwd),dst=/src,type=bind \
--mount src=$OUTPUT,dst=/storage,type=bind \
--mount src=$PRETRAIN_DIR,dst=/pretrain,type=bind,readonly \
--mount src=$TXT_DB,dst=/txt,type=bind,readonly \
--mount src=$IMG_DIR,dst=/img,type=bind,readonly \
-e NVIDIA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
-w /src chenrocks/uniter
When I run this file, it prints this error:
NVIDIA Release 19.05 (build 6411784) PyTorch Version 1.1.0a0+828a6a3
...
WARNING: Detected NVIDIA NVIDIA GeForce RTX 3090 GPU, which is not yet supported in this version of the container
ERROR: No supported GPU(s) detected to run this container
The container doesn't support my NVIDIA GeForce RTX 3090 GPU, so I want to change the version to 22.05.
But when I run this,
docker run --gpus '"'device=$CUDA_VISIBLE_DEVICES'"' --ipc=host --rm
-it nvcr.io/nvidia/pytorch:22.05-py3 \
--mount src=$(pwd),dst=/src,type=bind \
--mount src=$OUTPUT,dst=/storage,type=bind \
--mount src=$PRETRAIN_DIR,dst=/pretrain,type=bind,readonly \
--mount src=$TXT_DB,dst=/txt,type=bind,readonly \
--mount src=$IMG_DIR,dst=/img,type=bind,readonly \
-e NVIDIA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
-w /src chenrocks/uniter
it prints this error:
/opt/nvidia/nvidia_entrypoint.sh: line 49: exec: --: invalid option
exec: usage: exec [-cl] [-a name] [command [arguments ...]] [redirection ...]
I'd really appreciate it if you could tell me how to change the version.
| Your second docker run command specifies 2 images:
docker run ... nvcr.io/nvidia/pytorch:22.05-py3 chenrocks/uniter
You can only pass one.
Also note the general format of the docker run command:
$ docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
UPDATE
But if the docker image's nvidia/pytorch version doesn't fit my GPU, can I not use that docker image? Or is there something I can do?
You could try editing the Dockerfile of the referenced project to build a custom docker image.
Something like this:
git clone https://github.com/ChenRocks/UNITER.git
cd UNITER
# replace the first line of the Dockerfile with:
# FROM nvcr.io/nvidia/pytorch:22.05-py3
docker build .
# ...
# Successfully built <image_id>
Then simply edit your docker run command to use your custom built image:
docker run ... <image_id>
It looks like that at least one other person had a similar issue. Unfortunately the project is not actively maintained so it's difficult to get any kind of support when trying to make it work with the latest hardware.
| https://stackoverflow.com/questions/74086931/ |
PackagesNotFoundError When Trying to Install intel_extension_for_pytorch | I am trying to conda install intel_extension_for_pytorch but I keep getting the following error in the command line:
PackagesNotFoundError: The following packages are not available from current channels:
intel_extension_for_pytorch
this is the command that I am using
conda install intel_extension_for_pytorch
edit:
System Info:
Microsoft Windows [Version 10.0.19044.2006]
Processor 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz, 2995 Mhz, 4 Core(s), 8 Logical Processor(s)
| Currently, the Intel Extension for PyTorch is only supported on Linux. Try a recent Linux version; it should work there.
Check out the docs for more info: https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html
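On Linux, the documented install route is pip (a sketch; pick the build matching your torch version per the docs above):
python -m pip install intel_extension_for_pytorch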
| https://stackoverflow.com/questions/74104208/ |
How do you specify the bfloat16 mixed precision with the Intel Extension for PyTorch? | I would like to know how to use mixed precision with PyTorch and Intel Extension for PyTorch.
I have tried to look at the documentation on their GitHub, but I can't find anything that specifies how to go from fp32 to bfloat16.
| The IPEX GitHub might not be the best place to look for API documentation. I would try and use the PyTorch IPEX page, which includes examples of API applications.
This would be an example of how to use fp32
model, optimizer = ipex.optimize(model, optimizer, dtype=torch.float32)
This would be an example of how to use bfloat16
model, optimizer = ipex.optimize(model, optimizer, dtype=torch.bfloat16)
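A fuller inference sketch (the autocast context, model, and input_tensor are my assumptions, not from the snippet above); torch.cpu.amp.autocast performs the bfloat16 casts at runtime:
import torch
import intel_extension_for_pytorch as ipex

model = model.eval()
model = ipex.optimize(model, dtype=torch.bfloat16)
with torch.no_grad(), torch.cpu.amp.autocast():
    output = model(input_tensor)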
| https://stackoverflow.com/questions/74105061/ |
Aggregation by MLP for GIN and GCN: What is the difference? | I saw the following procedure for GIN in this link
and the code for a GIN layer is written like this:
self.conv1 = GINConv(Sequential(Linear(num_node_features,dim_h),
BatchNorm1d(dim_h),ReLU(),
Linear(dim_h,dim_h),ReLU()))
Is this an aggregation function inside the Sequential(....) or a pooling function?
Sequential(Linear(num_node_features,dim_h),
BatchNorm1d(dim_h),ReLU(),
Linear(dim_h,dim_h),ReLU()))
Can I do the same thing for GCN layer?
self.conv1 = GCNConv(Sequential(Linear(num_node_features,dim_h),
BatchNorm1d(dim_h),ReLU(),
Linear(dim_h,dim_h),ReLU()))
self.conv2 = GCNConv(Sequential(Linear(dim_h,dim_h),
BatchNorm1d(dim_h),ReLU(),
Linear(dim_h,dim_h),ReLU()))
I get the following error:
---> 15 self.conv1 = GCNConv(Sequential(Linear(num_node_features,dim_h),
16 BatchNorm1d(dim_h),ReLU(),
17 Linear(dim_h,dim_h),ReLU()))
18 self.conv2 = GCNConv(Sequential(Linear(dim_h,dim_h),
19 BatchNorm1d(dim_h),ReLU(),
20 Linear(dim_h,dim_h),ReLU()))
21 self.conv3 = GCNConv(Sequential(Linear(dim_h,dim_h),
22 BatchNorm1d(dim_h),ReLU(),
23 Linear(dim_h,dim_h),ReLU()))
TypeError: GCNConv.__init__() missing 1 required positional argument: 'out_channels'
| You can see GINConv and GCNConv API from torch_geometric.
GINConv()
It has an nn argument, e.g. a network defined by torch.nn.Sequential. That is why the tutorial you mentioned above can pass a Sequential() there.
GCNConv()
But GCNConv() does not have an nn argument; it expects in_channels and out_channels instead, which is why your call fails.
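A minimal sketch of the GCN equivalent, reusing the sizes from your code (GCNConv learns its own weights, so the MLP is simply dropped):
self.conv1 = GCNConv(num_node_features, dim_h)
self.conv2 = GCNConv(dim_h, dim_h)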
When you're unsure about a method, looking it up in the API docs is a good way to solve such issues :)
| https://stackoverflow.com/questions/74106965/ |
Iterating through all subsets of a 3d array | Assuming I have created an array by using the code below:
import numpy as np
array = np.random.randint(0, 255, (3,4,4))
>>> array
array([[[ 72, 11, 158, 252],
[160, 50, 131, 174],
[245, 127, 99, 6],
[152, 25, 58, 96]],
[[ 29, 37, 211, 215],
[195, 72, 186, 33],
[ 12, 68, 44, 241],
[ 95, 184, 188, 176]],
[[238, 90, 177, 15],
[ 48, 221, 41, 236],
[ 86, 14, 130, 192],
[ 64, 17, 44, 251]]])
This creates a 3D array of 4x4 matrices. I would like to go through all possible adjacent 2x2 sub-matrices. For example, I would like to get the following matrices from the first matrix:
[[72,11],
[160, 50]]
[[11,158],
[50, 131]]
[[160,50],
[245,127]]
etc...
Is there a built-in numpy\pytorch method I can use or do I have to implement this iteration?
| Use sliding_window_view to simply generate the entire matrix:
>>> np.lib.stride_tricks.sliding_window_view(array, (1, 2, 2)).reshape(-1, 2, 2)
array([[[ 72, 11],
[160, 50]],
[[ 11, 158],
[ 50, 131]],
[[158, 252],
[131, 174]],
...
[[ 14, 130],
[ 17, 44]],
[[130, 192],
[ 44, 251]]])
Note that the data is copied in the final reshape step, so the memory consumption for large arrays may be unacceptable. If you just want to iterate over each sub-array, you can reshape the original array to generate a view that is easier to iterate over:
>>> from itertools import chain
>>> view = np.lib.stride_tricks.sliding_window_view(
... array.reshape(-1, array.shape[-1]), (2, 2)
... )
>>> for ar in chain.from_iterable(view):
... print(ar)
...
[[ 72 11]
[160 50]]
[[ 11 158]
[ 50 131]]
[[158 252]
[131 174]]
...
[[ 14 130]
[ 17 44]]
[[130 192]
[ 44 251]]
The above method has a flaw: it also produces windows that span the boundary between two 2D sub-arrays (such as [[152, 25], [29, 37]]). The alternative is to skip the reshape and iterate with nested loops instead.
| https://stackoverflow.com/questions/74107344/ |
PyTorch RetinaNet train model inputs | I have model = torchvision.models.detection.retinanet_resnet50_fpn_v2(progress=True) and would like to train it on custom data. To get the loss, I have to exececute
classification_loss, regression_loss = model(images, targets)
I have create a batch tensor for images, but for the life of me, cannot find how I am supposed to format targets for object detection... Each target has a bounding box and a class label.
| check this official tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
In general, targets is a list of dicts, one per image, e.g.:
targets = [
{
"boxes": torch.as_tensor([[xmin, ymin, xmax, ymax]], dtype=torch.float32),
"labels": torch.as_tensor([1,], dtype=torch.int64)
}
]
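As a follow-up sketch: in training mode the torchvision detection models return a dict of losses rather than a tuple (the RetinaNet keys are 'classification' and 'bbox_regression'), so the training step looks like:
model.train()
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()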
| https://stackoverflow.com/questions/74107599/ |
How to preserve dataset order when using DDP in pytorch lightning? | I need to be able to preserve the order in which the data is fed to the model when training in multiple GPUS.
According to https://github.com/Lightning-AI/lightning/discussions/13342 each GPU gets a consecutive fraction of the dataset, so if I have 2GPUs, the first one will get the first half of the dataset and the other one will get the second half of the dataset.
I need to preserve the order and don't know how to overwrite the dataset-splitting logic. Any advice?
| I got an answer here
https://github.com/Lightning-AI/lightning/discussions/15164
which is to basically write a custom DistributedSampler and pass it to the dataloader and set Trainer(replace_sampler_ddp=False)
My code is something like this
def train_dataloader(self):
"""returns a dataloader for training according to hparams
Returns:
DataLoader: DataLoader ready to deliver samples for training
"""
# define a distributed sampler in case we are using multiple GPUs
if self.hparams.num_gpus>1:
sampler = torch.utils.data.distributed.DistributedSampler(
self.train_dataset, shuffle=False)
# only use the sampler if using multiple GPUs
return DataLoader(
self.train_dataset,
shuffle=False,
num_workers=self.hparams.num_workers,
batch_size=self.hparams.batch_size,
pin_memory=False,
sampler=sampler if self.hparams.num_gpus > 1 else None)
| https://stackoverflow.com/questions/74108758/ |
Modify the rows of a tensor at specific indices given by a list (Pytorch) | I have a tensor X with shape (N,M) and a list of indices idx_list.
I would like to modify X but only at the rows given by idx_list.
So I'd like to do something like this:
X[idx_list, :] = Y
Y is a tensor with shape (len(idx_list), M)
| The solution is mentioned in your question. The usual slicing notation that's used in numpy arrays works well with PyTorch tensors as well.
X[idx_list, :] = Y
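A quick runnable example of that slicing (the concrete values are just for illustration):
import torch
X = torch.zeros(5, 3)
idx_list = [0, 2, 4]
Y = torch.ones(len(idx_list), 3)
X[idx_list, :] = Y  # rows 0, 2 and 4 are now all ones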
Your approach posted as answer would work too
X[(torch.tensor(idx_list),)] = Y
However, you do not need to complicate it like this by converting the idx_list into a tensor. Slicing is preferably done using standard Python lists.
| https://stackoverflow.com/questions/74109293/ |
From a 2D tensor, return a 1D tensor by selecting 1 column per row | I have a 2D tensor and a 1D tensor:
import torch
torch.manual_seed(0)
out = torch.randn((16,2))
target = torch.tensor([0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0])
For each row of out, I want to select the corresponding column as indexed by target. Thus, my output will be a (16,1) tensor. I tried the solution mentioned here:
https://stackoverflow.com/a/58937071
But I get:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3369, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-7-50d103c3b56c>", line 1, in <cell line: 1>
out.gather(1, target)
RuntimeError: Index tensor must have the same number of dimensions as input tensor
Can you help?
| In order to apply torch.gather, the two tensors must have the same number of dimensions. As such, you should unsqueeze an additional dimension onto target in the last position:
>>> out.gather(1, target[:,None])
tensor([[-1.1258],
[-0.4339],
[ 0.6920],
[-2.1152],
[ 0.3223],
[ 0.3500],
[ 1.2377],
[ 1.1168],
[-1.6959],
[ 0.7935],
[ 0.5988],
[-0.3414],
[ 0.7502],
[ 0.1835],
[ 1.5863],
[ 0.9463]])
| https://stackoverflow.com/questions/74111502/ |
Derivative of Neural Network in Pytorch | I have implemented and trained a neural network in PyTorch; however, I am interested in the derivative of the neural network parameters with respect to the input.
I have extensively searched for any procedure that would allow evaluating the derivative of the weights with respect to a given input, but I did not find anything.
I know that I can compute the gradients of a function in the following way.
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)
But how would I do that with a trained neural network instead of a function Q?
Thanks in advance.
#!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
from scipy.stats import norm
from numpy import linalg as la
import numpy.random as npr
from tabulate import tabulate
from matplotlib import pyplot as plt
import random
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
#from torchvision import datasets, transforms
from torch.autograd import Variable
# In[2]:
from torch import optim
# In[3]:
nSimul = 32768
T1 = 1.0
T2 = 2.0
K = 110.0
spot = 100.0
vol = 0.2
vol0 = 0.5 # vol is increased over the 1st period so we have more points in the wings
# simulate all Gaussian returns (N1, N2) first
# returns: matrix of shape [nSimul, TimeSteps=2]
returns = np.random.normal(size=[nSimul,2])
# generate paths, step by step, and not path by path as customary
# this is to avoid slow Python loops, using NumPy's optimized vector functions instead
# generate the vector of all scenarios for S1, of shape [nSimul]
S1 = spot * np.exp(-0.5*vol0*vol0*T1 + vol0*np.sqrt(T1)*returns[:,0])
# generate the vector of all scenarios for S2, of shape [nSimul]
S2 = S1 * np.exp(-0.5*vol*vol*(T2-T1) + vol*np.sqrt(T2-T1)*returns[:,1])
# training set, X and Y are both vectors of shape [nSimul]
X = S1
Y = np.maximum(0, S2 - K)
xAxis = np.linspace(20, 200, 100)
xAxis=xAxis.reshape(-1,1)
# In[4]:
#Normalization of the simulated data:
meanX = np.mean(X)
stdX = np.std(X)
meanY = np.mean(Y)
stdY = np.std(Y)
normX = (X - meanX) / stdX
normY = (Y - meanY) / stdY
normX=normX.reshape(-1,1)
normY=normY.reshape(-1,1)
# In[5]:
class NeuralNetwork(nn.Module):
def __init__(self,inputsize,outputsize):
super(NeuralNetwork, self).__init__()
#self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(inputsize,3),
nn.ELU(),
nn.Linear(3, 5),
nn.ELU(),
nn.Linear(5,3),
nn.ELU(),
nn.Linear(3,outputsize),
)
w = torch.empty(0,1)
nn.init.normal_(w)
def forward(self, x):
#x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
# In[6]:
inputDim = 1 # takes variable 'x'
outputDim = 1 # takes variable 'y'
learningRate = 0.05
epochs = 10000
#weight=torch.empty(3)
model = NeuralNetwork(inputDim, outputDim)
##### For GPU #######
if torch.cuda.is_available():
model.cuda()
# In[7]:
#criterion = torch.nn.MSELoss()
#optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
# In[ ]:
def ridge_loss(outputs,labels):
return torch.mean((outputs-labels)**2)
# In[ ]:
# In[9]:
#Adam optmization
criterion = torch.nn.MSELoss()
#optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam(model.parameters(), lr=0.05)
# In[10]:
for epoch in range(epochs):
# Converting inputs and labels to Variable
if torch.cuda.is_available():
inputs = Variable(torch.from_numpy(normX).cuda().float())
labels = Variable(torch.from_numpy(normY).cuda().float())
else:
inputs = Variable(torch.from_numpy(normX).float())
labels = Variable(torch.from_numpy(normY).float())
# Clear gradient buffers because we don't want any gradient from previous epoch to carry forward, dont want to cummulate gradients
optimizer.zero_grad()
# get output from the model, given the inputs
outputs = model(inputs)
# get loss for the predicted output
loss = criterion(outputs, labels)
print(loss)
# get gradients w.r.t to parameters
loss.backward()
# update parameters
optimizer.step()
print('epoch {}, loss {}'.format(epoch, loss.item()))
# In[11]:
def predict(xs):
# first, normalize
nxs = (xs - meanX) / stdX
# forward feed through ANN
# we don't need gradients in the testing phase
with torch.no_grad():
if torch.cuda.is_available():
nys = model(Variable(torch.from_numpy(nxs.reshape(-1,1)).cuda().float())).cpu().data.numpy()
else:
nys = model(Variable(torch.from_numpy(nxs.reshape(-1,1))).float()).data.numpy()
# de-normalize output
ys = meanY + stdY * nys
# we get a matrix of shape [size of xs][1], which we reshape as vector [size of xs]
return np.reshape(ys, [-1])
# In[13]:
def BlackScholes(S0,r,sigma,T,K):
d1 = 1 / (sigma * np.sqrt(T)) * (np.log(S0/K) + (r+sigma**2/2)*T)
d2 = d1 - sigma * np.sqrt(T)
return norm.cdf(d1) * S0 - norm.cdf(d2) * K * np.exp(-r*T)
def BlackScholesCallDelta(S0,r,sigma,T,K):
d1 = 1 / (sigma * np.sqrt(T)) * (np.log(S0/K) + (r+sigma**2/2)*T)
return norm.cdf(d1)
BlackScholes_vec=np.vectorize(BlackScholes)
BlackScholesCallDelta_vec=np.vectorize(BlackScholesCallDelta)
# In[14]:
BS_price=BS_prices=BlackScholes_vec(S0=xAxis,r=0,sigma=0.2,T=1.0,K=110.0)
predicted=predict(xAxis)
S1=1
#line_learn = plt.plot(Sval,y,label="Deep Neural Net")
line_learn = plt.plot(xAxis,predicted,label="Neural Regression")
line_BS = plt.plot(xAxis,BS_price, label="Black-Scholes")
plt.xlabel("Spot Price")
plt.ylabel("Option Price")
#plt.title(r'Time: %1.1f' % time, loc='left', fontsize=11)
plt.title(r'Strike: %1.2f' % K, loc='right', fontsize=11)
plt.title(r'Initial price: %1.2f' % S1, loc='center', fontsize=11)
plt.legend()
plt.show()
#plt.savefig("deephedge.png", dpi=150)
plt.savefig("deephedge.pdf")
# In[15]:
Prices_rg_mc_diff=[]
for i in range(len(xAxis)-1):
delta=(predicted[i+1]-predicted[i])/(xAxis[i+1]-xAxis[i])
Prices_rg_mc_diff.append(delta)
# In[16]:
BS_delta=BlackScholesCallDelta(S0=xAxis,r=0,sigma=0.2,T=1.0,K=110.0)
predicted=predict(xAxis)
S1=1
#line_learn = plt.plot(Sval,y,label="Deep Neural Net")
line_learn = plt.plot(xAxis[1:],Prices_rg_mc_diff,label="Neural Regression")
line_BS = plt.plot(xAxis[1:],BS_delta[1:], label="Black-Scholes")
plt.xlabel("Spot Price")
plt.ylabel("Option Price")
#plt.title(r'Time: %1.1f' % time, loc='left', fontsize=11)
plt.title(r'Strike: %1.2f' % K, loc='right', fontsize=11)
plt.title(r'Initial price: %1.2f' % S1, loc='center', fontsize=11)
plt.legend()
plt.show()
#plt.savefig("deephedge.png", dpi=150)
plt.savefig("deephedge.pdf")
# In[17]:
model.backward(retain_graph=True)
# In[ ]:
print(NeuralNetwork.weight.grad)
# In[21]:
c3=torch.from_numpy((predicted.reshape(-1,1)), requires_grad=True)
c4=torch.from_numpy(xAxis, requires_grad=True)
#c5=torch.Tensor(c3)
#c6=torch.Tensor(c4)
loss = criterion(c3,c4) # calculating loss
loss.backward()
# In[28]:
torch.tensor(predicted.reshape(-1,1), requires_grad=True)
torch.tensor(xAxis, requires_grad=True)
criterion(torch.tensor(predicted.reshape(-1,1), requires_grad=True),torch.tensor(xAxis, requires_grad=True))
loss.backward()
| You need to explicitly set requires_grad=True when creating a tensor. And to calculate a gradient you first need to apply some operation to the tensor.
Here is an example:
import torch
x = torch.rand(2, 2, requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()
print(x.grad)
Output:
tensor([[3.3720, 3.4302],
[3.4030, 3.3605]])
In this way you are using torch.autograd to calculate the gradient for tensor x. See autograd for more.
And for neural network you can simply use the network and backward it afterward.
A neural network Example:
import torch
import torch.nn as nn
import torch.nn.functional as f
x = torch.rand(2, 2)
# define a neural network
network = nn.Sequential(
nn.Linear(2,100),
nn.Linear(100,2)
)
pred = network(x)
loss = f.l1_loss(pred, x) # calculating loss (F provides l1_loss, i.e. MAE)
loss.backward()
# Update weights with gradients (in-place, outside autograd tracking)
with torch.no_grad():
    network[0].weight -= 0.1 * network[0].weight.grad
    network[1].weight -= 0.1 * network[1].weight.grad
Note: I didn't put any activation function in the network for the sake of simplicity.
Example of backward() using torch.nn.MSELoss():
import torch
from torch.nn import MSELoss
criterion = MSELoss()
a = torch.tensor([1.,2.], requires_grad=True)
b = a**2
loss = criterion(b, a)
loss.backward()
print(a.grad)
Output:
tensor([0., 6.])
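Applied to the trained network from the question, a sketch of getting the gradient of the outputs with respect to the inputs (names follow the question's code; assumes the model is on CPU, otherwise move the tensor with .cuda() as in the question):
inputs = torch.from_numpy(normX).float().requires_grad_(True)
outputs = model(inputs)
outputs.sum().backward()
print(inputs.grad)  # d(sum of outputs)/d(inputs), one row per sample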
| https://stackoverflow.com/questions/74114496/ |
object of type 'ESC50Data' has no len() in my audio classification script | so I'm running into the error that my class ESC50Data does not have any length.
from torch.utils.data import Dataset, DataLoader
class ESC50Data(Dataset):
def __init__(self, base, df, in_col, out_col):
self.df = df
self.data = []
self.labels = []
self.c2i={}
self.i2c={}
self.categories = sorted(df[out_col].unique())
for i, category in enumerate(self.categories):
self.c2i[category]=i
self.i2c[i]=category
for ind in tqdm(range(len(df))):
row = df.iloc[ind]
file_path = os.path.join(base,row[in_col])
self.data.append(spec_to_image(get_melspectrogram(file_path))[np.newaxis,...])
self.labels.append(self.c2i[row['category']])
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx], self.labels[idx]
train_data = ESC50Data('audio', train, 'filename', 'category')
valid_data = ESC50Data('audio', valid, 'filename', 'category')
train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=16, shuffle=True)
This is the point at which I get my error. I'm using Jupyter Notebooks, as a side note.
TypeError Traceback (most recent call last)
Input In [47], in <cell line: 1>()
----> 1 train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
2 valid_loader = DataLoader(valid_data, batch_size=16, shuffle=True)
File ~/opt/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py:353, in DataLoader.__init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn, multiprocessing_context, generator, prefetch_factor, persistent_workers, pin_memory_device)
351 else: # map-style
352 if shuffle:
--> 353 sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]
354 else:
355 sampler = SequentialSampler(dataset) # type: ignore[arg-type]
File ~/opt/anaconda3/lib/python3.9/site-packages/torch/utils/data/sampler.py:106, in RandomSampler.__init__(self, data_source, replacement, num_samples, generator)
102 if not isinstance(self.replacement, bool):
103 raise TypeError("replacement should be a boolean value, but got "
104 "replacement={}".format(self.replacement))
--> 106 if not isinstance(self.num_samples, int) or self.num_samples <= 0:
107 raise ValueError("num_samples should be a positive integer "
108 "value, but got num_samples={}".format(self.num_samples))
File ~/opt/anaconda3/lib/python3.9/site-packages/torch/utils/data/sampler.py:114, in RandomSampler.num_samples(self)
110 @property
111 def num_samples(self) -> int:
112 # dataset size might change at runtime
113 if self._num_samples is None:
--> 114 return len(self.data_source)
115 return self._num_samples
TypeError: object of type 'ESC50Data' has no len()
Any ideas as to what could be happening? I created the class ESC50Data as a child class of Dataset, so that it inherits Dataset's properties. I also loaded the train and valid data into PyTorch.
| Check the indentation of __len__(self) and __getitem__(self, idx) methods in your class ESC50Data code. Right now, it seems like these methods are defined inside the __init__ method, and not under the class itself.
See, e.g., this answer.
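A minimal sketch of the expected layout, with __len__ and __getitem__ indented at class level rather than inside __init__:
class ESC50Data(Dataset):
    def __init__(self, base, df, in_col, out_col):
        ...  # build self.data and self.labels as in the question

    def __len__(self):  # defined on the class, so len(dataset) works
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]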
| https://stackoverflow.com/questions/74117946/ |
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 1 | I have read other people's questions for similar issues, but can't figure it out in my case. My code is below, how do I fix this? Thank you.
data = ImageFolder(data_dir, transform=transforms.Compose([transforms.Resize((224,224)),transforms.ToTensor()]))
trainloader = torch.utils.data.DataLoader(data, batch_size=3600,
shuffle=True, num_workers=2)
dataiter = iter(trainloader)
x_train, y_train = dataiter.next()
print(x_train.size())
print(y_train.size())
torch.Size([3600, 3, 224, 224])
torch.Size([3600])
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
# here we set up the tensors......
self.layer1 = torch.nn.Linear(224, 12)
self.layer2 = torch.nn.Linear(12, 10)
def forward(self, x):
# here we define the (forward) computational graph,
# in terms of the tensors, and elt-wise non-linearities
x = F.relu(self.layer1(x))
x = self.layer2(x)
return x
net = Net()
y = net.forward(x_train)
lossFn = torch.nn.CrossEntropyLoss()
loss = lossFn(y, y_train)
print(loss)
| Your input to the network is a 2D image. That is a tensor with 4 dimensions: batch, channel, height and width.
However, you treat the 2D input as a 1D signal by applying nn.Linear layers to its width dimension only, resulting in an output of shape batch x channel x height x output_dim. In contrast, nn.CrossEntropyLoss expects only one output vector per target label.
You need to change your Net to properly process images into a single vector of predictions.
You can check out milestone image classification architectures here.
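A minimal fix sketch, keeping the plain MLP approach: flatten each image to one vector so every sample yields a single prediction vector (150528 = 3*224*224 for the resized inputs):
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(3 * 224 * 224, 12)
        self.layer2 = torch.nn.Linear(12, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # (N, 150528)
        x = F.relu(self.layer1(x))
        return self.layer2(x)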
| https://stackoverflow.com/questions/74119363/ |
Global Max Pooling in Pytorch: RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x2048 and 128x1024) | In the model I'm building I'm trying to improve performance by replacing the Flatten layer with global max pooling.
To check that shapes are in order I ran a single random sample through the net:
test = torch.rand((1, 3, 224, 224)) # [N, C, H, W]
foo = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.MaxPool2d(2)
)
foo2 = nn.Sequential(
nn.Conv2d(32, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.MaxPool2d(2)
)
foo3 = nn.Sequential(
nn.Conv2d(64, 128, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(128),
nn.Conv2d(128, 128, kernel_size=3, padding=1),
nn.ReLU(),
nn.BatchNorm2d(128),
nn.MaxPool2d(2)
)
l1 = nn.Sequential(
nn.Dropout(0.5),
nn.Linear(128, 1024),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(1024, 10)
)
r1 = foo(test)
print(r1.shape) # torch.Size([1, 32, 112, 112])
r2 = foo2(r1)
print(r2.shape) # torch.Size([1, 64, 56, 56])
r3 = foo3(r2)
print(r3.shape) # torch.Size([1, 128, 28, 28])
# applying global max pooling and reshaping the layer to [N, C]
flat = F.adaptive_max_pool2d(r3, (1, 1))
ff = flat.reshape(flat.size(0), -1)
print(ff.shape) # torch.Size([1, 128])
res = l1(ff)
print(res.shape) # torch.Size([1, 10])
Here all seems to work as expected.
My model class has these same layers with the forward method like so:
def forward(self, batch: torch.Tensor) -> torch.Tensor:
r1 = self.conv1(batch)
r2 = self.conv2(r1)
r3 = self.conv3(r2)
tmp = F.adaptive_max_pool2d(r3, (1, 1))
flat = r3.view(tmp.size(0), -1)
out = self.linear(flat)
return out
Unfortunately, when I try to run the actual images through (Fashion MNIST dataset) I get the error: mat1 and mat2 shapes cannot be multiplied (128x2048 and 128x1024)
My batch size is 128 but I don't understand where 2048 might be coming from. None of my layers should output anything of that shape.
The full error message is as follows:
RuntimeError Traceback (most recent call last)
/root/fashion_mnist.ipynb Cell 7 in <cell line: 1>()
----> 1 runner.train_model(epochs=80, batch_size=128, criterion=loss_fn, optimizer=optim)
/root/fashion_mnist.ipynb Cell 7 in RunModel.train_model(self, epochs, batch_size, criterion, optimizer, device)
113 t_ep = datetime.now()
115 # run train routine
--> 116 train_loss, train_acc = self._run_train(train_loader, criterion, optimizer)
117 self.train_losses[ep] = train_loss
118 self.train_accuracies[ep] = train_acc
/root/fashion_mnist.ipynb Cell 7 in RunModel._run_train(self, train_data, criterion, optimizer)
141 inputs, targets = inputs.cuda(), targets.cuda()
142 optimizer.zero_grad()
--> 144 outputs: torch.Tensor = self.model(inputs)
145 loss: torch.Tensor = criterion(outputs, targets)
147 loss.backward()
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
1182 # If we don't have any hooks, we want to skip the rest of the logic in
1183 # this function, and just call forward.
1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1185 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1186 return forward_call(*input, **kwargs)
...
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x2048 and 128x1024)
Any ideas what's happening here?
The notebook is available here:
https://colab.research.google.com/drive/1QGpSpUCbuDz-dktmLCv_YpG6LZjYZ1TM?usp=sharing
| The 2048 comes from your forward pass: flat = r3.view(tmp.size(0), -1) flattens r3 (which has shape [N, 128, 4, 4] for these 32x32 inputs, and 128*4*4 = 2048) instead of the pooled tmp. Use Flatten() in the layers instead of view(). So your linear layer should look like this:
self.linear = nn.Sequential(
nn.Flatten(),
nn.Dropout(0.5),
nn.Linear(128, 1024),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(1024, 10)
)
And your forward function look like:
def forward(self, batch: torch.Tensor) -> torch.Tensor:
r1 = self.conv1(batch)
r2 = self.conv2(r1)
r3 = self.conv3(r2)
tmp = F.adaptive_max_pool2d(r3, (1, 1))
out = self.linear(tmp)
return out
I have tested it on colab and it works fine.
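Alternatively, the original forward works with a one-line fix: flatten the pooled tmp rather than r3:
flat = tmp.view(tmp.size(0), -1)  # (N, 128) instead of (N, 2048)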
Here is a summary output:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 32, 32, 32] 896
ReLU-2 [-1, 32, 32, 32] 0
BatchNorm2d-3 [-1, 32, 32, 32] 64
Conv2d-4 [-1, 32, 32, 32] 9,248
ReLU-5 [-1, 32, 32, 32] 0
BatchNorm2d-6 [-1, 32, 32, 32] 64
MaxPool2d-7 [-1, 32, 16, 16] 0
Conv2d-8 [-1, 64, 16, 16] 18,496
ReLU-9 [-1, 64, 16, 16] 0
BatchNorm2d-10 [-1, 64, 16, 16] 128
Conv2d-11 [-1, 64, 16, 16] 36,928
ReLU-12 [-1, 64, 16, 16] 0
BatchNorm2d-13 [-1, 64, 16, 16] 128
MaxPool2d-14 [-1, 64, 8, 8] 0
Conv2d-15 [-1, 128, 8, 8] 73,856
ReLU-16 [-1, 128, 8, 8] 0
BatchNorm2d-17 [-1, 128, 8, 8] 256
Conv2d-18 [-1, 128, 8, 8] 147,584
ReLU-19 [-1, 128, 8, 8] 0
BatchNorm2d-20 [-1, 128, 8, 8] 256
MaxPool2d-21 [-1, 128, 4, 4] 0
Flatten-22 [-1, 128] 0
Dropout-23 [-1, 128] 0
Linear-24 [-1, 1024] 132,096
ReLU-25 [-1, 1024] 0
Dropout-26 [-1, 1024] 0
Linear-27 [-1, 10] 10,250
================================================================
Total params: 430,250
Trainable params: 430,250
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 2.76
Params size (MB): 1.64
Estimated Total Size (MB): 4.41
----------------------------------------------------------------
Trainer Output:
Epoch 1/80 completed in 0:00:32.994402. Train_loss: 1.0680, train accuracy: 0.6225 Test loss: 1.0435, test accuracy: 0.6271
Epoch 2/80 completed in 0:00:32.939861. Train_loss: 0.9726, train accuracy: 0.6578 Test loss: 0.9616, test accuracy: 0.6662
Epoch 3/80 completed in 0:00:32.811203. Train_loss: 0.9015, train accuracy: 0.6851 Test loss: 0.9015, test accuracy: 0.6883
Epoch 4/80 completed in 0:00:32.836747. Train_loss: 0.8361, train accuracy: 0.7119 Test loss: 0.8336, test accuracy: 0.7173
| https://stackoverflow.com/questions/74120641/ |
How to see which indices of an input affected an index of output | I want to test my neural network.
For example, given: an input tensor input, a nn.module with some submodules module, an output tensor output,
I want to find which indices of input affected the index (1,2) of output.
More specifically, given:
Two input matrix of size (12, 12),
Operation is matmul
Queried index of the output matrix is: (0,0)
the expected output is:
InputMatrix1: (0,0), (0, 1), ..., (0, 11)
InputMatrix2: (0,0), (1, 0), ..., (11, 0)
Maybe visualization is okay.
Is there any method or libraries that can achieve this?
| This is easy. You want to look at the non-zero entries of the gradients of InputMatrix1 and InputMatrix2 w.r.t. the (0,0) element of the product:
x = torch.rand((12, 12), requires_grad=True) # explicitly asking for gradient for this tensor
y = torch.rand((12, 12), requires_grad=True) # explicitly asking for gradient for this tensor
# compute the product using @ operator:
out = x @ y
# use back propagation to compute the gradient w.r.t out[0, 0]:
out[0,0].backward()
Inspecting the non-zero elements of the inputs' gradients yields, as expected:
In []: x.grad.nonzero()
tensor([[ 0, 0],
[ 0, 1],
[ 0, 2],
[ 0, 3],
[ 0, 4],
[ 0, 5],
[ 0, 6],
[ 0, 7],
[ 0, 8],
[ 0, 9],
[ 0, 10],
[ 0, 11]])
In []: y.grad.nonzero()
tensor([[ 0, 0],
[ 1, 0],
[ 2, 0],
[ 3, 0],
[ 4, 0],
[ 5, 0],
[ 6, 0],
[ 7, 0],
[ 8, 0],
[ 9, 0],
[10, 0],
[11, 0]])
| https://stackoverflow.com/questions/74122270/ |
Convert torch tensor of 9 channels to an image of 3 channels (or 1) to display it | I have a tensor composed of 9 channels, [9, 224, 224], which is the result of a prediction. How could I convert it to 3 channels (or 1) so that I can display it as an image?
predicted =predicted.cpu()
label=predicted [0]
print(label.shape)
torch.Size([9, 224, 224])
| I'm assuming that your (9, 224, 224) data is semantic segmentation maps. There are two possible variants:
You have multi-class predictions
# find normalized probabilities that sums up to 1 across the classes
prediction = prediction.softmax(dim=0).cpu().numpy()
# find the most probable class for each pixel
labels = prediction.argmax(axis=0)
# create a color palette that maps class_idx to (R, G, B)
palette = np.random.randint(0, 255, (prediction.shape[0], 3), np.uint8)
color_mask = np.zeros((*labels.shape, 3), np.uint8)
# map each label to an (R, G, B) color
for idx, color in enumerate(palette):
    color_mask[labels == idx] = color
cv2.imshow('color_mask', color_mask)
cv2.waitKey()
Example of visualization:
You have multi-label predictions. In that case you have 9 independent prediction masks
# prediction = torch.sigmoid(prediction)  # in the case of logits
prediction = prediction.cpu().numpy()  # move to numpy first
# convert 0-1 probability maps into 0-255
prediction = (prediction * 255).astype(np.uint8)
# stack multiple probability maps horizontally
prediction = np.hstack(prediction)
Example:
Image taken from https://www.publicdomainpictures.net/en/view-image.php?image=24076
| https://stackoverflow.com/questions/74129550/ |
Error: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first | I am trying to make a MLP classifier in PyTorch. The error is produced from the code in the final chunk. I'm not sure why numpy is even involved with this, can someone please point me in the right direction.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
data = ImageFolder(data_dir, transform=transforms.Compose([transforms.Resize((224,224)),transforms.ToTensor()]))
trainloader = torch.utils.data.DataLoader(data, batch_size=600,shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(data, batch_size=150,shuffle=True, num_workers=2)
dataiter = iter(trainloader)
x_train, y_train = dataiter.next()
x_train = x_train.view(600,-1).to('cpu').to(device)
y_train = y_train.to('cpu').to(device)
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer1 = torch.nn.Linear(150528, 9408)
self.layer2 = torch.nn.Linear(9408, 3)
def forward(self, x):
# here we define the (forward) computational graph,
# in terms of the tensors, and elt-wise non-linearities
x = F.relu(self.layer1(x))
x = self.layer2(x)
return x
def train_show(network, data, targ, lossFunc, optimiser, epochs):
lossHistory = [] # just to show a plot later...
accuHistory = []
for t in range(epochs):
optimiser.zero_grad() # Gradients accumulate by default, so don't forget to do this.
y = network.forward(data) # the forward pass
loss = lossFunc(y,targ) # recompute the loss
loss.backward() # runs autograd, to get the gradients needed by optimiser
optimiser.step() # take a step
# just housekeeping and reporting
accuracy = torch.mean((torch.argmax(y,dim=1) == targ).float())
lossHistory.append(loss.detach().item())
accuHistory.append(accuracy.detach())
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(lossHistory,'r'); plt.title("loss"); plt.xlabel("epochs")
plt.subplot(1,2,2)
plt.plot(accuHistory,'b'); plt.title("accuracy")
net = Net().to('cpu').to(device)
lossFunction = torch.nn.CrossEntropyLoss().to('cpu').to(device)
optimiser = torch.optim.SGD(net.parameters(), lr=0.01)
train_show(net, x_train, y_train, lossFunction, optimiser, 50)
| The plotting function you are using, plt.plot, works on numpy arrays and not on torch.tensors. Therefore, accuHistory is being implicitly converted to a numpy array, and that conversion fails because the stored tensors still require grad.
Please see this answer for more details.
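A minimal fix (assuming the rest of the training loop is unchanged) is to store plain Python floats instead of tensors that are still attached to the autograd graph:
# inside train_show, replace the two history appends with:
lossHistory.append(loss.detach().cpu().item())
accuHistory.append(accuracy.detach().cpu().item())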
| https://stackoverflow.com/questions/74133657/ |
How do I know what classes this model can predict? | Let's say I have a pretrained model, model.pt.
How do I know what classes this model can predict?
I think this information is saved inside the model, but how do I extract it?
I'm trying to understand what https://github.com/AndreyGuzhov/AudioCLIP does.
It has a pretrained AudioCLIP-Full-Training.pt.
How do I know the labels or classes inside this AudioCLIP-Full-Training.pt?
| As @lauthu already said, the first place to look would be the Notebook:
https://github.com/AndreyGuzhov/AudioCLIP/blob/master/demo/AudioCLIP.ipynb.
The notebook mentions these labels
LABELS = ['cat', 'thunderstorm', 'coughing', 'alarm clock', 'car horn']
The notebooks shows examples of only 5 classes. However more are possible, see below.
Another place to look for the classes is in the paper for AudioCLIP.
The paper mentions that AudioCLIP is trained on the AudioSet dataset which has 632 audio classes. See the entire ontology of labels here.
So it could predict easily for these 632 classes that AudioCLIP is trained on.
In addition to these 632 classes, since AudioCLIP is based on CLIP architecture, it also has zero-shot inference capabilities as noted in the AudioCLIP paper:
"keeping CLIP's ability to generalize to unseen datasets in a zero-shot fashion".
What it means essentially is you could use any common English concept/word and AudioCLIP should be able to classify sounds even if it was not trained on them. This is possible because AudioCLIP is an extension of CLIP and CLIP model has "seen" a lot of natural English words in its dataset of ~400M (image, caption) pairs.
| https://stackoverflow.com/questions/74134706/ |
Tensor slicing: tensorflow vs pytorch | I was testing this simple slicing operation in TF and PyTorch which should match in both
import tensorflow as tf
import numpy as np
import torch
tf_x = tf.random.uniform((4, 64, 64, 3))
pt_x = torch.Tensor(tf_x.numpy())
pt_x = pt_x.permute(0, 3, 1, 2)
# slicing operation
print(np.any(pt_x[:, :, 1:].permute(0, 2, 3, 1).numpy() - tf_x[:, 1:].numpy()))
# > False
pt_x = torch.Tensor(tf_x.numpy())
b, h, w, c = pt_x.shape
pt_x = pt_x.reshape((b, c, h, w))
print(np.any(pt_x.view(b, h, w, c).numpy() - tf_x.numpy())) # False
print(np.any(pt_x[:, :, 1:].reshape(4, 63, 64, 3).numpy() - tf_x[:, 1:].numpy()))
# > True
In the last line lies the problem. Both PyTorch and TF should lead to the same value but they don't. Is this discrepancy caused when I try to reshape the tensor?
| On one hand, you have pt_x equal to tf_x, use np.isclose to verify:
>>> np.isclose(pt_x.view(b, h, w, c).numpy(), tf_x.numpy()).all()
True
On the other hand, you are slicing the two tensors differently: pt_x[:, :, 1:] removes the first element along axis=2, while tf_x[:, 1:] removes the first element along axis=1. Therefore you end up comparing two different slices of the data, even though some individual values overlap, like tf_x[:, 1:][0,-1,-1,-1] and pt_x[0,-1,-1,-1].
Also keep in mind that tensor layouts are different in Tensorflow and PyTorch: while the former uses a channel-last layout, the latter uses channel-first. The operation needed to go between those two is a permutation (not a reshape).
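A tiny illustration of why reshape is not a substitute for permute here:
import torch

t = torch.arange(6).reshape(2, 3)
print(t.reshape(3, 2))   # re-reads the same memory order: [[0, 1], [2, 3], [4, 5]]
print(t.permute(1, 0))   # actually transposes the axes:   [[0, 3], [1, 4], [2, 5]]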
| https://stackoverflow.com/questions/74136269/ |
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE on PyTorch Lightning | I am working on a tutorial of PyTorch Lightning.
https://pytorch-lightning.readthedocs.io/en/stable/starter/introduction.html
Because I wanted to try GPU training, I changed definition of trainer as below.
trainer = pl.Trainer(limit_train_batches=100, max_epochs=1, gpus=1)
Then I got the following error.
RuntimeError Traceback (most recent call last)
Cell In [3], line 4
1 # train the model (hint: here are some helpful Trainer arguments for rapid idea iteration)
2 # trainer = pl.Trainer(limit_train_batches=100, max_epochs=3)
3 trainer = pl.Trainer(limit_train_batches=100, max_epochs=3, accelerator='gpu', devices=1)
----> 4 trainer.fit(model=autoencoder, train_dataloaders=train_loader)
File ~/miniconda3/envs/py38-cu116/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:696, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
677 r"""
678 Runs the full optimization routine.
679
(...)
693 datamodule: An instance of :class:`~pytorch_lightning.core.datamodule.LightningDataModule`.
694 """
695 self.strategy.model = model
--> 696 self._call_and_handle_interrupt(
697 self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
698 )
File ~/miniconda3/envs/py38-cu116/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:650, in Trainer._call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
648 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
649 else:
--> 650 return trainer_fn(*args, **kwargs)
651 # TODO(awaelchli): Unify both exceptions below, where `KeyboardError` doesn't re-raise
652 except KeyboardInterrupt as exception:
[...]
File ~/miniconda3/envs/py38-cu116/lib/python3.8/site-packages/pytorch_lightning/core/module.py:1450, in LightningModule.backward(self, loss, optimizer, optimizer_idx, *args, **kwargs)
1433 def backward(
1434 self, loss: Tensor, optimizer: Optional[Optimizer], optimizer_idx: Optional[int], *args, **kwargs
1435 ) -> None:
1436 """Called to perform backward on the loss returned in :meth:`training_step`. Override this hook with your
1437 own implementation if you need to.
1438
(...)
1448 loss.backward()
1449 """
-> 1450 loss.backward(*args, **kwargs)
File ~/miniconda3/envs/py38-cu116/lib/python3.8/site-packages/torch/_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
387 if has_torch_function_unary(self):
388 return handle_torch_function(
389 Tensor.backward,
390 (self,),
(...)
394 create_graph=create_graph,
395 inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File ~/miniconda3/envs/py38-cu116/lib/python3.8/site-packages/torch/autograd/__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
168 retain_graph = create_graph
170 # The reason we repeat same the comment below is that
171 # some Python versions print out the first line of a multi-line function
172 # calls in the traceback and some print out the last line
--> 173 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
174 tensors, grad_tensors_, retain_graph, create_graph, inputs,
175 allow_unreachable=True, accumulate_grad=True)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
The only thing I added to the tutorial code is gpus=1, so I cannot figure out what is the problem. How can I fix this?
FYI, I tried giving devices=1, accelerator='ddp' instead of gpus=1, and got a following error.
ValueError: You selected an invalid accelerator name: `accelerator='ddp'`. Available names are: cpu, cuda, hpu, ipu, mps, tpu.
My environments are:
CUDA 11.6
Python 3.8.13
PyTorch 1.12.1
PyTorch Lightning 1.7.7
| Though I'm not sure about the reason, the issue disappeared when I used Python 3.10 instead of 3.8.
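For reference, switching interpreters with conda looks roughly like this (the environment name is arbitrary, and torch/pytorch-lightning then need to be reinstalled for your CUDA version):
conda create -n py310 python=3.10
conda activate py310
pip install torch pytorch-lightning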
| https://stackoverflow.com/questions/74137511/ |
Dimensionality Reduction Autoencoder Pytorch | I'm trying to use the autoencoder whose code you can see below as a tool for dimensionality reduction.
I was wondering how I can "extract" the hidden layer and use it for my purpose.
My original dataset went through standard scaling.
Here I define a Dictionary to centralize the values
CONFIG = {
'BATCH_SIZE' : 1024,
'LR' : 1e-4,
'WD' : 1e-8,
'EPOCHS': 50
}
Here I convert the values of my train and test dataframes into tensors
t_test = torch.FloatTensor(test.values)
t_train = torch.FloatTensor(train.values)
Here I create data loaders
loader_test = torch.utils.data.DataLoader(dataset = t_test,
batch_size = CONFIG['BATCH_SIZE'],
shuffle = True)
loader_train = torch.utils.data.DataLoader(dataset = t_train,
batch_size = CONFIG['BATCH_SIZE'],
shuffle = True)
Here I create the class AutoEncoder (AE)
class AE(torch.nn.Module):
def __init__(self):
super().__init__()
self.encoder = torch.nn.Sequential(
torch.nn.Linear(31,16),
torch.nn.ReLU(),
torch.nn.Linear(16, 8),
torch.nn.ReLU(),
torch.nn.Linear(8, 4),
)
self.decoder = torch.nn.Sequential(
torch.nn.Linear(4, 8),
torch.nn.ReLU(),
torch.nn.Linear(8, 16),
torch.nn.ReLU(),
torch.nn.Linear(16, 31),
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
Here I define the model, the loss function and the optimizer
model = AE()
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(),
lr = CONFIG['LR'],
weight_decay = CONFIG['WD'])
Here I run the training loop
epochs = CONFIG['EPOCHS']
dict_list = []
for epoch in range(epochs):
for (ix, batch) in enumerate(loader_train):
model.train()
reconstructed = model(batch)
loss = loss_function(reconstructed, batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
temp_dict = {'Epoch':epoch,'Batch_N':ix,'Batch_L':batch.shape[0],'loss':loss.detach().numpy()}
dict_list.append(temp_dict)
df_learning_o = pd.DataFrame(dict_list)
| You can simply return not just the decoded output, but also the encoded embedding layer, like this:
class AE(torch.nn.Module):
def __init__(self):
super().__init__()
self.encoder = torch.nn.Sequential(
torch.nn.Linear(31,16),
torch.nn.ReLU(),
torch.nn.Linear(16, 8),
torch.nn.ReLU(),
torch.nn.Linear(8, 4),
)
self.decoder = torch.nn.Sequential(
torch.nn.Linear(4, 8),
torch.nn.ReLU(),
torch.nn.Linear(8, 16),
torch.nn.ReLU(),
torch.nn.Linear(16, 31),
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return encoded, decoded
When you pass something to your model (in the train loop for example), you would have to change it to the following:
encoded, reconstructed = model(batch)
Now you can do whatever you'd like with the encoded embedding, i.e. the dimensionality-reduced input.
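For example, to collect the 4-dimensional embeddings for the whole training set after training (a sketch reusing the model and loader defined above):
model.eval()
embeddings = []
with torch.no_grad():
    for batch in loader_train:
        encoded, _ = model(batch)
        embeddings.append(encoded)
embeddings = torch.cat(embeddings)  # shape: (n_samples, 4)
# note: loader_train shuffles, so build a non-shuffling loader if row order must match the dataframe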
| https://stackoverflow.com/questions/74141355/ |
Generating image from MNIST Data | I am quite new to Python and PyTorch. Please review my code below. I have tried everything I know, but I am not able to create an MNIST image out of the matrix below. I expect the image to be a 1.
It would be great if someone could help me with it.
import torch
import torch.nn.functional as F
import torch.optim as optim
import torchquantum as tq
import torchquantum.functional as tqf
from torchquantum.datasets import MNIST
from torch.optim.lr_scheduler import CosineAnnealingLR
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
dataset = MNIST(root='../Data_Manu',
train_valid_split_ratio=[0.9, 0.1],
digits_of_interest=[3, 6],
n_test_samples=75)
data_value =dataset['train'][0]
## Output is below
{'image': tensor([[[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, 0.1740, 2.5415, 2.7960,
2.7960, 1.4214, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, 1.1668, 2.5415, 2.7833, 2.7833,
2.7833, 2.2105, -0.1696, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.2842, 0.3140, 2.3887, 2.7960, 2.7833, 2.7069,
2.3124, 1.1668, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4115, 1.5487, 2.7833, 2.7833, 2.7960, 2.2487, 0.7468,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
1.7523, 2.7833, 2.7833, 2.7833, 2.1978, -0.1696, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.2842, 0.5049,
2.7960, 2.7833, 2.7833, 2.2487, -0.1696, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, 1.3577, 2.7833,
2.7960, 2.7833, 2.1851, -0.0296, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, 1.1668, 2.3887, 2.7833,
2.7960, 2.2487, -0.0296, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, 0.9759, 2.7960, 2.7960, 2.7960,
2.8215, 1.0904, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4115, 1.4723, 2.7833, 2.7833, 2.7833,
0.0213, -0.3606, -0.4242, 0.1104, -0.0169, -0.2969, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.1569, 2.7833, 2.7833, 2.7833, 1.4596,
-0.4242, -0.0169, 1.3577, 2.3887, 2.2742, 1.4723, -0.0169,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.1569, 2.1978, 2.7833, 2.7833, 2.7833, 0.9504,
1.4214, 2.5924, 2.7833, 2.7833, 2.7960, 2.7833, 2.3124,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 0.0467, 2.7960, 2.7960, 2.7960, 2.7960, 2.7960,
2.8215, 2.7960, 2.7960, 2.7960, 2.8215, 2.7960, 2.5287,
0.1740, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 0.0467, 2.7833, 2.7833, 2.7833, 2.7833, 2.7833,
2.7960, 2.7833, 2.7833, 2.7833, 2.7960, 2.7833, 2.6433,
0.5559, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 1.3577, 2.7833, 2.7833, 2.7833, 2.7833, 2.7833,
2.7960, 2.7833, 2.7833, 2.7833, 2.7960, 2.7833, 2.3124,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 1.8796, 2.7833, 2.7833, 2.7833, 2.7833, 2.7833,
2.7960, 2.7833, 2.7833, 2.7833, 2.7960, 2.2487, 0.7468,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 1.8923, 2.7960, 2.7960, 2.7960, 2.7960, 2.7960,
2.8215, 2.7960, 2.7960, 2.7960, 1.4214, -0.1696, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 1.3450, 2.7833, 2.7833, 2.7833, 2.7833, 2.7833,
2.7960, 2.7833, 2.1214, 0.8104, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, 0.0467, 2.7833, 2.7833, 2.7833, 2.4524, 2.3124,
0.4922, 0.4795, -0.1696, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.2206, 1.9942, 2.7833, 2.2487, -0.0296, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242]]]),
'digit': 1}
plt.imshow(data_value.numpy()[0], cmap='gray')
AttributeError Traceback (most recent call last)
<ipython-input-7-498b4257facf> in <module>
----> 1 plt.imshow(data_value.numpy()[0], cmap='gray')
AttributeError: 'dict' object has no attribute 'numpy'
Thank you for the great help.
| Try changing plt.imshow(data_value.numpy()[0], cmap='gray') to plt.imshow(data_value['image'].numpy()[0], cmap='gray').
Your output is not a torch.Tensor; it is a dict that contains two keys: "image" (a Tensor) and "digit" (an int).
That is why you get the error AttributeError: 'dict' object has no attribute 'numpy'.
| https://stackoverflow.com/questions/74143100/ |
How to efficiently convert a large parallel corpus to a Huggingface dataset to train an EncoderDecoderModel? | Typical EncoderDecoderModel that works on a Pre-coded Dataset
The code snippet snippet as below is frequently used to train an EncoderDecoderModel from Huggingface's transformer library
from transformers import EncoderDecoderModel
from transformers import PreTrainedTokenizerFast
multibert = EncoderDecoderModel.from_encoder_decoder_pretrained(
"bert-base-multilingual-uncased", "bert-base-multilingual-uncased"
)
tokenizer = PreTrainedTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
...
And a pre-processed/coded dataset can be used to train the model as such, when using the wmt14 dataset:
import datasets
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
from functools import partial
def process_data_to_model_inputs(batch, encoder_max_length=512, decoder_max_length=512, batch_size=2):
inputs = tokenizer([segment["en"] for segment in batch['translation']],
padding="max_length", truncation=True, max_length=encoder_max_length)
outputs = tokenizer([segment["de"] for segment in batch['translation']],
padding="max_length", truncation=True, max_length=encoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
# because BERT automatically shifts the labels, the labels correspond exactly to `decoder_input_ids`.
# We have to make sure that the PAD token is ignored
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]
return batch
def munge_dataset_to_pacify_bert(dataset, encoder_max_length=512, decoder_max_length=512, batch_size=2):
bert_wants_to_see = ["input_ids", "attention_mask", "decoder_input_ids",
"decoder_attention_mask", "labels"]
_process_data_to_model_inputs = partial(process_data_to_model_inputs,
encoder_max_length=encoder_max_length,
decoder_max_length=decoder_max_length,
batch_size=batch_size
)
dataset = dataset.map(_process_data_to_model_inputs,
batched=True,
batch_size=batch_size
)
dataset.set_format(type="torch", columns=bert_wants_to_see)
return dataset
train_data = munge_dataset_to_pacify_bert(train_data)
val_data = munge_dataset_to_pacify_bert(val_data)
Then the training can be done easily as such:
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
# set training arguments - these params are not really tuned, feel free to change
training_args = Seq2SeqTrainingArguments(
output_dir="./",
evaluation_strategy="steps",
...
)
# instantiate trainer
trainer = Seq2SeqTrainer(
model=multibert,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_data,
eval_dataset=val_data,
)
trainer.train()
A working example can be found on something like: https://www.kaggle.com/code/alvations/neural-plasticity-bert2bert-on-wmt14
However, parallel data used to train an EncoderDecoderModel usually exists as .txt or .tsv files, not a pre-coded dataset
Given a large .tsv file (e.g. 1 billion lines), e.g.
hello world\tHallo Welt
how are you?\twie gehts?
...\t...
Step 1: we can convert into the parquet / pyarrow format, one can do something like:
import vaex # Using vaex
import sys
filename = "train.en-de.tsv"
df = vaex.from_csv(filename, sep="\t", header=None, names=["src", "trg"], convert=True, chunk_size=50_000_000)
df.export(f"{filename}.parquet")
Step 2: Then we will can read it into a Pyarrow table to fit into the datasets.Dataset object and use the munge_dataset_to_pacify_bert() as shown above, e.g
from datasets import Dataset, load_from_disk
import pyarrow as pa
import pyarrow.compute
import pyarrow.parquet
_ds = Dataset(pa.compute.drop_null(pa.parquet.read_table('train.en-de.tsv.parquet')))
_ds.save_to_disk('train.en-de.tsv.parquet.hfdataset')
_ds = load_from_disk('train.en-de.tsv.parquet.hfdataset')
train_data = munge_dataset_to_pacify_bert(_ds)
train_data.save_to_disk('train.en-de.tsv.parquet.hfdataset')
While the process above works well for small-ish datasets, e.g. 1-5 million lines of data, when the scale goes to 500 million to 1 billion lines, the last .save_to_disk() function seems to run "forever" and the end is nowhere in sight.
Breaking down the steps in the munge_dataset_to_pacify_bert(), there are 2 sub-functions:
dataset.map(_process_data_to_model_inputs, batched=True, batch_size=batch_size)
dataset.set_format(type="torch", columns=bert_wants_to_see)
For the .map() process, it's possible to scale across parallel processes by specifying:
dataset.map(_process_data_to_model_inputs,
batched=True, batch_size=100,
num_proc=32 # num of parallel threads.
)
And when I tried to process with
num_proc=32
batch_size=100
The .map() function finishes the processing of 500 million lines in 18 hours of compute time on Intel Xeon E5-2686 @ 2.3GHz with 32 processor cores, optimally.
But somehow the .map() function created 32 temp .arrow files and 128 tmp... binary files, and the final save_to_disk function has seemingly been running for more than 10 hours without finishing combining the temp file parts into the final HF Dataset on disk.
Given the above context, my questions in parts are:
Question (Part 1): When the mapping function ends and created the temp .arrow and tmp... files, is there a way to read these individually instead of try to save them into a final directory using the save_to_disk() function?
Question (Part 2): Why is the save_to_disk() function so slow after the mapping and how can the mapped processed data be saved in a faster manner?
Question (Part 3): Is there a way to avoid the .set_format() function after the .map() and make it part of the _process_data_to_model_inputs function?
| TL;DR
(Answer credits go to @lhoestq)
If you have a TSV file that looks like this:
hello world\tHallo Welt
how are you?\twie gehts?
...\t...
load the dataset as such:
# tatoeba-sentpairs.tsv is a pretty large file.
ds = load_dataset("csv", data_files="../input/tatoeba/tatoeba-sentpairs.tsv",
streaming=True, delimiter="\t", split="train")
In Long
Reasons not to convert to parquet, run the map functions and save the outputs:
Loading a large dataset into parquet is already quite a feat (see Step 1 in the question), so let's avoid that
Mapping the data into the BERT format, i.e. munge_dataset_to_pacify_bert, is also a quite expensive operation. If that is done for 1B lines, even thread-parallelized, it will take hours to days to complete
The resulting tensors saved with dataset.set_format(type="torch") are massive: a ~50 GB TSV with 1B lines will easily become TBs of binaries.
Instead, use stream-style processing,
Huggingface datasets supports it with streaming=True when defining the dataset:
ds = load_dataset("csv", data_files="../input/tatoeba/tatoeba-sentpairs.tsv",
streaming=True, delimiter="\t", split="train")
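With the streaming dataset, the preprocessing can then be applied lazily, batch by batch, instead of being materialized on disk first. A sketch, reusing the mapping function from the question (adapted to the TSV's src/trg column names):
# each batch is tokenized on the fly as the trainer consumes it; nothing is written to disk
ds = ds.map(process_data_to_model_inputs, batched=True, batch_size=100)
ds = ds.with_format("torch")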
| https://stackoverflow.com/questions/74146965/ |
How to correct a "name 'weight' is not defined" error | I am learning Python by printing the output line by line from this tutorial https://towardsdatascience.com/convolution-neural-network-for-image-processing-using-keras-dc3429056306 to find out what each line does. At the last line of code I am facing a "name 'weight' is not defined" error, but the code seems to run fine without defining the weight in the tutorial link. What did I do wrong in the code and how can I fix it?
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as fn
filter_vals = np.array([[-1, -1, 1, 2], [-1, -1, 1, 0], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Neural network with one convolutional layer and four filters
class Net(nn.Module):
def __init__(self, weight): super(Net, self).__init__()
k_height, k_width = weight.shape[2:]
| The error is due to an indentation issue. The last line needs to be executed inside the constructor init for it to recognize the weight argument. The code should look like this:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as fn
filter_vals = np.array([[-1, -1, 1, 2],
[-1, -1, 1, 0],
[-1, -1, 1, 1],
[-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape) # Neural network with one convolutional layer and four filters
# Neural network with one convolutional layer and four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
k_height, k_width = weight.shape[2:]
| https://stackoverflow.com/questions/74149565/ |
How to implement batch normalization merging in python? | I have defined the model as in the code below, and I used batch normalization merging to make 3 layers into 1 linear layer.
The first layer of the model is a linear layer and there is no bias.
The second layer of the model is a batch normalization and there is no weight and bias ( affine is false )
The third layer of the model is a linear layer.
The variables named new_weight and new_bias are the weight and bias of the newly created linear layer, respectively.
My question is: Why is the output of the following two print functions different? And where is the wrong part in the code below the batch merge comment?
import torch
import torch.nn as nn
import torch.optim as optim
learning_rate = 0.01
in_nodes = 20
internal_nodes = 8
out_nodes = 9
batch_size = 100
# model define
class M(nn.Module):
def __init__(self):
super(M, self).__init__()
self.layer1 = nn.Linear(in_nodes, internal_nodes, bias=False)
self.layer2 = nn.BatchNorm1d(internal_nodes, affine=False)
self.layer3 = nn.Linear(internal_nodes, out_nodes)
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
return x
# optimizer and criterion
model = M()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.MSELoss()
# training
for batch_num in range(1000):
model.train()
optimizer.zero_grad()
input = torch.randn(batch_size, in_nodes)
target = torch.ones(batch_size, out_nodes)
output = model(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
# batch merge
divider = torch.sqrt(model.layer2.eps + model.layer2.running_var)
w_bn = torch.diag(torch.ones(internal_nodes) / divider)
new_weight = torch.mm(w_bn, model.layer1.weight)
new_weight = torch.mm(model.layer3.weight, new_weight)
b_bn = - model.layer2.running_mean / divider
new_bias = model.layer3.bias + torch.squeeze(torch.mm(model.layer3.weight, b_bn.reshape(-1, 1)))
input = torch.randn(batch_size, in_nodes)
print(model(input))
print(torch.t(torch.mm(new_weight, torch.t(input))) + new_bias)
| Short Answer: As far as I can tell you need a model.eval() before the line
input = torch.randn(batch_size, in_nodes)
such that the end looks like this
...
model.eval()
input = torch.randn(batch_size, in_nodes)
print(model(input))
print(torch.t(torch.mm(new_weight, torch.t(input))) + new_bias)
with that (I tested it) the two print statements should output the same values, because eval mode freezes the running statistics.
Long Answer:
When using Batch-Normalization according to PyTorch documentation a default momentum of 0.1 is used to compute the running_mean and running_var. The momentum defines how much the estimated statistics and how much the new observed value influence the value.
Now, when you don't call model.eval(), the batch normalization computes an updated running_mean and running_var (due to the momentum) during the line
print(model(input))
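A quick sanity check of that update rule (the default momentum is 0.1, so the new running_mean is 0.9 * old_mean + 0.1 * batch_mean):
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)                     # running_mean starts at 0, running_var at 1
x = torch.randn(100, 3)
expected_mean = 0.9 * bn.running_mean + 0.1 * x.mean(dim=0)

bn.train()
bn(x)
print(torch.allclose(bn.running_mean, expected_mean))  # True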
For further details and or confirmation: Related Question, PyTorch-Documentation
| https://stackoverflow.com/questions/74150653/ |
Query regarding Pytorch LSTM code snippet | In the Stack Overflow thread How can i add a Bi-LSTM layer on top of bert model?, there is a line of code:
hidden = torch.cat((lstm_output[:,-1, :256],lstm_output[:,0, 256:]),dim=-1)
Can someone explain why the concatenation of last and first tokens and not any other? What would these two tokens contain that they were chosen?
| In bidirectional models, the hidden states of the two directions get concatenated at each step; so the line basically concatenates the first :256 units of the last hidden state in the forward direction (index -1) to the last 256: units of the last hidden state in the backward direction (index 0). Such locations contain the most "interesting" summary of the input sequence: each is the state after its direction has read the entire sequence.
I've written a longer and detailed answer on how hidden states are constructed in PyTorch for recurrent modules.
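A small sketch of where those slices come from (assuming hidden_size=256, a bidirectional LSTM, and batch_first=True; the input_size of 768 stands in for BERT's hidden size):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=768, hidden_size=256, bidirectional=True, batch_first=True)
x = torch.randn(4, 10, 768)               # (batch, seq_len, features), e.g. BERT outputs
lstm_output, _ = lstm(x)                  # (4, 10, 512): both directions concatenated per step
forward_last = lstm_output[:, -1, :256]   # forward state after reading the whole sequence
backward_last = lstm_output[:, 0, 256:]   # backward state after reading the whole sequence
hidden = torch.cat((forward_last, backward_last), dim=-1)  # (4, 512)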
| https://stackoverflow.com/questions/74153439/ |
what does model.eval() do for batch normalization layer? | Why does the testing data use the mean and variance of all the training data? To keep the distribution consistent? What is the difference for the BN layer between model.train() and model.eval()?
| It fixes the mean and var computed in the training phase by keeping estimates of them in running_mean and running_var. See the PyTorch documentation.
As noted there, the implementation is based on the description in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Statistics accumulated over the whole training data give (assuming the train and test data are similarly distributed) a better estimate of the mean/var for the (unseen) test set than a single small test batch would.
Also similar questions have been asked here: What does model.eval() do?
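A small demonstration that eval mode stops the running statistics from updating:
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
x = torch.randn(8, 4) * 5 + 3

bn.train()
bn(x)                                        # running_mean/var move towards the batch statistics
print(bn.running_mean)

bn.eval()
frozen = bn.running_mean.clone()
bn(x)                                        # normalizes with the stored statistics, no update
print(torch.equal(bn.running_mean, frozen))  # True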
| https://stackoverflow.com/questions/74160861/ |
When I run the code, it stops at the sanity check dataloader, but no error is shown | Through debugging, I found that the problem occurs when I reach the line trainer.fit(model). It seems there is some problem when loading the data.
Here's my code
WEIGHT = "bert-base-uncased"
class Classifier(pl.LightningModule):
def __init__(self,
num_classes: int,
train_dataloader_: DataLoader,
val_dataloader_: DataLoader,
weights: str = WEIGHT):
super(Classifier, self).__init__()
self.train_dataloader_ = train_dataloader_
self.val_dataloader_ = val_dataloader_
self.bert = AutoModel.from_pretrained(weights)
self.num_classes = num_classes
self.classifier = nn.Linear(self.bert.config.hidden_size, self.num_classes)
def forward(self, input_ids: torch.tensor):
bert_logits, bert_pooled = self.bert(input_ids = input_ids)
out = self.classifier(bert_pooled)
return out
def training_step(self, batch, batch_idx):
# batch
input_ids, labels = batch
# predict
y_hat = self.forward(input_ids=input_ids)
# loss
loss = F.cross_entropy(y_hat, labels)
# logs
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
input_ids, labels = batch
y_hat = self.forward(input_ids = input_ids)
loss = F.cross_entropy(y_hat, labels)
a, y_hat = torch.max(y_hat, dim=1)
y_hat = y_hat.cpu()
labels = labels.cpu()
val_acc = accuracy_score(labels, y_hat)
val_acc = torch.tensor(val_acc)
val_f1 = f1_score(labels, y_hat, average='micro')
val_f1 = torch.tensor(val_f1)
return {'val_loss': loss, 'val_acc': val_acc, 'val_f1': val_f1}
def validation_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
avg_val_acc = torch.stack([x['val_acc'] for x in outputs]).mean()
avg_val_f1 = torch.stack([x['val_f1'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss, 'avg_val_acc': avg_val_acc, 'avg_val_f1': avg_val_f1}
return {'avg_val_loss': avg_loss, 'avg_val_f1':avg_val_f1 ,'progress_bar': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam([p for p in self.parameters() if p.requires_grad],
lr=2e-05, eps=1e-08)
def train_dataloader(self):
return self.train_dataloader_
def val_dataloader(self):
return self.val_dataloader_
train = pd.read_csv("data/practice/task1.csv", names =["index", "text", "gold"], sep=";", header=0)
test = pd.read_csv("data/trial/task1.csv", names =["index", "text", "gold"], sep=";", header=0)
WEIGHTS = ["distilroberta-base", "bert-base-uncased", "roberta-base", "t5-base"]
BATCH_SIZE = 12
random_seed = 1988
train, val = train_test_split(train, stratify=train["gold"], random_state=random_seed)
# from transformers import logging
# logging.set_verbosity_warning()
# logging.set_verbosity_error()
for weight in WEIGHTS:
try:
tokenizer = AutoTokenizer.from_pretrained(weight)
X_train = [torch.tensor(tokenizer.encode(text, max_length=200, truncation=True)) for text in train["text"]]
X_train = pad_sequence(X_train, batch_first=True, padding_value=0)
y_train = torch.tensor(train["gold"].tolist())
X_val = [torch.tensor(tokenizer.encode(text, max_length=200, truncation=True)) for text in val["text"]]
X_val = pad_sequence(X_val, batch_first=True, padding_value=0)
y_val = torch.tensor(val["gold"].tolist())
ros = RandomOverSampler(random_state=random_seed)
X_train_resampled, y_train_resampled = ros.fit_resample(X_train, y_train)
X_train_resampled = torch.tensor(X_train_resampled)
y_train_resampled = torch.tensor(y_train_resampled)
train_dataset = TensorDataset(X_train_resampled, y_train_resampled)
train_dataloader_ = DataLoader(train_dataset,
sampler=RandomSampler(train_dataset),
batch_size=BATCH_SIZE,
num_workers=24,
pin_memory=True)
val_dataset = TensorDataset(X_val, y_val)
val_dataloader_ = DataLoader(val_dataset,
batch_size=BATCH_SIZE,
num_workers=24,
pin_memory=True)
model = Classifier(num_classes=2,
train_dataloader_=train_dataloader_,
val_dataloader_ = val_dataloader_,
weights=weight)
trainer = pl.Trainer(devices=1,accelerator="gpu",
max_epochs=30)
trainer.fit(model)
X_test = [torch.tensor(tokenizer.encode(text, max_length=200, truncation=True)) for text in test["text"].tolist()]
X_test = pad_sequence(X_test, batch_first=True, padding_value=0)
y_test = torch.tensor(test["gold"].tolist())
test_dataset = TensorDataset(X_test, y_test)
test_dataloader_ = DataLoader(test_dataset, batch_size=16, num_workers=4)
device = "cuda:0"
model.eval()
model = model.to(device)
test_preds = []
for batch in tqdm(test_dataloader_, total=len(list(test_dataloader_))):
ii, _ = batch
ii = ii.to(device)
preds = model(input_ids = ii)
preds = torch.argmax(preds, axis=1).detach().cpu().tolist()
test_preds.extend(preds)
from sklearn.metrics import classification_report
report = classification_report(test["gold"].tolist(), test_preds)
with open("task1_experiments/"+weight+"_baseline.txt", "w") as f:
f.write(report)
except:
continue
When the code stops running, the terminal output is shown below. I don't know what caused this problem; I hope someone can help me solve it.
How can I solve this problem?
Thanks in advance for helping me
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
0 | bert | RobertaModel | 124 M
1 | classifier | Linear | 1.5 K
124 M Trainable params
0 Non-trainable params
124 M Total params
498.589 Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]
| After printing out the exception message, I found the cause: the classifier called inside forward expects a tensor, but it was being passed strings.
print(self.bert(input_ids=input_ids)) shows that the BERT call returns a dictionary, and print(bert_logits, bert_pooled) shows that the tuple unpacking only captured the keys of that dictionary. Reassigning with bert_pooled = self.bert(input_ids=input_ids)['pooler_output'] solved the problem.
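In other words, a minimal sketch of the fixed forward (recent transformers versions return a ModelOutput rather than a tuple):
def forward(self, input_ids: torch.Tensor):
    outputs = self.bert(input_ids=input_ids)   # a ModelOutput, which behaves like a dict
    pooled = outputs['pooler_output']          # equivalently: outputs.pooler_output
    return self.classifier(pooled)
Note also that the bare except: continue wrapped around the training loop was hiding the actual traceback; printing the exception is what made the error visible in the first place.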
| https://stackoverflow.com/questions/74162510/ |
Evaluating pytorch pretrained model using a single image from a dataset | Could someone help me with this problem: I try to evaluate a pretrained ML model on a single image and I receive the error stated at the bottom of this post.
As I understand it, the PyTorch model wants data in the following format: (batch, channel, height, width). I modified the tensor to be in this shape, but I still get that error.
Can someone explain to me why this error occurs?
I am very new to coding and ML, so I am sorry if this question is not very specific.
from monai.transforms import AddChannel
from skimage.io import imread
import numpy as np
import cv2
from torch.utils.data import DataLoader
from torchvision import models
img_array = imread(train_imageinfo_list[0][0])
resized_img = cv2.resize(img_array, (224, 224))
img_tensor = torch.from_numpy(resized_img)
channel_adder = AddChannel()
channel_image = channel_adder(img_tensor)
batch_image = channel_adder(channel_image)
img_tensor = batch_image
model= models.vgg16()
model(img_tensor)
eval(model)
ERROR: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 1, 224, 224] to have 3 channels, but got 1 channels instead
| Your model expects a 3-channel input; that's why you are getting an error. A naive and straightforward approach is to convert your grayscale image to RGB by repeating the channel dimension three times:
>>> x = img_tensor.repeat(1,3,1,1) # assuming img_tensor shaped BCHW
>>> y = model(x)
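The model also expects a float tensor, so a slightly more complete preprocessing sketch could look like this (the image path is a placeholder):
import cv2
import torch
from torchvision import models

img = cv2.imread('some_image.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (224, 224))

x = torch.from_numpy(img).float().div(255)   # (224, 224), float in [0, 1]
x = x.unsqueeze(0).unsqueeze(0)              # -> (1, 1, 224, 224): add batch and channel dims
x = x.repeat(1, 3, 1, 1)                     # -> (1, 3, 224, 224): fake RGB

model = models.vgg16()
model.eval()
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 1000])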
| https://stackoverflow.com/questions/74164853/ |
How can I copy the parameters of one model to another in LibTorch? | How can I copy the parameters of one model to another in LibTorch? I know how to do it in Torch (Python).
net2.load_state_dict(net.state_dict())
I have tried with the code below in C++ with quite a bit of work. It didn't copy one to another.
I don't see an option to copy the parameters of one like model into another like model.
#include <torch/torch.h>
using namespace torch::indexing;
torch::Device device(torch::kCUDA);
void loadstatedict(torch::nn::Module& model, torch::nn::Module& target_model) {
torch::autograd::GradMode::set_enabled(false); // make parameters copying possible
auto new_params = target_model.named_parameters(); // implement this
auto params = model.named_parameters(true /*recurse*/);
auto buffers = model.named_buffers(true /*recurse*/);
for (auto& val : new_params) {
auto name = val.key();
auto* t = params.find(name);
if (t != nullptr) {
t->copy_(val.value());
} else {
t = buffers.find(name);
if (t != nullptr) {
t->copy_(val.value());
}
}
}
}
struct Critic_Net : torch::nn::Module {
torch::Tensor next_state_batch__sampled_action;
public:
Critic_Net() {
lin1 = torch::nn::Linear(3, 3);
lin2 = torch::nn::Linear(3, 1);
lin1->to(device);
lin2->to(device);
}
torch::Tensor forward(torch::Tensor next_state_batch__sampled_action) {
auto h = next_state_batch__sampled_action;
h = torch::relu(lin1->forward(h));
h = lin2->forward(h);
return h;
}
torch::nn::Linear lin1{nullptr}, lin2{nullptr};
};
auto net = Critic_Net();
auto net2 = Critic_Net();
auto the_ones = torch::ones({3, 3}).to(device);
int main() {
std::cout << net.forward(the_ones);
std::cout << net2.forward(the_ones);
loadstatedict(net, net2);
std::cout << net.forward(the_ones);
std::cout << net2.forward(the_ones);
}
| Your solution with load_state_dict should work if I understand correctly. The problem here is the same as in your previous question: nothing is registered as parameters, buffers or submodules. Add the register_module calls and it should work fine.
Link to the question
Here is how the class should look:
struct Critic_Net : torch::nn::Module {
public:
Critic_Net() {
lin1 = register_module("lin1", torch::nn::Linear(427, 42));
lin2 = register_module("lin1", torch::nn::Linear(42, 286));
lin3 = register_module("lin1", torch::nn::Linear(286, 1));
}
torch::Tensor forward(torch::Tensor next_state_batch__sampled_action) {
// unchanged
}
torch::nn::Linear lin1{nullptr}, lin2{nullptr}, lin3{nullptr};
};
| https://stackoverflow.com/questions/74166710/ |
Avoiding reloading weights/datasets in ML edit-compile-run loop | In machine learning, the edit-compile-run loop is pretty slow as your script has to load large models and datasets.
In the past, I've avoided this by loading just a tiny subset of the data, and not using pre-initialized weights when setting up the code for training.
| Use a Jupyter notebook or Google Colab.
You can edit and compile a cell at a time, and the dataset and trained weights in another cell will be persisted.
Somehow this didn't click, until just now.
| https://stackoverflow.com/questions/74175940/ |
Cuda.is_available() False on VSCode .ipynb but True on JupyterLab | I want to train a VGG model on a gpu, because I have many images (137 099), and I need the process to go faster.
For this, I have a notebook: test.ipynb, on VSCode. My gpu is on a cluster (SLURM) where I am connected by ssh via remote-ssh with VSCode.
I am working with a conda environment env2, Python3.7.12, torch 1.8.1+cu101, torch.version.cuda == 10.1
In my first cell, I do
import torch
print(torch.cuda.is_available())
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
and I get
False
DEVICE = 'cpu'
It looks like the system can't access to the gpu, and the training of my VGG is indeed very slow.
Howevever, if I do !nvidia-smi on my notebook, I can see the gpu (TITAN X Pascal)
Now I try the same with a python file test.py instead of the notebook test.ipynb (still on VSCode with env2) I have
torch.cuda.is_available() = True,
and the training gets much faster.
And if I run test.ipynb with JupyterLab, I also get
torch.cuda.is_available() = True,
So it looks like VSCode cannot access the gpu from the notebook (test.ipynb), but can from a python file (test.py) even if I am using the same python Kernel (env2) for both files. This might come from VSCode since it works well on jupyterlab.
Does anyone know where does it come from?
Remark:
print(sys.executable)
> /home/manon/.conda/envs/env2/bin/python
both for test.py and test.ipynb files
| I actually figured it out.
I first had to create a tunnel, so that I could run my script with a Jupyter kernel using a remote Jupyter server. I created the tunnel with:
jupyter notebook --ip localhost --port 3001 --no-browser
This command gave me a URI:
http://localhost:3001/?token=8afee394ef093456
Then I selected a remote Jupyter server by clicking on the "Jupyter Server: Local" button in the VSCode status bar (you can also use the "Jupyter: Specify local or remote Jupyter server for connections" command from the Command Palette)
and copied the URI obtained previously into "Enter the URI of the running Jupyter server".
After this, everything worked fine
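One detail worth noting: since the Jupyter server runs on the remote cluster, you typically also need an SSH port forward so that localhost:3001 on your machine actually reaches the server (the host below is a placeholder):
ssh -L 3001:localhost:3001 user@cluster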
| https://stackoverflow.com/questions/74183243/ |
Rescale Pytorch tensor values (intensity) to stretch in a specific range | I have an image tensor like
import torch
tensor = torch.rand(1,64,44)
which is like a single channel image. I apply gaussian blur to this random image
import torchvision.transforms as T
transform = T.GaussianBlur(kernel_size=(5,5), sigma = (1,2))
blurred_img = transform(tensor)
This tensor's (blurred_img) max value is around 0.75 and its min value is around 0.25. How can I stretch these values so that the max is around 1 and the min is around 0?
I found a technique using skimage.exposure.rescale_intensity(), but applying it like
stretch = skimage.exposure.rescale_intensity(blurred_img, in_range = 'image', out_range = (0,1))
yields the error
TypeError: min() received an invalid combination of arguments - got (out=NoneType, axis=NoneType, ), but expected one of:
* ()
* (name dim, bool keepdim)
didn't match because some of the keywords were incorrect: out, axis
* (Tensor other)
* (int dim, bool keepdim)
didn't match because some of the keywords were incorrect: out, axis
I apologise if I'm making a minor mistake, but I really need to get around this issue. Any kind of help is appreciated.
| I don't think skimage and torch work well with each other, but you can renormalize it yourself:
blurred_img -= blurred_img.min()
blurred_img /= blurred_img.max()
This ensures the minimum is at 0, and the maximum at 1.
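If you'd rather not modify the tensor in place (for instance because it is part of an autograd graph), the out-of-place version is:
stretched = (blurred_img - blurred_img.min()) / (blurred_img.max() - blurred_img.min())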
| https://stackoverflow.com/questions/74188937/ |
How to cluster PyTorch predictions | I'm trying to find road lanes from road images and then make predictions out of the images. So far, I've trained a model that finds road lanes. But most of the predictions are scattered. I'm trying to cluster PyTorch predictions that we get from these road images. These dots are the predictions of model where the road lanes might be.
Predictions shape: [1, 1, 80, 120]
Here's the image of predictions:
Here's what I want to achieve (I edited the image, deleted the dots that are scattered):
As you can see, I deleted the dots (predictions) from the image. I want each dot to be clustered with each other. How can I achieve this? I tried KNN (K Nearest Neighbors) but it didn't work.
| If you only want to remove the dots then you can try morphological operations such as opening (erosion followed by dilation) to postprocess your mask: the erosion deletes regions smaller than the kernel, and the dilation then restores the surviving lane pixels to roughly their original thickness.
The resulting mask without dots:
Code:
import cv2
import numpy as np
mask = cv2.imread('road_mask.jpg', cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (120, 80))
mask = cv2.erode(mask, np.ones((2, 2)))
mask = cv2.dilate(mask, np.ones((3, 3)))
mask = ((mask > 10) * 255).astype(np.uint8)
cv2.imwrite("postprocessed_mask.png", mask)
| https://stackoverflow.com/questions/74195216/ |
Torch filter multidimensional tensor by start and end values | I have a list of sentences and I am looking to extract contents between two items.
If the start or end item does not exist, I want it to return a row with padding only.
I already have the sentences tokenized and padded with 0 to a fixed length.
I figured out a way to do this using for loops, but it is extremely slow, so I would like to
know the best way to solve this, probably by using tensor operations.
import torch
start_value, end_value = 4,9
data = torch.tensor([
[3,4,7,8,9,2,0,0,0,0],
[1,5,3,4,7,2,8,9,10,0],
[3,4,7,8,10,0,0,0,0,0], # does not contain end value
[3,7,5,9,2,0,0,0,0,0], # does not contain start value
])
# expected output
[
[7,8,0,0,0,0,0,0,0,0],
[7,2,8,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
]
# or
[
[0,0,7,8,0,0,0,0,0,0],
[0,0,0,0,7,2,8,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
]
The current solution I have, shown below, uses a for loop. It does not produce a symmetric array like I want in the expected output.
def _get_part_from_tokens(
self,
data: torch.Tensor,
s_id: int,
e_id: int,
) -> list[str]:
input_ids = []
for row in data:
try:
s_index = (row == s_id).nonzero(as_tuple=True)[0][0]
e_index = (row == e_id).nonzero(as_tuple=True)[0][0]
except IndexError:
input_ids.append(torch.tensor([]))
continue
if s_index is None or e_index is None or s_index > e_index:
input_ids.append(torch.tensor([]))
continue
ind = torch.arange(s_index + 1, e_index)
input_ids.append(row.index_select(0, ind))
return input_ids
| A possible loop-free approach is this:
import torch
# using the provided sample data
start_value, end_value = 4,9
data = torch.tensor([
[3,4,7,8,9,2,0,0,0,0],
[1,5,3,4,7,2,8,9,10,0],
[3,4,7,8,10,0,0,0,0,0], # does not contain end value
[3,7,5,9,2,0,0,0,0,0], # does not contain start value
[3,7,5,8,2,0,0,0,0,0], # does not contain start or end value
])
First, check which rows contain only a start_value or an end_value and fill these rows with 0.
# fill 'invalid' rows with 0
starts = (data == start_value)
ends = (data == end_value)
invalid = ((starts.sum(axis=1) - ends.sum(axis=1)) != 0)
data[invalid] = 0
Then set the values up to (and including) the start_value and after (and including) the end_value to 0 in each row. This step targets mainly the 'valid' rows. Nevertheless, all other rows will (again) be overwritten with zeros.
# set values in the start and end of 'valid rows' to 0
row_length = data.shape[1]
start_idx = starts.long().argmax(axis=1)
start_mask = (start_idx[:,None] - torch.arange(row_length))>=0
data[start_mask] = 0
end_idx = row_length - ends.long().argmax(axis=1)
end_mask = (end_idx[:,None] + torch.arange(row_length))>=row_length
data[end_mask] = 0
Note: This also works if a row contains neither a start_value nor an end_value (I added such a row to the sample data). Still, there are many more edge cases one could think of (e.g. multiple start and end values in one row, a start value after the end value, ...). Not sure if they are of relevance for the specific problem.
Comparison of execution time
Using timeit and randomly generated data to compare the execution time of the different approaches suggests, that the approach without loops is considerably faster than the approach from the question. If the data is converted to numpy first and converted back to Pytorch afterwards some further (very minor) time savings are possible.
Each dot (execution time) in the plot is the minimum value of 3 trials each with 100 repetitions.
| https://stackoverflow.com/questions/74198404/ |
Does anyone know the difference between the two lines below? | I recently read the code in a PyTorch tutorial and found something interesting:
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters(): **# 1**
param.requires_grad = False **# 1**
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9) **# 2**
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
I just wonder what's the difference between # 1 and # 2.
If I set #1, can I just change #2 to code like this:
optimizer_ft = optim.SGD(model_ft.parameters(), lr=1e-3, momentum=0.9, weight_decay=0.1)
Or what if I just delete # 1 and leave # 2 alone?
| Yes, if you set #1 the code for #2 could go like this: optimizer_ft = optim.SGD(model_ft.parameters(), lr=1e-3, momentum=0.9, weight_decay=0.1). The parameters with requires_grad=False simply receive no gradients, so the optimizer will not update them.
See here: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
This helper function sets the .requires_grad attribute of the
parameters in the model to False when we are feature extracting. By
default, when we load a pretrained model all of the parameters have
.requires_grad=True, which is fine if we are training from scratch or
finetuning. However, if we are feature extracting and only want to
compute gradients for the newly initialized layer then we want all of
the other parameters to not require gradients. This will make more
sense later.
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
For
if I just delete # 1 and leave # 2 alone?
You could do that too, but imagine you had to finetune multiple layers: it would be cumbersome to collect model_conv.new_layer.parameters() for every new layer, so the first way that you described and used seems better in that case.
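As a minimal sketch of filtering the parameters yourself (my own addition, not from the tutorial), which keeps #2 correct no matter how many layers you unfreeze:
# pass only the parameters that still require gradients to the optimizer;
# frozen parameters (requires_grad=False) are filtered out
trainable_params = filter(lambda p: p.requires_grad, model_conv.parameters())
optimizer_conv = optim.SGD(trainable_params, lr=0.001, momentum=0.9)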
| https://stackoverflow.com/questions/74202108/ |
Resuming Training PyTorch | I'm attempting to save and load best model through torch, where I've defined my training function as follows:
def train_model(model, train_loader, test_loader, device, learning_rate=1e-1, num_epochs=200):
# The training configurations were not carefully selected.
criterion = nn.CrossEntropyLoss()
model.to(device)
# It seems that SGD optimizer is better than Adam optimizer for ResNet18 training on CIFAR10.
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=1e-4)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[65, 75], gamma=0.75, last_epoch=-1)
# optimizer = optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
# Evaluation
model.eval()
eval_loss, eval_accuracy = evaluate_model(model=model, test_loader=test_loader, device=device, criterion=criterion)
print("Epoch: {:02d} Eval Loss: {:.3f} Eval Acc: {:.3f}".format(-1, eval_loss, eval_accuracy))
load_model = input('Load a model?')
for epoch in range(num_epochs):
if epoch//2 == 0:
write_checkpoint(model=model, epoch=epoch, scheduler=scheduler, optimizer=optimizer)
model, optimizer, epoch, scheduler = load_checkpoint(model=model, scheduler=scheduler, optimizer=optimizer)
for state in optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.to(device)
# Training
model.train()
running_loss = 0
running_corrects = 0
for inputs, labels in train_loader:
inputs = torch.FloatTensor(inputs)
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
train_loss = running_loss / len(train_loader.dataset)
train_accuracy = running_corrects / len(train_loader.dataset)
# Evaluation
model.eval()
eval_loss, eval_accuracy = evaluate_model(model=model, test_loader=test_loader, device=device, criterion=criterion)
# Set learning rate scheduler
scheduler.step()
print("Epoch: {:03d} Train Loss: {:.3f} Train Acc: {:.3f} Eval Loss: {:.3f} Eval Acc: {:.3f}".format(epoch, train_loss, train_accuracy, eval_loss, eval_accuracy))
return model
Where I'd like to be able to load a model and start training from the epoch where the model was saved.
So far I have methods to save model, optimizer,scheduler states and the epoch via
def write_checkpoint(model, optimizer, epoch, scheduler):
state = {'epoch': epoch + 1, 'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(), 'scheduler': scheduler.state_dict(), }
filename = '/content/model_'
torch.save(state, filename + f'CP_epoch{epoch + 1}.pth')
def load_checkpoint(model, optimizer, scheduler, filename='/content/checkpoint.pth'):
# Note: Input model & optimizer should be pre-defined. This routine only updates their states.
start_epoch = 0
if os.path.isfile(filename):
print("=> loading checkpoint '{}'".format(filename))
checkpoint = torch.load(filename)
start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
scheduler = checkpoint['scheduler']
print("=> loaded checkpoint '{}' (epoch {})"
.format(filename, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(filename))
return model, optimizer, start_epoch, scheduler
But I can't seem to come up with the logic of how I'd update the epoch to start at the correct one. Looking for hints or ideas on how to implement just that.
| If I understand correctly, you are trying to resume training from the last saved progress with the correct epoch number.
Before calling train_model, load the checkpoint values, including start_epoch. Then use start_epoch as the loop starting point:
for epoch in range(start_epoch, num_epochs):
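A minimal sketch of how the pieces fit together (hypothetical wiring that reuses the question's load_checkpoint; train_one_epoch is a placeholder for the body of the existing training loop):
# restore model/optimizer/scheduler state plus the epoch to resume from
model, optimizer, start_epoch, scheduler = load_checkpoint(
    model=model, optimizer=optimizer, scheduler=scheduler,
    filename='/content/checkpoint.pth')
for epoch in range(start_epoch, num_epochs):
    train_one_epoch(model, optimizer, scheduler, epoch)  # placeholder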
| https://stackoverflow.com/questions/74202992/ |
AttributeError: 'NoneType' object has no attribute 'shape' even though the tensor is not None | I'm working with a resnet-based model to generate some feature embeddings
feature = self.m_resnet(input)
when I print
print('feature:', feature)
I get output like,
tensor([[[-5.2228e-01, -2.6507e-01, -1.4583e+00, ..., -1.1618e+00,
-3.9355e-01, -6.7108e-01],
[-5.0633e-01, 9.0730e-01, -7.6286e-01, ..., -6.7644e-01,
-6.4372e-01, 4.2130e-02],
[ 1.3522e+00, 1.1739e+00, 1.1027e+00, ..., 1.0143e+00,
1.0382e+00, 5.5187e-01],
...,
[ 4.6489e-01, -1.2791e-01, 1.1394e+00, ..., -2.3228e-01,
-4.3149e-01, 3.1564e-01],
[ 1.0425e+00, 9.7971e-01, -3.5113e-01, ..., 4.3813e-01,
3.7757e-01, 3.0367e-01],
[-9.2531e-01, -3.5561e-01, -1.9557e-01, ..., 1.2157e-01,
-4.4008e-01, -9.3977e-02]],
[[-5.7037e-01, -2.3364e-01, -1.3321e+00, ..., -1.2070e+00,
-4.7131e-01, -5.4751e-01],
[-2.8480e-01, 8.5945e-01, -5.6804e-01, ..., -8.7505e-01,
-5.9196e-01, -4.7775e-02],
[ 1.4179e+00, 1.3121e+00, 1.1915e+00, ..., 9.6185e-01,
9.4094e-01, 6.2634e-01],
...,
[ 4.7378e-01, -2.0151e-01, 1.0540e+00, ..., -2.1641e-01,
-4.2161e-01, 2.7364e-01],
[ 1.0599e+00, 8.7958e-01, -1.3885e-01, ..., 3.7642e-01,
3.1348e-01, 2.2855e-01],
[-8.3528e-01, -3.6043e-01, -4.1944e-02, ..., 7.9550e-02,
-3.3973e-01, -9.5777e-02]],
[[-4.6509e-01, -3.1390e-01, -1.3608e+00, ..., -1.1940e+00,
-4.0954e-01, -6.8436e-01],
[-3.8707e-01, 9.0035e-01, -8.3244e-01, ..., -7.7861e-01,
-6.3493e-01, 1.8479e-02],
[ 1.3674e+00, 1.2385e+00, 1.1890e+00, ..., 9.8861e-01,
1.1610e+00, 6.2035e-01],
...,
[ 5.5125e-01, -1.4365e-01, 1.2544e+00, ..., -1.9578e-01,
-5.9983e-01, 3.9633e-01],
[ 1.0217e+00, 1.0091e+00, -3.5424e-01, ..., 3.7400e-01,
4.3592e-01, 2.3972e-01],
[-8.4321e-01, -3.9549e-01, -1.7280e-01, ..., 1.3756e-01,
-5.2992e-01, 1.3601e-03]],
[[-4.4482e-01, -3.2837e-01, -1.4795e+00, ..., -1.2002e+00,
-4.2828e-01, -7.6532e-01],
[-4.5860e-01, 9.2647e-01, -8.4050e-01, ..., -7.8178e-01,
-6.4275e-01, 6.4469e-02],
[ 1.3156e+00, 1.2003e+00, 1.2451e+00, ..., 1.1086e+00,
1.2272e+00, 7.0043e-01],
...,
[ 5.1705e-01, -1.1838e-01, 1.1950e+00, ..., -2.6755e-01,
-6.0813e-01, 4.4875e-01],
[ 1.0428e+00, 1.0369e+00, -4.6282e-01, ..., 4.1102e-01,
4.8060e-01, 2.7509e-01],
[-8.9088e-01, -3.8145e-01, -3.3285e-01, ..., -1.1576e-03,
-5.0087e-01, 3.1878e-02]]], device='cuda:0',
but when I print the shape,
print('shape:', feature.shape)
I get error,AttributeError: 'NoneType' object has no attribute 'shape' on this print line.
According to the lines above, feature is not None, so why am I getting this error?
| I don't see any explanation other than that some code runs between your first print command and your second and sets feature = None. Case in point:
>>> feature = "hello"
>>> print(feature)
hello
>>> feature = None
>>> print(feature.shape)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'shape'
If feature was still a tensor (or str or whatever), the error message should reflect that, like so:
>>> feature = "hello"
>>> print(feature)
hello
>>> print(feature.shape)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'shape'
If somehow feature was not in the scope and hadn't been defined at all, this should be the error message:
>>> print(feature_undefined.shape)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'feature_undefined' is not defined
| https://stackoverflow.com/questions/74203166/ |
Pytorch - Selecting n indices without replacement from dimension x | Suppose I have the following embeddings emb_user = torch.randn(64, 128, 256). From the second dimension (of length 128), I wish to pick out 16 at random at each instance. I was wondering if there was a more efficient way of doing the following:
idx = torch.multinomial(torch.ones(64, 128), 16)
sampled_emb_user = emb_user[torch.arange(len(emb_user)).unsqueeze(-1), idx]
What I also find curious is that the above multinomial would not work if the weight matrix (torch.ones(64, 128)) had more than 2 dimensions.
| Since in your case you want a uniform distribution, you could speed it up with
idx = torch.sort(torch.randint(
0, 128 - 15, (64, 16), device=device
), axis=1).values + torch.arange(0, 16, device=device).reshape(1, -1)
sampled_emb_user = emb_user[torch.arange(len(emb_user)).unsqueeze(-1), idx]
Instead of
idx = torch.multinomial(torch.ones(64, 128, device=device), 16)
sampled_emb_user = emb_user[torch.arange(len(emb_user)).unsqueeze(-1), idx]
The runtimes on my machine are 427 µs and 784 µs with device='cpu'; 135 µs and 260 µs and 469 µs with device='cuda'.
How does it work?
The sorted randint gives indices for a multinomial distribution with replacement. The sorted sequence is non-decreasing; adding the arange term makes it strictly increasing, which eliminates the repeated picks.
Illustrating with a small case
idx = torch.sort(torch.randint(0, 7, (4,))).values
print('Indices with replacement in the range from 0 to 6: ', idx)
print('Indices without replacement in the slice: ', idx + torch.arange(4))
Indices with replacement in the range from 0 to 6: tensor([0, 5, 5, 6])
Indices without replacement in the slice: tensor([0, 6, 7, 9])
A possibly faster solution, though not from exactly the same distribution, is the following:
idx = torch.cumsum(torch.diff(
torch.sort(torch.randint(
0, 128 - 16, (64, 17), device=device
), axis=1).values
, axis=1) + 1, axis=1) - 1
sampled_emb_user = emb_user[torch.arange(len(emb_user)).unsqueeze(-1), idx]
One more way, which I expect to be closer to the exact distribution (not rigorously analyzed):
# 1-rand() to include 1 and exclude zero.
d = torch.cumsum(1 - torch.rand(64, 17, device=device
), axis=1)
# this produces a sorted tensor with values in the range [0:128-16]
d = (((128 - 15) * d[:, :-1]) / d[:, -1:]).to(torch.long)
idx = d + torch.arange(0, 16, device=device).reshape(1, -1)
But in the end it tends to be slower than the method using sort.
| https://stackoverflow.com/questions/74204664/ |
Bad results using OpenCV with Yolov5 compared with pure Yolov5 | I'm trying to recognize Lego bricks from a video cam using OpenCV. It performs extremely badly compared with just running detect.py in Yolov5. So I ran some experiments on recognizing still images, and I found that going through OpenCV still performs dramatically worse. Any clue why? Here are the experiments I did.
This is the result from detect.py by just running
python detect.py --weights runs/train/yolo/weights/best.pt --source legos.jpg
This is the result from openCV by implementing this
import torch
import cv2
import numpy as np
model = torch.hub.load('.', 'custom', path='runs/train/yolo/weights/last.pt', source='local')
cap = cv2.VideoCapture('legos.jpg')
while cap.isOpened():
ret, frame = cap.read()
# Make detections
results = model(frame)
cv2.imshow('YOLO', np.squeeze(results.render()))
if cv2.waitKey(0) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
If I simply do this, it gives a pretty good result
import torch
results = model('legos.jpg')
results.show()
Any genius ideas?
| Probably your model is trained with RGB images while opencv is using BGR format. Please try to convert the colour space accordingly. Example:
import torch
import cv2
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# read image and convert to RGB
img = cv2.imread('zidane.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# make detections
results = model(img)
# render results and convert back to BGR
results.render()
out = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
cv2.imshow('YOLO', out)
cv2.waitKey(-1)
cv2.destroyAllWindows()
| https://stackoverflow.com/questions/74209769/ |
Dimension error by using Patch Embedding for video processing | I am working on one of the transformer models that has been proposed for video classification. My input tensor has the shape of [batch=16 ,channels=3 ,frames=16, H=224, W=224] and for applying the patch embedding on the input tensor it uses the following scenario:
patch_dim = in_channels * patch_size ** 2
self.to_patch_embedding = nn.Sequential(
Rearrange('b t c (h p1) (w p2) -> b t (h w) (p1 p2 c)', p1 = patch_size, p2 = patch_size),
nn.Linear(patch_dim, dim), ***** (Root of the error)******
)
The parameters that I am using are as follows:
patch_size =16
dim = 192
in_channels = 3
Unfortunately I receive the following error that corresponds to the line that has been shown in the code:
Exception has occurred: RuntimeError
mat1 and mat2 shapes cannot be multiplied (9408x4096 and 768x192)
I have thought a lot about the cause of this error but couldn't figure it out. How can I solve the problem?
| The input tensor has shape [batch=16, channels=3, frames=16, H=224, W=224], while Rearrange expects dimensions in order [ b t c h w ]. You expect channels but pass frames. This leads to a last dimension of (p1 * p2 * c) = 16 * 16 * 16 = 4096.
Please try to align positions of channels and frames:
from torch import torch, nn
from einops.layers.torch import Rearrange
patch_size = 16
dim = 192
b, f, c, h, w = 16, 16, 3, 224, 224
input_tensor = torch.randn(b, f, c, h, w)
patch_dim = c * patch_size ** 2
m = nn.Sequential(
Rearrange('b t c (h p1) (w p2) -> b t (h w) (p1 p2 c)', p1=patch_size, p2=patch_size),
nn.Linear(patch_dim, dim)
)
print(m(input_tensor).size())
Output:
torch.Size([16, 16, 196, 192])
| https://stackoverflow.com/questions/74237285/ |
In Pytorch, how do you multiply a (b, c, h, w) size tensor with a tensor of size (c) | I have to normalize a tensor of size size (b, c, h, w) with two tensors of size (c) which represent the respective mean and standard deviation.
I cannot manage to figure out how to multiply a tensor of shape, let's say, torch.Size([1, 3, 128, 128]) with a tensor of shape torch.Size([3]).
What I want to accomplish is: take the first element of the smaller tensor and multiply the first [128x128] part of the larger tensor with it. And do this for the second element and second [128x128] tensor etc.
def normalize(img, mean, std):
""" Normalizes an image tensor.
# Parameters:
@img, torch.tensor of size (b, c, h, w)
@mean, torch.tensor of size (c)
@std, torch.tensor of size (c)
# Returns the normalized image
"""
# TODO: 1. Implement normalization doing channel-wise z-score normalization.
img * mean #try1: this doesn't work
torch.mul(img.view(3,128,128), mean) #try2: this doesn't work
return img
Both of my attempts throw the same error: RuntimeError: The size of tensor a (128) must match the size of tensor b (3) at non-singleton dimension 3.
I imagine you could create a tensor of the needed size, fill it with the necessary values and multiply by that, but I would imagine there is a better solution.
| img * mean.reshape(1,3,1,1)
Will reshape the mean tensor so that torch can understand which dimensions you are trying to multiply together.
Edit for details:
Torch aligns tensor shapes starting from the last (innermost) dimension, so it can infer the missing leading dimensions (e.g. img * mean.reshape(3,1,1) also works in your case); however, each trailing dimension must either be 1 or match the tensor you are multiplying with.
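Putting it together, a minimal sketch of the normalize function from the question (my own completion, assuming mean and std each hold c channel statistics):
def normalize(img, mean, std):
    # reshape (c) -> (1, c, 1, 1) so the subtraction/division broadcasts per channel
    return (img - mean.reshape(1, -1, 1, 1)) / std.reshape(1, -1, 1, 1)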
| https://stackoverflow.com/questions/74242409/ |
Tensor repeat for image patches | I have a batch of 20 flattened tensors representing 256X256 images.
>>> imgs.shape
(20, 65536)
Each image was split into 32x32 patches (a total of 64 patches per image). I have calculated a score for each patch and got a vector with the shape of (20,64)
I would like to multiply each pixel with the corresponding patch score.
imgs * score yields an error and score.repeat(1,1,64) didn't repeat the scores in a way that preserves the score of each pixel.
How can this be achieved?
EDIT:
A simple example:
import torch
img_size = 4
patch_size = 2
img = torch.rand((2,img_size,img_size)) # (2,4,4)
score = torch.tensor([[1,2,3,4],[5,6,7,8]]) # (2,4)
And trying to achieve
score = [[1,1,3,3],[2,2,4,4],[5,5,6,6],[7,7,8,8]]
| I would suggest reshaping your scores array to preserve information about how it relates to the original image, then using repeat_interleave() twice.
Example:
import torch
img_size = 4
patch_size = 2
patches_per_axis = int(img_size / patch_size)
num_images = 2
img = torch.rand((2,img_size,img_size)) # (2,4,4)
score = torch.tensor([[1,2,3,4],[5,6,7,8]]) # (2,4)
def expand_scores(scores):
# Unflatten scores
scores = scores.reshape((num_images, patches_per_axis, patches_per_axis))
# Repeat scores to match dimensions of image, in vertical direction
scores = scores.repeat_interleave(repeats=patch_size, axis=1)
# Repeat scores to match dimensions of image, in horizontal direction
scores = scores.repeat_interleave(repeats=patch_size, axis=2)
# Optional: use reshape() to re-flatten scores. If you do that here, you'll need to do it to the image tensor too.
return scores
(I added two constants at the top to your example, num_images, and patches_per_axis. In your original example, these would be set to 20 and 8, respectively.)
When you call expand_scores(), you'll get the following output:
tensor([[[1, 1, 2, 2],
[1, 1, 2, 2],
[3, 3, 4, 4],
[3, 3, 4, 4]],
[[5, 5, 6, 6],
[5, 5, 6, 6],
[7, 7, 8, 8],
[7, 7, 8, 8]]])
You can multiply that by the pixel values:
expand_scores(score) * img
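To tie this back to the flattened (20, 65536) tensors from the question (a sketch on my part; set num_images=20 and patch_size=32, so patches_per_axis=8): reshape to 2-D images, expand, multiply, and flatten again.
imgs_2d = imgs.reshape(20, 256, 256)
weighted = imgs_2d * expand_scores(score)  # score has shape (20, 64)
weighted = weighted.reshape(20, -1)        # back to (20, 65536)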
| https://stackoverflow.com/questions/74248099/ |
Efficiently sample batches from only one class at each iteration with PyTorch | I want to train a classifier on ImageNet dataset (1000 classes) and I need each batch to contain 64 images from the same class and consecutive batches from different classes. So far based on @shai's suggestion and this post I have
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import numpy as np
import random
import argparse
import torch
import os
class DS(Dataset):
def __init__(self, data, num_classes):
super(DS, self).__init__()
self.data = data
self.indices = [[] for _ in range(num_classes)]
for i, (data, class_label) in enumerate(data):
# create a list of lists, where every sublist contains the indices of
# the samples that belong to the class_label
self.indices[class_label].append(i)
def classes(self):
return self.indices
def __getitem__(self, index):
return self.data[index]
class BatchSampler:
def __init__(self, classes, batch_size):
# classes is a list of lists where each sublist refers to a class and contains
# the sample ids that belong to this class
self.classes = classes
self.n_batches = sum([len(x) for x in classes]) // batch_size
self.min_class_size = min([len(x) for x in classes])
self.batch_size = batch_size
self.class_range = list(range(len(self.classes)))
random.shuffle(self.class_range)
assert batch_size < self.min_class_size, 'batch_size should be at least {}'.format(self.min_class_size)
def __iter__(self):
batches = []
for j in range(self.n_batches):
if j < len(self.class_range):
batch_class = self.class_range[j]
else:
batch_class = random.choice(self.class_range)
batches.append(np.random.choice(self.classes[batch_class], self.batch_size))
return iter(batches)
def main():
# Code about
_train_dataset = DS(train_dataset, train_dataset.num_classes)
_batch_sampler = BatchSampler(_train_dataset.classes(), batch_size=args.batch_size)
_train_loader = DataLoader(dataset=_train_dataset, batch_sampler=_batch_sampler)
labels = []
for i, (inputs, _labels) in enumerate(_train_loader):
labels.append(torch.unique(_labels).item())
print("Unique labels: {}".format(torch.unique(_labels).item()))
labels = set(labels)
print('Length of traversed unique labels: {}'.format(len(labels)))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('--data', metavar='DIR', nargs='?', default='imagenet',
help='path to dataset (default: imagenet)')
parser.add_argument('--dummy', action='store_true', help="use fake data to benchmark")
parser.add_argument('-b', '--batch-size', default=64, type=int,
metavar='N',
help='mini-batch size (default: 256), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel or Distributed Data Parallel')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
args = parser.parse_args()
if args.dummy:
print("=> Dummy data is used!")
num_classes = 100
train_dataset = datasets.FakeData(size=12811, image_size=(3, 224, 224),
num_classes=num_classes, transform=transforms.ToTensor())
val_dataset = datasets.FakeData(5000, (3, 224, 224), num_classes, transforms.ToTensor())
else:
traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
traindir,
transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
]))
val_dataset = datasets.ImageFolder(
valdir,
transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
]))
# Samplers are initialized to None and train_sampler will be replaced
train_sampler, val_sampler = None, None
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
num_workers=args.workers, pin_memory=True, sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=args.batch_size, shuffle=False,
num_workers=args.workers, pin_memory=True, sampler=val_sampler)
main()
which prints: Length of traversed unique labels: 100.
However, creating self.indices in the for loop takes a lot of time. Is there a more efficient way to construct this sampler?
EDIT: yield implementation
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import numpy as np
import random
import argparse
import torch
import os
from tqdm import tqdm
import os.path
class DS(Dataset):
def __init__(self, data, num_classes):
super(DS, self).__init__()
self.data = data
self.data_len = len(data)
indices = [[] for _ in range(num_classes)]
for i, (_, class_label) in tqdm(enumerate(data), total=len(data), miniters=1,
desc='Building class indices dataset..'):
indices[class_label].append(i)
self.indices = indices
def per_class_sample_indices(self):
return self.indices
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.data_len
class BatchSampler:
def __init__(self, per_class_sample_indices, batch_size):
# classes is a list of lists where each sublist refers to a class and contains
# the sample ids that belong to this class
self.per_class_sample_indices = per_class_sample_indices
self.n_batches = sum([len(x) for x in per_class_sample_indices]) // batch_size
self.min_class_size = min([len(x) for x in per_class_sample_indices])
self.batch_size = batch_size
self.class_range = list(range(len(self.per_class_sample_indices)))
random.shuffle(self.class_range)
def __iter__(self):
for j in range(self.n_batches):
if j < len(self.class_range):
batch_class = self.class_range[j]
else:
batch_class = random.choice(self.class_range)
if self.batch_size <= len(self.per_class_sample_indices[batch_class]):
batch = np.random.choice(self.per_class_sample_indices[batch_class], self.batch_size)
# batches.append(np.random.choice(self.per_class_sample_indices[batch_class], self.batch_size))
else:
batch = self.per_class_sample_indices[batch_class]
yield batch
def n_batches(self):
return self.n_batches
def main():
file_path = 'a_file_path'
file_name = 'per_class_sample_indices.pt'
if not os.path.exists(os.path.join(file_path, file_name)):
print('File: {} does not exists. Create it.'.format(file_name))
per_class_sample_indices = DS(train_dataset, num_classes).per_class_sample_indices()
torch.save(per_class_sample_indices, os.path.join(file_path, file_name))
else:
per_class_sample_indices = torch.load(os.path.join(file_path, file_name))
print('File: {} exists. Do not create it.'.format(file_name))
batch_sampler = BatchSampler(per_class_sample_indices,
batch_size=args.batch_size)
train_loader = torch.utils.data.DataLoader(
train_dataset,
# batch_size=args.batch_size,
# shuffle=(train_sampler is None),
num_workers=args.workers,
pin_memory=True,
# sampler=train_sampler,
batch_sampler=batch_sampler
)
# We do not use sampler for the validation
# val_loader = torch.utils.data.DataLoader(
# val_dataset, batch_size=args.batch_size, shuffle=False,
# num_workers=args.workers, pin_memory=True, sampler=None)
labels = []
for i, (inputs, _labels) in enumerate(train_loader):
labels.append(torch.unique(_labels).item())
print("Unique labels: {}".format(torch.unique(_labels).item()))
labels = set(labels)
print('Length of traversed unique labels: {}'.format(len(labels)))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('--data', metavar='DIR', nargs='?', default='imagenet',
help='path to dataset (default: imagenet)')
parser.add_argument('--dummy', action='store_true', help="use fake data to benchmark")
parser.add_argument('-b', '--batch-size', default=64, type=int,
metavar='N',
help='mini-batch size (default: 256), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel or Distributed Data Parallel')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
args = parser.parse_args()
if args.dummy:
print("=> Dummy data is used!")
num_classes = 100
train_dataset = datasets.FakeData(size=12811, image_size=(3, 224, 224),
num_classes=num_classes, transform=transforms.ToTensor())
val_dataset = datasets.FakeData(5000, (3, 224, 224), num_classes, transforms.ToTensor())
else:
traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
traindir,
transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
]))
val_dataset = datasets.ImageFolder(
valdir,
transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
]))
num_classes = len(train_dataset.classes)
main()
A similar post but in TensorFlow can be found here
| You should write your own batch_sampler class for the DataLoader.
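One concrete speed-up for building the per-class index lists (my own addition): the slow part of DS.__init__ is iterating the dataset just to read labels, which decodes every image. If the dataset exposes its labels directly (torchvision's ImageFolder does, via its targets attribute), you can build the lists without touching the images:
import numpy as np
targets = np.array(train_dataset.targets)  # labels only, no image decoding
per_class_sample_indices = [np.flatnonzero(targets == c).tolist()
                            for c in range(num_classes)]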
| https://stackoverflow.com/questions/74252067/ |
Why are linear layers used in Binary Classification with Deep Learning? | In many examples of Binary Classification with Deep learning
Why are linear layers used? I've been trying to look around the internet for information on the reason for the use of linear layers
e.g.
https://github.com/StatsGary/PyTorch_Tutorials/blob/main/01_MLP_Thyroid_Classifier/PyTorch_Binary_From_Scratch.py
https://hutsons-hacks.info/building-a-pytorch-binary-classification-multi-layer-perceptron-from-the-ground-up
| A linear layer is just another (slightly imprecise, mathematically speaking) name for a fully connected layer, the most standard, classic, and in some sense most powerful building block of neural networks. Networks built purely from fully connected layers (plus nonlinear activations) are universal approximators, and thus a good starting point for any sort of investigation.
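For illustration, a minimal sketch (my own, not taken from the linked tutorials) of a binary classifier built purely from linear layers; the 26 input features are an arbitrary choice:
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(26, 64),   # fully connected ("linear") layer
    nn.ReLU(),           # nonlinearity between the linear layers
    nn.Linear(64, 1),    # single logit for binary classification
)
loss_fn = nn.BCEWithLogitsLoss()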
| https://stackoverflow.com/questions/74252899/ |
How to use Huggingface Trainer streaming Datasets without wrapping it with torchdata's IterableWrapper? | Given a datasets.iterable_dataset.IterableDataset created with streaming=True, e.g.
train_data = load_dataset("csv", data_files="../input/tatoeba/tatoeba-sentpairs.tsv",
streaming=True, delimiter="\t", split="train")
and trying to use it in a Trainer object, e.g.
# instantiate trainer
trainer = Seq2SeqTrainer(
model=multibert,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_data,
eval_dataset=train_data,
)
trainer.train()
It throws an error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_27/3002801805.py in <module>
28 )
29
---> 30 trainer.train()
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1411 resume_from_checkpoint=resume_from_checkpoint,
1412 trial=trial,
-> 1413 ignore_keys_for_eval=ignore_keys_for_eval,
1414 )
1415
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1623
1624 step = -1
-> 1625 for step, inputs in enumerate(epoch_iterator):
1626
1627 # Skip past any already trained steps if resuming training
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
528 if self._sampler_iter is None:
529 self._reset()
--> 530 data = self._next_data()
531 self._num_yielded += 1
532 if self._dataset_kind == _DatasetKind.Iterable and \
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
567
568 def _next_data(self):
--> 569 index = self._next_index() # may raise StopIteration
570 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
571 if self._pin_memory:
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_index(self)
519
520 def _next_index(self):
--> 521 return next(self._sampler_iter) # may raise StopIteration
522
523 def _next_data(self):
/opt/conda/lib/python3.7/site-packages/torch/utils/data/sampler.py in __iter__(self)
224 def __iter__(self) -> Iterator[List[int]]:
225 batch = []
--> 226 for idx in self.sampler:
227 batch.append(idx)
228 if len(batch) == self.batch_size:
/opt/conda/lib/python3.7/site-packages/torch/utils/data/sampler.py in __iter__(self)
64
65 def __iter__(self) -> Iterator[int]:
---> 66 return iter(range(len(self.data_source)))
67
68 def __len__(self) -> int:
TypeError: object of type 'IterableDataset' has no len()
This can be resolved by wrapping the IterableDataset object with the IterableWrapper from the torchdata library.
from torchdata.datapipes.iter import IterDataPipe, IterableWrapper
...
# instantiate trainer
trainer = Seq2SeqTrainer(
model=multibert,
tokenizer=tokenizer,
args=training_args,
train_dataset=IterableWrapper(train_data),
eval_dataset=IterableWrapper(train_data),
)
trainer.train()
Is it possible to use the IterableDataset with Seq2SeqTrainer without casting it with IterableWrapper?
For reference, full working code would look something like the below; replacing the line train_dataset=IterableWrapper(train_data) with train_dataset=train_data will reproduce the TypeError: object of type 'IterableDataset' has no len() error.
import torch
from datasets import load_dataset
from transformers import EncoderDecoderModel
from transformers import AutoTokenizer
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from torchdata.datapipes.iter import IterDataPipe, IterableWrapper
multibert = EncoderDecoderModel.from_encoder_decoder_pretrained(
"bert-base-multilingual-uncased", "bert-base-multilingual-uncased"
)
tokenizer= AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
# set special tokens
multibert.config.decoder_start_token_id = tokenizer.bos_token_id
multibert.config.eos_token_id = tokenizer.eos_token_id
multibert.config.pad_token_id = tokenizer.pad_token_id
# sensible parameters for beam search
multibert.config.vocab_size = multibert.config.decoder.vocab_size
def process_data_to_model_inputs(batch, max_len=10):
inputs = tokenizer(batch["SRC"], padding="max_length",
truncation=True, max_length=max_len)
outputs = tokenizer(batch["TRG"], padding="max_length",
truncation=True, max_length=max_len)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
# because BERT automatically shifts the labels, the labels correspond exactly to `decoder_input_ids`.
# We have to make sure that the PAD token is ignored
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]
return batch
# tatoeba-sentpairs.tsv is a pretty large file.
train_data = load_dataset("csv", data_files="../input/tatoeba/tatoeba-sentpairs.tsv",
streaming=True, delimiter="\t", split="train")
train_data = train_data.map(process_data_to_model_inputs, batched=True)
batch_size = 1
# set training arguments - these params are not really tuned, feel free to change
training_args = Seq2SeqTrainingArguments(
output_dir="./",
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_with_generate=True,
logging_steps=2, # set to 1000 for full training
save_steps=16, # set to 500 for full training
eval_steps=4, # set to 8000 for full training
warmup_steps=1, # set to 2000 for full training
max_steps=16, # delete for full training
# overwrite_output_dir=True,
save_total_limit=1,
#fp16=True,
)
# instantiate trainer
trainer = Seq2SeqTrainer(
model=multibert,
tokenizer=tokenizer,
args=training_args,
train_dataset=IterableWrapper(train_data),
eval_dataset=IterableWrapper(train_data),
)
trainer.train()
| Found the answer from https://discuss.huggingface.co/t/using-iterabledataset-with-trainer-iterabledataset-has-no-len/15790
By adding with_format to the iterable dataset, like this:
train_data.with_format("torch")
The trainer should work without throwing the len() error.
# instantiate trainer
trainer = Seq2SeqTrainer(
model=multibert,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_data.with_format("torch"),
eval_dataset=train_data.with_format("torch"),
)
trainer.train()
| https://stackoverflow.com/questions/74255617/ |
How to turn a cifar10 dataset into a tensor | I am trying to turn the cifar10 dataset into a tensor with
trainset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor())
but it continually returns False when I run the torch.is_tensor(trainset) function, which means it's not a tensor, and it also doesn't work with functions that require tensors in the code I'm running.
I tried to print out the trainset with
print(trainset)
and I keep getting
Dataset CIFAR10
Number of datapoints: 50000
Root location: ./data
Split: Train
StandardTransform
Transform: ToTensor()
which means it's not yet a tensor.
How exactly can I convert the entire cifar10 dataset to a tensor?
| trainset is a Dataset instance and you do not convert it to a tensor. You should load the data and then transform it.
for i, (image, label) in enumerate(trainset):
    # each image is already a (3, 32, 32) tensor thanks to transform=ToTensor()
    ...  # do whatever you need with the tensors
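If you really want the entire dataset as a single tensor (an extra sketch beyond the answer above), you can let a DataLoader stack it for you:
from torch.utils.data import DataLoader
loader = DataLoader(trainset, batch_size=len(trainset))
images, labels = next(iter(loader))
print(torch.is_tensor(images))  # True; images has shape (50000, 3, 32, 32)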
| https://stackoverflow.com/questions/74258668/ |
How to apply torch.topk only on non-zero elements of a tensor? | I want to apply the function torch.topk, but only on the non-zero elements of the tensor (i.e, not to count zero elements in the counting process).
Currently I do this:
torch.topk(tensor.view(-1), k)
But this also considers the zero elements of the tensor and returns the largest values among all of them. What should I do to get the top k among the non-zero elements only?
| # get the top k values in a tensor excluding zeros
top_vals = torch.topk(tensor.view(-1), k)[0]
mask = top_vals != 0
values = top_vals[mask]
print(values)
# positions of the non-zero entries within the top-k result (not in the original tensor)
indices = torch.nonzero(mask)
print(indices)
credits: this question and its answers
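An alternative sketch (my own addition, not from the credited answers) that guarantees zeros can never be selected and returns indices into the original flattened tensor, assuming a floating-point tensor with at least k non-zero elements: mask the zeros with -inf before calling topk.
flat = tensor.view(-1)
masked = flat.clone()
masked[flat == 0] = float('-inf')        # zeros can never win topk now
values, indices = torch.topk(masked, k)  # indices refer to the flattened tensor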
| https://stackoverflow.com/questions/74261488/ |
Pytorch: RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 1 | The pytorch program needs to return the result of the "Rock, Paper, Scissors" game. Input is given as one-hot tensors: [ [1,0,0], [0,1,0] ] ([1,0,0]-rock, [0,1,0] - scissors) output must be: [1, 0] (first player win, second player lose). What's wrong with this code?
import torch
from torch import nn
import torch.utils.data as data
torch.manual_seed(42)
input = [[1, 0, 0], [0, 1, 0]], [[1, 0, 0], [0, 0, 1]], [[0, 1, 0], [0, 1, 0]], [[0, 0, 1], [1, 0, 0]]
input = torch.tensor(input, dtype=torch.float32)
result = [[1, 0], [0, 1], [1, 1], [1, 0]]
result = torch.tensor(result, dtype=torch.float32)
class LinearRegression(nn.Module):
def __init__(self, num_inputs, num_outputs):
super().__init__()
self.linear = nn.Linear(num_inputs, num_outputs)
self.act_fn = nn.Sigmoid()
def forward(self, x):
x = self.linear(x)
x = self.act_fn(x)
return x
model = LinearRegression(num_inputs=3, num_outputs=2)
print(model)
# Training loop
for name, param in model.named_parameters():
print(f"Parameter {name}, shape {param.shape}")
for x in input:
print(model(x))
print(model(input))
print('####################')
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.3)
lossfunc = nn.MSELoss()
# Training loop
for _ in range(1000):
res = model(input)
res = res.squeeze(dim=1)
loss = lossfunc(res, result)
# print(loss)
## Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(model(input))
| You have the following problem: you want your model to take as input the rock/paper/scissors choice of player1 and of player2. Since you do one-hot encoding, you want to feed 6 values into your neural network, 3 for player1 and 3 for player2.
A dense/feed-forward neural network can only take ONE vector as input, but you are trying to pass two vectors of size three: [[1, 0, 0], [0, 1, 0]]. What you can/should do instead is to concatenate both vectors, so that the first 3 values are the one-hot encoded result of player1 and the last 3 values are the one-hot encoded result of player2. Like this: [1, 0, 0, 0, 1, 0].
Therefore simply change the input data to this:
input = [[1, 0, 0, 0, 1, 0]], [[1, 0, 0, 0, 0, 1]], [[0, 1, 0, 0, 1, 0]], [[0, 0, 1, 1, 0, 0]]
and the model to this:
model = LinearRegression(num_inputs=6, num_outputs=2)
And it works!
| https://stackoverflow.com/questions/74261495/ |
ImportError: libtinfo.so.5: cannot open shared object file: No such file or directory | On an Ubuntu-based system, I got this error that I didn't have before in an existing FastAI Python project.
Traceback:
Traceback (most recent call last):
File "/home/me/PycharmProjects/project/model/predict.py", line 6, in <module>
from fastai.vision.all import *
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/vision/all.py", line 1, in <module>
from . import models
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/vision/models/__init__.py", line 1, in <module>
from . import xresnet
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/vision/models/xresnet.py", line 17, in <module>
from ...torch_basics import *
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/torch_basics.py", line 1, in <module>
from torch import multiprocessing
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/torch/__init__.py", line 190, in <module>
from torch._C import *
ImportError: libtinfo.so.5: cannot open shared object file: No such file or directory
| Terminal:
sudo apt-get install libtinfo5
| https://stackoverflow.com/questions/74261921/ |
Loss function gradient not computing - FastAI Convolutional VAE | I modified the PyTorch VAE example to be a convolutional network. I then wanted to implement this in FastAI.
class convVAE(nn.Module):
def __init__(self, dim_z=20):
super(convVAE, self).__init__()
self.cv1 = nn.Conv2d(1, 32, 3, stride=2)
self.cv2 = nn.Conv2d(32, 64, 3, stride=2)
self.fc31 = nn.Linear(2304, dim_z)
self.fc32 = nn.Linear(2304, dim_z)
self.fc4 = nn.Linear(dim_z, 2304)
self.cv5 = nn.ConvTranspose2d(64, 32, 3, stride=2)
self.cv6 = nn.ConvTranspose2d(32, 1, 3, stride=2, output_padding=1)
def encode(self, x):
h1 = F.leaky_relu(self.cv1(x))
h2 = F.leaky_relu(self.cv2(h1)).view(-1, 2304)
return self.fc31(h2), self.fc32(h2)
def reparameterize(self, mu, logvar):
std = torch.exp(0.5*logvar)
eps = torch.randn_like(std)
return mu + eps*std
def decode(self, z):
h5 = F.leaky_relu(self.fc4(z)).view(-1, 64, 6, 6)
h6 = F.leaky_relu(self.cv5(h5))
return torch.sigmoid(self.cv6(h6))
def forward(self, x):
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
return self.decode(z).view(-1, 784), mu, logvar
def get_loss(res,y):
y_hat, mu, logvar = res
BCE = F.binary_cross_entropy(
y.view(-1, 784),
y_hat,
reduction='sum')
KLD = -0.5 * torch.sum(1 + logvar -
mu.pow(2) - logvar.exp())
return BCE + KLD
block = DataBlock(
blocks=(ImageBlock(cls=PILImageBW),ImageBlock(cls=PILImageBW)),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=(lambda x: x),
batch_tfms=aug_transforms(mult=2., do_flip=False))
path = untar_data(URLs.MNIST)
loaders = block.dataloaders(path/"training", num_workers=0, bs=32)
loaders.train.show_batch(max_n=4, nrows=1)
mdl = convVAE(5)
learn = Learner(loaders, mdl, loss_func = convVAE.get_loss)
learn.fit(1, cbs=ShortEpochCallback())
The gradient does not seem to be computed correctly from the loss, as the parameters all become NaN after one step. The loss function does evaluate, but it was relatively large, O(1e6). The model and loss function work in the native PyTorch implementation.
EDIT: SOLUTION APPEARS TO HAVE BEEN DUE TO def init(.) instead of def __init__(.) facepalm
| There is a mistake in your BCE calculation:
BCE = F.binary_cross_entropy(
y.view(-1, 784), # this should be your model prediction
y_hat, # this should be the ground truth
reduction='sum')
A simple fix is to swap the two arguments.
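With that swap applied, get_loss would read:
def get_loss(res, y):
    y_hat, mu, logvar = res
    BCE = F.binary_cross_entropy(
        y_hat,            # model prediction first
        y.view(-1, 784),  # ground truth second
        reduction='sum')
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD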
| https://stackoverflow.com/questions/74263809/ |
Append or combine column wise torch tensors of different shapes | Tensors A and B below share the row size. The 6 and 3 refers to the columns of the 2 data frame, but the B tensor has at each cell a vector of size 256.
A= torch.Size([17809, 6])
B= torch.Size([17809, 3, 256])
How do I append combine these tensors?
More detail:
A column of 'A' is a numeric vector like 'Age', In B one of the 3 columns has a set of node embeddings (vector) of size 256.
| You can apply torch.nn.Embedding on A to embed the numeric vectors, then use torch.cat to concatenate the embedding of A with B along axis=1.
(In the below code I use random tensors).
import torch
from torch import nn
num_embeddings = 10 # fill base Age
embedding_dim = 256 # fill base of B tensor
embedding = nn.Embedding(num_embeddings, embedding_dim)
A = torch.randint(10, (17809, 6))
print(f"A : {A.shape}")
E_A = embedding(A)
print(f"E_A : {E_A.shape}")
B = torch.rand(17809, 3, 256)
print(f"B : {B.shape}")
C = torch.cat((E_A, B), 1)
print(f"C : {C.shape}")
Output:
A : torch.Size([17809, 6])
E_A : torch.Size([17809, 6, 256])
B : torch.Size([17809, 3, 256])
C : torch.Size([17809, 9, 256])
| https://stackoverflow.com/questions/74265925/ |
AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next' | I am trying to load the dataset using Torch Dataset and DataLoader, but I got the following error:
AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'
the code I use is:
class WineDataset(Dataset):
def __init__(self):
# Initialize data, download, etc.
# read with numpy or pandas
xy = np.loadtxt('./data/wine.csv', delimiter=',', dtype=np.float32, skiprows=1)
self.n_samples = xy.shape[0]
# here the first column is the class label, the rest are the features
self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features]
self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1]
# support indexing such that dataset[i] can be used to get i-th sample
def __getitem__(self, index):
return self.x_data[index], self.y_data[index]
# we can call len(dataset) to return the size
def __len__(self):
return self.n_samples
dataset = WineDataset()
train_loader = DataLoader(dataset=dataset,
batch_size=4,
shuffle=True,
num_workers=2)
I tried to make the num_workers=0, still have the same error.
Python version 3.8.9
PyTorch version 1.13.0
| I too faced the same issue when I tried to call the next() method as follows:
dataiter = iter(dataloader)
data = dataiter.next()
You need to use the following instead and it works perfectly:
dataiter = iter(dataloader)
data = next(dataiter)
Finally, your code should look as follows:
class WineDataset(Dataset):
def __init__(self):
# Initialize data, download, etc.
# read with numpy or pandas
xy = np.loadtxt('./data/wine.csv', delimiter=',', dtype=np.float32, skiprows=1)
self.n_samples = xy.shape[0]
# here the first column is the class label, the rest are the features
self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features]
self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1]
# support indexing such that dataset[i] can be used to get i-th sample
def __getitem__(self, index):
return self.x_data[index], self.y_data[index]
# we can call len(dataset) to return the size
def __len__(self):
return self.n_samples
dataset = WineDataset()
train_loader = DataLoader(dataset=dataset,
batch_size=4,
shuffle=True,
num_workers=2)
dataiter = iter(train_loader)
data = next(dataiter)
| https://stackoverflow.com/questions/74289077/ |
TypeError: unsupported operand type(s) for /: 'SequenceClassifierOutput' and 'int' | I am using the huggingface library to train a BERT model on a classification problem.
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=10)
def training_step(self, batch, batch_nb):
sequence, label = batch
input_ids, attention_mask, labels = self.prepare_batch(sequence=sequence, label=label)
loss = self.model(input_ids=input_ids,
attention_mask=attention_mask,
labels=labels)
tensorboard_logs = {'train_loss': loss}
I am getting the following error just before the training starts:
in training_step
closure_loss = closure_loss / self.trainer.accumulate_grad_batches
TypeError: unsupported operand type(s) for /: 'SequenceClassifierOutput' and 'int'
I am using pytorch-lightning
| Calling self.model() returns an object of type SequenceClassifierOutput.
To access the loss you need to call it's loss attribute:
Replace
loss = self.model(input_ids=input_ids,
attention_mask=attention_mask,
labels=labels)
by
output = self.model(input_ids=input_ids,
attention_mask=attention_mask,
labels=labels)
loss = output.loss
| https://stackoverflow.com/questions/74290324/ |
Does optuna.integration.TorchDistributedTrial support multinode optimization? |
Does integration.TorchDistributedTrial support multinode optimization?
I'm using Optuna on a SLURM cluster. Suppose I would like to do a distributed hyperparameter optimization using two nodes with two gpus each. Would submitting a script like pytorch_distributed_simple.py to multiple nodes yield expected results?
I assume every node would be responsible for executing its own trials (i.e. no nodes share trials) and every GPU on a node is responsible for its own portion of the data, determined by torch.utils.data.DataLoader's sampler. Is this assumption correct, or are edits needed apart from TorchDistributedTrial's requirement to pass None to objective calls on ranks other than 0?
I already tried the above, but I'm not sure how to check every node is responsible for distinct trials.
GitHub issues crosspost
| Apparently, Optuna does allow multiple Optuna processes to do distributed runs. Why wouldn't it :)
Basically, run pytorch_distributed_simple.py on multiple nodes (I use SLURM for this) and make sure every subprocess calls the trial.report() method. Every node is now responsible for its own trial. Trials can use DDP.
My method differs from the provided code in that I use SLURM (different environment variables) and I use sqlite to store study information. Moreover, I use the NCCL backend to initialize process groups, and therefore need to pass a device to TorchDistributedTrial.
Unrelated, but I also wanted to call MaxTrialsCallback() in every subprocess. To achieve this, I passed the callback to the rank 0 study.optimizer method and call it explicitly in local non-rank 0 processes after the objective call.
| https://stackoverflow.com/questions/74291634/ |
PyTorch runs as expected under Windows, but fails on larger images under Ubuntu | I've trained a segmentation_models_pytorch.PSPNet model for image segmentation. For prediction, I load the whole image into a PyTorch tensor and scan it with a 384x384-pixel window.
result = model.predict(image_tensor[:, :, y:y+384, x:x+384])
My Windows machine has 6Gb GPU, while Ubuntu has 8 Gb GPU. When all models are loaded they consume some 1.4 Gb GPU. When processing a large image on Windows the memory consumption increases to 1.7 Gb GPU.
Under Windows the model can handle 25 M pixel images. Under Ubuntu the same code can only process up to 5 M pixel image. Debugging is difficult because I only have ssh access to the Ubuntu machine.
What could cause this discrepancy and how to debug this issue?
| You can check the CUDA version and the torch version on both machines.
To debug, you can just print image_tensor.shape before model.predict; maybe you are running a larger batch size on the Linux machine.
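For example, a minimal check before the prediction call (my own suggestion):
print(image_tensor.shape)                     # confirm the window size is what you expect
print(torch.cuda.memory_allocated() / 2**30)  # GPU memory currently allocated, in GiB
result = model.predict(image_tensor[:, :, y:y+384, x:x+384])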
| https://stackoverflow.com/questions/74304028/ |
How to remove an element from the torch tensor list? | I have a torch tensor that looks like below:
tensor([[[-1.8510e-01, 1.3181e-01, 3.2903e-01, ..., 1.9867e-01,
5.1037e-03, 6.4071e-03],
[-4.6331e-01, 2.0216e-01, 2.7916e-01, ..., 2.6695e-01,
-1.3543e-02, 5.3604e-02],
[-3.8719e-01, 2.9603e-01, 2.5516e-01, ..., 1.7509e-01,
8.9148e-02, 3.7516e-02],
and the shape of this torch tensor is [500, 197, 768]
There are 500 images, each with dimensions 197*768. I need to remove the instances of some images. Let's say if I remove 5 images, then the shape will be [495, 197, 768].
Can anyone tell me how to remove them by index?
| It depends on where along that dimension you want to remove the items.
To remove the first/last n elements (using normal Python indexing):
new_data = data[n:] # Remove first n elements
new_data = data[:-n] # Remove last n elements
To remove n items inside the tensor, you will need to specify a start-index s (s+n should not be larger than the length along that dimension):
new_data = torch.cat((data[:s], data[s+n:]), dim=0) # Remove n elements starting at s
To remove using a list of indices you could do the following:
indices = [6, 7, 9, 100, 204] # Arbitrary list of indices that should be removed
indices_to_keep = [i for i in range(data.shape[0]) if i not in indices] # List of indices that should be kept
new_data = data[torch.LongTensor(indices_to_keep)]
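A vectorized variant of the index-list case (my own addition): build a boolean mask instead of the Python list comprehension, which avoids the per-element membership checks.
indices = torch.tensor([6, 7, 9, 100, 204])
mask = torch.ones(data.shape[0], dtype=torch.bool)
mask[indices] = False   # mark the rows to drop
new_data = data[mask]   # keep only rows where the mask is True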
| https://stackoverflow.com/questions/74304719/ |
Invalid stoi argument with torch | I've been tackling Python, and torch specifically, lately as a hobby, and while some APIs work, I keep getting an invalid stoi argument exception with other very basic APIs torch provides.
Reproduced with the code below:
import torch
torch.cuda.is_available()
torch.cuda.current_device()
First call (is_available()) works as expected and returns True, but the second throws an exception:
Exception has occurred: RuntimeError
invalid stoi argument
File "C:\DEV\pthon_test\torch_test.py", line 5, in <module>
torch.cuda.current_device()
Needless to say, more complicated things (for example, running stable_diffusion_webui) fail if used with GPU (said webui works with CPU), and trying to dig deeper into the code brings me to the same exception.
OS is Windows 11, Python 3.10.8; the torch version checked with torch.__version__ returns 1.12.1+cu113. And, well, a GPU is present.
List of packages installed:
❯ pip list
Package Version
----------------------- ---------------
absl-py 1.3.0
addict 2.4.0
antlr4-python3-runtime 4.9.3
basicsr 1.4.2
beautifulsoup4 4.11.1
cachetools 5.2.0
certifi 2022.9.24
charset-normalizer 2.1.1
clip 1.0
colorama 0.4.6
contourpy 1.0.6
cycler 0.11.0
einops 0.4.1
facexlib 0.2.5
ffmpy 0.3.0
filelock 3.8.0
filterpy 1.4.5
font-roboto 0.0.1
fonts 0.0.3
fonttools 4.38.0
ftfy 6.1.1
future 0.18.2
gdown 4.5.3
gfpgan 1.3.5
google-auth 2.14.0
google-auth-oauthlib 0.4.6
grpcio 1.50.0
idna 3.4
imageio 2.22.3
kiwisolver 1.4.4
lark 1.1.2
llvmlite 0.39.1
lmdb 1.3.0
lpips 0.1.4
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.6.2
networkx 2.8.8
numba 0.56.3
numpy 1.23.3
oauthlib 3.2.2
omegaconf 2.2.3
opencv-python 4.6.0.66
orjson 3.8.1
packaging 21.3
piexif 1.1.3
Pillow 9.2.0
pip 22.2.2
protobuf 3.19.6
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
pycryptodome 3.15.0
pydantic 1.10.2
pyDeprecate 0.3.2
pydub 0.25.1
pyparsing 3.0.9
pyrsistent 0.19.2
PySocks 1.7.1
python-dateutil 2.8.2
python-multipart 0.0.4
pytz 2022.6
PyWavelets 1.4.1
PyYAML 6.0
regex 2022.10.31
requests 2.28.1
requests-oauthlib 1.3.1
resize-right 0.0.2
rfc3986 1.5.0
rsa 4.9
scikit-image 0.19.3
scipy 1.9.3
setuptools 63.2.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.3.2.post1
tb-nightly 2.11.0a20221103
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tifffile 2022.10.10
tokenizers 0.12.1
torch 1.12.1+cu113
torchvision 0.13.1+cu113
tqdm 4.64.1
typing_extensions 4.4.0
uc-micro-py 1.0.1
urllib3 1.26.12
wcwidth 0.2.5
websockets 10.4
Werkzeug 2.2.2
wheel 0.37.1
yapf 0.32.0
zipp 3.10.0
This brings me to two questions:
What's causing the issue? I do have a rough understanding of the c++ meaning of the error, but not sure why I'm getting this on python
Is there anything I can do to fix the problem?
Thanks
| As there is no other answers, and initial problem seem to have been solved, I thought I'd share some information.
I've noticed the problem was fixed when... I updated my Nvidia driver. No idea what was the problem, and if there's anything else I unknowingly did, but updating to 526.98 did the trick for me.
If there's anyone else to share more details on the original issue, or possible solutions - feel free to edit or give a better answer, I'd mark that as the best option instead.
Have a nice day o/
| https://stackoverflow.com/questions/74308875/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x32 and 400x120) | class Net(nn.Module):
def __init__(self):
super().__init__()
#(input channel, output channel, kenel size)
#channel is a dimension of a tensor which is a container that can house data in N dimensions (matrices)
self.conv1 = nn.Conv2d(3, 6, 5)
#shrink the image stack by pooling(kernel size, stride(shift)) and take max value per window
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
#TODO: add conv3
self.conv3 = nn.Conv2d(16, 32, 5)
#drop layer deletes 20% of the feautures to help prevent overfitting
self.drop = nn.Dropout2d(p=0.2)
#linear predicts the output as a linear function of inputs
#(output channels, height, width, batch size
#TODO:
self.fc1 = nn.Linear(16 * 16 * 5, 120)
#TODO:
self.fc1_5 = nn.Linear()
#layer(size of input, size of output)
#Linear layer=Fully connected layer
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
#F.ReLUs change negative values to 0. Apply to all stack of images.
#they are activation functions. We apply it after each liner layer.
#only used in hidden layers.
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
#Select some feautures to drop after 3rd conv to prevent overfitting
x = self.drop(F.relu(self.conv3(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch into 1-D
x = F.relu(self.fc1(x))
#TODO: add fc1_5
x = F.relu(self.fc1_5(x))
x = F.relu(self.fc2(x))
#Feed to Fully connected layer to predict class
x = self.fc3(x) # no relu b/c it's a last layer.
return x
I am using images from CIFAR10 which are of size 3x32x32.
When I ran the code before, it stopped because the self.fc1 linear layer size did not work with the self.conv3 I added.
I'm also not sure what to write for self.fc1_5.
Can someone explain to me how this actually works, and give the solution as well?
Thank you!
I have added an extra convolutional layer and you can see it is
self.conv3 = nn.Conv2d(16, 32, 5).
The lines under the TODO are where I'm stuck.
I updated the line to:
self.fc1 = nn.Linear(16 * 16 * 5, 120)
before, it was:
self.fc1 = nn.Linear(16 * 5 * 5, 120).
| When you create a CNN for classification with a fixed input size, it's easy to figure out the size of your image by the time it has progressed through your CNN layers. Since we start with images of size [32,32] (channels are unimportant for now):
def __init__(self):
super().__init__()
#(input channel, output channel, kernel size)
#channel is a dimension of a tensor which is a container that can house data in N dimensions (matrices)
self.conv1 = nn.Conv2d(3, 6, 5) # size 28x28 - lose 2 px from each side with a kernel of size 5
#shrink the image stack by pooling(kernel size, stride(shift)) and take max value per window
self.pool = nn.MaxPool2d(2, 2) # size 14x14 - max pooling with K=2 halves the image size
self.conv2 = nn.Conv2d(6, 16, 5) # size 10x10 -> 5x5 after pooling
#TODO: add conv3
self.conv3 = nn.Conv2d(16, 32, 5) # size 1x1
#drop layer deletes 20% of the features to help prevent overfitting
self.drop = nn.Dropout2d(p=0.2)
#linear predicts the output as a linear function of inputs
#(output channels, height, width, batch size)
self.fc1 = nn.Linear(1 * 1 * 32, 120)
self.fc1_5 = nn.Linear(120,120) # matches the output size of fc1 and input size of fc2
The CNN size losses can be negated by using padding of (K-1)//2, where K=kernel_size.
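For example, a quick check of that padding rule (this snippet is an illustration, not part of the original model):
import torch
import torch.nn as nn
conv = nn.Conv2d(3, 6, kernel_size=5, padding=(5 - 1) // 2)  # 'same'-style padding
x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)  # torch.Size([1, 6, 32, 32]) - spatial size preserved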
| https://stackoverflow.com/questions/74311555/ |
How to add shared or static library using PyTorch C++ extension? | How do I use torch.utils.cpp_extension.load to link shared or static library from external source?
I wrote some function in C++, and am using it in PyTorch. So I am using load function from torch.utils.cpp_extension to load a PyTorch C++ extension just-in-time (JIT).
This is the wrapper.py file's content:
import os
from torch.utils.cpp_extension import load
dir_path = os.path.dirname(os.path.realpath(__file__))
my_func = load(name='my_func', sources=[os.path.join(dir_path, 'my_func.cpp')], extra_cflags=['-fopenmp', '-O2'], extra_ldflags=['-lgomp','-lrt'])
my_func.cpp uses OpenMP, so I use the above flags.
Now, I am trying to additionally use several functions from the zstd library in my_func.cpp. After cloning and making the zstd repository, shared libraries like libzstd.so, libzstd.so.1, libzstd.so.1.5.3, and a static library like libzstd.a have been created.
I've included #include <zstd.h> inside my_func.cpp and used zstd's functions.
I now have to modify wrapper.py to tell the compiler that I am using functions from zstd library.
How can I successfully compile my_func.cpp using PyTorch C++ extension's torch.utils.cpp_extension.load -- which arguments should I modify? Or, is it even possible to add external shared or static library using this method?
Frankly, I'm not familiar with the difference between static and shared libraries. But it seems that I can compile my_func.cpp with either one of them, i.e., g++ -fopenmp -O2 -lgomp -lrt -o my_func my_func.cpp lib/libzstd.so.1.5.3 and g++ -fopenmp -O2 -lgomp -lrt -o my_func my_func.cpp lib/libzstd.a both work.
I just can't figure out how I can do the exact same compiling using torch.utils.cpp_extension.load.
Sorry for delivering kind of a lengthy question. I just wanted to make things clear.
| I've figured this out. extra_ldflags argument in torch.utils.cpp_extension.load can handle this. In my case, I've added libzstd.so file in my repository and added -lzstd in above argument.
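For reference, a sketch of the modified wrapper.py (the lib/ path is an assumption about where the .so file is placed; adjust it to your layout):
import os
from torch.utils.cpp_extension import load
dir_path = os.path.dirname(os.path.realpath(__file__))
my_func = load(name='my_func',
               sources=[os.path.join(dir_path, 'my_func.cpp')],
               extra_cflags=['-fopenmp', '-O2'],
               extra_include_paths=[dir_path],  # directory containing zstd.h, if it is not on the default include path
               extra_ldflags=['-lgomp', '-lrt', '-lzstd', '-L' + os.path.join(dir_path, 'lib')])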
| https://stackoverflow.com/questions/74313768/ |
FastAI Multilayer LSTM not learning, accuracy decreases while training | I'm following Chapter 12 on RNNs/LSTMs from scratch in the fastai book, but getting stuck trying to train a custom built LSTM from scratch. Here is my code
This is the boilerplate bit (following the examples in the book)
from fastai.text.all import *
path = untar_data(URLs.HUMAN_NUMBERS)
lines = L()
with open(path/'train.txt') as f: lines += L(*f.readlines())
with open(path/'valid.txt') as f: lines += L(*f.readlines())
text = ' . '.join([l.strip() for l in lines])
tokens = text.split(' ')
vocab = L(*tokens).unique()
word2idx = {w:i for i,w in enumerate(vocab)}
nums = L(word2idx[i] for i in tokens)
def group_chunks(ds, bs):
m = len(ds) // bs
new_ds = L()
for i in range(m): new_ds += L(ds[i + m*j] for j in range(bs))
return new_ds
sl = 3
bs = 64
seqs = L((tensor(nums[i:i+sl]), nums[i+sl])
for i in range(0,len(nums)-sl-1,sl))
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(group_chunks(seqs[:cut], bs),
group_chunks(seqs[cut:], bs),
bs=bs, drop_last=True, shuffle=False)
And this is the meat of the thing
class LSTMCell(Module):
def __init__(self, ni, nh):
self.forget_gate = nn.Linear(ni + nh, nh)
self.input_gate = nn.Linear(ni + nh, nh)
self.cell_gate = nn.Linear(ni + nh, nh)
self.output_gate = nn.Linear(ni + nh, nh)
def forward(self, input, state):
h, c = state
h = torch.cat([h, input], dim=1)
c = c * torch.sigmoid(self.forget_gate(h))
c = c + torch.sigmoid(self.input_gate(h)) * torch.tanh(self.cell_gate(h))
h = torch.sigmoid(self.output_gate(h)) * torch.tanh(c)
return h, (h, c)
class MyModel(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.cells = [LSTMCell(bs, n_hidden) for _ in range(sl)]
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h = torch.zeros(bs, n_hidden)
self.c = torch.zeros(bs, n_hidden)
def forward(self, x):
x = self.i_h(x)
h, c = self.h, self.c
for i, cell in enumerate(self.cells):
res, (h, c) = cell(x[:, i, :], (h, c))
self.h = h.detach()
self.c = c.detach()
return self.h_o(res)
def reset(self):
self.h.zero_()
self.c.zero_()
learn = Learner(dls, MyModel(len(vocab), 64), loss_func=CrossEntropyLossFlat(), metrics=accuracy, cbs=ModelResetter)
learn.fit_one_cycle(5, 1e-2)
The training output looks like this (accuracy decreases as training progresses):
Any help appreciated
| After some playing around I was able to figure it out. The issue was the way I was initialising the list of cells. In MyModel.__init__ I only needed to change the line to
self.cells = nn.ModuleList([LSTMCell(bs, n_hidden) for _ in range(sl)])
The reason it was broken was that by initialising the Modules in a regular list, the parameters were hidden from pytorch/fastai. By using a nn.ModuleList the parameters are registered and can be trained
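A quick way to see the difference (an illustrative snippet, not from the original code):
import torch.nn as nn
class A(nn.Module):  # plain Python list: submodules are invisible to PyTorch
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4) for _ in range(3)]
class B(nn.Module):  # nn.ModuleList: each submodule's parameters are registered
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])
print(sum(p.numel() for p in A().parameters()))  # 0
print(sum(p.numel() for p in B().parameters()))  # 60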
| https://stackoverflow.com/questions/74316188/ |
Resizing PIL Image gives a completely black image | I have a dataset of grayscale images, like this one below:
Now, I open my dataset with the following class:
"""Tabular and Image dataset."""
def __init__(self, excel_file, image_dir):
self.image_dir = image_dir
self.excel_file = excel_file
self.tabular = pd.read_excel(excel_file)
def __len__(self):
return len(self.tabular)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
tabular = self.tabular.iloc[idx, 0:]
y = tabular["Prognosis"]
image = PIL.Image.open(f"{self.image_dir}/{tabular['ImageFile']}")
image = np.array(image)
#image = image[..., :3]
image = transforms.functional.to_tensor(image)
return image, y
If I check the tensors of the image, then I have this:
tensor([[[160, 160, 192, ..., 52, 40, 40],
[176, 208, 320, ..., 96, 80, 16],
[176, 240, 368, ..., 160, 160, 52],
...,
[576, 608, 560, ..., 16, 16, 16],
[624, 592, 544, ..., 16, 16, 16],
[624, 624, 576, ..., 16, 16, 16]]], dtype=torch.int32)
Now, they should be between 0 and 1, right? Because it is grayscale, or 0-255 in RGB, but there are those big values that I have no idea where they are coming from (indeed imshow shows images with strange distorted colors like yellow and blue rather than grayscale).
However, this is the size of the images torch.Size([1, 2350, 2866]); I want to resize to (1,224,224) for example
This is my function:
def resize_images(images: List[str]):
for i in images:
image = PIL.Image.open(f"{data_path}TrainSet/{i}")
new_image = image.resize((224, 224))
new_image.save(f"{data_path}TrainImgs/{i}")
resize_images(list(df["ImageFile"]))
However, this code returns all images that are 224x224 but they are all black. All images are completely black!
| You have either shared a different file from the one your code opens, or imgur has changed your file. In any case, the most expedient way to examine the content of your file on Linux/Unix/macOS without installing any software is to use the file command. So, checking your file we can see that it is an 8-bit PNG with alpha channel:
file pZS4l.png
pZS4l.png: PNG image data, 896 x 732, 8-bit/color RGBA, non-interlaced
That immediately tells me it is not the exact image you opened in your code, because there are values exceeding 255 in your pixel dump, and that is not possible in an 8-bit file. So, the next best way to check the contents of an image is with exiftool and that works for Windows users too. That looks like this:
exiftool pZS4l.png
ExifTool Version Number : 12.30
File Name : pZS4l.png
Directory : .
File Size : 327 KiB
File Modification Date/Time : 2022:11:05 17:15:22+00:00
File Access Date/Time : 2022:11:05 17:15:23+00:00
File Inode Change Date/Time : 2022:11:05 17:15:22+00:00
File Permissions : -rw-r--r--
File Type : PNG
File Type Extension : png
MIME Type : image/png
Image Width : 896
Image Height : 732
Bit Depth : 8
Color Type : RGB with Alpha
Compression : Deflate/Inflate
Filter : Adaptive
Interlace : Noninterlaced
Profile Name : ICC Profile
Profile CMM Type : Apple Computer Inc.
Profile Version : 2.1.0
Profile Class : Display Device Profile
Color Space Data : RGB
Profile Connection Space : XYZ
Profile Date Time : 2022:07:06 14:13:59
Profile File Signature : acsp
Primary Platform : Apple Computer Inc.
CMM Flags : Not Embedded, Independent
Device Manufacturer : Apple Computer Inc.
Device Model :
Device Attributes : Reflective, Glossy, Positive, Color
Rendering Intent : Perceptual
Connection Space Illuminant : 0.9642 1 0.82491
Profile Creator : Apple Computer Inc.
Profile ID : 0
Profile Description : Display
...
...
So, now I see it is a screen-grab made on a Mac. A screen-grab is not the same as the image you are displaying! If you display a 16-bit image on an 8-bit display, the screen-grab will be 8-bit. You should share your original image, not a screen-grab. If imgur is changing your images, you should share them with Dropbox or Google Drive or similar.
Right, on to your question. Assuming you actually open a PNG in your code (which we can't tell because it is incomplete) the data should not be float because PNG cannot store floats, it can only store integers. PNG can store integer samples with bit depths of 1, 2, 4, 8 or 16-bits. If you read about a 24-bit PNG, that is RGB888. If you read about a 32-bit PNG, that is RGBA8888. If you read about a 48-bit PNG, that is RGB with 16 bits/sample. If you read about 64-bit PNG, that is RGBA with 16-bits/sample.
So, the short answer is to run:
file YOURACTUALIMAGE.PNG
and/or:
exiftool YOURACTUALIMAGE.PNG
So, my suspicion is that you have a 16-bit greyscale PNG, which is perfectly able to store samples in the range 0..65535.
Note: If you actually do want to store floats, you probably need to use TIFF, or PFM (Portable Float Map) or EXR format.
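If your file does turn out to be a 16-bit greyscale PNG, a minimal sketch for scaling it into the 0..1 range when loading (the filename is a placeholder):
import numpy as np
import PIL.Image
image = PIL.Image.open('scan.png')  # placeholder filename
arr = np.array(image)
if arr.dtype != np.uint8:  # 16-bit data (PIL may expose it as int32, matching the dump above)
    arr = arr.astype(np.float32) / 65535.0  # scale 0..65535 into 0..1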
| https://stackoverflow.com/questions/74327197/ |
How to use random_split with percentage split (sum of input lengths does not equal the length of the input dataset) | I tried to use torch.utils.data.random_split as follows:
import torch
from torch.utils.data import DataLoader, random_split
list_dataset = [1,2,3,4,5,6,7,8,9,10]
dataset = DataLoader(list_dataset, batch_size=1, shuffle=False)
random_split(dataset, [0.8, 0.1, 0.1], generator=torch.Generator().manual_seed(123))
However, when I tried this, I got the error raise ValueError("Sum of input lengths does not equal the length of the input dataset!")
I looked at the docs and it seems like I should be able to pass in decimals that sum to 1, but clearly it's not working.
I also Googled this error and the closest thing that comes up is this issue.
What am I doing wrong?
| You're likely using an older version of PyTorch, such as PyTorch 1.10, which does not have this functionality.
To replicate this functionality in the older version, you can just copy the source code of the newer version:
import math
import warnings
from typing import List
from torch import default_generator, randperm
from torch._utils import _accumulate
from torch.utils.data.dataset import Subset
def random_split(dataset, lengths,
generator=default_generator):
r"""
Randomly split a dataset into non-overlapping new datasets of given lengths.
If a list of fractions that sum up to 1 is given,
the lengths will be computed automatically as
floor(frac * len(dataset)) for each fraction provided.
After computing the lengths, if there are any remainders, 1 count will be
distributed in round-robin fashion to the lengths
until there are no remainders left.
Optionally fix the generator for reproducible results, e.g.:
>>> random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42))
>>> random_split(range(30), [0.3, 0.3, 0.4], generator=torch.Generator(
... ).manual_seed(42))
Args:
dataset (Dataset): Dataset to be split
lengths (sequence): lengths or fractions of splits to be produced
generator (Generator): Generator used for the random permutation.
"""
if math.isclose(sum(lengths), 1) and sum(lengths) <= 1:
subset_lengths: List[int] = []
for i, frac in enumerate(lengths):
if frac < 0 or frac > 1:
raise ValueError(f"Fraction at index {i} is not between 0 and 1")
n_items_in_split = int(
math.floor(len(dataset) * frac) # type: ignore[arg-type]
)
subset_lengths.append(n_items_in_split)
remainder = len(dataset) - sum(subset_lengths) # type: ignore[arg-type]
# add 1 to all the lengths in round-robin fashion until the remainder is 0
for i in range(remainder):
idx_to_add_at = i % len(subset_lengths)
subset_lengths[idx_to_add_at] += 1
lengths = subset_lengths
for i, length in enumerate(lengths):
if length == 0:
warnings.warn(f"Length of split at index {i} is 0. "
f"This might result in an empty dataset.")
# Cannot verify that dataset is Sized
if sum(lengths) != len(dataset): # type: ignore[arg-type]
raise ValueError("Sum of input lengths does not equal the length of the input dataset!")
indices = randperm(sum(lengths), generator=generator).tolist() # type: ignore[call-overload]
return [Subset(dataset, indices[offset - length : offset]) for offset, length in zip(_accumulate(lengths), lengths)]
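Usage is then the same as with the newer API. Note that random_split expects a Dataset-like object, so pass the raw list rather than the DataLoader:
train_set, val_set, test_set = random_split(
    list_dataset, [0.8, 0.1, 0.1],
    generator=torch.Generator().manual_seed(123))
print([len(s) for s in (train_set, val_set, test_set)])  # [8, 1, 1]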
| https://stackoverflow.com/questions/74327447/ |
How to get activation values of a layer in pytorch | I have a pytorch-lightning model that has a dense layer like so:
def __init__(...)
...
self.dense = nn.Linear(channels[-1], 64, bias=True)
...
For my project, I need to get the activation values of this layer as a list.
I have tried this code which I found on the pytorch discussion forum:
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
test_img = cv.imread(f'digimage/100.jpg')
test_img = cv.resize(test_img, (128, 128))
test_img = np.moveaxis(test_img, 2, 0)
modelftr = load_feature_model(**model_dict)
num_ftrs = modelftr.fc.in_features
modelftr.fc = torch.nn.Linear(num_ftrs, 228)
modelftr.load_state_dict(torch.load('...'))
modelftr.dense.register_forward_hook(get_activation('dense'))
with torch.no_grad():
modelatt.to('cpu')
modelatt.eval()
test_img = torch.tensor(test_img).view(-1, 3, 128, 128).float()
output = modelcat(test_img)
print(activation['dense'])
But this gives a keyerror:
8 test_img = torch.tensor(test_img).view(-1, 3, 128, 128).float()
9 output = modelcat(test_img)
---> 10 print(activation['dense'])
KeyError: 'dense'
Update:
This is my full model code.
As you can see there is a linear layer named dense
class FAtNet(pl.LightningModule):
def __init__(self, image_size, in_channels, num_blocks, channels,
num_classes=20, block_types=['C', 'C', 'T', 'T'], lr=0.0001, loss_function=nn.CrossEntropyLoss()):
super().__init__()
self.lr = lr
self.loss_function = loss_function
ih, iw = image_size
block = {'C': MBConv, 'T': Transformer}
self.s0 = self._make_layer(
conv_3x3_bn, in_channels, channels[0], num_blocks[0], (ih // 2, iw // 2))
self.s1 = self._make_layer(
block[block_types[0]], channels[0], channels[1], num_blocks[1], (ih // 4, iw // 4))
self.s2 = self._make_layer(
block[block_types[1]], channels[1], channels[2], num_blocks[2], (ih // 8, iw // 8))
self.s3 = self._make_layer(
block[block_types[2]], channels[2], channels[3], num_blocks[3], (ih // 16, iw // 16))
self.s4 = self._make_layer(
block[block_types[3]], channels[3], channels[4], num_blocks[4], (ih // 32, iw // 32))
self.pool = nn.AvgPool2d(ih // 32, 1)
self.dense = nn.Linear(channels[-1], 64, bias=True)
self.fc = nn.Linear(64, num_classes, bias=False)
def forward(self, x):
x = self.s0(x)
x = self.s1(x)
x = self.s2(x)
x = self.s3(x)
x = self.s4(x)
x = self.pool(x).view(-1, x.shape[1])
x = self.dense(x)
x = self.fc(x)
return x
def _make_layer(self, block, inp, oup, depth, image_size):
layers = nn.ModuleList([])
for i in range(depth):
if i == 0:
layers.append(block(inp, oup, image_size, downsample=True))
else:
layers.append(block(oup, oup, image_size))
return nn.Sequential(*layers)
def configure_optimizers(self):
return optim.Adam(self.parameters(), lr=self.lr)
def training_step(self, batch, batch_idx):
X, y = batch
y_hat = self(X)
loss = self.loss_function(y_hat, y)
self.log('train_loss', loss)
return loss
def test_step(self, batch, batch_idx):
X, y = batch
y_hat = self(X)
loss = self.loss_function(y_hat, y)
self.log('test_loss', loss)
return loss
### custom prediction function ###
def predict(self, dm):
X_test = dm.X_test
self.eval()
X_test = torch.tensor(X_test).float()
self.to(device='cuda')
pred = []
with torch.no_grad():
for data in X_test:
output = self(data)
pred.append(output)
pred = pred[0].detach()
pred = pred.cpu()
self.to(device='cpu')
self.train()
return pred
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
| It seems like your model does not have a 'dense' layer, only 'fc'.
Try:
modelftr.fc.register_forward_hook(get_activation('fc'))
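If you are unsure which attribute names the loaded model actually has, you can list its submodules first:
for name, module in modelftr.named_modules():
    print(name, type(module).__name__)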
| https://stackoverflow.com/questions/74337447/ |
Where to raise a typo in the PyTorch Documentation? | I have found a typo in the official PyTorch Documentation. Where can I raise the flag so that it is rectified?
| From the PyTorch Contribution Guide, in the section on Documentation:
Improving Documentation & Tutorials
We aim to produce high quality documentation and tutorials. On rare
occasions that content includes typos or bugs. If you find something
you can fix, send us a pull request for consideration.
| https://stackoverflow.com/questions/74343487/ |
Access weights, biases in model - PyTorch pruning | To implement L1-norm, unstructured, layer-wise pruning with torch.nn.utils.prune l1_unstructured, radom_unstructured methods, I have a toy LeNet-300-100 dense neural network as-
class LeNet300(nn.Module):
def __init__(self):
super(LeNet300, self).__init__()
# Define layers-
self.fc1 = nn.Linear(in_features = 28 * 28 * 1, out_features = 300)
self.fc2 = nn.Linear(in_features = 300, out_features = 100)
self.output_layer = nn.Linear(in_features = 100, out_features = 10)
self.weights_initialization()
def forward(self, x):
x = F.leaky_relu(self.fc1(x))
x = F.leaky_relu(self.fc2(x))
x = self.output_layer(x)
return x
def weights_initialization(self):
'''
When we define all the modules such as the layers in '__init__()'
method above, these are all stored in 'self.modules()'.
We go through each module one by one. This is the entire network,
basically.
'''
for m in self.modules():
if isinstance(m, nn.Linear):
nn.init.kaiming_normal_(m.weight)
nn.init.constant_(m.bias, 1)
def shape_computation(self, x):
print(f"Input shape: {x.shape}")
x = self.fc1(x)
print(f"dense1 output shape: {x.shape}")
x = self.fc2(x)
print(f"dense2 output shape: {x.shape}")
x = self.output_layer(x)
print(f"output shape: {x.shape}")
del x
return None
# Initialize architecture-
model = LeNet300().to(device)
This has 266610 trainable parameters. To prune this with 20% for the first two dense layers and 10% for the output layer until 99.5% sparsity, you need 25 pruning rounds. The pruning is done with-
l1_unstructured(module = fc, name = 'weight', amount = 0.2)
Iterating through the layers is done with-
for name, module in model.named_modules():
if name == '':
continue
else:
print(f"layer: {name}, module: {module}")
However, for a particular module, how to access its weights and biases besides using-
module.weight, module.bias
?
The idea is to use a layer-wise (for now) pruning function as-
# Prune multiple parameters/layers in a given model-
for name, module in model.named_modules():
# prune 20% of weights/connections for all hidden layers-
if isinstance(module, torch.nn.Linear) and name != 'output_layer':
prune.l1_unstructured(module = module, name = 'weight', amount = 0.2)
# prune 10% of weights/connections for output layer-
elif isinstance(module, torch.nn.Linear) and name == 'output_layer':
prune.l1_unstructured(module = module, name = 'weight', amount = 0.1)
Bias pruning will also be included.
| The name parameter is the attribute name of the parameter, within the module, on which the pruning will be applied (see documentation page). As such, you can provide either 'weight' or 'bias' in your case since you are focusing on nn.Linear exclusively.
Additionally, you will read that prune.l1_unstructured will:
Modifies module in place (and also return the modified module) by:
adding a named buffer called name+'_mask' corresponding to the binary mask applied to the parameter name by the pruning method.
replacing the parameter name by its pruned version, while the original (unpruned) parameter is stored in a new parameter named
name+'_orig'.
You can access the pruned weights via the weight and bias attributes and the original parameters with weight_orig and bias_orig:
m = nn.Linear(2, 3)
m = prune.l1_unstructured(m, 'weight', amount=.2)
m = prune.l1_unstructured(m, 'bias', amount=.2)
>>> m.state_dict().keys()
odict_keys(['weight_orig', 'bias_orig', 'weight_mask', 'bias_mask'])
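The pruned tensor is simply the elementwise product of the original parameter and its mask, which you can verify by continuing the example:
>>> torch.allclose(m.weight, m.weight_orig * m.weight_mask)
True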
| https://stackoverflow.com/questions/74358696/ |
Upsampling the Spatial Dimensions Of a 4D Tensor using transposed convolution | I have a tensor with the size of (16, 64, 4,4) and I want to upsample the spatial size of this tensor using transposed convolution. How can I select the kernel size, stride, padding to have a tensor with size of (16, 64, 4,6)?
For example this is my code for upsampling from (16, 64, 1,1) to (16, 64, 4,4):
nn.ConvTranspose2d(64,64,kernel_size=6,stride=4,padding=1)
| The equation for computing the output size is in the PyTorch documentation of ConvTranspose2d. Put in your input size, set your desired output size and solve the equation for kernel size, stride, padding, etc. There may be multiple valid solutions.
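For reference, with the defaults dilation=1 and output_padding=0, that equation reduces to:
H_out = (H_in - 1) * stride - 2 * padding + kernel_size
(and likewise for the width).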
For your specific problem, the following works:
from torch import nn
t = torch.ones((16, 64, 4, 4)) # test data
layer = nn.ConvTranspose2d(64, 64, kernel_size=(1, 3))
print(layer(t).shape) # torch.Size([16, 64, 4, 6])
To demonstrate that there can be multiple valid solutions, here is another solution for up-sampling a tensor with shape (16, 64, 1, 1) to (16, 64, 4, 4).
from torch import nn
t = torch.ones((16, 64, 1, 1)) # test data
layer = nn.ConvTranspose2d(64, 64, kernel_size=(4, 4))
print(layer(t).shape) # torch.Size([16, 64, 4, 4])
| https://stackoverflow.com/questions/74358922/ |
Efficient way to use regex compile (Python) with a list of 10.000 strings | I got a list which contains approx. 10.000 strings and I want to use a regex pattern to detect this in this list. When I use re.compile it takes a lot of time to only apply one regex pattern. Is there any way with Python to make it faster?
Here my code:
import re
list_of_strings = ["I like to eat meat", "I don't like to eat meat", "I like to eat fish", "I don't like to eat fish"]
outcome = [x for x in list_of_strings if len(re.compile(r"I like to eat (.*?)").findall(x)) != 0]
Out[6]: ['I like to eat meat', 'I like to eat fish']
Here I have just 4 strings to demonstrate the case. In reality the code should handle 10.000 strings.
I could also use multiprocessing to solve this issue, but maybe there is another solution with PyTorch, PySpark or other frameworks.
[Edit]
Thanks for all answers. I should have mentioned that every string is an article. So, it is not just one sentence to be handled from regex.
I also want to say that the regex here is not the problem. So this is not a topic to be discussed.
| You may also consider looping the list.
new_list = []
for item in list_of_strings:
if 'I like to eat' in item:
new_list.append(item)
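If a regex is genuinely needed (e.g. to capture what follows the phrase), compile the pattern once outside the comprehension instead of on every element - a sketch:
import re
pattern = re.compile(r"I like to eat (.*)")
outcome = [x for x in list_of_strings if pattern.search(x)]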
| https://stackoverflow.com/questions/74364266/ |
PyTorch's `torch.cuda.max_memory_allocated()` showing different results from `nvidia-smi`? | I'm currently making my own custom GPU report and am using torch.cuda.max_memory_allocated(device_id) to get the maximum memory that each GPU uses. However, I've noticed that this number is different from when I run nvidia-smi during the process.
According to the documentation for torch.cuda.max_memory_allocated, the output integer is in bytes. From what I've found online, to convert bytes to gigabytes you should divide by 1024 ** 3. I'm currently doing round(max_mem / (1024 ** 3), 2)
Am I doing the calculation wrong, or am I misunderstanding how torch.cuda.max_memory_allocated works entirely? The memory allocated I've observed from one GPU during the entire process was around 32GB, but torch.cuda.max_memory_allocated(0) / (1024 ** 3) returns around 13.5GB.
| Posting a link to the same question asked on the discussion forum for PyTorch. TL;DR torch.cuda.max_memory_allocated is not supposed to accurately resemble the output of nvidia-smi because PyTorch's caching allocator actually reserves a lot more memory than what is actually being used. Therefore, torch.cuda.max_memory_reserved will resemble the actual output much more closely (although still not entirely accurately).
https://discuss.pytorch.org/t/pytorchs-torch-cuda-max-memory-allocated-showing-different-results-from-nvidia-smi/165706/2
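In code, you can compare both counters side by side:
print(torch.cuda.max_memory_allocated(0) / 1024**3)  # GiB actually allocated to tensors
print(torch.cuda.max_memory_reserved(0) / 1024**3)   # GiB reserved by the caching allocator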
| https://stackoverflow.com/questions/74384810/ |
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden while using local mode in AWS SageMaker | trainer = PyTorch(
entry_point="train.py",
source_dir= "source_dir", # directory of your training script
role=role,
framework_version="1.5.0",
py_version="py3",
instance_type= "local",
instance_count=1,
output_path=output_path,
hyperparameters = hyperparameters
)
This code is running SageMaker NoteNook instance.
Error
Creating dsd3faq5lq-algo-1-ouews ...
Creating dsd3faq5lq-algo-1-ouews ... done
Attaching to dsd3faq5lq-algo-1-ouews
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,444 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,475 sagemaker-training-toolkit INFO No GPUs detected (normal if no gpus installed)
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,494 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,507 sagemaker_pytorch_container.training INFO Invoking user training script.
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,673 sagemaker-training-toolkit ERROR Reporting training FAILURE
dsd3faq5lq-algo-1-ouews | 2022-11-10 05:39:26,674 sagemaker-training-toolkit ERROR framework error:
dsd3faq5lq-algo-1-ouews | Traceback (most recent call last):
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/trainer.py", line 85, in train
dsd3faq5lq-algo-1-ouews | entrypoint()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 121, in main
dsd3faq5lq-algo-1-ouews | train(environment.Environment())
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_container/training.py", line 73, in train
dsd3faq5lq-algo-1-ouews | runner_type=runner_type)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/entry_point.py", line 92, in run
dsd3faq5lq-algo-1-ouews | files.download_and_extract(uri=uri, path=environment.code_dir)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 131, in download_and_extract
dsd3faq5lq-algo-1-ouews | s3_download(uri, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/sagemaker_training/files.py", line 167, in s3_download
dsd3faq5lq-algo-1-ouews | s3.Bucket(bucket).download_file(key, dst)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 246, in bucket_download_file
dsd3faq5lq-algo-1-ouews | ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/inject.py", line 172, in download_file
dsd3faq5lq-algo-1-ouews | extra_args=ExtraArgs, callback=Callback)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/boto3/s3/transfer.py", line 307, in download_file
dsd3faq5lq-algo-1-ouews | future.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 106, in result
dsd3faq5lq-algo-1-ouews | return self._coordinator.result()
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/futures.py", line 265, in result
dsd3faq5lq-algo-1-ouews | raise self._exception
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/tasks.py", line 255, in _main
dsd3faq5lq-algo-1-ouews | self._submit(transfer_future=transfer_future, **kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/s3transfer/download.py", line 343, in _submit
dsd3faq5lq-algo-1-ouews | **transfer_future.meta.call_args.extra_args
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
dsd3faq5lq-algo-1-ouews | return self._make_api_call(operation_name, kwargs)
dsd3faq5lq-algo-1-ouews | File "/opt/conda/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call
dsd3faq5lq-algo-1-ouews | raise error_class(parsed_response, operation_name)
dsd3faq5lq-algo-1-ouews | botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews |
dsd3faq5lq-algo-1-ouews | An error occurred (403) when calling the HeadObject operation: Forbidden
dsd3faq5lq-algo-1-ouews exited with code 1
1
Aborting on container exit...
The same code worked a couple of days back but it is not working from yesterday onwards.
Sagemaker Version: 2.115.0
I tried changing the SageMaker version to 2.0 but to no avail.
I also uploaded training code on s3 and use s3_uri in "source_dir" but still got the same error.
The problem only occurs while using instance_type = "local"
It works perfectly fine if I use a remote instance such as instance_type = "ml.c4.xlarge"
I also tried uploading and downloading objects directly from S3 using boto3 and it worked perfectly fine.
eg;
session = boto3.Session()
s3 = session.resource('s3')
s3_obj = s3.Object(bucket, data_key)
s3_obj.put(Body=data)
s3client = boto3.client('s3')
response = s3client.get_object(Bucket= bucket, Key= data_key)
body = response['Body'].read()
The above code works perfectly fine in same instance.
I am not sure but if it was just an issue with permission, wouldn't the above code also not work.
| I changed the code to the below, replacing sagemaker.pytorch.PyTorch with sagemaker.estimator.Estimator, providing a reference to the relevant Docker image URI.
from sagemaker.estimator import Estimator
TRAIN_IMAGE_URI= "763104351884.dkr.ecr.ap-south-1.amazonaws.com/pytorch-training:1.12.1-cpu-py38"
trainer = Estimator(
image_uri = TRAIN_IMAGE_URI,
entry_point="train.py",
source_dir="source_dir", # directory of your training script
role=role,
base_job_name = base_job_name,
instance_type= "local",
instance_count=1,
output_path=output_path,
hyperparameters = hyperparameters
)
trainer.fit()
This seems to work perfectly in my case.
Although I still do not understand why.
| https://stackoverflow.com/questions/74384817/ |
how sum two tensors with difference shapes_pytorch | I have two tensors with these shapes:
a_tensor : torch.Size([64, 37])
b_tensor : torch.Size([64, 300])
how can I sum them (a_tensor+b_tensor ) and pad a_tensor with zeros to be size of [64,300]?
For example, a_tensor is [[1 , 2 , 3],[3 , 2 , 1]] with shape of ([2 , 3])
and
b_ensor is [[10 , 20 , 30 , 40] , [40 , 30 , 20 , 10]] with the shape of ([2,4]).
a_tensor + b_tensor would be like this:
[[1,2,3,0],[3,2,1,0]] + [[10,20,30,40],[40,30,20,10]] = [[11,22,33,40],[43,32,21,10]]
| You can do so using nn.functional.pad:
>>> import torch.nn.functional as F
>>> a = torch.rand(64, 37)
>>> b = torch.rand(64, 300)
Measure the padding amount:
>>> r = b.size(-1) - a.size(-1)
Pad and sum:
>>> F.pad(a, (0, r)) + b
| https://stackoverflow.com/questions/74386760/ |
torch Parameter grad return none | I want to implement learned size quantization algorithm. And I create a quante Linear layer
class QLinear(nn.Module):
def __init__(self, input_dim, out_dim, bits=8):
super(QLinear, self).__init__()
# create a tensor requires_grad=True
self.up = 2 ** bits - 1
self.down = 0
self.fc = nn.Linear(input_dim, out_dim)
weight = self.fc.weight.data
self.scale = nn.Parameter(torch.Tensor((torch.max(weight) - torch.min(weight)) / (self.up - self.down)), requires_grad=True)
self.zero_point = nn.Parameter(torch.Tensor(self.down - (torch.min(weight) / self.scale).round()), requires_grad=True)
def forward(self, x):
weight = self.fc.weight
quant_weight = (round_ste(weight / self.scale) + self.zero_point)
quant_weight = torch.clamp(quant_weight, self.down, self.up)
dequant_weight = ((quant_weight - self.zero_point) * self.scale)
self.fc.weight.data = dequant_weight
return self.fc(x)
class QNet(nn.Module):
def __init__(self):
super(QNet, self).__init__()
self.fc1 = QLinear(28 * 28, 100)
self.fc2 = QLinear(100, 10)
def forward(self, x):
x = x.view(-1, 28 * 28)
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = F.softmax(x)
return x
When I train this network, scale's grad always returns None. Why does this happen and how can I solve it?
| The issue is that you are passing dequant_weight through the data attribute of your parameter, which ends up not being registered by autograd. A simple alternative would be to handle weight as a nn.Parameter and apply a linear operator manually in the forward definition directly with the computed weight dequant_weight.
Here is a minimal example that should work:
class QLinear(nn.Module):
def __init__(self, input_dim, out_dim, bits=8):
super().__init__()
self.up = 2 ** bits - 1
self.down = 0
self.weight = nn.Parameter(torch.rand(out_dim, input_dim))
self.scale = nn.Parameter(
torch.Tensor((self.weight.max() - self.weight.min()) / (self.up - self.down)))
self.zero_point = nn.Parameter(
torch.Tensor(self.down - (self.weight.min() / self.scale).round()))
def forward(self, x):
quant_weight = (torch.round(self.weight / self.scale) + self.zero_point)
quant_weight = torch.clamp(quant_weight, self.down, self.up)
dequant_weight = ((quant_weight - self.zero_point) * self.scale)
return F.linear(x, dequant_weight)
Side notes:
nn.Parameter requires gradient computation by default (no need to provide requires_grad=True.
Additionally you can reformat QNet by inheriting from nn.Sequential to avoid boilerplate code:
class QNet(nn.Sequential):
def __init__(self):
super().__init__(nn.Flatten(),
QLinear(28 * 28, 100),
nn.ReLU(),
QLinear(100, 10),
nn.Softmax())
| https://stackoverflow.com/questions/74387343/ |
A Classifier Network Seems to be "Forgetting" older samples | This is a strange problem: Imagine a neural network classifier. It is a simple linear layer followed by a sigmoid activation that has an input size of 64, and an output size of 112. There also are 112 training samples, where I expect the output to be a one-hot vector. So the basic structure of a training loop is as follows, where samples is a list of integer indices:
model = nn.Sequential(nn.Linear(64,112), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = optim.AdamW(model.parameters(),lr=3e-4)
for epoch in range(500):
for input_state, index in samples:
one_hot = torch.zeros(112).float()
one_hot[index] = 1.0
optimizer.zero_grad()
prediction = model(input_state)
loss = loss_fn(prediction,one_hot)
loss.backward()
optimizer.step()
This model does not perform well, but I don't think it's a problem with the model itself, but rather how it's trained. I think that this is happening because for the most part, all of the one_hot tensor is zeros, that the model just tends to gravitate toward all of the outputs being zeros, which is what's happening. The question becomes: "How does this get solved?" I tried using the average loss with all the samples, to no avail. So what do I do?
| So this is very embarrassing, but the answer actually lies in how I process my data. This is a text-input project, so I used basic Python lists to create blocks of messages, but when I did this, I ended up making it so that all of the inputs the net got were the same, while the output was different every time. I solved this problem with the copy method.
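A sketch of the kind of fix meant here (the variable names are assumptions about the asker's preprocessing, which is not shown in the question):
import copy
samples.append((copy.deepcopy(block_of_messages), index))  # snapshot the list instead of appending a shared, later-mutated reference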
| https://stackoverflow.com/questions/74394144/ |
Mismatch in expected results of convolution using Conv2d from Pytorch? | I am experimenting with the conv2d function implemented in PyTorch. I wrote a code sample below where we have a 3x3 matrix of 1 batch, 2 input channels. I implement my convolution layer to have a kernel the exact size of the matrix (so the stride doesn't matter) and an output channel of 1. I fix the weights at 1.
Basically this should just sum up input tensor. Why are the last two printed values slightly differing? Is this the result of some sort of floating point calculation error?
import torch.nn as nn
import torch
m = nn.Conv2d(2, 1, 3, stride=2)
input = torch.randn(1, 2, 3, 3)
m.weight = torch.nn.Parameter(torch.ones_like(m.weight))
output = m(input)
print(input)
print(torch.sum(input))
print(output)
| The bias of Conv2d is not initialized as zeros.
Try this
import torch.nn as nn
import torch
m = nn.Conv2d(2, 1, 3, stride=2)
input = torch.randn(1, 2, 3, 3)
m.weight = torch.nn.Parameter(torch.ones_like(m.weight))
nn.init.zeros_(m.bias)
output = m(input)
print(input)
print(torch.sum(input))
print(output)
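Alternatively, you can create the layer without a bias term in the first place:
m = nn.Conv2d(2, 1, 3, stride=2, bias=False)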
| https://stackoverflow.com/questions/74407560/ |
Why the computing efficiency of torch.tanh is much higher than the direct expression? | The two ways of computing 'tanh' are shown as follows. Why the computing efficiency of torch.tanh(1) is much higher than the direct expression(2)? I am confused. And where can I find the original code of torch.tanh in pytorch? Dose it written by C/C++?
import torch
import time
def tanh(x):
return (torch.exp(x) - torch.exp(-x)) / (torch.exp(x) + torch.exp(-x))
class Function(torch.nn.Module):
def __init__(self):
super(Function, self).__init__()
self.Linear1 = torch.nn.Linear(3, 50)
self.Linear2 = torch.nn.Linear(50, 50)
self.Linear3 = torch.nn.Linear(50, 50)
self.Linear4 = torch.nn.Linear(50, 1)
def forward(self, x):
# (1) for torch.torch
x = torch.tanh(self.Linear1(x))
x = torch.tanh(self.Linear2(x))
x = torch.tanh(self.Linear3(x))
x = torch.tanh(self.Linear4(x))
# (2) for direct expression
# x = tanh(self.Linear1(x))
# x = tanh(self.Linear2(x))
# x = tanh(self.Linear3(x))
# x = tanh(self.Linear4(x))
return x
func = Function()
x= torch.ones(1000,3)
T1 = time.time()
for i in range(10000):
y = func(x)
T2 = time.time()
print(T2-T1)
| The mathematical functions are written in highly optimized code; they can use advanced CPU features and multiple cores, and they can even take advantage of GPUs. The implementation of torch.tanh lives in PyTorch's C++ backend (ATen); you can find it under aten/src/ATen/native in the PyTorch GitHub repository.
In your tanh function, the exp function is evaluated four times, plus two subtractions and one division; creating temporary tensors requires memory allocation, which can be slow as well, not to mention the overhead of the Python interpreter. Being 4 to 10 times slower is reasonable.
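As an illustration, even in pure Python you can shrink the gap by evaluating exp only once, using the identity tanh(x) = (1 - e^(-2x)) / (1 + e^(-2x)):
def tanh_fewer_ops(x):
    e = torch.exp(-2 * x)  # one exp call instead of four
    return (1 - e) / (1 + e)
Note that this form can overflow for large negative inputs, which is one of the things the built-in handles for you.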
| https://stackoverflow.com/questions/74415472/ |
Neural network not training, parameters.grad is None | I implemented a simple NN and a custom objective function for a minimization problem. Everything seems to be working fine, except for the fact that the model does not seem to learn.
I checked if list(network.parameters())[0].grad was None, and indeed this seems to be the problem. Based on previously asked questions, the problem seems to be the detachment of the graph, but I don't know what I am doing wrong.
Here's the link to the code that you can run on colab: Colab code
Thank you!!
| This part seems problematic in your code.
output_grad, _ = torch.autograd.grad(q, (x,y))
output_grad.requires_grad_()
Your loss depends on output_grad and so when you do loss.backward() are trying to compute the gradient of parameters w.r.t to output_grad.
You cannot compute the gradient of output_grad since create_graph is False by default. And so output_grad is implicitly detached from the rest of the graph. To fix this, just pass create_graph=True in the autograd.grad. You do not need to set requires_grad either for output_grad, i.e., the second line is not needed.
| https://stackoverflow.com/questions/74416404/ |
Alternative concatenation of tensors | I have 2 tensors of shape [2, 1, 9] and [2, 1, 3]. I'd like to concatenate across the 3rd dimension alternatively (once every 4).
For example:
a = [[[1,2,3,4,5,6,7,8,9]],[[11,12,13,14,15,16,17,18,19]]]
b = [[[10, 20, 30]], [[1, 2, 3]]]
result = [[[1,2,3,10,4,5,6,20,7,8,9,30]],[[11,12,13,1,14,15,16,2,17,18,19,3]]]
How can I do this in pytorch?
| This would do the trick:
torch.concat([a.reshape((2, 1, 3, 3)), b.reshape(2, 1, 3, 1)], axis=-1).reshape((2, 1, -1))
There's probably a smarter way to do this, but hey, it works.
| https://stackoverflow.com/questions/74418841/ |
How to transform a variable to bucketed variable which tells us which bucket/range it lies to in pytorch | I have a variable a = [0.129, 0.369, 0.758, 0.012, 0.925]. I want to transform this variable into a bucketed variable. What I mean by this is explained below.
min_bucket_value, max_bucket_value = 0, 1 (Can be anything, for example, 0 to 800, but the min value is always going to be 0)
num_divisions = 10 (For this example I've taken 10, but it can be higher as well, like 80 divisions instead of 10)
Bucket/division ranges are as shown below.
0 - 0.1 -> 0
0.1 - 0.2 -> 1
0.2 - 0.3 -> 2
0.3 - 0.4 -> 3
0.4 - 0.5 -> 4
0.5 - 0.6 -> 5
0.6 - 0.7 -> 6
0.7 - 0.8 -> 7
0.8 - 0.9 -> 8
0.9 - 1.0 -> 9
so, transformed_a = [1, 3, 7, 0, 9]
So it's like I divide min_bucket_value, max_bucket_value in num_divisions different ranges/buckets and then transform original a to tell which bucket it lies in
I've tried creating torch.linspace(min_bucket_value, max_bucket_value, num_divisions), but not sure how to move forward and map it to a range so that I can get the bucket index to which it belongs to
Can you guys please help
EDIT
There's an extension to this problem.
Let's say that we've got a = [127, 362, 799] and I want to create two buckets. One is a coarse bucket, so a_transform = [12, 36, 89], but what if I want a fine bucket as well so that my second transformation becomes a_fine_transform = [7, 2, 9].
Sub-range index within the range. Basically, coarse division has 80 buckets (giving 127 in 12th bucket) and then the fine bucket which has 10 divisions which tells us that 127 lies in 12th coarse bucket and 7th fine bucket
a can be in float as well. eg, a = [127.36, 362.456, 789.646].
so a_coarse_transform = [12, 36, 78] & a_fine_transform = [7, 2, 6]
where min_bucket_value, max_bucket_value, num_coarse_divisions, num_fine_divisions = 0, 1, 80, 10
| For a torch tensor, you can simply use the following code (partly based on the answer from Bob, and partly different so that it works directly on tensors, since the numpy version won't work unless and until you call the .cpu() method on the tensor, which I'm not sure is the right thing to do).
So, instead, do this:
a1 = (a - min_bucket_value) / (max_bucket_value - min_bucket_value)
a_coarse_transform, r = ((a1 * coarse_divisions)//1).type(torch.long), (a1 * coarse_divisions)%1
a_fine_transform, r = ((r * fine_divisions)//1).type(torch.long), (r * fine_divisions)%1
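For completeness, a minimal setup so the snippet above runs as-is (the concrete max value is an illustrative assumption):
import torch
a = torch.tensor([127.36, 362.456, 789.646])
min_bucket_value, max_bucket_value = 0, 800
coarse_divisions, fine_divisions = 80, 10
# with these values: a_coarse_transform -> tensor([12, 36, 78]), a_fine_transform -> tensor([7, 2, 9])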
| https://stackoverflow.com/questions/74420314/ |
TypeError: relu(): argument 'input' (position 1) must be Tensor, not tuple. I believe it's because I have an LSTM layer |
I believe the error is because I have an LSTM layer. How can I modify the code so it will work fine? Any help?
| Move the LSTM layer out of the Sequential layer.
LSTM returns a tuple of output, (hn, cn) where hn, cn are the last hidden states.
For example your init function will contain something like
class module(nn.Module):
def __init__(self):
super(nn.Module, self).__init__()
self.lstm = nn.LSTM(...)
self.seq = nn.Sequential(...)
and your forward function will be
def forward(self, x):
lstm_out= self.lstm(x)
out = self.seq(lstm_out[0])
return out
| https://stackoverflow.com/questions/74423081/ |
Appending zero rows to a 2D Tensor in PyTorch | Suppose I have a tensor 2D tensor x of shape (n,m). How can I extend the first dimension of the tensor by appending zero rows in x by specifying the indices of where the zero rows will be located in the resulting tensor? For a concrete example:
x = torch.tensor([[1,1,1],
[2,2,2],
[3,3,3],
[4,4,4]])
And I want to append 2 zero rows such that their row-index will be 1,3, respectively, in the resulting tensor? I.e. in the example the result would be
X = torch.tensor([1,1,1],
[0,0,0],
[2,2,2],
[0,0,0],
[3,3,3],
[4,4,4]])
I tried using F.pad and reshape.
| You can use torch.tensor.index_add_.
import torch
zero_index = [1, 3]
size = (6, 3)
x = torch.tensor([[1,1,1],
[2,2,2],
[3,3,3],
[4,4,4]])
t = torch.zeros(size, dtype=torch.int64)
index = torch.tensor([i for i in range(size[0]) if i not in zero_index])
# index -> tensor([0, 2, 4, 5])
t.index_add_(0, index, x)
print(t)
Output:
tensor([[1, 1, 1],
[0, 0, 0],
[2, 2, 2],
[0, 0, 0],
[3, 3, 3],
[4, 4, 4]])
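Since the target rows don't overlap, plain advanced indexing works just as well:
t = torch.zeros(size, dtype=torch.int64)
t[index] = x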
| https://stackoverflow.com/questions/74423476/ |
Pytorch Lightning Loss not decreasing | I've set up a basic UNET model. When using a function to train the model directly, it optimizes fine. However, when using a similar loop in pytorch lightning with the train step defined, the loss does not change from the original value. I took out the zero_grad/backward/step bits based on this tutorial. What am I doing wrong?
# Optimizes well
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to('cuda',dtype=torch.float), y.to('cuda',dtype=torch.float)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Using this as a function inside the UNet class, which I feed to pytorch_lightning.Trainer.
# Loss does not update from initial value. Model predictions do not improve.
def training_step(self, batch, batch_idx):
X,y = batch
X, y = X.to(self.device,dtype=torch.float), y.to(self.device,dtype=torch.float)
# Compute prediction error
pred = self.forward(X)
loss = self.loss_fn(pred, y)
self.log("train_loss", loss)
return loss
| This issue was caused by the following line in the model class:
def configure_optimizers(self):
return super().configure_optimizers()
One of the threads online recommended having this together with training_step and train_dataloader as a minimum set of methods to run PyTorch Lightning. However, in fact this line interferes with optimization - perhaps the same batch is loaded every time, so that the loss does not improve. Simply deleting this method fixes the issue. Trainer.fit takes in the model and a data loader and uses that to pass batches to training_step.
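If you do want to keep the method, a minimal working version (the learning rate here is just an example) would be:
def configure_optimizers(self):
    return torch.optim.Adam(self.parameters(), lr=1e-3)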
| https://stackoverflow.com/questions/74424436/ |
Weird 1d shape result on pytorch 3d ResNet | I have a 3dResNet model from PyTorch. I also commented out the flatten line in the resnet.py source code so my output shouldn't be 1D.
Here is the code I have:
class VideoModel(nn.Module):
def __init__(self,num_channels=3):
super(VideoModel, self).__init__()
self.r2plus1d = models.video.r2plus1d_18(pretrained=True)
self.r2plus1d.fc = Identity()
for layer in self.r2plus1d.children():
layer.requires_grad_ = False
def forward(self, x):
print(x.shape)
x = self.r2plus1d(x)
print(x.shape)
return x
My identity class exists just to ignore a layer:
class Identity(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x
When I run torch.randn(1, 3, 8, 112, 112) as my input, I get the following output:
torch.Size([1, 3, 8, 112, 112])
torch.Size([1, 512, 1, 1, 1])
Why do I have a 1D output even though I removed fc layer and the flatten operation?
Is there a better way to remove the flatten operation?
| The cause is the AdaptiveAvgPool3d layer right before the flatten step. It is called with the argument output_size=(1,1,1), and so pools the last three dimensions to (1,1,1) regardless of their original dimensions.
In your case, the output after the average pool has the shape (1,512,1,1,1), after flatten has the shape (1,512), and after the fc layer has the shape (1,400).
So the flatten operation is not responsible, disable the average pool and all subsequent steps to get the desired result.
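One way to do that, assuming torchvision's VideoResNet layout (stem, layer1..layer4, avgpool, fc), is to drop the head entirely instead of patching the source:
import torch
from torchvision import models
backbone = models.video.r2plus1d_18(pretrained=True)
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # everything before avgpool and fc
x = torch.randn(1, 3, 8, 112, 112)
print(features(x).shape)  # expected: torch.Size([1, 512, 1, 7, 7])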
| https://stackoverflow.com/questions/74427579/ |
nn.Parameter() doesn't register as a model parameter with torch.randn() | I'm trying to create a module, which contains certain layers of nn.Parameters().
If I initialize the layer as following -
self.W = nn.Parameter(torch.randn(4,4), requires_grad=True).double()
then this layer doesn't appear to register in the module parameters.
However, this initialization does work -
self.W = nn.Parameter(torch.FloatTensor(4,4), requires_grad=True)
Full example -
class TestNet(nn.Module):
def __init__(self):
super(TestNet, self).__init__()
self.W = nn.Parameter(torch.randn(4,4), requires_grad=True).double()
def forward(self, x):
x = torch.matmul(x, self.W.T)
x = torch.sigmoid(x)
return x
tnet = TestNet()
print(list(tnet.parameters()))
### Output = [] (an empty list)
Compared to -
class TestNet(nn.Module):
def __init__(self):
super(TestNet, self).__init__()
self.W = nn.Parameter(torch.FloatTensor(4,4), requires_grad=True)
def forward(self, x):
x = torch.matmul(x, self.W.T)
x = torch.sigmoid(x)
return x
tnet = TestNet()
print(list(tnet.parameters()))
Which prints -
[Parameter containing:
tensor([[-1.8859e+26, 6.0240e-01, 1.0842e-19, 3.8177e-05],
[ 1.5229e-27, -8.5899e+09, 1.5226e-27, -3.6893e+19],
[ 4.2039e-45, -4.6566e-10, 1.5229e-27, -2.0000e+00],
[ 2.8026e-45, 0.0000e+00, 0.0000e+00, 4.5918e-40]],
requires_grad=True)]
So what is the difference? Why doesn't the torch.randn() version work?
I couldn't find anything about this in the docs or in previous answers online.
| Calling randn is completely fine. The issue is that .double() is being called at the end of the operation:
class TestNet(nn.Module):
def __init__(self):
super(TestNet, self).__init__()
self.W = nn.Parameter(torch.randn(4,4, dtype = torch.double), requires_grad=True)
# self.W = nn.Parameter(torch.randn(4,4).double(), requires_grad=True) # also works
def forward(self, x):
x = torch.matmul(x, self.W.T)
x = torch.sigmoid(x)
return x
tnet = TestNet()
print(tnet.W.dtype)
# torch.float64
print(list(tnet.parameters()))
# [Parameter containing:
# tensor([[-1.9645, -1.5445, 0.2435, 0.4380],
# [ 1.1403, 0.8836, 0.1811, -0.1212],
# [ 1.5983, -0.1854, -0.2626, 0.2881],
# [-1.2364, -0.4802, -0.6038, 0.1164]], requires_grad=True)]
Now the code registers the parameters. I added dtype = torch.double in the initialization of randn to make sure that self.W contains doubles as before.
In summary, we cannot call nn.Parameter and then register its conversion to another data type as our network weights: the conversion returns a plain tensor that is no longer tracked as a parameter.
| https://stackoverflow.com/questions/74428429/ |
Torch custom pairwise distance | I'm working on a big distance matrix (10-80k rows; 3k cols) and I want to compute a custom pairwise distance on that matrix, and do it fast.
I have tried with Armadillo, but with huge data it is still "slow".
I tried torch with CUDA acceleration and the built-in Euclidean distance, which is really fast (100 times faster).
So now i want to make custom pairwise distance like :
for pairwise row (a and b): get the standard deviation of ai*bi (where i is cols)
for example :
my_mat:
|1 |2 |3 |4
a |5 |3 |0 |4
b |1 |6 |2 |3
a//b dist = std(5*1,3*6,0*2,4*3)
= std(5,18,0,12)
= 7.889867
i think about :
start with my two dimension (N,M) tensor (my_mat)
create a new tensor with 3 dimension (N,N,P) and in P dimension store a "list" with each pairwise product by cols :
3_dim_tens :
|a |b
a |Pdim(5*5,3*3,0*0,4*4) |Pdim(5*1,3*6,0*2,4*3)
b |Pdim(5*1,3*6,0*2,4*3) |Pdim(5*5,3*3,0*0,4*4)
then if i reduce Pdim by std() i will have 2 dims (N,N) pairwise matrix with my custom distance.
(it is basically like matmul my_mat * t(my_mat), but with std in place of the summation)
is it possible to do this with torch or is there another way for custom pairwise distance?
| I think the most intuitive way is using einsum for this:
import torch
a = torch.tensor([[5.0, 3, 0, 4],[1, 6, 2, 3]])
b = torch.einsum('ij,kj->ikj', a, a).std(dim=2)
print(b)
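Checking against the worked example from the question:
a = torch.tensor([[5., 3, 0, 4], [1, 6, 2, 3]])
d = torch.einsum('ij,kj->ikj', a, a).std(dim=2)
print(d[0, 1])  # tensor(7.8899) - matches std(5, 18, 0, 12)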
| https://stackoverflow.com/questions/74428963/ |
How can I get the nearest entity in python | My code:
from mss import mss
import math
import cv2
import numpy as np
import torch
with mss() as sct:
monitor = {"top": 220, "left": 640, "width": 640, "height":640}
while True:
screenshot = np.array(sct.grab(monitor))
results = model(screenshot, size=600)
df = results.pandas().xyxy[0]
distances = []
closest = 1000
try:
xmin = int(df.iloc[0, 0])
ymin = int(df.iloc[0, 1])
xmax = int(df.iloc[0, 2])
ymax = int(df.iloc[0, 3])
centerX = (xmax + xmin) / 2 + xmin
centerY = (ymax + ymin) / 2 + ymin
distance2 = math.sqrt(((centerX - 320) ** 2) + ((centerY - 320) ** 2))
distances.append(distance2)
if closest > distances[i]:
closest = distances[i]
closestEnemy = i
The only problem now is that it doesn't seem to get the closest enemy. Is my math wrong? If my math is wrong, how can I improve it? And if my math is correct, how can I still improve it in order to achieve my goal of getting the nearest entity? Any help will be very appreciated. Thanks in advance to everyone who invests his / her time in helping me :)
| It's not entirely clear what you are after... but my guess is that there is a small mistake when calculating the center of the enemies. Either use:
centerX = (xmax + xmin) / 2 # do not add xmin here
centerY = (ymax + ymin) / 2 # do not add ymin here
or calculate the distance between the minimum and maximum values and add the minim again:
centerX = (xmax - xmin) / 2 + xmin # subtract minimum from maximum
centerY = (ymax - ymin) / 2 + ymin # subtract minimum from maximum
Additional remark:
Performance-wise, it is mostly not a good idea to iterate over a pandas data frame. Another approach is to add a new column distance to the data frame and then search for the index of the minimum value:
df['distance'] = (
(( (df['xmax']+df['xmin'])/2 - 320) ** 2) +
(( (df['ymax']+df['ymin'])/2 - 320) ** 2)
) **0.5
closest_enemy = df['distance'].idxmin()
| https://stackoverflow.com/questions/74430781/ |
Data collation step causing "ValueError: Unable to create tensor..." due to unnecessary padding attempts to extra inputs | I am trying to fine-tune a Bart model from the huggingface transformers framework on a dialogue summarisation task. The Bart model by default takes in the conversations as a monolithic piece of text as the input and takes the summaries as the decoder input while training. I want to explicitly train the model on dialogue speaker and utterance information rather than waiting for the model to implicitly learn them. For this reason, I am extracting the position IDs of the speaker name tokens and their utterance tokens when I send them to the model along with the original input tokens and summary tokens and send them separately. However, the model's data collator/padding automation expects this information to also be the same size as the inputs (I need to disable this behaviour/change the way I am encoding the speaker to utterance mapping).
Please find the code and description for the above issue below:
I am using the SAMSum dataset for the dialogue summarisation task. The dataset looks like this
Conversation:
Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you tomorrow :-)
Summary:
Amanda baked cookies and will bring Jerry some tomorrow.
The conversation gets tokenized as:
tokens = [0, 10127, 5219, 35, 38, 17241, 1437, 15269, 4, 1832, 47, 236, 103, 116, 50121, 50118, 39237, 35, 9136, 328, 50121, 50118, 10127, 5219, 35, 38, 581, 836, 47, 3859, 48433, 2]
The explicit speaker-utterance information is encoded as:
[0, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0]
Where 1s indicate that tokens[1:3] map to a name "Amanda" and the 2s indicate that tokens[3:16] map to an utterance ": I baked cookies. Do you want some?"
I am trying to send this speaker utterance association information to the forward function in the hopes of adding a loss on the basis of this information. I intend to override the compute_loss method of the Trainer class from huggingface framework to edit the loss after I can successfully relay this explicit information.
I am currently trying the following:
tokenized_dataset_train = train_datasets.map(preprocess_function, batched=True)
where the preprocess_function tokenizes and adds the speaker-utterance information in the form of a key-value pair. tokenized_dataset_train is of the form {'input_ids':[...], 'attention_mask':[...], 'spk_utt_pos':[...], ...}
The preprocess function makes sure that the lengths of 'input_ids', 'attention_masks', and 'spk_utt_pos' are all the same.
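For reference, preprocess_function looks roughly like this (a simplified sketch; build_spk_utt_labels is a hypothetical helper standing in for my code that emits the 0/1/2 labels described above):
def preprocess_function(examples):
    # 'dialogue' and 'summary' are the SAMSum column names
    model_inputs = tokenizer(examples['dialogue'], truncation=True)
    # one 0/1/2 label per input token, so lengths match input_ids exactly
    model_inputs['spk_utt_pos'] = [
        build_spk_utt_labels(ids) for ids in model_inputs['input_ids']
    ]
    # text_target needs transformers >= 4.21; older versions use
    # tokenizer.as_target_tokenizer() instead
    labels = tokenizer(text_target=examples['summary'], truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs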
The data_collator from DataCollatorForSeq2Seq pads 'input_ids' and 'attention_masks', but it also tries to pad 'spk_utt_pos', which gives an error:
Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`spk_utt_pos` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
Upon printing the sizes of 'input_ids', 'attention_masks', and 'spk_utt_pos' inside the train loop during the data collation step, I found that the sizes were not the same.
Example (a 32-instance batch):
'input_ids' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357
'attention_mask' sizes 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357 357
'spk_utt_pos' sizes 285 276 276 321 58 93 77 69 198 266 55 107 85 235 47 280 209 357 86 186 27 52 80 77 85 231 266 237 322 125 251 126
My question is: Is there something wrong with my approach to adding this explicit information to my model? What would be another way to send the speaker-utterance information to my model?
| I solved this by extending the DataCollatorForSeq2Seq class and overriding the __call__ method in it to also pad my 'spk_utt_pos' list appropriately.
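A minimal sketch of what that can look like (the class name is made up, using 0 as the padding label for 'spk_utt_pos' is an assumption, and it assumes the collator returns PyTorch tensors):
import torch
from transformers import DataCollatorForSeq2Seq

class Seq2SeqCollatorWithSpkUtt(DataCollatorForSeq2Seq):
    def __call__(self, features, return_tensors=None):
        # pull the custom field out so the parent collator doesn't
        # try (and fail) to pad it itself
        spk_utt_pos = [feature.pop('spk_utt_pos') for feature in features]
        batch = super().__call__(features, return_tensors=return_tensors)
        # pad each list with 0 up to the padded input length and
        # attach it back to the batch as a tensor
        max_len = batch['input_ids'].shape[1]
        batch['spk_utt_pos'] = torch.tensor(
            [pos + [0] * (max_len - len(pos)) for pos in spk_utt_pos]
        )
        return batch
Pass an instance of this class as the data_collator to the Trainer (with remove_unused_columns=False so the extra column survives), and 'spk_utt_pos' will arrive in your overridden compute_loss with the same shape as 'input_ids'; just remember to pop it from inputs there before calling the model.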
| https://stackoverflow.com/questions/74437271/ |
How to print the model's parameters' shapes and print the parameters while loading a .pt file? | Thanks to everyone reading this.
I'm a beginner with PyTorch. I now have a .pt file and I want to print the shapes of this module's parameters. As far as I can tell, it's an MLP model where the input layer has size 168, the hidden layer 32, and the output layer 12.
I tried torch.load(), but it returned a dict and I don't know how to deal with it. Also, I want to print the weight matrix from the input layer to the hidden layer (that may be a 168*32 matrix), but I don't know how to do that. Thanks for helping me!
| The state dictionary does not contain any information about the structure or forward logic of its corresponding nn.Module. Without prior knowledge about its content, you can't tell which key of the dict contains the first layer of the module... it's possibly the first one, but this method is rather limited if you want to go beyond just the first layer. You can inspect the content of the state dict, but you won't be able to extract much more from it without having the actual nn.Module class at your disposal.
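That said, listing every parameter's name and shape works from the state dict alone. A minimal sketch, assuming the .pt file holds the state dict directly rather than a larger checkpoint dict:
import torch

# map_location='cpu' avoids needing the GPU the model was saved from
state_dict = torch.load('model.pt', map_location='cpu')

# print every parameter name together with its shape
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))

# if the input-to-hidden weight lives under e.g. 'fc1.weight' (the key
# name is a guess), you can print the matrix itself:
# print(state_dict['fc1.weight'])
Note that nn.Linear stores its weight as (out_features, in_features), so the input-to-hidden matrix would show up with shape (32, 168) rather than (168, 32).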
| https://stackoverflow.com/questions/74442412/ |
Hyperparameter Tuning with Wandb Sweep for custom parameters | I'm trying to tune the network-architecture hyperparameters of a model built with the Stable-Baselines3 library.
My configuration file is:
program: main.py
method: bayes
name: sweep
metric:
goal: minimize
name: train/loss
parameters:
batch_size:
values: [16, 32, 64, 128, 256, 512, 1024]
epochs:
values: [20, 50, 100, 200, 250, 300]
lr:
max: 0.1
min: 0.000001
But if I try to add this to the parameters:
policy_kwargs:
net_arch:
pi:
values: [[ 128, 128 ],[ 256, 256 ],[ 512, 512 ]]
vf:
values: [[ 128, 128 ],[ 256, 256 ],[ 512, 512 ]]
I got the following error:
wandb.errors.CommError: Invalid sweep config: invalid hyperparameter configuration: policy_kwargs
Is it possible to use wandb sweep with Stable-Baseline-3 for the network architecture?
| You are trying to create a nested config. Please refer to the wandb documentation on nested sweep configurations.
Your configuration should be:
program: main.py
method: bayes
name: sweep
metric:
goal: minimize
name: train/loss
parameters:
batch_size:
values: [16, 32, 64, 128, 256, 512, 1024]
epochs:
values: [20, 50, 100, 200, 250, 300]
lr:
max: 0.1
min: 0.000001
policy_kwargs:
parameters:
net_arch:
parameters:
pi:
values: [[ 128, 128 ],[ 256, 256 ],[ 512, 512 ]]
vf:
values: [[ 128, 128 ],[ 256, 256 ],[ 512, 512 ]]
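Once the sweep agent launches main.py, the nested values show up in wandb.config as a regular nested dict, so you can forward them to Stable-Baselines3 along these lines (a sketch: PPO, the placeholder environment, and mapping epochs to n_epochs are assumptions about your setup):
import wandb
from stable_baselines3 import PPO

wandb.init()
cfg = wandb.config

# nested sweep parameters arrive as a plain nested dict
net_arch = cfg['policy_kwargs']['net_arch']  # e.g. {'pi': [256, 256], 'vf': [256, 256]}

model = PPO(
    'MlpPolicy',
    'CartPole-v1',  # placeholder environment
    learning_rate=cfg['lr'],
    batch_size=cfg['batch_size'],
    n_epochs=cfg['epochs'],
    # SB3 >= 1.8 accepts net_arch as a dict; older versions expect [dict(...)]
    policy_kwargs={'net_arch': {'pi': net_arch['pi'], 'vf': net_arch['vf']}},
)
model.learn(total_timesteps=10_000)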
| https://stackoverflow.com/questions/74444263/ |