PyTorch gives incorrect results due to broadcasting
I want to run some neural net experiments with PyTorch, but a minimal test case is giving wrong answers. The test case sets up a simple neural network with two input variables and an output variable that is just the sum of the inputs, and tries learning it as a regression problem; I expect it to converge on zero mean squared error, but it actually converges on 0.165. It's probably because of the issue alluded to in the warning message; how can I fix it? Code: import torch import torch.nn as nn # data Xs = [] ys = [] n = 10 for i in range(n): i1 = i / n for j in range(n): j1 = j / n Xs.append([i1, j1]) ys.append(i1 + j1) # torch tensors X_tensor = torch.tensor(Xs) y_tensor = torch.tensor(ys) # hyperparameters in_features = len(Xs[0]) hidden_size = 100 out_features = 1 epochs = 500 # model class Net(nn.Module): def __init__(self, hidden_size): super(Net, self).__init__() self.L0 = nn.Linear(in_features, hidden_size) self.N0 = nn.ReLU() self.L1 = nn.Linear(hidden_size, 1) def forward(self, x): x = self.L0(x) x = self.N0(x) x = self.L1(x) return x model = Net(hidden_size) criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # train print("training") for epoch in range(1, epochs + 1): # forward output = model(X_tensor) cost = criterion(output, y_tensor) # backward optimizer.zero_grad() cost.backward() optimizer.step() # print progress if epoch % (epochs // 10) == 0: print(f"{epoch:6d} {cost.item():10f}") print() output = model(X_tensor) cost = criterion(output, y_tensor) print("mean squared error:", cost.item()) Output: training C:\Users\russe\Anaconda3\envs\torch2\lib\site-packages\torch\nn\modules\loss.py:445: UserWarning: Using a target size (torch.Size([100])) that is different to the input size (torch.Size([100, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) 50 0.167574 100 0.165108 150 0.165070 200 0.165052 250 0.165039 300 0.165028 350 0.165020 400 0.165013 450 0.165009 500 0.165006 mean squared error: 0.1650056540966034 And the message: UserWarning: Using a target size (torch.Size([100])) that is different to the input size (torch.Size([100, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
You'll need to be a bit more specific on which tensors (X, or Y), but we can reshape our tensors by using the Tensor.view method. For example: Y_tensor = torch.tensor(Ys) print(Y_tensor.shape) >> torch.Size([5]) new_shape = (len(Ys), 1) Y_tensor = Y_tensor.view(new_shape) print(Y_tensor.shape) >> torch.Size([5, 1]) However, I'm skeptical that this broadcasting behavior is why you're having accuracy issues.
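Applied to the code in the question, a minimal fix (a sketch using the question's own variable names) is to give the target the same [100, 1] shape as the model output before computing the MSE loss:

y_tensor = torch.tensor(ys).view(-1, 1)   # shape [100, 1] instead of [100]
# equivalently: y_tensor = torch.tensor(ys).unsqueeze(1)

With matching shapes the warning disappears and the loss should be able to drop toward zero.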
https://stackoverflow.com/questions/65219569/
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed pytorch
How do I clear the gradient before calling .backward() for the second time? RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time. a = torch.tensor([2.0], requires_grad = True) b = torch.tensor([2.0], requires_grad = True) d = torch.tensor([2.0], requires_grad = True) c = a*b c.backward() e = d*c e.backward(retain_graph=True) I tried doing this: c.zero_grad() but I got the error that c has no method zero_grad()
As the error message reads, you need to specify the retain_graph=True option on the first .backward call, not the second: c.backward(retain_graph=True) e = d*c e.backward() If you do not retain the graph, the second backward pass will not be able to reach the nodes c, a, and b because the activations will have been cleared by the first backward pass.
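Put together, a minimal runnable version of the corrected snippet:

import torch

a = torch.tensor([2.0], requires_grad=True)
b = torch.tensor([2.0], requires_grad=True)
d = torch.tensor([2.0], requires_grad=True)

c = a * b
c.backward(retain_graph=True)  # keep the graph alive so c can be backpropagated through again
e = d * c
e.backward()                   # second pass through c, a and b now succeeds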
https://stackoverflow.com/questions/65220764/
What do the logits and probabilities from RobertaForSequenceClassification represent?
Being new to the "Natural Language Processing" scene, I am experimentally learning and have implemented the following segment of code: from transformers import RobertaTokenizer, RobertaForSequenceClassification import torch path = "D:/LM/rb/" tokenizer = RobertaTokenizer.from_pretrained(path) model = RobertaForSequenceClassification.from_pretrained(path) inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) pred_logits = outputs.logits print(pred_logits) probs = pred_logits.softmax(dim=-1).detach().cpu().flatten().numpy().tolist() print(probs) I understand that applying the model returns a "torch.FloatTensor comprising various elements depending on the configuration (RobertaConfig) and inputs", and that the logits are accessible using .logits. As demonstrated I have applied the .softmax function to the tensor to return normalised probabilities and have converted the result into a list. I am outputted with the following: [0.5022980570793152, 0.49770188331604004] Do these probabilities represent some kind of overall "masked" probability? What do the first and second index represent in context of the input? EDIT: model.num_labels Output: 2 @cronoik explains that the model "tries to classify if a sequence belongs to one class or another" Am I to assume that because there are no trained output layers these classes don't mean anything yet? For example, I can assume that the probability that the sentence, post analysis, belongs to class 1 is 0.5. However, what is class 1? Additionally, model cards with pre-trained output layers such as the open-ai detector help differentiate between what is "real" and "fake", and so I can assume the class that a sentence belongs to. However, how can I confirm these "labels" without some type of "mapping.txt" file?
You have initialized a RobertaForSequenceClassification model that per default (in case of roberta-base and roberta-large which have no trained output layers for sequence classification) tries to classify if a sequence belongs to one class or another. I used the expression "belongs to one class or another" because these classes have no meaning yet. The output layer is untrained and it requires a finetuning to give these classes a meaning. Class 0 could be X and Class 1 could be Y or the other way around. For example, the tutorial for finetuning a sequence classification model for the IMDb review dataset defines negative reviews as Class 0 and positive reviews as Class 1 (link). You can check the number of supported classes with: model.num_labels Output: 2 The output you get is the non-normalized probability for each class (i.e. logits). You applied the softmax function to normalize these probabilities, which leads to 0.5022980570793152 for the first class and 0.49770188331604004 for the second class. Maybe you got confused because the values are close to each other. Let's try a model with a pretrained output layer (model card): sentimodel = RobertaForSequenceClassification.from_pretrained('cardiffnlp/twitter-roberta-base-sentiment') print(sentimodel.num_labels) outputs = sentimodel(**inputs) print(outputs.logits.softmax(dim=-1).tolist()) Output: 3 [[0.0015561950858682394, 0.019568447023630142, 0.9788752794265747]] These values represent the probabilities for the sentence Hello, my dog is cute to be negative, neutral, or positive. We know what these classes are because the authors provided mapping that clarifies it. In case the authors of the model do not provide such a mapping (via a readme or the original training code), we can only guess what each class represents by testing it with random samples. The model card you have mentioned does not provide any useful information regarding the mapping of the classes to what they represent, but the model is provided by huggingface itself and they provide a link to the code used for training the model. The dataset.py indicates that fake is represented by Class 0 and real by Class 1.
https://stackoverflow.com/questions/65221079/
The conflict is caused by: The user requested tensorboard==2.1.0 tensorflow 1.15.4 depends on tensorboard<1.16.0 and >=1.15.0
I am trying to install a package VIBE from a git repo and inistally I was installing its dependencies. The code is located here: https://github.com/mkocabas/VIBE how should I fix this? Here's the error I got: (vibe-env) mona@mona:~/research/VIBE$ pip install -r requirements.txt Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: torchvision==0.5.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 19)) (0.5.0) Collecting git+https://github.com/mattloper/chumpy.git (from -r requirements.txt (line 24)) Cloning https://github.com/mattloper/chumpy.git to /tmp/pip-req-build-vdh2h3jw Collecting git+https://github.com/mkocabas/yolov3-pytorch.git (from -r requirements.txt (line 25)) Cloning https://github.com/mkocabas/yolov3-pytorch.git to /tmp/pip-req-build-ay_gkil2 Collecting git+https://github.com/mkocabas/multi-person-tracker.git (from -r requirements.txt (line 26)) Cloning https://github.com/mkocabas/multi-person-tracker.git to /tmp/pip-req-build-l9jgk1qb Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting filterpy==1.4.5 Using cached filterpy-1.4.5-py3-none-any.whl Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting gdown==3.6.4 Downloading gdown-3.6.4.tar.gz (5.2 kB) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting h5py==2.10.0 Using cached h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting joblib==0.14.1 Downloading joblib-0.14.1-py2.py3-none-any.whl (294 kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 294 kB 5.6 MB/s Collecting llvmlite==0.32.1 Downloading llvmlite-0.32.1-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20.2 MB 14.1 MB/s Collecting matplotlib==3.1.3 Using cached matplotlib-3.1.3-cp37-cp37m-manylinux1_x86_64.whl (13.1 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting numba==0.47.0 Downloading numba-0.47.0-cp37-cp37m-manylinux1_x86_64.whl (3.7 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.7 MB 33.0 MB/s Requirement already satisfied: setuptools in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from numba==0.47.0->-r requirements.txt (line 6)) (51.0.0.post20201207) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting opencv-python==4.1.2.30 Downloading opencv_python-4.1.2.30-cp37-cp37m-manylinux1_x86_64.whl (28.3 MB) 
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 28.3 MB 29.4 MB/s Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting pillow==6.2.1 Downloading Pillow-6.2.1-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.1 MB 107.9 MB/s Collecting progress==1.5 Downloading progress-1.5.tar.gz (5.8 kB) Collecting pyrender==0.1.36 Downloading pyrender-0.1.36-py3-none-any.whl (1.2 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.2 MB 23.0 MB/s Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting PyYAML==5.3.1 Using cached PyYAML-5.3.1-cp37-cp37m-linux_x86_64.whl Collecting scikit-image==0.16.2 Downloading scikit_image-0.16.2-cp37-cp37m-manylinux1_x86_64.whl (26.5 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 26.5 MB 25.7 MB/s Collecting scikit-video==1.1.11 Using cached scikit_video-1.1.11-py2.py3-none-any.whl (2.3 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting scipy==1.4.1 Using cached scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting smplx==0.1.13 Downloading smplx-0.1.13-py3-none-any.whl (26 kB) Requirement already satisfied: torch>=1.0.1.post2 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from smplx==0.1.13->-r requirements.txt (line 7)) (1.4.0) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting tensorboard==2.1.0 Downloading tensorboard-2.1.0-py3-none-any.whl (3.8 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.8 MB 29.3 MB/s Requirement already satisfied: setuptools in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from numba==0.47.0->-r requirements.txt (line 6)) (51.0.0.post20201207) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: wheel>=0.26 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from tensorboard==2.1.0->-r requirements.txt (line 18)) (0.36.1) Collecting tensorflow==1.15.4 Downloading tensorflow-1.15.4-cp37-cp37m-manylinux2010_x86_64.whl (110.5 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 110.5 MB 22 kB/s Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement 
already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: wheel>=0.26 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from tensorboard==2.1.0->-r requirements.txt (line 18)) (0.36.1) INFO: pip is looking at multiple versions of tensorboard to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of smplx to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scipy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scikit-video to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scikit-image to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pyyaml to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pyrender to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of progress to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pillow to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of opencv-python to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of numpy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of numba to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of multi-person-tracker to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of matplotlib to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of llvmlite to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of joblib to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of h5py to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of gdown to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of filterpy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of chumpy to determine which version is compatible with other requirements. This could take a while. ERROR: Cannot install -r requirements.txt (line 17) and tensorboard==2.1.0 because these package versions have conflicting dependencies. The conflict is caused by: The user requested tensorboard==2.1.0 tensorflow 1.15.4 depends on tensorboard<1.16.0 and >=1.15.0 To fix this you could try to: 1. 
loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies (vibe-env) mona@mona:~/research/VIBE$ python Python 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.__version__ '1.4.0' Here are all the commands I ran before this: (base) mona@mona:~/research/VIBE$ export CONDA_ENV_NAME=vibe-env (base) mona@mona:~/research/VIBE$ conda create -n $CONDA_ENV_NAME python=3.7 (base) mona@mona:~/research/VIBE$ eval "$(conda shell.bash hook)" (base) mona@mona:~/research/VIBE$ conda activate $CONDA_ENV_NAME (vibe-env) mona@mona:~/research/VIBE$ pip install numpy==1.17.5 torch==1.4.0 torchvision==0.5.0 (vibe-env) mona@mona:~/research/VIBE$ pip install git+https://github.com/giacaglia/pytube.git --upgrade
The key here is this: The conflict is caused by: The user requested tensorboard==2.1.0 tensorflow 1.15.4 depends on tensorboard<1.16.0 and >=1.15.0 This is due to the fact that there is a conflict in requirements.txt of https://github.com/mkocabas/VIBE since it requires tensorboard==2.1.0 and tensorflow==1.15.4. However, according to the error message, this version of tensorflow only works with tensorboard 1.15.0 - 1.15.x. If you read the error closely you will see that pip itself suggests how to resolve this: To fix this you could try to: loosen the range of package versions you've specified remove package versions to allow pip attempt to solve the dependency conflict
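Concretely, one way to apply that suggestion is to edit the repository's requirements.txt yourself (this is a local workaround, not a fix published by the VIBE authors, so the rest of the project may still need testing against the older tensorboard):

# in VIBE's requirements.txt, replace the pinned line
#   tensorboard==2.1.0
# with a range that is compatible with tensorflow 1.15.4, for example
#   tensorboard>=1.15.0,<1.16.0
# (or drop the tensorboard line entirely and let tensorflow pull in a compatible one), then re-run:
pip install -r requirements.txt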
https://stackoverflow.com/questions/65226693/
Custom PyTorch optimizer is not working properly and am unable to access gradients
I'm trying to implement the elastic averaging stochastic gradient descent (EASGD) algorithm from the paper Deep Learning with Elastic Averaging SGD and was running into some trouble. I'm using PyTorch's torch.optim.Optimizer class and referencing the official implementation of SGD and the official implementation of Accelerated SGD in order to start off somewhere. The code that I have is: import torch.optim as optim class EASGD(optim.Optimizer): def __init__(self, params, lr, tau, alpha=0.001): self.alpha = alpha if lr < 0.0: raise ValueError(f"Invalid learning rate {lr}.") defaults = dict(lr=lr, alpha=alpha, tau=tau) super(EASGD, self).__init__(params, defaults) def __setstate__(self, state): super(EASGD, self).__setstate__(state) def step(self, closure=None): loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: tau = group['tau'] for t, p in enumerate(group['params']): x_normal = p.clone() x_tilde = p.clone() if p.grad is None: continue if t % tau == 0: p = p - self.alpha * (x_normal - x_tilde) x_tilde = x_tilde + self.alpha * (x_normal - x_tilde) d_p = p.grad.data p.data.add_(d_p, alpha=-group['lr']) return loss When I run this code, I get the following error: /home/user/github/test-repo/easgd.py:50: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. Reading this PyTorch Discussion helped understand what the difference between leaf and non-leaf variables are, but I'm not sure how I should fix my code to make it work properly. Any tips on what to do or where to look are appreciated. Thanks.
I think the issue is that you are rebinding p on this line: p = p - self.alpha * (x_normal - x_tilde) If this line gets executed (which is the case in the first cycle, when t=0), the following line raises the error because the new p is a non-leaf tensor whose .grad attribute is not populated anymore. You should use in-place operators instead: add_, mul_, sub_, div_, etc. for t, p in enumerate(group['params']): if p.grad is None: continue d_p = p.grad.data if t % tau == 0: d_p.sub_(self.alpha*0.01) p.data.add_(d_p, alpha=-group['lr']) Above, I have removed x_normal and x_tilde since you didn't give them proper values, but I hope you get the idea. Only use in-place operators when working with the data inside the step function.
https://stackoverflow.com/questions/65228851/
How to dynamically update batch norm momentum in TF2?
I found a PyTorch implementation that decays the batch norm momentum parameter from 0.1 in the first epoch to 0.001 in the final epoch. Any suggestions on how to do this with the batch norm momentum parameter in TF2? (i.e., start at 0.9 and end at 0.999) For example, this is what is done in the PyTorch code: # in training script momentum = initial_momentum * np.exp(-epoch/args.epochs * np.log(initial_momentum/final_momentum)) model_pos_train.set_bn_momentum(momentum) # model class function def set_bn_momentum(self, momentum): self.expand_bn.momentum = momentum for bn in self.layers_bn: bn.momentum = momentum SOLUTION: The selected answer below provides a viable solution when using the tf.keras.Model.fit() API. However, I was using a custom training loop. Here is what I did instead: After each epoch: mi = 1 - initial_momentum # i.e., inital_momentum = 0.9, mi = 0.1 mf = 1 - final_momentum # i.e., final_momentum = 0.999, mf = 0.001 momentum = 1 - mi * np.exp(-epoch / epochs * np.log(mi / mf)) model = set_bn_momentum(model, momentum) set_bn_momentum function (credit to this article): def set_bn_momentum(model, momentum): for layer in model.layers: if hasattr(layer, 'momentum'): print(layer.name, layer.momentum) setattr(layer, 'momentum', momentum) # When we change the layers attributes, the change only happens in the model config file model_json = model.to_json() # Save the weights before reloading the model. tmp_weights_path = os.path.join(tempfile.gettempdir(), 'tmp_weights.h5') model.save_weights(tmp_weights_path) # load the model from the config model = tf.keras.models.model_from_json(model_json) # Reload the model weights model.load_weights(tmp_weights_path, by_name=True) return model This method did not add significant overhead to the training routine.
You can hook an action into the beginning/end of each epoch or batch with a Keras callback, which lets you adjust any parameter during training. Below are the available callback hooks: class CustomCallback(keras.callbacks.Callback): def on_epoch_begin(self, epoch, logs=None): keys = list(logs.keys()) print("Start epoch {} of training; got log keys: {}".format(epoch, keys)) def on_epoch_end(self, epoch, logs=None): keys = list(logs.keys()) print("End epoch {} of training; got log keys: {}".format(epoch, keys)) def on_train_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print("...Training: start of batch {}; got log keys: {}".format(batch, keys)) def on_train_batch_end(self, batch, logs=None): keys = list(logs.keys()) print("...Training: end of batch {}; got log keys: {}".format(batch, keys)) def on_test_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys)) def on_test_batch_end(self, batch, logs=None): keys = list(logs.keys()) print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys)) You can access the momentum attribute directly: batch = tf.keras.layers.BatchNormalization() batch.momentum = 0.001 Inside the model you have to specify the correct layer: model.layers[1].momentum = 0.001 You can find more information and examples at writing_your_own_callbacks
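Building on that, a sketch of a callback that applies the question's decay schedule once per epoch when training with model.fit() (the class name is illustrative and, as the question's own solution notes, some setups only pick up the new value after rebuilding the model from its config, so treat this as a starting point):

import numpy as np
import tensorflow as tf
from tensorflow import keras

class BNMomentumScheduler(keras.callbacks.Callback):
    def __init__(self, epochs, initial_momentum=0.9, final_momentum=0.999):
        super().__init__()
        self.epochs = epochs
        self.mi = 1 - initial_momentum   # e.g. 0.1
        self.mf = 1 - final_momentum     # e.g. 0.001
    def on_epoch_begin(self, epoch, logs=None):
        momentum = 1 - self.mi * np.exp(-epoch / self.epochs * np.log(self.mi / self.mf))
        for layer in self.model.layers:
            if isinstance(layer, tf.keras.layers.BatchNormalization):
                layer.momentum = momentum

# usage: model.fit(x, y, epochs=epochs, callbacks=[BNMomentumScheduler(epochs)])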
https://stackoverflow.com/questions/65233132/
What is the numpy equivalent of expand in pytorch?
Suppose I have a numpy array x of shape [1,5]. I want to expand it along axis 0 such that the resulting array y has shape [10,5] and y[i:i+1,:] is equal to x for each i. If x were a pytorch tensor I could simply do y = x.expand(10,-1) But there is no expand in numpy and the ones that look like it (expand_dims and repeat) don't seem to behave like it. Example: >>> import torch >>> x = torch.randn(1,5) >>> print(x) tensor([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]]) >>> print(x.expand(10,-1)) tensor([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]])
You can achieve that with np.broadcast_to. But you can't use negative numbers: >>> import numpy as np >>> x = np.array([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]]) >>> print(np.broadcast_to(x,(10,5))) [[ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724]]
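Note that np.broadcast_to returns a read-only view; if you need a writable array, copy it or build it directly with np.repeat or np.tile:

y = np.broadcast_to(x, (10, 5)).copy()   # writable copy of the broadcast view
# or: y = np.repeat(x, 10, axis=0)
# or: y = np.tile(x, (10, 1))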
https://stackoverflow.com/questions/65234748/
Does model.train() put every thing in train mode in pytorch even sub-networks?
I wrote a neural network in PyTorch, which uses ResNet as a feature generator, and it is fine-tuned with the whole network. My model consists of ResNet and several layers that I added to it. My question is: when I call model.train(), does it put ResNet in train mode, or should I call train on it separately?
By the look of it, if you call train() on a module, it will call train() recursively on all children. So model.train() - model being the model containing the Resnet - will suffice.
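A quick way to convince yourself (a small sketch with an illustrative wrapper module):

import torch.nn as nn
import torchvision.models as models

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(pretrained=True)
        self.head = nn.Linear(1000, 10)

model = Wrapper()
model.train()
print(model.backbone.training)  # True: train() was applied recursively to the child module
model.eval()
print(model.backbone.training)  # False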
https://stackoverflow.com/questions/65236101/
Multiple values for argument
I am trying to convert this code passing it with pysyft refference like this : class SyNet(sy.Module): def __init__(self,embedding_size, num_numerical_cols, output_size, layers, p ,torch_ref): super(SyNet, self ).__init__( embedding_size, num_numerical_cols , output_size , layers , p=0.4 ,torch_ref=torch_ref ) self.all_embeddings=self.torch_ref.nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in embedding_size]) self.embedding_dropout=self.torch_ref.nn.Dropout(p) self.batch_norm_num=self.torch_ref.nn.BatchNorm1d(num_numerical_cols) all_layers= [] num_categorical_cols = sum((nf for ni, nf in embedding_size)) input_size = num_categorical_cols + num_numerical_cols for i in layers: all_layers.append(self.torch_ref.nn.Linear(input_size,i)) all_layers.append(self.torch_ref.nn.ReLU(inplace=True)) all_layers.append(self.torch_ref.nn.BatchNorm1d(i)) all_layers.append(self.torch_ref.nn.Dropout(p)) input_size = i all_layers.append(self.torch_ref.nn.Linear(layers[-1], output_size)) self.layers = self.torch_ref.nn.Sequential(*all_layers) def forward(self, x_categorical, x_numerical): embeddings= [] for i,e in enumerate(self.all_embeddings): embeddings.append(e(x_categorical[:,i])) x_numerical = self.batch_norm_num(x_numerical) x = self.torch_ref.cat([x, x_numerical], 1) x = self.layers(x) return x But when I try to create a instance of the model model = SyNet( categorical_embedding_sizes, numerical_data.shape[1], 2, [200,100,50], p=0.4 ,torch_ref= th) I got a TypeError TypeError: multiple values for argument 'torch_ref' I tried to change the order of the arguments but i got an error about positional arguments . Can you help me , I am not very experienced in classes and functions (oop) Thank you in advance !
Looking at PySyft source code for Module. The constructor of your class parent only takes a single argument: torch_ref. You should therefore call the super constructor with: super(SyNet, self).__init__(torch_ref=torch_ref) # line 3 removing all arguments but torch_ref from the call.
https://stackoverflow.com/questions/65236793/
Python: fastest way to check in which interval number is in
Suppose I have split the interval [0, 1] into a series of smaller intervals [0, 0.2), [0.2, 0.4), [0.4, 0.9), [0.9, 1.0]. Now I sample a value r in [0, 1]. What is the fastest way I can check in which interval this belongs to using Python / Numpy / Pytorch? The obvious way is this: r = np.random.rand() if 0 <= r < 0.2: pass # do something elif 0.2 <= r < 0.4: pass # do something else elif 0.4 <= r < 0.9: pass # do yet something else again elif 0.9 <= r <= 1.0: pass # do some other thing
You'll want to first transform your list of intervals into a list of boundaries, so instead of many intervals [0, 0.2), [0.2, 0.4), [0.4, 0.9), [0.9, 1.0], you just define: boundaries = [0, 0.2, 0.4, 0.9, 1.0] # values must be sorted!! Then you can perform a binary search over all of them, to see in which segment a value belongs: index = bisect.bisect_right(boundaries, value) index will be the index of the upper bound, so to get the range, you'd do: range_low = boundaries[index - 1] if index > 0 else None range_high = boundaries[index] if index < len(boundaries) else None This will also take care of handling values which are not in any of the intervals. The binary search will be done in log(N) compares, which is the theoretical best thing you can do for arbitrary intervals.
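Since the question also mentions NumPy/PyTorch: if you need to classify many samples at once, np.searchsorted is the vectorized equivalent of bisect_right (a sketch):

import numpy as np

boundaries = np.array([0, 0.2, 0.4, 0.9, 1.0])
r = np.random.rand(1000)                            # many samples at once
idx = np.searchsorted(boundaries, r, side='right')  # same convention as bisect_right
# interval i covers [boundaries[i-1], boundaries[i])
# torch.searchsorted(torch.tensor(boundaries), torch.from_numpy(r), right=True) behaves the same way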
https://stackoverflow.com/questions/65237428/
Pytorch 1.7.0 | DataLoader Error - TypeError: 'module' object is not callable
This is my code, I am using pycharm! Imports import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torch.utils.data as DataLoader import torchvision.datasets as Datasets import torchvision.transforms as transforms create Fully Connected Network class NN(nn.Module): def __init__(self, input_size, num_classes): #(28x28 = 784) super(NN, self).__init__() self.fc1 = nn.Linear(input_size, 50) self.fc2 = nn.Linear(50, input_size) #hidden layer def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x #Set Device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') Hyperparameters input_size = 784 num_classes = 10 learning_rate = 0.001 batch_size = 2 num_epochs = 1 Load Data train_dataset = Datasets.MNIST(root='dataset/', train=True, transform=transforms.ToTensor(), download=True) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) test_dataset = Datasets.MNIST(root='dataset/', train=False, transform=transforms.ToTensor(), download=True) test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True) Initialize network model = NN(input_size=input_size, num_classes=num_classes).to(device) Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) Train network for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): data = data.to(device=device) targets = targets.to(device=device) print(data.shape) I am getting the error on this line train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) Error is in <module> train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) TypeError: 'module' object is not callable
You need to change your import to: from torch.utils.data import DataLoader, Dataset
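The reason is that import torch.utils.data as DataLoader binds the whole module to the name DataLoader, so DataLoader(...) ends up calling a module object. Importing the class itself fixes it:

# instead of: import torch.utils.data as DataLoader
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)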
https://stackoverflow.com/questions/65238236/
Is it safe to truncate torchaudio's loaded 16-bit audios to `float16` from `float32`?
I have multiple WAV files with 16 bits of depth/precision. torchaudio.info(...) recognizes this, giving me: precision = {int} 16 Yet when I use torchaudio.load(...), I get a float32 dtype for the resulting tensor. With a tensor called audio, I know that I can do audio.half() to truncate it to 16 bits, reducing memory usage of my dataset. But is this an operation that will preserve precision of all possible original values? I'm not lowering the dtype's precision below the original audio's precision, but there may be a good reason I'm unaware of as to why torchaudio still returns float32.
I would say it's returned as float32 because this is PyTorch's default datatype. So if you create any models with weights, they'll be float32 as well. Therefore, the inputs will be incompatible with the model if you make the conversion on the input data. (Edit: or it will silently convert your data to 32 bit anyway, to make it compatible with your model. I'm not sure which of the two PyTorch opts for, but TensorFlow definitely throws an error.) Look at setting the default datatype to float16 before creating any models, if you're looking to make small models: https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html HOWEVER, note that you will lose roughly 5 bits of precision if you convert the data, which as you have diagnosed is really a 16-bit int (merely represented as a 32-bit float), to a 16-bit float. This is because 5 bits of a float16 are used for the exponent, leaving only 10 explicit bits (11 with the implicit leading bit) for the significand. I would just keep it at float32 if you're not particularly memory constrained.
https://stackoverflow.com/questions/65239135/
Metrics mismatch between BertForSequenceClassification Class and my custom Bert Classification
I implemented my custom Bert Binary Classification Model class, by adding a classifier layer on top of Bert Model (attached below). However, the accuracy/metrics are significantly different when I train with the official BertForSequenceClassification model, which makes me wonder if I am missing somehting in my class. Few Doubts I have: While loading the official BertForSequenceClassification from_pretrained are the classifiers weight initialized as well from pretrained model or they are randomly initialized? Because in my custom class they are randomly initialized. class MyCustomBertClassification(nn.Module): def __init__(self, encoder='bert-base-uncased', num_labels, hidden_dropout_prob): super(MyCustomBertClassification, self).__init__() self.config = AutoConfig.from_pretrained(encoder) self.encoder = AutoModel.from_config(self.config) self.dropout = nn.Dropout(hidden_dropout_prob) self.classifier = nn.Linear(self.config.hidden_size, num_labels) def forward(self, input_sent): outputs = self.encoder(input_ids=input_sent['input_ids'], attention_mask=input_sent['attention_mask'], token_type_ids=input_sent['token_type_ids'], return_dict=True) pooled_output = self.dropout(outputs[1]) # for both tasks logits = self.classifier(pooled_output) return logits
Each model tells you via a warning message which layers are randomly initialized when you use the method from_pretrained: from transformers import BertForSequenceClassification b = BertForSequenceClassification.from_pretrained('bert-base-uncased') Output: Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias'] The difference between your implementation and the BertForSequenceClassification is that you do not use any pretrained weights at all. The method from_config does not load the pretrained weights from a state_dict: import torch from transformers import AutoModelForSequenceClassification, AutoConfig b2 = AutoModelForSequenceClassification.from_config(AutoConfig.from_pretrained('bert-base-uncased')) b3 = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased') print("Does from_config provides pretrained weights: {}".format(torch.equal(b.bert.embeddings.word_embeddings.weight, b2.base_model.embeddings.word_embeddings.weight))) print("Does from_pretrained provides pretrained weights: {}".format(torch.equal(b.bert.embeddings.word_embeddings.weight, b3.base_model.embeddings.word_embeddings.weight))) Output: Does from_config provides pretrained weights: False Does from_pretrained provides pretrained weights: True Therefore you probably want to change your class to: class MyCustomBertClassification(nn.Module): def __init__(self, encoder='bert-base-uncased', num_labels=2, hidden_dropout_prob=0.1): super(MyCustomBertClassification, self).__init__() self.config = AutoConfig.from_pretrained(encoder) self.encoder = AutoModel.from_pretrained(encoder) self.dropout = nn.Dropout(hidden_dropout_prob) self.classifier = nn.Linear(self.config.hidden_size, num_labels) def forward(self, input_sent): outputs = self.encoder(input_ids=input_sent['input_ids'], attention_mask=input_sent['attention_mask'], token_type_ids=input_sent['token_type_ids'], return_dict=True) pooled_output = self.dropout(outputs[1]) # for both tasks logits = self.classifier(pooled_output) return logits myB = MyCustomBertClassification() print(torch.equal(b.bert.embeddings.word_embeddings.weight, myB.encoder.embeddings.word_embeddings.weight)) Output: True
https://stackoverflow.com/questions/65242786/
How to get the second moment of the gradient
In the OpenAI Five paper it is mentioned that the "Gradients are additionally clipped per parameter to be within ±5√v where v is the running estimate of the second moment of the (unclipped) gradient.". This is something I would like to implement in my project, but I am not sure how to do it, neither in theory nor in practice. From Wikipedia I found out that "The second central moment is the variance. The positive square root of the variance is the standard deviation [...]". My best guess regarding the "running estimate" is that it is the Exponential Moving Average. The gradients of a network can be accessed as this comment suggests. From these I would assume that √v is the Exponential Running Average of the standard dev. of the gradients and could be calculated via: estimate = alpha * torch.std(list(param.grad for param in model.parameters())) + (1-alpha) * estimate Is my theory correct? Is there a better way to do it? Thanks in advance. Edit: fixed gradient gathering after Mr. For Example's answer.
I think you are on the right path; my guess is basically the same as yours, with a few small differences. First, what is a moment? The n-th moment of a random variable is defined as the expected value of that variable raised to the power of n; more formally, m_n = E[X^n], where m_n is the n-th moment and X is the random variable. So the first moment is the mean, and the second moment is the uncentered variance (meaning we don't subtract the mean during the variance calculation). Intuitively, clipping the gradients by a moving average of their standard deviation with respect to zero makes sense. Second, what is the correct code? list(network.parameters()) only gives you the parameters; to get the gradient of each parameter you need [param.grad for param in network.parameters()]. Given all the things we know above, the correct code should be something like (you can try to optimize it by all means): grads_square = torch.cat([param.grad.detach().flatten() ** 2 for param in network.parameters()]) estimate = alpha * torch.sqrt(torch.mean(grads_square)) + (1-alpha) * estimate
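If you want something closer to the per-parameter clipping described in the paper (a sketch, not an official implementation; the function name and the external state dict are illustrative), you could keep an exponential running estimate of the squared gradient for every parameter, much like Adam's second-moment accumulator, and clamp each gradient element to ±5√v:

import torch

def clip_grads_by_second_moment(model, state, alpha=0.999, threshold=5.0):
    # state: dict mapping each parameter to its running second-moment estimate v
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.detach()
        if p not in state:
            state[p] = torch.zeros_like(g)
        v = state[p]
        v.mul_(alpha).addcmul_(g, g, value=1 - alpha)               # v <- alpha*v + (1-alpha)*g^2
        bound = threshold * v.sqrt()
        p.grad.data.copy_(torch.max(torch.min(g, bound), -bound))   # clip elementwise to +/- 5*sqrt(v)

# usage, once per training step, after loss.backward() and before optimizer.step():
# clip_grads_by_second_moment(model, state)   # with state = {} created before training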
https://stackoverflow.com/questions/65243290/
What is the Problem in my Building Softmax from Scratch in Pytorch
I read this post ans try to build softmax by myself. Here is the code import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import time import sys import numpy as np #============================ get the dataset ========================= mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True, download=True, transform=transforms.ToTensor()) mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False, download=True, transform=transforms.ToTensor()) batch_size = 256 num_workers = 0 train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers) test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers) #============================ train ========================= num_inputs = 28 * 28 num_outputs = 10 epochs = 5 lr = 0.05 # Initi the Weight and bia W = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_outputs)), dtype=torch.float) b = torch.zeros(num_outputs, dtype=torch.float) W.requires_grad_(requires_grad = True) b.requires_grad_(requires_grad=True) # softmax function def softmax(X): X = X.exp() den = X.sum(dim=1, keepdim=True) return X / den # loss def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() # accuracy function def accuracy(y_hat, y): return (y_hat.argmax(dim=1) == y).float().mean().item() for epoch in range(epochs): train_loss_sum = 0.0 train_acc_sum = 0.0 n_train = 0 for X, y in train_iter: # X.shape: [256, 1, 28, 28] # y.shape: [256] # flatten the X into [256, 28*28] X = X.flatten(start_dim=1) y_pred = softmax(torch.mm(X, W) + b) loss = cross_entropy(y_pred, y) loss.backward() W.data = W.data - lr * W.grad b.data = b.data - lr* b.grad W.grad.zero_() b.grad.zero_() train_loss_sum += loss.item() train_acc_sum += accuracy(y_pred, y) n_train += y.shape[0] # evaluate the Test test_acc, n_test = 0.0, 0 with torch.no_grad(): for X_test, y_test in test_iter: X_test = X_test.flatten(start_dim=1) y_test_pred = softmax(torch.mm(X_test, W) + b) test_acc += accuracy(y_test_pred, y_test) n_test += y_test.shape[0] print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f' % (epoch + 1, train_loss_sum/n_train , train_acc_sum / n_train, test_acc / n_test)) Compare with original post, Here I turn def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))) into def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() Since the backward need a scalar. However, My results are epoch 1, loss nan, train acc 0.000, test acc 0.000 epoch 2, loss nan, train acc 0.000, test acc 0.000 epoch 3, loss nan, train acc 0.000, test acc 0.000 epoch 4, loss nan, train acc 0.000, test acc 0.000 epoch 5, loss nan, train acc 0.000, test acc 0.000 Any idea? Thanks.
Change: def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() To: def cross_entropy(y_hat, y): return - torch.log(y_hat[range(len(y_hat)), y] + 1e-8).sum() The important change is the small epsilon (1e-8), which prevents log(0) = -inf when a softmax output underflows to zero. Outputs should be something like: epoch 1, loss 9.2651, train acc 0.002, test acc 0.002 epoch 2, loss 7.8493, train acc 0.002, test acc 0.002 epoch 3, loss 6.6875, train acc 0.002, test acc 0.003 epoch 4, loss 6.0928, train acc 0.003, test acc 0.003 epoch 5, loss 5.1277, train acc 0.003, test acc 0.003 And be aware that the nan problem can also be caused by X = X.exp() in softmax(X): when X is too big, exp() will output inf. When this happens you could try to clip (or shift) X before applying exp().
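A common alternative (a sketch) is to make the softmax itself numerically stable by subtracting the row-wise maximum before exponentiating; this leaves the result unchanged but avoids inf from exp():

def softmax(X):
    X = X - X.max(dim=1, keepdim=True).values  # shift for numerical stability
    X = X.exp()
    return X / X.sum(dim=1, keepdim=True)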
https://stackoverflow.com/questions/65245787/
How does max_length, padding and truncation arguments work in HuggingFace' BertTokenizerFast.from_pretrained('bert-base-uncased')?
I am working with Text Classification problem where I want to use the BERT model as the base followed by Dense layers. I want to know how does the 3 arguments work? For example, if I have 3 sentences as: 'My name is slim shade and I am an aspiring AI Engineer', 'I am an aspiring AI Engineer', 'My name is Slim' SO what will these 3 arguments do? What I think is as follows: max_length=5 will keep all the sentences as of length 5 strictly padding=max_length will add a padding of 1 to the third sentence truncate=True will truncate the first and second sentence so that their length will be strictly 5. Please correct me if I am wrong. Below is my code which I have used. ! pip install transformers==3.5.1 from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') tokens = tokenizer.batch_encode_plus(text,max_length=5,padding='max_length', truncation=True) text_seq = torch.tensor(tokens['input_ids']) text_mask = torch.tensor(tokens['attention_mask'])
What you have assumed is almost correct, however, there are few differences. max_length=5, the max_length specifies the length of the tokenized text. By default, BERT performs word-piece tokenization. For example the word "playing" can be split into "play" and "##ing" (This may not be very precise, but just to help you understand about word-piece tokenization), followed by adding [CLS] token at the beginning of the sentence, and [SEP] token at the end of sentence. Thus, it first tokenizes the sentence, truncates it to max_length-2 (if truncation=True), then prepend [CLS] at the beginning and [SEP] token at the end.(So a total length of max_length) padding='max_length', In this example it is not very evident that the 3rd example will be padded, as the length exceeds 5 after appending [CLS] and [SEP] tokens. However, if you have a max_length of 10. The tokenized text corresponds to [101, 2026, 2171, 2003, 11754, 102, 0, 0, 0, 0], where 101 is id of [CLS] and 102 is id of [SEP] tokens. Thus, padded by zeros to make all the text to the length of max_length Likewise, truncate=True will ensure that the max_length is strictly adhered, i.e, longer sentences are truncated to max_length only if truncate=True
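A quick way to see all three arguments in action on the example sentences (a sketch; the exact word-pieces depend on the tokenizer version):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
text = ['My name is slim shade and I am an aspiring AI Engineer',
        'I am an aspiring AI Engineer',
        'My name is Slim']
tokens = tokenizer.batch_encode_plus(text, max_length=5, padding='max_length', truncation=True)
for ids in tokens['input_ids']:
    print(tokenizer.convert_ids_to_tokens(ids))
# every row starts with [CLS] and ends with [SEP]; sentences longer than
# max_length are truncated, and shorter ones are padded with [PAD]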
https://stackoverflow.com/questions/65246703/
Sorting a Pytorch Tensor by Trace
I have a (100,64,22,3,3) shaped pytorch tensor, and I would like to sort along axis=0 by the trace of the (3,3) components. The code I have below works, but it is very slow due to the for loops. Is there a way to vectorize the operation to speed it up? x=torch.rand(100,64,22,3,3) x_sorted=torch.zeros((x.shape[0],x.shape[1],x.shape[2],x.shape[3],x.shape[4])) for i in range(x.shape[0]): #compute tensorized trace trace=new=torch.diagonal(x[i], dim1=-2, dim2=-1).sum(-1) #Sort the trace trace_values,trace_ind=torch.sort(trace,dim=0,descending=True) for j in range(x_sorted.shape[1]): for k in range(x_sorted.shape[2]): x_sorted[i,j,k]=x[i,trace_ind[j,k],k]
Try the following (shown for a simplified (100, 64, 3, 3) tensor): import numpy as np import torch tensor = torch.tensor(np.random.rand(100, 64, 3, 3)) traces = torch.einsum('ijkk->ijk', tensor).sum(-1) # trace of each (3, 3) block orders = torch.argsort(traces, dim=0, descending=True) sorted_tensor = tensor[orders, torch.arange(tensor.shape[1])[None, :]]
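For the full (100, 64, 22, 3, 3) tensor from the question, a fully vectorized version of the loop (a sketch) can use torch.gather to reorder along the 64-axis independently for every i and k:

import torch

x = torch.rand(100, 64, 22, 3, 3)
trace = torch.diagonal(x, dim1=-2, dim2=-1).sum(-1)        # shape (100, 64, 22)
ind = torch.argsort(trace, dim=1, descending=True)         # sort order along the 64-axis
index = ind[..., None, None].expand(-1, -1, -1, 3, 3)      # broadcast to x's shape
x_sorted = torch.gather(x, 1, index)                       # x_sorted[i, j, k] = x[i, ind[i, j, k], k]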
https://stackoverflow.com/questions/65254935/
Still confused about model.train()
I read all the posts here regarding model.train() and still didn't understand what is up with it. Specifically, when I use a pre-trained model like DenseNet or VGG with all parameters frozen beside the last layer not using drop-out nor Batch Normalization, the training loss starts off a lot smaller when using model.train(), but then decreases at about the same rate as when without it. Why?
There are just three options: plain model(inputs), model.train()(inputs) and model.eval()(inputs). The only difference is that when using .eval(), dropout is disabled and batch normalization uses its running statistics, because those behaviors are only meant for training, not for testing. Now, you asked why it is still training when you just use model(inputs)? Because when you use neither train() nor eval(), the model is in train mode by default. So model(inputs) is the same as model.train()(inputs), as long as you never switched the model to eval mode.
https://stackoverflow.com/questions/65260309/
How to create a neural network that takes in an image and ouputs another image?
I'm trying to create a neural network that has an image of the L (from Lab) format and output the ab dimensions. I am able to pass the L dimension without an issue, but I'm having trouble figuring out how to output the ab dimensions. The output should be for shape 1x2xHxW, where H and W are the height and width of the input image. Here is my network so far: class Model(nn.Module): def __init__(self): super(Model, self).__init__() # Get the resnet18 model from torchvision.model library self.model = models.resnet18(pretrained=True) self.model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) # Replace fully connected layer of our model to a 2048 feature vector output self.model.classifier = nn.Sequential() # Add custom classifier layers self.fc1 = nn.Linear(1000, 1024) self.Dropout1 = nn.Dropout() self.PRelU1 = nn.PReLU() self.fc2 = nn.Linear(1024, 512) self.Dropout2 = nn.Dropout() self.PRelU2 = nn.PReLU() self.fc3 = nn.Linear(512, 256) self.Dropout3 = nn.Dropout() self.PRelU3 = nn.PReLU() self.fc4 = nn.Linear(256, 313) # self.PRelU3 = nn.PReLU() self.softmax = nn.Softmax(dim=1) self.model_out = nn.Conv2d(313, 2, kernel_size=1, padding=0, dilation=1, stride=1, bias=False) self.upsample4 = nn.Upsample(scale_factor=4, mode='bilinear') def forward(self, x): # x is our input data x = self.model(x) x = self.Dropout1(self.PRelU1(self.fc1(x))) x = self.Dropout2(self.PRelU2(self.fc2(x))) x = self.Dropout3(self.PRelU3(self.fc3(x))) x = self.softmax(self.fc4(x)) return x
I don't really know what you mean by "ab dimensions" and I'm not sure what the "L format" is, but I can tell you how to use CNNs to generate images. Normally you would use an autoencoder, but that depends on the task. An autoencoder takes an image as input and, similar to a normal classification network, reduces the dimensions. But unlike in classification you don't flatten the feature maps and add classification layers; instead you upsample and deconvolve them. So first you "encode" the "image" and then you "decode" it. The middle layer, before you start upsampling, is called the bottleneck. There are no dense layers and no softmax activations needed. Here is an example of how this could look as a pytorch model (an autoencoder for the CIFAR-10 dataset; note the decoder batch-norm layers need their own attribute names, otherwise they would overwrite the encoder ones): class Model(nn.Module): def __init__(self): super(Model, self).__init__() """ encoder """ self.conv1 = nn.Conv2d(3, 32, kernel_size=(5, 5)) self.batchnorm1 = nn.BatchNorm2d(32) self.conv2 = nn.Conv2d(32, 64, kernel_size=(4, 4), stride=3) self.batchnorm2 = nn.BatchNorm2d(64) self.conv3 = nn.Conv2d(64, 128, kernel_size=(3, 3), stride=3) self.batchnorm3 = nn.BatchNorm2d(128) self.maxpool2x2 = nn.MaxPool2d(2) # not in usage """ decoder """ self.upsample2x2 = nn.Upsample(scale_factor=2) # not in usage self.deconv1 = nn.ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=3) self.dec_batchnorm1 = nn.BatchNorm2d(64) self.deconv2 = nn.ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=3) self.dec_batchnorm2 = nn.BatchNorm2d(32) self.deconv3 = nn.ConvTranspose2d(32, 3, kernel_size=(5, 5)) self.dec_batchnorm3 = nn.BatchNorm2d(3) def forward(self, x): """ encoder """ x = self.conv1(x) x = self.batchnorm1(x) x = F.relu(x) x = self.conv2(x) x = self.batchnorm2(x) x = F.relu(x) x = self.conv3(x) x = self.batchnorm3(x) bottlenecks = F.relu(x) """ decoder """ x = self.deconv1(bottlenecks) x = self.dec_batchnorm1(x) x = F.relu(x) x = self.deconv2(x) x = self.dec_batchnorm2(x) x = F.relu(x) x = self.deconv3(x) x = torch.sigmoid(x) return x In this example I don't use "maxpool" and "upsample", but that depends on your model. Upsample is basically the opposite of maxpool, and you can think of ConvTranspose2d roughly as the opposite of convolution (even though that isn't strictly the right explanation). So you basically want the "decoder" part to be the opposite (or mirrored version) of the "encoder" part. Figuring out the dimensions, kernel sizes etc. for each layer can be quite tricky, but you basically have to set them so that the architecture is almost symmetrical and the output dimensions of the model are the size of the image you want to produce. That's what an "image producing" architecture could look like: source: https://www.semanticscholar.org/paper/Feature-discovery-and-visualization-of-robot-data-Flaspohler-Roy/514a2f7461edd3e4c2d56d57f9002e1dc445eb58/figure/1
https://stackoverflow.com/questions/65260432/
Query padding mask and key padding mask in Transformer encoder
I'm implementing self-attention part in transformer encoder using pytorch nn.MultiheadAttention and confusing in the padding masking of transformer. The following picture shows the self-attention weight of the query (row) and key (column). As you can see, there are some tokens "<PAD>" and I have already mask it in key. Therefore the tokens will not calculate the attention weight. There are still two questions: In query part, can I also mask them("<PAD>") except for the red square part? Is this reasonable? How can I mask "<PAD>" in the query? The attention weights also use the softmax function along the row by giving mask in src_mask or src_key_padding_mask argument. If I set all the "<PAD>" row into -inf, the softmax will return nan and the loss with be nan
There is no need to mask the queries during self-attention; it should be enough if you do not use the states corresponding to the <PAD> tokens later in the network (either as hidden states or as keys/values), so they will not influence the loss function or anything else in the network. If you want to make sure that you did not introduce a bug causing the gradient to flow through the <PAD> tokens, you can explicitly zero out the self-attention output using torch.where after it is computed.
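A small sketch of that suggestion with nn.MultiheadAttention (shapes follow its default (seq_len, batch, embed) layout; the tensors here are dummies):

import torch
import torch.nn as nn

seq_len, batch, embed_dim = 6, 2, 64
x = torch.randn(seq_len, batch, embed_dim)
pad_mask = torch.zeros(batch, seq_len, dtype=torch.bool)
pad_mask[:, 4:] = True                                   # pretend the last two positions are <PAD>

mha = nn.MultiheadAttention(embed_dim, num_heads=4)
attn_out, _ = mha(x, x, x, key_padding_mask=pad_mask)    # keys at <PAD> positions are ignored
# optional sanity check: also zero the outputs at padded *query* positions
attn_out = torch.where(pad_mask.transpose(0, 1).unsqueeze(-1),
                       torch.zeros_like(attn_out),
                       attn_out)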
https://stackoverflow.com/questions/65262928/
How to convert sparse to dense adjacency matrix?
I am trying to convert a sparse adjacency matrix/list that only contains the indices of the non-zero elements ([[rows], [columns]]) to a dense matrix that contains 1s at the indices and otherwise 0s. I found a solution using to_dense_adj from Pytorch geometric (Documentation). But this does not exactly what I want, since the shape of the dense matrix is not as expected. Here is an example: sparse_adj = torch.tensor([[0, 1, 2, 1, 0], [0, 1, 2, 3, 4]]) So the dense matrix should be of size 5x3 (the second array "stores" the columns; with non-zero elements at (0,0), (1,1), (2,2),(1,3) and (0,4)) because the elements in the first array are lower or equal than 2. However, dense_adj = to_dense(sparse_adj)[0] outputs a dense matrix, but of shape (5,5). Is it possible to define the output shape or is there a different solution to get what I want? Edit: I have a solution to convert it back to the sparse representation now that works dense_adj = torch.sparse.FloatTensor(sparse_adj, torch.ones(5), torch.Size([3,5])).to_dense() ind = dense_adj.nonzero(as_tuple=False).t().contiguous() sparse_adj = torch.stack((ind[1], ind[0]), dim=0) Or is there any alternative way that is better?
You can achieve this by first constructing a sparse matrix with torch.sparse and then converting it to a dense matrix. For this you will need to provide torch.sparse.FloatTensor with a 2D tensor of indices, a tensor of values, as well as an output size: sparse_adj = torch.tensor([[0, 1, 2, 1, 0], [0, 1, 2, 3, 4]]) torch.sparse.FloatTensor(sparse_adj, torch.ones(5), torch.Size([3,5])).to_dense() You can get the size of the output matrix dynamically with sparse_adj.max(axis=1).values + 1 So it becomes: torch.sparse.FloatTensor( sparse_adj, torch.ones(sparse_adj.shape[1]), (sparse_adj.max(axis=1).values + 1).tolist())
https://stackoverflow.com/questions/65263666/
Pytorch UnpicklingError: A load persistent id instruction was encountered
My dataloader raises this error when loading its files: UnpicklingError Traceback (most recent call last) <ipython-input-14-cb081a68afbe> in <module> ----> 1 torch.load("/network/tmp1/ccai/data/labelbox_2020/imgs/AB_304.png") ~/.conda/envs/omnienv/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 593 return torch.jit.load(opened_file) 594 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) --> 595 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 596 597 ~/.conda/envs/omnienv/lib/python3.8/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args) 762 "functionality.") 763 --> 764 magic_number = pickle_module.load(f, **pickle_load_args) 765 if magic_number != MAGIC_NUMBER: 766 raise RuntimeError("Invalid magic number; corrupt file?") UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified. What's bugging me is that I'm not doing multiple loads from the same process as pointed out here. Could it be that this error is triggered by multiple python processes reading the same file? pytorch 1.7 on ubuntu 18 with python 3.8
I found it: this error can be triggered when using torch.load on the wrong type of data, a .png image in this case.
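For reference, an image file would be opened with an image library rather than torch.load, which only reads files written by torch.save; a small sketch using the path from the question:

from PIL import Image
import torchvision.transforms as T

img = Image.open("/network/tmp1/ccai/data/labelbox_2020/imgs/AB_304.png")
tensor = T.ToTensor()(img)  # (C, H, W) float tensor with values in [0, 1]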
https://stackoverflow.com/questions/65264230/
How to understand "torch.randn()" size* parameter arguments?
From what I understand, torch.randn(layers/depth, rows, columns), which can be seen when executing: torch.randn(2, 3, 3) ==> 2 layers (3x3) matrix: tensor([[[ 1.4838, 1.2926, 1.6147], [ 0.7923, 0.6414, -0.2676], [-0.1949, 0.3859, -0.6940]], [[ 0.2454, -1.9215, -0.3078], [ 0.8544, 0.9726, 0.0330], [ 0.3579, 0.8247, 2.1288]]]) But what does adding an extra term in the size* parameter imply? As in: torch.randn(2, 1, 3, 3) tensor([[[[ 0.6206, -1.3697, -0.2267], [ 1.0511, 2.3375, -0.9598], [-0.8148, -0.0911, -2.1211]]], [[[ 0.0659, 1.0764, 0.6150], [-1.7226, 0.5038, -0.9544], [-0.6447, -0.3325, 0.2048]]]]) What did the "1" add into the Tensor created?
Each number you pass refers to the size of one dimension of the tensor, so torch.randn(2, 1, 3, 3) creates a 4-dimensional tensor. It is hard for humans to visualize more than 3 dimensions, but computers are fine with it. In this particular case, you can think of the extra dimension of size 1 as something like a batch (or channel) dimension.
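A quick way to see what the extra 1 adds is to compare the shapes directly:

import torch

a = torch.randn(2, 3, 3)
b = torch.randn(2, 1, 3, 3)
print(a.shape)        # torch.Size([2, 3, 3])
print(b.shape)        # torch.Size([2, 1, 3, 3])
print(b[:, 0].shape)  # torch.Size([2, 3, 3]) - dropping the size-1 dimension gives the first shape back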
https://stackoverflow.com/questions/65265161/
Is the differentiation correct in pytorch tutorial?
I'm going through https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py and noticed an equation I couldn't understand. For the equation below from the tutorial, isn't do/dx = 6(xi+2)? How is it 3/2(xi+2)?
There is a 1/4 term in front of the summation. So you get 6/4(xi + 2) = 3/2(xi + 2).
https://stackoverflow.com/questions/65271972/
How to get index.cuda?
I am reading a code about CMC downloaded from github, and it can't work on vscode. Code: if torch.cuda.is_available(): index = index.cuda(async=True) inputs = inputs.cuda() Error message is as follows: File "e:\CMC-master\train_CMC.py", line 219 index = index.cuda(async=True) ^ SyntaxError: invalid syntax How to fix it?
Try using non_blocking=True instead: index = index.cuda(non_blocking=True) See Tensor.cuda for more information, and this answer.
https://stackoverflow.com/questions/65272827/
How to interpret Tensor sizes?
I'm having difficulty understanding the difference between the following: x1 = torch.tensor([1, 2, 3]) # single brackets x2 = torch.tensor([[1, 2, 3]]) # double brackets When checking their sizes: x1.size() and x2.size() we get the following: torch.Size([3]) torch.Size([1, 3]) Which I interpret as x1 being a (3x1) column vector, while x2 is a (1x3) row vector. However, when attempting to transpose both vectors: print(x1.T) print(x2.T), we get: tensor([1, 2, 3]) tensor([[1], [2], [3]]) x1 seems to be unaffected by transposition? Further when attempting to force x1 to be a (1x3) row vector using ".view()": print(x1.view(1, -1)) we get: tensor([[1, 2, 3]]) # double brackets So how come ".T" didn't do the trick, but ".view(1, -1)" was able to transform x1 into a (1x3) row vector? What really is x1 when we first assigned it?
As per the official documentation - Expects input to be <= 2-D tensor and transposes dimensions 0 and 1. 0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to transpose(input, 0, 1). x = torch.randn(()) torch.t(x) #tensor(0.1995) x = torch.randn(3) x #tensor([ 2.4320, -0.4608, 0.7702]) torch.t(x) #tensor([ 2.4320, -0.4608, 0.7702]) x = torch.randn(2, 3) x #tensor([[ 0.4875, 0.9158, -0.5872], # [ 0.3938, -0.6929, 0.6932]]) torch.t(x) #tensor([[ 0.4875, 0.3938], # [ 0.9158, -0.6929], # [-0.5872, 0.6932]]) This is the reason why transposing has no effect on x1: it is currently a 1-D tensor and NOT a 2-D tensor. There is a difference between the shapes (3,) and (3,1). The first only has a single axis while the other has 2 axes (similar to the double brackets you added). This statement, Which I interpret as x1 being a (3x1) column vector, while x2 is a (1x3) row vector. is incorrect to some extent. x1 #(3,) 1D tensor x1.reshape((3,1)) #(3,1) 2D tensor x1.reshape((3,1)).T #(1,3) 2D tensor with successful transpose
https://stackoverflow.com/questions/65273414/
Trying to accumulate gradients in Pytorch, but getting RuntimeError when calling loss.backward
I'm trying to train a model in Pytorch, and I'd like to have a batch size of 8, but due to memory limitations, I can only have a batch size of at most 4. I've looked all around and read a lot about accumulating gradients, and it seems like the solution to my problem. However, I seem to have trouble implementing it. Every time I run the code I get RuntimeError: Trying to backward through the graph a second time. I don't understand why since my code looks like all these other examples I've seen (unless I'm just missing something major): https://stackoverflow.com/a/62076913/1227353 https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255 https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/20 One caveat is that the labels for my images are all different size, so I can't send the output batch and the label batch into the loss function; I have to iterate over them together. This is what an epoch looks like (it's been pared down for the sake of brevity): # labels_batch contains labels of different sizes for batch_idx, (inputs_batch, labels_batch) in enumerate(dataloader): outputs_batch = model(inputs_batch) # have to do this because labels can't be stacked into a tensor for output, label in zip(outputs_batch, labels_batch): output_scaled = interpolate(...) # make output match label size loss = train_criterion(output_scaled, label) / (BATCH_SIZE * 2) loss.backward() if batch_idx % 2 == 1: optimizer.step() optimizer.zero_grad() Is there something I'm missing? If I do the following I also get an error: # labels_batch contains labels of different sizes for batch_idx, (inputs_batch, labels_batch) in enumerate(dataloader): outputs_batch = model(inputs_batch) # CHANGE: we're gonna accumulate losses manually batch_loss = 0 # have to do this because labels can't be stacked into a tensor for output, label in zip(outputs_batch, labels_batch): output_scaled = interpolate(...) # make output match label size loss = train_criterion(output_scaled, label) / (BATCH_SIZE * 2) batch_loss += loss # CHANGE: accumulate! # CHANGE: do backprop outside for loop batch_loss.backward() if batch_idx % 2 == 1: optimizer.step() optimizer.zero_grad() The error I get in this case is RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. This happens when the next epoch starts though... (INCORRECT, SEE EDIT BELOW) How can I train my model with gradient accumulation? Or am I doomed to train with a batch size of 4 or less? Oh and as a side question, does the location of where I put loss.backward() affect what I need to normalize the loss by? Or is it always normalized by BATCH_SIZE * 2? EDIT: The second code segment was getting an error due to the fact that I was doing torch.set_grad_enabled(phase == 'train') but I had forgotten to wrap the call to batch_loss.backward() with an if phase == 'train'... my bad So now the second segment of code seems to work and do gradient accumulation, but why doesn't the first bit of code work? It feel equivalent to setting BATCH_SIZE as 1. Furthermore, I'm creating a new loss object each time, so shouldn't the calls to backward() operate on different graphs entirely?
It seems you have two issues here. You said you couldn't have batch_size=8 because of memory limitations, but later state that your labels are not of the same size. The latter seems much more important than the former. Anyway, I will try to answer your questions as best I can. How can I train my model with gradient accumulation? Or am I doomed to train with a batch size of 4 or less? You want to call .backward() on every loop cycle, otherwise the batch will have no effect on the training. You can then call step() and zero_grad() only when batch_idx % 2 is True (i.e. for every other batch). Here's an example which accumulates the gradient, not the loss:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader

model = nn.Linear(10, 3)
optim = torch.optim.SGD(model.parameters(), lr=0.1)

ds = TensorDataset(torch.rand(100, 10), torch.rand(100, 3))
dl = DataLoader(ds, batch_size=4)

for i, (x, y) in enumerate(dl):
    y_hat = model(x)
    loss = F.l1_loss(y_hat, y) / 2
    loss.backward()

    if i % 2:
        optim.step()
        optim.zero_grad()

Note this approach is different from accumulating the loss and back-propagating only once all batches (or part of the batches) have gone through the network. In the example above we backpropagate every 4 datapoints and update the model every 8 datapoints. Oh and as a side question, does the location of where I put loss.backward() affect what I need to normalize the loss by? Or is it always normalized by BATCH_SIZE * 2? Usually torch's built-in losses have reduction='mean' set as default. This means the loss gets averaged over all batch elements that contributed to calculating the loss. So this will depend on your loss implementation. However, if you are using gradient accumulation, then yes, you will need to divide your loss by the number of accumulation steps (here loss = F.l1_loss(y_hat, y) / 2), since your gradients will be accumulated twice. To read more about this, I recommend taking a look at this other SO post.
https://stackoverflow.com/questions/65273522/
torch.unique does not work for float tensors
I am trying to extract the unique elements from a float tensor. I have tried: out = torch.unique(my_tensor) However this method only works for int/long tensors. My tensor is quantized in a non-uniform way, so it is guaranteed to have a small set of float values.
You could use numpy.unique instead: import torch import numpy as np t = torch.tensor([1.05, 1.05, 2.01, 2.01, 3.9, 3.9001]) print(np.unique(t.numpy())) Outputs: [1.05 2.01 3.9 3.9001]
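If you need the result back as a PyTorch tensor, you can wrap the call, for example:

import torch
import numpy as np

t = torch.tensor([1.05, 1.05, 2.01, 2.01, 3.9, 3.9001])
unique_t = torch.from_numpy(np.unique(t.numpy()))
print(unique_t)  # tensor([1.0500, 2.0100, 3.9000, 3.9001])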
https://stackoverflow.com/questions/65274969/
Efficient pytorch broadcasting not found
I have the following code snippet in my implementation. There is a nested for loop with 3 loops. In the main code the 3D coordinates of the original system are stacked as a 1D vector by continuously stacking points, so for a point with coordinates (x, y, z) the vector looks like Predictions = [...x, y, z, ...], whereas for my calculation I need the reshaped_prediction vector as a 2D matrix with prediction_reshaped[i][0]=x, prediction_reshaped[i][1]=y, prediction_reshaped[i][2]=z, where i is any sample row in the matrix prediction_reshaped. The following code shows the logic: prediction_reshaped=torch.zeros([batch,num_node,dimesion]) for i in range(batch): for j in range(num_node): for k in range(dimesion): prediction_reshaped[i][j][k]=prediction[i][3*j+k] Is there any efficient broadcasting to avoid these three nested loops? They are slowing down my code. torch.reshape does not suit my purpose. The code is implemented using pytorch with all matrices as pytorch tensors, but any numpy solution will also help.
This should do the job. import torch batch = 2 num_nodes = 4 x = torch.rand(batch, num_nodes * 3) # tensor([[0.8076, 0.2572, 0.7100, 0.4180, 0.6420, 0.4668, 0.8915, 0.0366, 0.5704, # 0.0834, 0.3313, 0.9080], # [0.2925, 0.7367, 0.8013, 0.4516, 0.5470, 0.5123, 0.1929, 0.4191, 0.1174, # 0.0076, 0.2864, 0.9151]]) x = x.reshape(batch, num_nodes, 3) # tensor([[[0.8076, 0.2572, 0.7100], # [0.4180, 0.6420, 0.4668], # [0.8915, 0.0366, 0.5704], # [0.0834, 0.3313, 0.9080]], # # [[0.2925, 0.7367, 0.8013], # [0.4516, 0.5470, 0.5123], # [0.1929, 0.4191, 0.1174], # [0.0076, 0.2864, 0.9151]]])
https://stackoverflow.com/questions/65277876/
How to use 'collate_fn' with dataloaders?
I am trying to train a pretrained roberta model using 3 inputs, 3 input_masks and a label as tensors of my training dataset. I do this using the following code: from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler batch_size = 32 # Create the DataLoader for our training set. train_data = TensorDataset(train_AT, train_BT, train_CT, train_maskAT, train_maskBT, train_maskCT, labels_trainT) train_dataloader = DataLoader(train_data, batch_size=batch_size) # Create the Dataloader for our validation set. validation_data = TensorDataset(val_AT, val_BT, val_CT, val_maskAT, val_maskBT, val_maskCT, labels_valT) val_dataloader = DataLoader(validation_data, batch_size=batch_size) # Pytorch Training training_args = TrainingArguments( output_dir='C:/Users/samvd/Documents/Master/AppliedMachineLearning/FinalProject/results', # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='C:/Users/samvd/Documents/Master/AppliedMachineLearning/FinalProject/logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset = train_data, # training dataset eval_dataset = validation_data, # evaluation dataset ) trainer.train() However this gives me the following error: TypeError: vars() argument must have dict attribute Now I have found out that it is probably because I don't use collate_fn when using DataLoader, but I can't really find a source that helps me define this correctly so the trainer understands the different tensors I put in. Can anyone point me in the right direction?
Basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create your batch without spending much time implementing it manually. Try to see it as a glue that you specify the way examples stick together in a batch. If you don’t use it, PyTorch only put batch_size examples together as you would using torch.stack (not exactly it, but it is simple like that). Suppose for example, you want to create batches of a list of varying dimension tensors. The below code pads sequences with 0 until the maximum sequence size of the batch, that is why we need the collate_fn, because a standard batching algorithm (simply using torch.stack) won’t work in this case, and we need to manually pad different sequences with variable length to the same size before creating the batch. def collate_fn(data): """ data: is a list of tuples with (example, label, length) where 'example' is a tensor of arbitrary shape and label/length are scalars """ _, labels, lengths = zip(*data) max_len = max(lengths) n_ftrs = data[0][0].size(1) features = torch.zeros((len(data), max_len, n_ftrs)) labels = torch.tensor(labels) lengths = torch.tensor(lengths) for i in range(len(data)): j, k = data[i][0].size(0), data[i][0].size(1) features[i] = torch.cat([data[i][0], torch.zeros((max_len - j, k))]) return features.float(), labels.long(), lengths.long() The function above is fed to the collate_fn param in the DataLoader, as this example: DataLoader(toy_dataset, collate_fn=collate_fn, batch_size=5) With this collate_fn function, you always gonna have a tensor where all your examples have the same size. So, when you feed your forward() function with this data, you need to use the length to get the original data back, to not use those meaningless zeros in your computation. Source: Pytorch Forum
https://stackoverflow.com/questions/65279115/
How to add a multiclass multilabel layer on top of pretrained BERT model?
I am trying to do a multitask multiclass sentence classification task using the pretrained BERT model from the huggingface transformers library . I have tried to use the BERTForSequenceClassification model from there but the issue I am having is that I am not able to extend it for multiple tasks . I will try to make it more informative through this example. Suppose we have four different tasks and for each sentence and for each task we have labels like this as follows in the examples: A :[ 'a' , 'b' , 'c' , 'd' ] B :[ 'e' , 'f' , 'g' , 'h' ] C :[ 'i' , 'j' , 'k' , 'l' ] D :[ 'm' , 'n' , 'o' , 'p' ] Now , if I have a sentence for this model , I want the output to give me output for all the four different tasks (A,B,C,D). This is what I was doing earlier model = BertForSequenceClassification.from_pretrained( "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab. num_labels = 4, # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False, # Whether the model returns all hidden-states. ) Then I tried to implement a CustomBERT model like this : class CustomBERTModel(nn.Module): def __init__(self): super(CustomBERTModel, self).__init__() self.bert = BertModelForSequenceClassification.from_pretrained("bert-base-uncased") ### New layers: self.linear1 = nn.Linear(768, 256) self.linear2 = nn.Linear(256, num_classes) ## num_classes is the number of classes in this example def forward(self, ids, mask): sequence_output, pooled_output = self.bert( ids, attention_mask=mask) # sequence_output has the following shape: (batch_size, sequence_length, 768) linear1_output = self.linear1(sequence_output[:,0,:].view(-1,768)) linear2_output = self.linear2(linear2_output) return linear2_output I have went through the answers to questions similar to it available earlier but none of them appeared to answer my question . I have tried to get through all the points which I think can be helpful for the understanding of my problem and would try to clear further more in case of any descrepancies made by me in the explaination of the question . Any answers related to this will be very much helpful .
You should use BertModel and not BertForSequenceClassification, as BertForSequenceClassification adds a linear layer for classification on top of the BERT model and uses CrossEntropyLoss, which is meant for multiclass classification. Hence, first use BertModel instead of BertForSequenceClassification:

class CustomBERTModel(nn.Module):
    def __init__(self):
        super(CustomBERTModel, self).__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        ### New layers:
        self.linear1 = nn.Linear(768, 256)
        self.linear2 = nn.Linear(256, 4)  ## as you have 4 classes in the output
        self.sig = nn.Sigmoid()

    def forward(self, ids, mask):
        sequence_output, pooled_output = self.bert(ids, attention_mask=mask)
        # sequence_output has the following shape: (batch_size, sequence_length, 768)
        linear1_output = self.linear1(sequence_output[:, 0, :].view(-1, 768))
        linear2_output = self.linear2(linear1_output)
        linear2_output = self.sig(linear2_output)
        return linear2_output

Next, multilabel classification uses a 'Sigmoid' activation instead of 'Softmax' (here, the sigmoid layer is added in the above code). Further, for multilabel classification, you need to use BCELoss instead of CrossEntropyLoss.
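A sketch of what the matching training step could look like with BCELoss; the dataloader and tensor names are placeholders, and BCELoss (rather than BCEWithLogitsLoss) is used here only because the model above already applies a sigmoid:

import torch
import torch.nn as nn

model = CustomBERTModel()
criterion = nn.BCELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# ids, mask: token ids and attention mask from the tokenizer
# labels: float tensor of shape (batch_size, 4) with 0/1 entries
for ids, mask, labels in train_dataloader:  # hypothetical dataloader
    optimizer.zero_grad()
    probs = model(ids, mask)                # (batch_size, 4), values already in [0, 1]
    loss = criterion(probs, labels.float())
    loss.backward()
    optimizer.step()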
https://stackoverflow.com/questions/65285054/
PyTorch - ModuleNotFoundError
I've trained Retinanet for object detection in google colab and now I want to load its .pt file in another python project but I keep getting this error. Any thoughts? Traceback (most recent call last): File "C:\Users\stefan_cepa995\Desktop\breast-mammography-app\app.py", line 522, in <module> model = torch.load(os.path.join(".", "models", "retinanet", "retinanet_gwd.pt")) File "C:\Users\stefan_cepa995\anaconda3\envs\tensorflow\lib\site-packages\torch\serialization.py", line 594, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "C:\Users\stefan_cepa995\anaconda3\envs\tensorflow\lib\site-packages\torch\serialization.py", line 853, in _load result = unpickler.load() ModuleNotFoundError: No module named 'retinanet'
As @Rika mentioned in the comments, the solution is to save the model's state_dict and then load it with the load_state_dict() function, instead of pickling the whole model object (which requires the original retinanet module to be importable in the loading project).
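A minimal sketch of that pattern, assuming the Retinanet model class is importable in the new project (names and paths here are placeholders):

import torch

# in the training project (e.g. the Colab notebook)
torch.save(model.state_dict(), "retinanet_gwd_state.pt")

# in the other project
model = build_retinanet()  # hypothetical helper that rebuilds the same architecture
state = torch.load("retinanet_gwd_state.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()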
https://stackoverflow.com/questions/65285198/
RuntimeError: Given groups=1, weight of size [64, 1, 4, 4], expected input[256, 3, 32, 32] to have 1 channels, but got 3 channels instead
Could you help me fix the above error? If I were to load the mnist dataset, there is no error popping up. The error has to do with the dimension of the other datasets, cifar10, fmnist and so on and cannot be run when applied to these sets. Any help appreciated. # noinspection PyUnresolvedReferences import os # imports # noinspection PyUnresolvedReferences import pickle from time import time from torchvision import datasets, transforms from torchvision.utils import save_image import site site.addsitedir('/content/gw_gan/model') from loss import gwnorm_distance, loss_total_variation, loss_procrustes from model_cnn import Generator, Adversary from model_cnn import weights_init_generator, weights_init_adversary # internal imports from utils import * # get arguments args = get_args() # system preferences seed = np.random.randint(100) torch.set_default_dtype(torch.double) np.random.seed(seed) torch.manual_seed(seed) # settings batch_size = 256 z_dim = 100 lr = 0.0002 ngen = 3 beta = args.beta lam = 0.5 niter = 10 epsilon = 0.005 num_epochs = args.num_epochs cuda = args.cuda channels = args.n_channels id1 = args.id model = 'gwgan_{}_eps_{}_tv_{}_procrustes_{}_ngen_{}_channels_{}_{}' \ .format(args.data, epsilon, lam, beta, ngen, channels, id1) save_fig_path = 'out_' + model if not os.path.exists(save_fig_path): os.makedirs(save_fig_path) # data import dataloader = torch.utils.data.DataLoader( datasets.CIFAR10('./data/cifar10', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])), batch_size=batch_size, drop_last=True, shuffle=True) # print example images save_image(next(iter(dataloader))[0][:25], os.path.join(save_fig_path, 'real.pdf'), nrow=5, normalize=True) # define networks and parameters generator = Generator(output_dim=channels) adversary = Adversary(input_dim=channels) # weight initialisation generator.apply(weights_init_generator) adversary.apply(weights_init_adversary) if cuda: generator = generator.cuda() adversary = adversary.cuda() # create optimizer g_optimizer = torch.optim.Adam(generator.parameters(), lr, betas=(0.5, 0.99)) # zero gradients generator.zero_grad() c_optimizer = torch.optim.Adam(adversary.parameters(), lr, betas=(0.5, 0.99)) # zero gradients adversary.zero_grad() # sample for plotting num_test_samples = batch_size z_ex = torch.randn(num_test_samples, z_dim) if cuda: z_ex = z_ex.cuda() loss_history = list() loss_tv = list() loss_orth = list() loss_og = 0 is_hist = list() for epoch in range(num_epochs): t0 = time() for it, (image, _) in enumerate(dataloader): train_c = ((it + 1) % (ngen + 1) == 0) x = image.double() if cuda: x = x.cuda() # sample random number z from Z z = torch.randn(image.shape[0], z_dim) if cuda: z = z.cuda() if train_c: for q in generator.parameters(): q.requires_grad = False for p in adversary.parameters(): p.requires_grad = True else: for q in generator.parameters(): q.requires_grad = True for p in adversary.parameters(): p.requires_grad = False # result generator g = generator.forward(z) # result adversary f_x = adversary.forward(x) f_g = adversary.forward(g) # compute inner distances D_g = get_inner_distances(f_g, metric='euclidean', concat=False) D_x = get_inner_distances(f_x, metric='euclidean', concat=False) # distance matrix normalisation D_x_norm = normalise_matrices(D_x) D_g_norm = normalise_matrices(D_g) # compute normalized gromov-wasserstein distance loss, T = gwnorm_distance((D_x, D_x_norm), (D_g, D_g_norm), epsilon, niter, loss_fun='square_loss', 
coupling=True, cuda=cuda) if train_c: # train adversary loss_og = loss_procrustes(f_x, x.view(x.shape[0], -1), cuda) loss_to = -loss + beta * loss_og loss_to.backward() # parameter updates c_optimizer.step() # zero gradients reset_grad(generator, adversary) else: # train generator loss_t = loss_total_variation(g) loss_to = loss + lam * loss_t loss_to.backward() # parameter updates g_optimizer.step() # zero gradients reset_grad(generator, adversary) # plotting # get generator example g_ex = generator.forward(z_ex) g_plot = g_ex.cpu().detach() # plot result save_image(g_plot.data[:25], os.path.join(save_fig_path, 'g_%d.pdf' % epoch), nrow=5, normalize=True) fig1, ax = plt.subplots(1, 3, figsize=(15, 5)) ax0 = ax[0].imshow(T.cpu().detach().numpy(), cmap='RdBu_r') colorbar(ax0) ax1 = ax[1].imshow(D_x.cpu().detach().numpy(), cmap='Blues') colorbar(ax1) ax2 = ax[2].imshow(D_g.cpu().detach().numpy(), cmap='Blues') colorbar(ax2) ax[0].set_title(r'$T$') ax[1].set_title(r'inner distances of $D$') ax[2].set_title(r'inner distances of $G$') plt.tight_layout(h_pad=1) fig1.savefig(os.path.join(save_fig_path, '{}_ccc.pdf'.format( str(epoch).zfill(3))), bbox_inches='tight') loss_history.append(loss) loss_tv.append(loss_t) loss_orth.append(loss_og) plt.close('all') # plot loss history fig2 = plt.figure(figsize=(2.4, 2)) ax2 = fig2.add_subplot(111) ax2.plot(loss_history, 'k.') ax2.set_xlabel('Iterations') ax2.set_ylabel(r'$\overline{GW}_\epsilon$ Loss') plt.tight_layout() plt.grid() fig2.savefig(save_fig_path + '/loss_history.pdf') fig3 = plt.figure(figsize=(2.4, 2)) ax3 = fig3.add_subplot(111) ax3.plot(loss_tv, 'k.') ax3.set_xlabel('Iterations') ax3.set_ylabel(r'Total Variation Loss') plt.tight_layout() plt.grid() fig3.savefig(save_fig_path + '/loss_tv.pdf') fig4 = plt.figure(figsize=(2.4, 2)) ax4 = fig4.add_subplot(111) ax4.plot(loss_orth, 'k.') ax4.set_xlabel('Iterations') ax4.set_ylabel(r'$R_\beta(f_\omega(X), X)$ Loss') plt.tight_layout() plt.grid() fig4.savefig(save_fig_path + '/loss_orth.pdf') The error displays: Traceback (most recent call last): File "/content/gw_gan/main_gwgan_cnn.py", line 160, in <module> f_x = adversary.forward(x) File "/content/gw_gan/model/model_cnn.py", line 62, in forward x = self.conv(input) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 117, in forward input = module(input) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 423, in forward return self._conv_forward(input, self.weight) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 420, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: Given groups=1, weight of size [64, 1, 4, 4], expected input[256, 3, 32, 32] to have 1 channels, but got 3 channels instead This is for an application of a generative model, where this is a CNN. The reference this is taken from is from main_gwgan_cnn at https://github.com/bunnech/gw_gan. A GAN is proposed to learn from incomparable spaces and produce results.
You have to set --n_channels, otherwise args.n_channels will default to 1 as seen here. The example given here is for FMNIST, which has a single channel. You are running on CIFAR, so you should set it to 3 since there are three channels.
https://stackoverflow.com/questions/65286955/
pytorch MNIST neural network produces several non-zero outputs
I tried to do a neural network that operates on MNIST data set. I was mostly following the pytorch.nn tutorial. As a result, i got a model that learns, but there's something wrong with the process or with the model itself. Instead of one active neuron at the output, i recieve multiple ones. Here's the model itself: model = nn.Sequential( nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10), nn.ReLU(), ) And here's the training process: loss_func = nn.CrossEntropyLoss() opt = optim.SGD(model.parameters(), lr=lr) for epoch in range(epochs): model.train() for xbt, ybt in train_dl: pred = model(xbt) loss = loss_func(pred, ybt) opt.zero_grad() loss.backward() opt.step() model.eval() # Validation if epoch % 10 == 0: with torch.no_grad(): losses, nums = zip( *[(loss_func(model(xbv), ybv), len(xbv)) for xbv, ybv in valid_dl] ) val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums) print(epoch, val_loss) Here's average loss each 10th epoch: 0 0.13384412774592638 10 0.0900113809091039 20 0.09795805384699234 30 0.10341344920364791 40 0.10804545368137551 And thats how result of applying the model to the validation set looks like: [[ 0. 0. 0. ... 28.436266 0. 5.001435 ] [ 7.3331523 12.666427 31.898096 ... 0. 0. 0. ] [ 0. 18.116354 8.049953 ... 4.330721 0. 0. ] ... [ 8.504517 0. 6.302228 ... 0. 0. 0. ] [ 1.7339934 0. 0. ... 0. 2.1565871 0. ] [45.750134 0. 6.2685804 ... 2.247082 0. 0. ]] Shape: (9984, 10) I tried changing learning speed, model layers, amount of epochs, but nothing seems to work.
You have 10 neurons with ReLU in the last layer, and yes, all of those neurons can be fired/activated. Each neuron applies a ReLU function to the output of a linear activation, i.e. ReLU(w.x+b). There are 10 such neurons and each of them will give out some output based on its input, so yes, all of them can get fired/activated. The way you infer a prediction from this is by taking the class corresponding to the neuron which has the largest activation (using np.argmax or torch.max).
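For example, turning the 10 activations into a single predicted digit could look like this (the input name is just a placeholder for your validation tensor):

with torch.no_grad():
    output = model(X_valid)                    # shape (N, 10)
    predictions = torch.argmax(output, dim=1)  # shape (N,), one class index per sample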
https://stackoverflow.com/questions/65287454/
Problem with forward step in pytorch model
Traceback saying that: * Epoch 1/20 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-4f0f868c6227> in <module>() 1 max_epochs = 20 2 optim = torch.optim.Adam(model.parameters(), lr=1e-3) ----> 3 train(model, optim, bce_loss, max_epochs, data_tr, data_val) 2 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), TypeError: forward() takes 2 positional arguments but 3 were given But in train section of my net I did not pass more than two arguments(including self argument) this is link to my code Maybe problem not in train
Your model has sub-modules for which you (implicitly) call forward with more than one argument: x = self.dec_conv3(self.unpool3(x, indices3)) Your unpool layers are simply MaxPool layers; MaxPool2d.forward does not expect two input arguments. If you want to unpool with the saved indices, those layers should be nn.MaxUnpool2d instead.
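A minimal sketch of the pool/unpool pairing, assuming 2D pooling with stored indices:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 3, 8, 8)
pooled, indices = pool(x)            # keep the indices produced by the pooling layer
restored = unpool(pooled, indices)   # MaxUnpool2d.forward accepts (input, indices)
print(restored.shape)                # torch.Size([1, 3, 8, 8])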
https://stackoverflow.com/questions/65288144/
What is the difference between Neural Network Frameworks and RL Algorithm Libraries?
I know this is a silly question, but I cannot find a good way to put it. I've worked with TensorFlow and TFAgents, and am now moving to Ray RLlib. Looking at all the RL frameworks/libraries, I got confused about the difference between the two below: frameworks such as Keras, TensorFlow, PyTorch RL implementation libraries such as TFAgents, RLlib, OpenAi Baseline, Tensorforce, KerasRL, etc For example, there are Keras codes in TensorFlow and Ray RLlib supports both TensorFlow and PyTorch. How are they all related? My understanding so far is that Keras allows to make neural networks and TensorFlow is more of a math library for RL (I don't have enough understanding about PyTorch). And libraries like TFAgents and RLlib use frameworks like Keras and TensorFlow to implement existing RL algorithms so that programmers can utilize them with ease. Can someone please explain how they are interconnected/different? Thank you very much.
Yes, you are kind of right. Frameworks like Keras, TF (which also uses Keras, by the way) and Pytorch are general Deep Learning frameworks. For most artificial neural network use-cases these frameworks work just fine and your typical pipeline is going to look something like: preprocess your dataset, select an appropriate model for this problem setting, model.fit(dataset), analyze results. Reinforcement Learning though is substantially different from most other Data Science ML applications. To start with, in RL you actually generate your own dataset by having your model (the Agent) interact with an environment; this complicates the situation substantially, particularly from a computational standpoint. This is because in the traditional ML scenario most of the computational heavy-lifting is done by that model.fit() call. And the good thing about the aforementioned frameworks is that from that call your code actually enters very efficient C/C++ code (usually also using CUDA libraries for the GPU). In RL the big problem is the environment that the agent interacts with. I separate this problem into two parts: a) The environment cannot be implemented in these frameworks because it will always change based on what you are doing. As such you have to code the environment yourself and, chances are, it's not going to be very efficient. b) The environment is a key component in the code and it constantly interacts multiple times with your Agent, and there are multiple ways in which that interaction can be mediated. These two factors lead to the necessity to standardize the environment and the interaction between it and the agent. This standardization allows for highly reusable code and also code that is more interpretable by others in terms of how exactly it operates. Furthermore it is possible this way to, for example, easily run parallel environments (TF-Agents allows this, for example) even though your environment object is not really written to manage this. RL frameworks are thus providing this standardization and the features that come with it. Their relation to Deep Learning frameworks is that RL libraries often come with a lot of pre-implemented and flexible agent architectures that have been among the most relevant in the literature. These agents are usually nothing more than some fancy ANN architecture wrapped in some class that standardizes their operation within the given RL framework. Therefore, as a backend for these ANN models, RL frameworks use DL frameworks to run the computations efficiently.
https://stackoverflow.com/questions/65291261/
How to get the image torchvision.utils.save_image saves, without reading it back from disk?
from torchvision.utils import save_image ... save_image(im, f'im_name.png') In my case (standard mnist), using code from here, im is a Tensor:96, and save_image works. I want that image in memory to show it in other plots, and I don't want to read it back after saving it, which seems kind of stupid. Is there a way to separate the functionality of generating the image and of saving it? Edit clarification: I want an equivalent to save_image(im, f'im_name.png') reread = plt.imread(f'im_name.png') without saving the image and reading it back. I just want the image, and I want to save it later. the save_image function does some work, like stacking multiple images into one, converting the tensor to images of correct sizes and so on. I want only that part without the saving to disk.
About two weeks later, I stumbled upon the solution by accident: grid = torchvision.utils.make_grid(im) grid is exactly the image tensor that save_image was about to write to disk.
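For example, to display that grid with matplotlib without touching the disk (assuming im is the batch of image tensors from the question):

import torchvision
import matplotlib.pyplot as plt

grid = torchvision.utils.make_grid(im)           # same tensor save_image would write
plt.imshow(grid.permute(1, 2, 0).cpu().numpy())  # CHW -> HWC for matplotlib
plt.axis("off")
plt.show()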
https://stackoverflow.com/questions/65295096/
Pytorch: No training effect after deepcopy
I tried to make a copy of a neural network in pytorch and subsequently train the copied network, but training does not seem to change the weights in the network after copying. This post suggests that deepcopy is a convenient way to make a copy of a neural network, so I tried using that in my code. The code below works just fine and shows that the weights and accuracy of the network are different after training from before training. However, when I toggle so that network_cp=deepcopy(network) and optimizer_cp=deepcopy(optimizer), the accuracy and weights before and after training are exactly the same. # torch settings torch.backends.cudnn.enabled = True device = torch.device("cpu") # training settings learning_rate = 0.01 momentum = 0.5 batch_size_train = 64 batch_size_test = 1000 # get MNIST data set train_loader, test_loader = load_mnist(batch_size_train=batch_size_train, batch_size_test=batch_size_test) # make a network network = Net() optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) network.to(device) # train network train(network, optimizer, train_loader, device) # copy network network_cp = network #network_cp = deepcopy(network) optimizer_cp = optimizer #optimizer_cp = deepcopy(optimizer) # get edge weights and accuracy of the copied network acc1 = float(test(network_cp, optimizer_cp, test_loader, device)) weights1 = np.array(get_edge_weights(network_cp)) # train copied network train(network_cp, optimizer_cp, train_loader, device) # get edge weights and accuracy of the copied network after training acc2 = float(test(network_cp, optimizer_cp, test_loader, device)) weights2 = np.array(get_edge_weights(network_cp)) # compare edge weights and accuracy of copied network before and after training print('accuracy', acc1, acc2) print('abs diff of weights for net1 and net2', np.sum(np.abs(weights1-weights2))) To run the code above, include these imports and function definitions: import torch import torchvision import torchvision.transforms as transforms import torch.optim as optim import torch.nn as tnn import torch.nn.functional as tnf from copy import deepcopy import numpy as np def load_mnist(batch_size_train = 64, batch_size_test = 1000): train_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('temp/', #'/data/users/alice/pytorch_training_files/', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('temp/', #'/data/users/alice/pytorch_training_files/', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True) return(train_loader, test_loader) def train(network, optimizer, train_loader, device, n_epochs=5): network.train() for epoch in range(1, n_epochs + 1): for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = network(data) loss = tnf.nll_loss(output, target) loss.backward() optimizer.step() def test(network, optimizer, test_loader, device): network.eval() test_loss, correct = 0, 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = network(data) test_loss += tnf.nll_loss(output, target, size_average=False).item() pred = 
output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() test_loss /= len(test_loader.dataset) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) return(float(correct)/float(len(test_loader.dataset))) def get_edge_weights(network): layers = [module for module in network.modules()][1:] output = np.zeros(1) for j, layer in enumerate(layers): weights = list(layer.parameters())[0] weights_arr = weights.detach().numpy() weights_arr = weights_arr.flatten() output = np.concatenate((output,weights_arr)) return output[1:] class Net(tnn.Module): def __init__(self): super(Net, self).__init__() self.fc1 =tnn.Linear(784,264) self.fc2 = tnn.Linear(264,10) def forward(self, x): x = tnf.relu(self.fc1(x.view(-1,784))) x = tnf.relu(self.fc2(x)) return tnf.log_softmax(x)
After optimizer_cp = deepcopy(optimizer), the optimizer_cp still wants to optimize the old model's parameters (as defined by optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)). After deep copying the model, the optimizer needs to be told to optimize this new model's parameters: optimizer_cp = optim.SGD(network_cp.parameters(), lr=learning_rate, momentum=momentum)
https://stackoverflow.com/questions/65298796/
Pytorch does not pad the layers
While making a network I used a deconvolution layer: def deconv3d(cin,cout,k=4,s=2,pad=-1): pad = (k - 1) // 2 if pad < 0 else pad return nn.Sequential( nn.ConvTranspose3d(cin,cout,kernel_size=k,stride=s,padding = pad,bias=False), nn.ReLU(inplace=True) ) I added this to the network, and the layer outputs keep coming out smaller than expected. self.conv_f5 = conv3d(128, 128, k=3, s=1, pad=1) self.conv_f6 = deconv3d(128,64,k=3,s=2,pad=1) self.conv_f7 = conv3d(64,64,k=3,s=1,pad=1) self.conv_f8 = deconv3d(64,32,k=3,s=2,pad=1) self.conv_f9 = conv3d(32,32,k=3,s=1,pad=1) These are the layers I made, and the result is: Cost Volume: torch.Size([2, 128, 100, 120, 160]) fc5 torch.Size([2, 128, 50, 60, 80]) fc6 torch.Size([2, 64, 99, 119, 159]) fc8 torch.Size([2, 32, 197, 237, 317]) I cannot understand why f3 has a smaller size than expected, and I am trying to find out how to fix it. Please tell me how, and thank you very much.
This is still not a reproducible example. I see there are calls to conv3d for which you have not provided the source code. I believe it is not a mere wrapper around nn.Conv3d, because the resulting size does not match (self.conv_f5 for example should not change the tensor shape with its given parameters). However, I believe what puzzles you is the odd size ([2, 64, 99, 119, 159]). In that case you want to read very carefully the documentation for nn.ConvTranspose3d. In particular: However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does not actually add zero-padding to output. According to the shape equation which is provided with the documentation, I believe you want to add output_padding=1 to your deconv layers.
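For reference, with dilation 1 the output size of nn.ConvTranspose3d along each spatial dimension is (D_in - 1) * stride - 2 * padding + kernel_size + output_padding; a quick check against the sizes from the question:

def convtranspose_out(d_in, k, s, p, output_padding=0):
    return (d_in - 1) * s - 2 * p + k + output_padding

print(convtranspose_out(50, k=3, s=2, p=1))                    # 99, the size you observed
print(convtranspose_out(50, k=3, s=2, p=1, output_padding=1))  # 100, the size you expected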
https://stackoverflow.com/questions/65299990/
Cannot install torchvision via poetry in windows
I succeeded in pytorch installation thanks to answers here Poetry and PyTorch. But, I'm still failed to install torchvision via poetry. > poetry add torchvision==0.8.2 Updating dependencies Resolving dependencies... Writing lock file Package operations: 1 install, 0 updates, 0 removals β€’ Installing torchvision (0.8.2) RuntimeError Unable to find installation candidates for torchvision (0.8.2) at ~\.poetry\lib\poetry\installation\chooser.py:72 in choose_for 68β”‚ 69β”‚ links.append(link) 70β”‚ 71β”‚ if not links: β†’ 72β”‚ raise RuntimeError( 73β”‚ "Unable to find installation candidates for {}".format(package) 74β”‚ ) 75β”‚ 76β”‚ # Get the best link Failed to add packages, reverting the pyproject.toml file to its original content. I googled it and found some answers that say 'just pip install torchvision'. But I'm suspicious that it works because according to PyPi(https://pypi.org/project/torchvision/#files), there is no wheel file for windows. And I tried it and it failed as I expected. Is there any way to install latest torchvisin which is compatible with latest torch(1.7.1) in windows? + via poetry?
Look in https://download.pytorch.org/whl/torch_stable.html for the version you want to install (torchvision version, python version, CUDA version, OS etc.) Add a URL dependency to your pyproject.toml file. For example I have torch 1.8.0 and torchvision 0.9.0 working with the following in my dependencies [tool.poetry.dependencies] python = "^3.8" torch = {url = "https://download.pytorch.org/whl/cu102/torch-1.8.0-cp38-cp38-win_amd64.whl"} torchvision = {url = "https://download.pytorch.org/whl/cu102/torchvision-0.9.0-cp38-cp38-win_amd64.whl"} From your activated virtual environment do poetry update torchvision and you should be good to go.
https://stackoverflow.com/questions/65301757/
How to understand creating leaf tensors in PyTorch?
From PyTorch documentation: b = torch.rand(10, requires_grad=True).cuda() b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor e = torch.rand(10).cuda().requires_grad_() e.is_leaf True # e requires gradients and has no operations creating it f = torch.rand(10, requires_grad=True, device="cuda") f.is_leaf True # f requires grad, has no operation creating it But why are e and f leaf Tensors, when they both were also cast from a CPU Tensor, into a Cuda Tensor (an operation)? Is it because Tensor e was cast into Cuda before the in-place operation requires_grad_()? And because f was cast by assignment device="cuda" rather than by method .cuda()?
When a tensor is first created, it is a leaf node. Basically, all inputs and weights of a neural network are leaf nodes of the computational graph. When an operation is performed on a tensor that requires gradients, the result is not a leaf node anymore.

b = torch.rand(10, requires_grad=True) # create a leaf node
b.is_leaf # True
b = b.cuda() # the cast is recorded because b requires gradients
b.is_leaf # False

requires_grad_() is not an operation that gets recorded the way cuda() and the others are. It flags the existing tensor in place as a trainable weight: since the tensor did not require gradients up to that point, none of the previous operations were recorded, so it has no grad_fn and stays a leaf.

e = torch.rand(10) # create a leaf node
e.is_leaf # True
e = e.cuda() # the cast is not recorded because e does not require gradients
e.is_leaf # True (tensors with requires_grad=False are leaves by convention)
e = e.requires_grad_() # flag the tensor in place; it still has no grad_fn
e.is_leaf # True

Also, the detach() operation creates a new tensor which does not require gradients:

b = torch.rand(10, requires_grad=True)
b.is_leaf # True
b = b.detach()
b.is_leaf # True

In the last example we create a new tensor which is already on a cuda device. We do not need any operation to cast it.

f = torch.rand(10, requires_grad=True, device="cuda") # create a leaf node on cuda
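One way to check this yourself is to look at grad_fn alongside is_leaf (assuming a CUDA device is available):

import torch

b = torch.rand(10, requires_grad=True).cuda()
print(b.is_leaf, b.grad_fn is None)   # False False - the cast was recorded in the graph

e = torch.rand(10).cuda().requires_grad_()
print(e.is_leaf, e.grad_fn is None)   # True True - nothing was recorded before requires_grad_()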
https://stackoverflow.com/questions/65301875/
gaussianblur transform not found in torchvision.transforms
I have written the following data augmentation pipeline for Pytorch: transform = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.GaussianBlur(11, sigma=(0.1, 2.0)), transforms.ToTensor(), transforms.Normalize(mean, std) ]) But running the above code gives me the following error: AttributeError: module 'torchvision.transforms' has no attribute 'GaussianBlur' Is GaussianBlur a new feature that has not been included in torchvision yet? Or is it just my torchvision version that is too old? I found it in the following documentation page: torchvision.transforms Here are my packages versions: torch 1.6.0+cu101 torchvision 0.7.0+cu101
On this documentation page, you can look for features on the different versions of pytorch (change version in the upper left corner). It appears that GaussianBlur did not exist in torchvision 0.7 (the release paired with PyTorch 1.6) and was added in torchvision 0.8 (paired with PyTorch 1.7). Your code should work with the latest version. If interested, here is the relevant merge commit: https://github.com/pytorch/vision/pull/2658
https://stackoverflow.com/questions/65304189/
Clarification of a Faster R-CNN torchvision implementation
I'm digging through the source code of the Faster R-CNN implementation of torchvision and I'm facing some things I don't quite understand. Namely, assuming that I want to create a Faster R-CNN model, not pretrained on COCO, with a backbone pre-trained on ImageNet, and then just get the backbone I do the following: plain_backbone = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body Which is consistent with how the backbone is set-up as indicated here and here. However, when I pass an image through the model, the results don't correspond to what I would obtain if I just set-up a resnet50 directly. Namely: # Regular resnet50, pretrained on ImageNet, without the classifier and the average pooling layer resnet50_1 = torch.nn.Sequential(*(list(torchvision.models.resnet50(pretrained=True).children())[:-2])) resnet50_1.eval() # Resnet50, extract from the Faster R-CNN, also pre-trained on ImageNet resnet50_2 = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body resnet50_2.eval() # Loading a random image, converted to torch.Tensor, rescalled to [0, 1] (not that it matters) image = transforms.ToTensor()(Image.open("random_images/random.jpg")).unsqueeze(0) # Obtaining the model outputs with torch.no_grad(): # Output from the regular resnet50 output_1 = resnet50_1(image) # Output from the resnet50 extracted from the Faster R-CNN output_2 = resnet50_2(image)["3"] # Their outputs aren't the same, which I would assume they should be np.testing.assert_almost_equal(output_1.numpy(), output_2.numpy()) Looking forward to your thoughts!
This is because fasterrcnn_resnet50_fpn uses a custom normalization layer (FrozenBatchNorm2d) instead of the default BatchNorm2D. They are very similar but I suspect that the small numerical differences are causing issues. It will pass the check if you specify the same normalization layer to be used for the standard resnet: import torch import torchvision from torchvision.models.detection.faster_rcnn import fasterrcnn_resnet50_fpn import numpy as np from torchvision.ops import misc as misc_nn_ops # Regular resnet50, pretrained on ImageNet, without the classifier and the average pooling layer resnet50_1 = torch.nn.Sequential(*(list(torchvision.models.resnet50(pretrained=True, norm_layer=misc_nn_ops.FrozenBatchNorm2d).children())[:-2])) resnet50_1.eval() # Resnet50, extract from the Faster R-CNN, also pre-trained on ImageNet resnet50_2 = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body resnet50_2.eval() # am too lazy to get a real image image = torch.ones((1, 3, 224, 224)) # Obtaining the model outputs with torch.no_grad(): # Output from the regular resnet50 output_1 = resnet50_1(image) # Output from the resnet50 extracted from the Faster R-CNN output_2 = resnet50_2(image)["3"] # Passes np.testing.assert_almost_equal(output_1.numpy(), output_2.numpy())
https://stackoverflow.com/questions/65305682/
Why is the decoder in an autoencoder uses a sigmoid on the last layer?
I am looking at this working variational auto encoder. The main class class VAE(nn.Module): def __init__(self): super(VAE, self).__init__() self.fc1 = nn.Linear(784, 400) self.fc21 = nn.Linear(400, 20) self.fc22 = nn.Linear(400, 20) self.fc3 = nn.Linear(20, 400) self.fc4 = nn.Linear(400, 784) def encode(self, x): h1 = F.relu(self.fc1(x)) return self.fc21(h1), self.fc22(h1) def reparametrize(self, mu, logvar): std = logvar.mul(0.5).exp_() if torch.cuda.is_available(): eps = torch.cuda.FloatTensor(std.size()).normal_() else: eps = torch.FloatTensor(std.size()).normal_() eps = Variable(eps) return eps.mul(std).add_(mu) def decode(self, z): h3 = F.relu(self.fc3(z)) return F.sigmoid(self.fc4(h3)) def forward(self, x): mu, logvar = self.encode(x) z = self.reparametrize(mu, logvar) return self.decode(z), mu, logvar has def decode(self, z): h3 = F.relu(self.fc3(z)) return F.sigmoid(self.fc4(h3)) I can't explain to myself why the last layer should be passed through a sigmoid before returning. Please explain. EDIT: I just checked without the sigmoid. Results are still nice. Now I am not sure if it is needed or not.
As mentioned in the answer by Jim J, sigmoid forces the output to the range [0, 1]. In this case, it's not because we want to interpret the output as a probability, rather it's done to force the output to be interpreted as pixel intensity of a grey scale image. If you remove the sigmoid, the NN will have to learn that all the outputs should be in the range [0, 1]. The sigmoid might help making the learning process more stable.
https://stackoverflow.com/questions/65307833/
Getting nan as loss value
I have implemented focal loss in Pytorch with using of this paper. And ran into a problem with loss - got nan as loss function value. This is implementation of focal loss: def focal_loss(y_real, y_pred, gamma = 2): y_pred = torch.sigmoid(y_pred) return -torch.sum((1 - y_pred)**gamma * y_real * torch.log(y_pred) + y_pred**gamma * (1 - y_real) * torch.log(1 - y_pred)) Train loop and my SegNet are working, I think so, because I have tested them with dice and bce losses. I think errors occurs in backprop. Why can it be? Maybe my implementation is wrong?
This version is working: def focal_loss(y_real, y_pred, eps = 1e-8, gamma = 0): probabilities = torch.clamp(torch.sigmoid(y_pred), min=eps, max=1-eps) return torch.mean((1 - probabilities)**gamma * (y_pred - y_real * y_pred + torch.log(1 + torch.exp(-y_pred)))) It avoids the nan because the sigmoid probabilities used in the modulating factor are clamped away from exactly 0 and 1, and the cross-entropy part is computed directly from the logits in the y_pred - y_real * y_pred + log(1 + exp(-y_pred)) form, so torch.log never receives 0.
https://stackoverflow.com/questions/65310095/
GPyTorch, how to set initial value for "lengthscale" hyperparameter?
I am using GPyTorch regressor according to the documentation. I would like to set an initial value for the "lengthscale" hyperparameter in RBF kernel. I want to set a constant number as initial value for "lengthscale" (similar to what we can do in scikit-learn Gaussian Process Regressor). If you have any idea, please let me know.
There are two cases that follow from your question: You want to initialize your lengthscale with some value but the lengthscale is then optimized on further by the optimizer Assuming you have the same model as given in the documentation you have linked, just add the following before your training loop: init_lengthscale = 0.1 model.covar_module.base_kernel.lengthscale = init_lengthscale The model.covar_module gets your entire kernel and the base_kernel gets you your RBF kernel. You want to fix your lengthscale as some constant value which will not be further optimized In addition to the code for the first case, you do not feed the lengthscale as a hyperparameter to be optimized to your optimizer. all_params = set(model.parameters()) final_params = list(all_params - {model.covar_module.base_kernel.raw_lengthscale}) optimizer = torch.optim.Adam(final_params, lr=0.1) We remove the set of raw lengthscale values from all_params to create final_params, which we then pass to the optimizer. Some sources to help: https://github.com/cornellius-gp/gpytorch/issues/689 https://docs.gpytorch.ai/en/v1.1.1/examples/00_Basic_Usage/Hyperparameters.html
https://stackoverflow.com/questions/65316550/
can I train(optimize) on f1 score loss with pytorch
I am building a binary classifier like below. Can I replace the BCELoss to optimize f1 score? criterion = nn.BCELoss() preds = model(inputs) loss = criterion(preds , labels)
F1 score is not a smooth function, so it cannot be optimized directly with gradient descent. With gradually changing network parameters, the output probability changes smoothly but the F1 score only changes when the probability crosses the boundary of 0.5. As a result, the gradient of the F1 score is zero almost everywhere. You can use a soft version of the F-measure as described here. The trick is that you basically replace the counts of true positives and false positives with probabilistic versions, e.g. tp = sum_i(o_i * t_i) and fp = sum_i(o_i * (1 - t_i)), where o_i is the network output probability and t_i is the ground truth target probability. Then you continue with computing the F-measure as usual. Also, you might find this Kaggle tutorial useful.
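A minimal sketch of such a soft F1 loss for binary classification, following that idea (preds are probabilities in [0, 1], labels are 0/1, and eps only guards against division by zero):

import torch

def soft_f1_loss(preds, labels, eps=1e-8):
    tp = (preds * labels).sum()
    fp = (preds * (1 - labels)).sum()
    fn = ((1 - preds) * labels).sum()
    soft_f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1 - soft_f1  # minimize 1 - F1 to maximize F1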
https://stackoverflow.com/questions/65318064/
no member named 'KInt16' in namespace 'torch'
I am working on a cpp extension for pytorch, and followed the official tutorial, using libtorch and cmake to compile the program. But I met the problem of creating tensor. This code can work. #include <torch/torch.h> #include <iostream> int main(){ std::vector<int64_t> test_data = {1, 2, 3, 4, 5, 6, 7, 8, 9}; torch::Tensor s = torch::from_blob(test_data.data(), {3, 3}, torch::kInt64); std::cout << "test case pass" << std::endl; } But this code can't work. int main(){ auto option = torch::TensorOptions().dtype(torch::KInt16); auto b = torch::zeros({2,3}, option); std::cout << "test case pass" << std::endl; } and the compile error log is here. error: no member named 'KInt16' in namespace 'torch'; did you mean 'kInt16'?
As explicitly stated by the error message, you made a typo : you should have written torch::kInt16 instead of torch::KInt16. The k should not be capitalized.
https://stackoverflow.com/questions/65325944/
Pytorch LSTM in ONNX.js - Uncaught (in promise) Error: unrecognized input '' for node: LSTM_4
I am trying to run a Pytorch LSTM network in browser. But I am getting this error: graph.ts:313 Uncaught (in promise) Error: unrecognized input '' for node: LSTM_4 at t.buildGraph (graph.ts:313) at new t (graph.ts:139) at Object.from (graph.ts:77) at t.load (model.ts:25) at session.ts:85 at t.event (instrument.ts:294) at e.initialize (session.ts:81) at e.<anonymous> (session.ts:63) at onnx.min.js:14 at Object.next (onnx.min.js:14) How can I resolve this? Here is my code for saving the model to onnx: net = torch.load('trained_model/trained_model.pt') net.eval() with torch.no_grad(): input = torch.tensor([[1,2,3,4,5,6,7,8,9]]) h0, c0 = net.init_hidden(1) output, (hn, cn) = net.forward(input, (h0,c0)) torch.onnx.export(net, (input, (h0, c0)), 'trained_model/trained_model.onnx', input_names=['input', 'h0', 'c0'], output_names=['output', 'hn', 'cn'], dynamic_axes={'input': {0: 'sequence'}}) I put input as the only dynamic axis since it is the only one that can vary in size. With this code, the model saves properly as trained_model.onnx. It does give me a warning: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with LSTM can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model. warnings.warn("Exporting a model to ONNX with a batch_size other than 1, " This warning a little confusing since I am exporting it with a batch_size of 1: input has shape torch.Size([1, 9]) h0 has shape torch.Size([2, 1, 256]) - corresponding to (num_lstm_layers, batch_size, hidden_dim) c0 also has shape torch.Size([2, 1, 256]) But since I do define h0/c0 as inputs of the model I don't think this relates to the problem. This is my javascript code for running in the browser: <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script> <!-- Code that consume ONNX.js --> <script> // create a session const myOnnxSession = new onnx.InferenceSession(); console.log('trying to load the model') // load the ONNX model file myOnnxSession.loadModel("./trained_model.onnx").then(() => { console.log('successfully loaded model!') // after this I generate input and run the model // since my code fails before this it isn't relevant }); </script> Based on the console.log statements, it is failing to load to the model. How should I resolve this? If relevant I'm using Python 3.8.5, Pytorch 1.6.0, ONNX 1.8.0.
For anyone coming across this in the future, I believe I'm getting this error because even though ONNX supports Pytorch LSTM networks, ONNX.js does not support it yet. To get around this, instead of running in the browser I may use a simple web application framework called streamlit.
https://stackoverflow.com/questions/65326913/
load pytorch dataloader into GPU
Is there a way to load a pytorch DataLoader (torch.utils.data.Dataloader) entirely into my GPU? Now, I load every batch separately into my GPU. CTX = torch.device('cuda') train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=0, ) net = Net().to(CTX) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE) for epoch in range(EPOCHS): for inputs, labels in test_loader: inputs = inputs.to(CTX) # this is where the data is loaded into GPU labels = labels.to(CTX) optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() print(f'training accuracy: {net.validate(train_loader, device=CTX)}/{len(train_dataset)}') print(f'validation accuracy: {net.validate(test_loader, device=CTX)}/{len(test_dataset)}') where the Net.validate() function is given by def validate(self, val_loader, device=torch.device('cpu')): correct = 0 for inputs, labels in val_loader: inputs = inputs.to(device) labels = labels.to(device) outputs = torch.argmax(self(inputs), dim=1) correct += int(torch.sum(outputs==labels)) return correct I would like to improve the speed by loading the entire dataset trainloader into my GPU, instead of loading every batch separately. So, I would like to do something like train_loader.to(CTX) Is there an equivalent function for this? Because torch.utils.data.DataLoader does not have this attribute .to(). I work with an NVIDIA GeForce RTX 2060 with CUDA Toolkit 10.2 installed.
You can move the dataset's tensors onto the GPU in advance: train_dataset.train_data = train_dataset.train_data.to(CTX) # train_dataset.train_data is a Tensor (input data) train_dataset.train_labels = train_dataset.train_labels.to(CTX) Note that .to() is not in-place for tensors, so the result has to be assigned back. For example, with MNIST: import torch from torch.utils.data import DataLoader from torchvision import datasets from torchvision import transforms batch_size = 64 transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) train_data = datasets.MNIST( root='./dataset/minst/', train=True, download=False, transform=transform ) train_loader = DataLoader( dataset=train_data, shuffle=True, batch_size=batch_size ) train_data.train_data = train_data.train_data.to(torch.device("cuda:0")) # put the data onto the GPU entirely train_data.train_labels = train_data.train_labels.to(torch.device("cuda:0")) I found this solution by using the debugger...
https://stackoverflow.com/questions/65327247/
How can I fold a Tensor that I unfolded with PyTorch that has overlap?
I have a Tensor of size: torch.Size([1, 63840]) which I then unrolled: inp_unfolded = inp_seq.unfold(1, 160, 80) that gives me a shape of: torch.Size([1, 797, 160]) How can I re-fold that to get a Tensor of torch.Size([1, 63840])?
Well, actually the conditions, given t.unfold(i, n, s), are: n >= s (otherwise the step skips some of the original data and we cannot restore it) and n + s <= t.shape[i]. Then we can do it via: def roll(x, n, s, axis=1): return torch.cat((x[0], x[1:][:, n-s:].flatten()), axis) Explanation: x[0] is the starting chunk, which is always unique at the start; x[1:][:, n-s:] - then we take the rest of the rolls, and n-s is how many elements overlap between consecutive rolls, so we ignore those and take only the elements from index n-s on. Illustration: x.unfold(0, 5, 2) tensor([[ 1., 2., 3., 4., 5.], [ 3., 4., 5., 6., 7.], # 3, 4, 5 are repeated [ 5., 6., 7., 8., 9.], # 5, 6, 7 are repeated... [ 7., 8., 9., 10., 11.], [ 9., 10., 11., 12., 13.], [11., 12., 13., 14., 15.], [13., 14., 15., 16., 17.]]) Example: >> x = torch.arange(1., 18) >> p = x.unfold(0, 5, 2) >> roll(p, 5, 2, 0) tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17.]) You can also try it with x = torch.arange(1., 18).reshape(1, 17) and axis 1.
https://stackoverflow.com/questions/65327458/
How to train the Shared Layers in PyTorch
I have the following code: import torch import torch.nn as nn from torchviz import make_dot, make_dot_from_trace class Net(nn.Module): def __init__(self, input, output): super(Net, self).__init__() self.fc = nn.Linear(input, output) def forward(self, x): x = self.fc(x) x = self.fc(x) return x model = Net(12, 12) print(model) x = torch.rand(1, 12) y = model(x) make_dot(y, params = dict(model.named_parameters())) Here I reuse self.fc twice in the forward. The computational graph looks like this: I am confused about the computational graph, and I am curious how to train this model with backpropagation. It seems to me the gradient will live in a loop forever. Thanks a lot.
There are no issues with your graph. You can train it the same way as any other feed-forward model. Regarding looping: Since it is a directed acyclic graph, there are no actual loops (check out the arrow directions). Regarding backprop: Let's consider the fc.bias parameter. Since you are reusing the same layer two times, the bias has two outgoing arrows (it is used in two places of your net). During the backpropagation stage the direction is reversed: the bias will get gradients from two places, and these gradients will add up. Regarding the graph: An FC layer can be represented as Addmm(bias, x, T(weight)), where T is transposition and Addmm is matrix multiplication plus adding a vector. So, you can see how data (weight, bias) is passed into functions (Addmm, T). https://pytorch.org/docs/stable/generated/torch.addmm.html https://pytorch.org/docs/stable/generated/torch.t.html
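A tiny sketch (my own, not from the original answer) that makes the gradient accumulation visible - the shared layer's parameters receive the sum of the gradients from both places where the layer is used:

import torch
import torch.nn as nn

fc = nn.Linear(12, 12)
x = torch.rand(1, 12)

# use the same layer twice, exactly as in the question
y = fc(fc(x)).sum()
y.backward()

# fc.weight.grad and fc.bias.grad now hold the sum of the gradients
# coming from both usages of the layer
print(fc.bias.grad.shape)  # torch.Size([12])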
https://stackoverflow.com/questions/65328416/
pytorch torch segmentation model syntax error in jupyter notebook
i get the error: nn.MaxPool2d(kernel_size=2, stride=2), ^ SyntaxError: invalid syntax with the following code: import torch import torch.nn as nn import torch.nn.functional as F class CNNSEG(nn.Module): # Define your model def __init__(self, num_classes=1): super(CNNSEG, self).__init__() #Adds one extra class to stand for the zero-padded pixels self.num_classes = num_classes + 1 self.conv1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75), ) self.conv2 = nn.Sequential( nn.Conv2d(in_channels=32, out_channels=256, kernel_size=5, stride=1, padding=2, groups=2), nn.ReLU(),j nn.MaxPool2d(kernel_size=2, stride=2), nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75), ) self.conv3 = nn.Sequential( nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=1, padding=2, groups=2), nn.ReLU(), nn.MaxPool2d(kernel_size=3, stride=2), nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75), ) self.score_conv = nn.Conv2d(in_channels=512, out_channels=num_classes, kernel_size=1, padding=0) self.deconv = nn.ConvTranspose2d(in_channels=num_classes, out_channels=num_classes, kernel_size=16, stride=8, bias=False) self.out_activation = nn.Softmax((num_classes, 4)) def forward(self, x): out1 = self.conv1(x) out2 = self.conv2(out1) out3 = self.conv3(out2) out_score = self.score_conv(out3) out_deconv = self.deconv(out_score) return out_deconv model = CNNSEG(num_classes=4) # We can now create a model using your defined segmentation model print(model) i haven't got any open brackets as far as I can see, I don't see what can be wrong here ? thanks
When setting the second layer (self.conv2) after the ReLu, you must have mistyped a "j". That's why you get invalid syntax.
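For reference, the self.conv2 block from the question with the stray "j" removed would read:

self.conv2 = nn.Sequential(
    nn.Conv2d(in_channels=32, out_channels=256, kernel_size=5, stride=1, padding=2, groups=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75),
)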
https://stackoverflow.com/questions/65328702/
Efficiently find the dot product of two lists of vectors stored as PyTorch tensors & preserve backprop
Suppose I had tensors X and Y which are both (batch_size, d) dimensional. I would like to find the (batch_size x 1) tensor resulting from [X[0]@Y[0].T, X[1]@Y[1].T, ...] There are two ways I can think of doing this, neither of which are particularly efficient. Way 1 product = torch.eye(batch_size) * X @ Y.T product = torch.sum(product, dim=1) This works, but for large matrices there are a LOT of wasted computations Way 2 product = torch.cat( [ X[i]@Y[i].T for i in range(X.size(0)) ], dim=0 ) This is good in that no cycles are wasted, but it won't leverage any of the built-in parallelism torch offers. I'm aware that numpy has a method that will do this, but converting the tensors to np arrays will destroy the chain of backpropagation, and this is for a neural net, so that's not an option. Am I missing an obvious built in torch method, or am I stuck with these two options?
One way would be this. Simply use broadcasted matrix multiplication over reshaped row vectors of X and column vectors of Y. import torch X = X.reshape(batch_size, 1, d) Y = Y.reshape(batch_size, d, 1) product = torch.matmul(X, Y).squeeze(1) The output product will have the required shape of (batch_size, 1) with the desired result.
https://stackoverflow.com/questions/65330884/
My PyTorch GAN is changing from producing random noise to darkness with no convergence. Why is this?
My code is very basic, just following the first lab from DeepLearning.ai's GAN specialization. However, my code does not have the same output, what is the reason for this. Sorry if this is just a silly mistake, this is my first experience with GANs. I begin by creating the Generator and Discriminator classes, my random noise function, and creating my models. I then run the training loop, but after 3 epochs, all of the outputs from the GAN are black. import torch from torch import nn from tqdm.auto import tqdm from torchvision import transforms from torchvision.datasets import MNIST from torchvision.utils import make_grid from torch.utils.data import DataLoader, Dataset import matplotlib.pyplot as plt torch.manual_seed(0) def show_tensor_images(image_tensor,num_images=25,size=(1,28,28)): image_unflat=image_tensor.detach().cpu().view(-1,*size) image_grid=make_grid(image_unflat[:num_images],nrow=5) plt.imshow(image_grid.permute(1,2,0).squeeze()) plt.show() class Generator(nn.Module): def __init__(self,z_dim): super(Generator,self).__init__() self.linear1=nn.Linear(z_dim,128) self.bn1=nn.BatchNorm1d(128) self.linear2=nn.Linear(128,256) self.bn2=nn.BatchNorm1d(256) self.linear3=nn.Linear(256,512) self.bn3=nn.BatchNorm1d(512) self.linear4=nn.Linear(512,1024) self.bn4=nn.BatchNorm1d(1024) self.linear5=nn.Linear(1024,784) self.relu=nn.ReLU(True) self.sigmoid=nn.Sigmoid() def forward(self,x): x=self.linear1(x) x=self.bn1(x) x=self.relu(x) x=self.linear2(x) x=self.bn2(x) x=self.relu(x) x=self.linear3(x) x=self.bn3(x) x=self.relu(x) x=self.linear4(x) x=self.bn4(x) x=self.relu(x) x=self.linear5(x) x=self.sigmoid(x) return(x) def get_noise(n_samples,z_dim,device='cpu'): return torch.randn(n_samples,z_dim,device=device) class Discriminator(nn.Module): def __init__(self): super(Discriminator,self).__init__() self.linear1=nn.Linear(784,512) self.linear2=nn.Linear(512,256) self.linear3=nn.Linear(256,128) self.linear4=nn.Linear(128,1) self.relu=nn.LeakyReLU(0.2,True) def forward(self,x): x=self.linear1(x) x=self.relu(x) x=self.linear2(x) x=self.relu(x) x=self.linear3(x) x=self.relu(x) x=self.linear4(x) return(x) criterion=nn.BCEWithLogitsLoss() epochs=200 z_dim=64 display_step=500 batch_size=128 lr=0.00001 device='cuda' dataloader=DataLoader(MNIST('.',download=True,transform=transforms.ToTensor()),batch_size=batch_size,shuffle=True) gen=Generator(z_dim).to(device) gen_opt=torch.optim.Adam(gen.parameters(),lr=lr) disc=Discriminator().to(device) disc_opt=torch.optim.Adam(disc.parameters(),lr=lr) def get_disc_loss(gen,disc,criterion,real,num_images,z_dim,device): noise=get_noise(num_images,z_dim,device=device) gen_out=gen(noise) disc_fake_out=disc(gen_out.detach()) fake_loss=criterion(disc_fake_out,torch.zeros_like(disc_fake_out)) disc_real_out=disc(real) real_loss=criterion(disc_real_out,torch.zeros_like(disc_real_out)) disc_loss=(fake_loss+real_loss)/2 return(disc_loss) def get_gen_loss(gen,disc,criterion,num_images,z_dim,device): noise=get_noise(num_images,z_dim,device=device) gen_out=gen(noise) disc_out=disc(gen_out) loss=criterion(disc_out,torch.ones_like(disc_out)) return loss cur_step=0 mean_generator_loss=0 mean_discriminator_loss=0 gen_loss=False error=False for epoch in range(epochs): for x,_ in tqdm(dataloader): cur_batch_size=len(x) x=x.view(cur_batch_size,-1).to(device) disc_opt.zero_grad() disc_loss=get_disc_loss(gen,disc,criterion,x,cur_batch_size,z_dim,device) disc_loss.backward(retain_graph=True) disc_opt.step() gen_opt.zero_grad() 
gen_loss=get_gen_loss(gen,disc,criterion,cur_batch_size,z_dim,device) gen_loss.backward() gen_opt.step() mean_discriminator_loss+=disc_loss.item()/display_step mean_generator_loss+=gen_loss.item()/display_step if cur_step%display_step==0 and cur_batch_size>0: print(f"Step {cur_step}: Generator loss: {mean_generator_loss}, discriminator loss: {mean_discriminator_loss}") fake_noise = get_noise(cur_batch_size, z_dim, device=device) fake = gen(fake_noise) show_tensor_images(fake) show_tensor_images(x) mean_generator_loss = 0 mean_discriminator_loss = 0 cur_step += 1
Your discriminator loss is wrong. The labels for the real images should be 1 instead of 0. Updated code: def get_disc_loss(gen,disc,criterion,real,num_images,z_dim,device): noise=get_noise(num_images,z_dim,device=device) gen_out=gen(noise) disc_fake_out=disc(gen_out.detach()) fake_loss=criterion(disc_fake_out,torch.zeros_like(disc_fake_out)) disc_real_out=disc(real) real_loss=criterion(disc_real_out,torch.ones_like(disc_real_out)) disc_loss=(fake_loss+real_loss)/2 return(disc_loss) The output image looks pretty good to me:
https://stackoverflow.com/questions/65333797/
Trying to understand PyTorch SmoothL1Loss Implementation
I have been trying to go through all of the loss functions in PyTorch and build them from scratch to gain a better understanding of them and I’ve run into what is either an issue with my recreation, or an issue with PyTorch’s implementation. According to Pytorch’s documentation for SmoothL1Loss it simply states that if the absolute value of the prediction minus the ground truth is less than beta, we use the top equation. Otherwise, we use the bottom one. Please see documentation for the equations. Below is my implementation of this in the form of a minimum test: import torch import torch.nn as nn import numpy as np predictions = torch.randn(3, 5, requires_grad=True) target = torch.randn(3, 5) def l1_loss_smooth(predictions, targets, beta = 1.0): loss = 0 for x, y in zip(predictions, targets): if abs(x-y).mean() < beta: loss += (0.5*(x-y)**2 / beta).mean() else: loss += (abs(x-y) - 0.5 * beta).mean() loss = loss/predictions.shape[0] output = l1_loss_smooth(predictions, target) print(output) Gives an output of: tensor(0.7475, grad_fn=<DivBackward0>) Now the Pytorch implementation: loss = nn.SmoothL1Loss(beta=1.0) output = loss(predictions, target) Gives an output of: tensor(0.7603, grad_fn=<SmoothL1LossBackward>) I can’t figure out where the error in implementation lies. Upon looking a little deeper into the smooth_l1_loss function in the _C module (file: smooth_c_loss_op.cc) I noticed that the doc string mentions that it’s a variation on Huber Loss but the documentation for SmoothL1Loss says it is Huber Loss. So overall, just confused on how it’s implemented and whether it’s a combo of SmoothL1Loss and Huber Loss, Just Huber Loss, or something else.
The description in the documentation is correct. Your implementation wrongly applies the case selection on the mean of the data. It should be an element-wise selection instead (if you think about the implementation of the vanilla L1 loss, and the motivation for smooth L1 loss). The following code gives a consistent result: import torch import torch.nn as nn import numpy as np predictions = torch.randn(3, 5, requires_grad=True) target = torch.randn(3, 5) def l1_loss_smooth(predictions, targets, beta = 1.0): loss = 0 diff = predictions-targets mask = (diff.abs() < beta) loss += mask * (0.5*diff**2 / beta) loss += (~mask) * (diff.abs() - 0.5*beta) return loss.mean() output = l1_loss_smooth(predictions, target) print(output) loss = nn.SmoothL1Loss(beta=1.0) output = loss(predictions, target) print(output)
https://stackoverflow.com/questions/65335024/
calculate two losses in a model and backpropagate twice
I'm creating a model using BertModel to identify answer span (without using BertForQA). I have an indepent linear layer for determining start and end token respectively. In init(): self.start_linear = nn.Linear(h, output_dim) self.end_linear = nn.Linear(h, output_dim) In forward(), I output a predicted start layer and predicted end layer: def forward(self, input_ids, attention_mask): outputs = self.bert(input_ids, attention_mask) # input = bert tokenizer encoding lhs = outputs.last_hidden_state # (batch_size, sequence_length, hidden_size) out = lhs[:, -1, :] # (batch_size, hidden_dim) st = self.start_linear(out) end = self.end_linear(out) predict_start = self.softmax(st) predict_end = self.softmax(end) return predict_start, predict_end Then in train_epoch(), I tried to backpropagate the losses separately: def train_epoch(model, train_loader, optimizer): model.train() total = 0 st_loss, st_correct, st_total_loss = 0, 0, 0 end_loss, end_correct, end_total_loss = 0, 0, 0 for batch in train_loader: optimizer.zero_grad() input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) start_idx = batch['start'].to(device) end_idx = batch['end'].to(device) start, end = model(input_ids=input_ids, attention_mask=attention_mask) st_loss = model.compute_loss(start, start_idx) end_loss = model.compute_loss(end, end_idx) st_total_loss += st_loss.item() end_total_loss += end_loss.item() # perform backward propagation to compute the gradients st_loss.backward() end_loss.backward() # update the weights optimizer.step() But then I got on the line of end_loss.backward(): Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time. Am I supposed to do the backward pass separately? Or should I do it in another way? Thank you!
The standard procedure is just to sum both losses and backpropagate on the sum. It can be important to make sure the losses you sum have values of roughly the same magnitude on average, or at least values proportional to the importance you want each to have relative to the other (otherwise, the model is going to optimize for the bigger loss more than for the smaller one). In the span-detection case, I'm guessing this won't be necessary due to the apparent symmetry of the problem.
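In code, using the variable names from the question's training loop, that could look like this (a sketch, not tested against your model):

st_loss = model.compute_loss(start, start_idx)
end_loss = model.compute_loss(end, end_idx)

loss = st_loss + end_loss  # optionally weight the two terms
loss.backward()            # a single backward pass through the shared encoder
optimizer.step()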
https://stackoverflow.com/questions/65337804/
Tensor format issue from converting Pytorch -> Onnx -> Tensorflow
I have an issue with Tensorflow model that is converted from Pytorch -> Onnx -> Tensorflow. The issue is the converted Tensorflow model expects the input in Pytorch format that is (batch size, number channels, height, width) but not in Tensorflow format (batch size, height, width, number channel). Therefore, I cannot use the model to process further with Vitis AI. So I would like to ask is there is any ways to convert this Pytorch input format to Tensorflow format by using tools from Onnx, Tensorflow 1, or others? My code is as below: Pytorch -> Onnx from hardnet import hardnet import torch import onnx ckpt = torch.load('../hardnet.pth') model_state_dict = ckpt['model_state_dict'] optimizer_state_dict = ckpt['optimizer_state_dict'] model = hardnet(11) model.load_state_dict(model_state_dict) model.eval() dummy_input = torch.randn(1, 3, 1080, 1920) input_names = ['input0'] output_names = ['output0'] output_file = 'hardnet.onnx' torch.onnx.export(model, dummy_input, output_file, verbose=True, input_names=input_names, output_names=output_names, opset_version=11, keep_initializers_as_inputs=True) onnx_model = onnx.load(output_file) onnx.checker.check_model(onnx_model) print('Passed Onnx') Onnx -> Tensorflow 1 (using Tensorflow 1.15) import cv2 import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import onnx from onnx_tf.backend import prepare output_file = 'hardnet.onnx' onnx_model = onnx.load(output_file) output = prepare(onnx_model) output.export_graph('hardnet.pb') tf.compat.v1.disable_eager_execution() def load_pb(path_to_pb: str): """From: https://stackoverflow.com/questions/51278213/what-is-the-use-of-a-pb-file-in-tensorflow-and-how-does-it-work """ with tf.gfile.GFile(path_to_pb, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def, name='') return graph graph = load_pb('hardnet.pb') input = graph.get_tensor_by_name('input0:0') output = graph.get_tensor_by_name('output0:0') mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] img = cv2.imread('train_0.jpg', cv2.IMREAD_COLOR) img = cv2.resize(img, (1920, 1080)) img = img/255 img = img - mean img = img/std img = np.expand_dims(img, -1) # To Pytorch format. img = np.transpose(img, (3, 2, 0, 1)) img = img with tf.Session(graph=graph) as sess: pred = sess.run(output, {input: img})
You could wrap your Pytorch model into another one that would do the transpose you want to have in TensorFlow. See the following example: Let's say you have the following toy NN: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.rnn = nn.LSTM(10, 20, 2) def forward(self, x): h0 = torch.zeros(2, 3, 20) c0 = torch.zeros(2, 3, 20) return self.rnn(x, (h0, c0)) the exemplary pytorch/tensorflow input shape would be : >> pytorch_input = torch.randn(5, 3, 10) >> tf_input = torch.transpose(pytorch_input, 1, 2) >> print("PyTorch input shape: ", pytorch_input.shape) >> print("TensorFlow input shape: ", tf_input.shape) PyTorch input shape: torch.Size([5, 3, 10]) TensorFlow input shape: torch.Size([5, 10, 3]) Now, the wrapper which will first transpose input and then pass transposed input to some model: class NetTensorFlowWrapper(nn.Module): def __init__(self, main_module: nn.Module): super(NetTensorFlowWrapper, self).__init__() self.main_module = main_module def forward(self, x): x = torch.transpose(x, 1, 2) return self.main_module(x) Then, this is possible: net = Net() net_wrapper = NetTensorFlowWrapper(net) net(pytorch_input) net_wrapper(tf_input) and then, when you finally save your models like you did previously via torch.onnx.export and read their graph via onnx package (not torch.onnx) you will have... for Net- input 5x3x10 and no transpose layer graph torch-jit-export ( %input0[FLOAT, 5x3x10] { %76 = Shape(%input0) %77 = Constant[value = <Scalar Tensor []>]() for NetTensorFlowWrapper- input 5x10x3 and transpose layer graph torch-jit-export ( %input0[FLOAT, 5x10x3] { %9 = Transpose[perm = [0, 2, 1]](%input0) %77 = Shape(%9) %78 = Constant[value = <Scalar Tensor []>]() ...
https://stackoverflow.com/questions/65338639/
Multi Head Attention: Correct implementation of Linear Transformations of Q, K, V
I am implementing the Multi-Head Self-Attention in Pytorch now. I looked at a couple of implementations and they seem a bit wrong, or at least I am not sure why it is done the way it is. They would often apply the linear projection just once: self.query_projection = nn.Linear(input_dim, output_dim) self.key_projection = nn.Linear(input_dim, output_dim) self.value_projection = nn.Linear(input_dim, output_dim) and then they would often reshape the projection as query_heads = query_projected.view(batch_size, query_lenght, head_count, head_dimension).transpose(1,2) key_heads = key_projected.view(batch_size, key_len, head_count, head_dimension).transpose(1, 2) # (batch_size, heads_count, key_len, d_head) value_heads = value_projected.view(batch_size, value_len, head_count, head_dimension).transpose(1, 2) # (batch_size, heads_count, value_len, d_head) attention_weights = scaled_dot_product(query_heads, key_heads) According to this code, each head will work on a piece of a projected query. However, the initial paper says that we need to have a different Linear projection for each head in the encoder. Is this displayed implementation correct?
They are equivalent. Theoretically (and in paper writing), it is easier to consider them as separate linear projections. Say you have 8 heads and each head has an M->N projection; then you have eight N-by-M matrices. In the implementation, though, it is faster to do a single M->8N transformation with one 8N-by-M matrix. One can concatenate the matrices of the first formulation to obtain the matrix of the second formulation.
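A small sketch of this equivalence in PyTorch (the sizes and names here are made up for illustration):

import torch
import torch.nn as nn

batch, seq_len, d_model, heads, d_head = 2, 4, 16, 8, 2  # 8 * 2 == 16

# one big projection ...
proj = nn.Linear(d_model, heads * d_head)
x = torch.randn(batch, seq_len, d_model)
q = proj(x).view(batch, seq_len, heads, d_head).transpose(1, 2)  # (batch, heads, seq_len, d_head)

# ... head 0 is the same computation as a separate (d_model -> d_head) projection
# whose weight and bias are the first d_head rows of the big projection
q0 = x @ proj.weight[:d_head].T + proj.bias[:d_head]
print(torch.allclose(q[:, 0], q0, atol=1e-6))  # True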
https://stackoverflow.com/questions/65340088/
Pytorch RNN model not learning anything
Task: Predicting whether provided disaster tweets are real or not. Have already converted my textual data into tensors and then into train_loader. All the required code is mentioned below. My Model Architecture class RealOrFakeLSTM(nn.Module): def __init__(self, input_size, output_size, embedding_dim, hidden_dim, n_layers, bidirec, drop_prob): super().__init__() self.output_size=output_size self.n_layers=n_layers self.hidden_dim=hidden_dim self.bidirec=True; self.embedding=nn.Embedding(vocab_size, embedding_dim) self.lstm1=nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=drop_prob, batch_first=True, bidirectional=bidirec) #self.lstm2=nn.LSTM(hidden_dim, hidden_dim, n_layers, dropout=drop_prob, batch_first=True) self.dropout=nn.Dropout(drop_prob) self.fc=nn.Linear(hidden_dim, output_size) self.sigmoid=nn.Sigmoid() def forward(self, x): batch=len(x) hidden1=self.init_hidden(batch) #hidden2=self.init_hidden(batch) embedd=self.embedding(x) lstm_out1, hidden1=self.lstm1(embedd, hidden1) #lstm_out2, hidden2=self.lstm2(lstm_out1, hidden2) lstm_out1=lstm_out1.contiguous().view(-1, self.hidden_dim) # make it lstm_out2, if you un comment the other lstm cell. out=self.dropout(lstm_out1) out=self.fc(out) sig_out=self.sigmoid(out) sig_out=sig_out.view(batch, -1) sig_out=sig_out[:, -1] return sig_out def init_hidden(self, batch): if (train_on_gpu): if self.bidirec==True: hidden=(torch.zeros(self.n_layers*2, batch, self.hidden_dim).cuda(),torch.zeros(self.n_layers*2, batch, self.hidden_dim).cuda()) else: hidden=(torch.zeros(self.n_layers, batch, self.hidden_dim).cuda(),torch.zeros(self.n_layers, batch, self.hidden_dim).cuda()) else: if self.bidirec==True: hidden=(torch.zeros(self.n_layers*2, batch, self.hidden_dim),torch.zeros(self.n_layers*2, batch, self.hidden_dim)) else: hidden=(torch.zeros(self.n_layers, batch, self.hidden_dim),torch.zeros(self.n_layers, batch, self.hidden_dim)) return hidden Hyper parameters and training learning_rate=0.005 epochs=50 vocab_size = len(vocab_to_int)+1 # +1 for the 0 padding output_size = 2 embedding_dim = 300 hidden_dim = 256 n_layers = 2 batch_size=23 net=RealOrFakeLSTM(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, True, 0.3) net.to(device) criterion=nn.BCELoss() optimizer=torch.optim.Adam(net.parameters(),lr=learning_rate) net.train() loss_arr=np.array([]) lossPerEpoch=np.array([]) for i in range(epochs): total_loss=0; for input,label in train_loader: if train_on_gpu: input=input.to(device) label=label.to(device) optimizer.zero_grad() input=input.clone().detach().long() out=net(input) loss=criterion(out.squeeze(),label.float()) loss_arr=np.append(loss_arr,loss.cpu().detach().numpy()) loss.backward() optimizer.step() total_loss+=loss total_loss=total_loss/len(train_loader) lossPerEpoch=np.append(lossPerEpoch,total_loss.cpu().detach().numpy()) print("Epoch ",i,": ",total_loss) torch.save(net.state_dict(), Path+"/RealOrFakeLSTM.pt") torch.save(net, Path+"/RealOrFakeLSTM.pth") current_time=str(time.time()) torch.save(net.state_dict(), Path+"/pt/RealOrFakeLSTM"+'_pt_'+current_time+".pt") torch.save(net, Path+"/pth/RealOrFakeLSTM"+'_pth_'+current_time+".pth") The total loss values are all almost same, All the outcomes probabilities in the test dataset are exactly same. I am quite new to this, so hyper parameter tuning, i am kinda going with bruteforce, but nothing seems to work, I think my problem is not with the architecture but with the training part, as all the predictions are exactly same.
From what I can tell, you are re-initializing the hidden state (hidden1=self.init_hidden(batch)) in every forward pass. That should not be correct. Re-initializing the hidden state on every forward pass would explain the behavior you described.
https://stackoverflow.com/questions/65340258/
Classes in Coco dataset
I have been checking out this detr repository, and the total number of classes is 100, but 10 of these are empty strings, as shown here. Is there any particular reason behind this?
Basically, the COCO dataset was described in a paper before its release (you can find it here). At that point, the authors gave a list of the 91 types of objects that would be in the dataset. But when the 2014 and 2017 datasets were released, it turned out that you could find only 80 of these objects in the annotations. The list you have is the original list of objects (as described in the paper), but with every object that does not appear in the 2014 and 2017 releases replaced by the empty string "". My guess is that the sole purpose of keeping these "phantom" objects is to keep consistency with object ids that may have been fixed someday in the past. If you want to learn more about it, you can look at this blog entry.
https://stackoverflow.com/questions/65340780/
How to use different optimizers for each model layer Pytorch?
I have a two layer network built in pytorch and two two different optimizers. I would like to use one optimizer on the first layer and the other optimizers on the second layer. Is this possible?
Yes this is possible: When initializing an optimizer you need to pass it the parameters that you want to optimize which is where you have to do this division. For instance: import torch.nn as nn import torch.optim net = nn.Sequential( nn.Linear(1, 3), nn.Linear(3, 5), nn.Linear(5, 1) ) opt1 = torch.optim.Adam(params=net[0].parameters(), lr=0.1) opt2 = torch.optim.Adam(params=[*net[1].parameters(), *net[2].parameters()], lr=0.001)
https://stackoverflow.com/questions/65342928/
How to check if a model is in train or eval mode in Pytorch?
How to check from within a model if it is currently in train or eval mode?
From the Pytorch forum, with a small tweak: use if self.training: # it's in train mode else: # it's in eval mode Always better to have a stack overflow answer than to look at forums. Explanation about the modes
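The same flag can also be checked from outside the model, since every nn.Module carries a training attribute:

model = torch.nn.Linear(2, 2)
model.eval()
print(model.training)  # False
model.train()
print(model.training)  # True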
https://stackoverflow.com/questions/65344578/
Getting range of values from Pytorch Tensor
I am trying to get a specific range of values from my pytorch tensor. tensor=torch.tensor([0,1,2,3,4,5,6,7,8,9]) new_tensor=tensor[tensor>2] print(new_tensor) This will give me a tensor with scalars of 3-9 new_tensor2=tensor[tensor<8] print(new_tensor2) This will give me a tensor with scalars of 0-7 new_tensor3=tensor[tensor>2 and tensor<8] print(new_tensor3) However this raises an error. Would I be able to get a tensor with the values of 3-7 using something like this? I am trying to edit the tensor directly, and do not wish to change the order of the tensor itself. grad[x<-3]=0.1 grad[x>2]=1 grad[(x>=-3 and x<=2)]=siglrelu(grad[(x>=-3 and x<=2)])*(1.0-siglrelu(grad[(x>=-3 and x<=2)])) This is what I am really going for, and I am not exactly sure of how to go about this. Any help is appreciated, thank you!
You can use & operation, t = torch.arange(0., 10) print(t) print(t[(t > 2) & (t < 8)]) Output is, tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) tensor([3., 4., 5., 6., 7.])
https://stackoverflow.com/questions/65349544/
How to convert tensorflow.js model weights to pytorch tensors, and back?
I am using ml5.js, a wrapper around tensorflowjs. I want to train a neural network in the browser, download the weights, process them as tensors in pyTorch, and load them back into the browser's tensorflowjs model. How do I convert between these formats tfjs <-> pytorch? The browser model has a save() function which generates three files. A metadata file specific to ml5.js (json), a topology file describing model architecture (json), and a binary weights file (bin). // Browser model.save() // HTTP/Download model_meta.json (needed by ml5.js) model.json (needed by tfjs) model.weights.bin (needed by tfjs) # python backend import json with open('model.weights.bin', 'rb') as weights_file: with open('model.json', 'rb') as model_file: weights = weights_file.read() model = json.loads(model_file.read()) #### pytorch_tensor = convert2tensor(weights, model) # whats in this function? #### # Do some processing in pytorch #### new_weights_bin = convert2bin(pytorch_tensor, model) # and in this? #### Here is sample javascript code to generate and load the 3 files in the browser. To load, select all 3 files at once in the dialog box. If they are correct, a popup will show a sample prediction.
I was able to find a way to convert from tfjs model.weights.bin to numpy's ndarrays. It is trivial to convert from numpy arrays to a pytorch state_dict, which is a dictionary of tensors and their names. Theory First, the tfjs representation of the model should be understood. model.json describes the model. In python, it can be read as a dictionary. It has the following keys: The model architecture is described as another json/dictionary under the key modelTopology. It also has a json/dictionary under the key weightsManifest which describes the type/shape/location of each weight wrapped up in the corresponding model.weights.bin file. As an aside, the weights manifest allows for multiple .bin files to store weights. Tensorflow.js has a companion python package tensorflowjs, which comes with utility functions to read and write weights between the tf.js binary and numpy array format. Each weight file is read as a "group". A group is a list of dictionaries with keys name and data which refer to the weight name and the numpy array containing weights. There are optionally other keys too. group = [{'name': weight_name, 'data': np.ndarray}, ...] # 1 *.bin file Application Install tensorflowjs. Unfortunately, it will also install tensorflow. pip install tensorflowjs Use these functions. Note that I changed the signatures for convenience. from typing import Dict, ByteString import numpy as np import torch from tensorflowjs.read_weights import decode_weights from tensorflowjs.write_weights import write_weights def convert2tensor(weights: ByteString, model: Dict) -> Dict[str, torch.Tensor]: manifest = model['weightsManifest'] # If flatten=False, returns a list of groups equal to the number of .bin files. # Use flatten=True to convert to a single group group = decode_weights(manifest, weights, flatten=True) # Convert dicts in tfjs group format into pytorch's state_dict format: # {name: str, data: ndarray} -> {name: tensor} state_dict = {d['name']: torch.from_numpy(d['data']) for d in group} return state_dict def convert2bin(state_dict: Dict[str, np.ndarray], model: Dict, directory='./'): # convert state_dict to groups (list of 1 group) groups = [[{'name': key, 'data': value} for key, value in state_dict.items()]] # this library function will write to .bin file[s], but you can read it back # or change the function internals by copying them from source write_weights(groups, directory, write_manifest=False)
https://stackoverflow.com/questions/65350949/
PyTorch : Aggregate two models
Hello and greetings from Greece class Model(nn.Module): def __init__(self, embedding_size, num_numerical_cols, output_size, layers, p=0.4): super().__init__() self.all_embeddings = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in embedding_size]) self.embedding_dropout = nn.Dropout(p) self.batch_norm_num = nn.BatchNorm1d(num_numerical_cols) all_layers = [] num_categorical_cols = sum((nf for ni, nf in embedding_size)) input_size = num_categorical_cols + num_numerical_cols for i in layers: all_layers.append(nn.Linear(input_size, i)) all_layers.append(nn.ReLU(inplace=True)) all_layers.append(nn.BatchNorm1d(i)) all_layers.append(nn.Dropout(p)) input_size = i all_layers.append(nn.Linear(layers[-1], output_size)) self.layers = nn.Sequential(*all_layers) def forward(self, x_categorical, x_numerical): embeddings = [] for i,e in enumerate(self.all_embeddings): embeddings.append(e(x_categorical[:,i])) x = torch.cat(embeddings, 1) x = self.embedding_dropout(x) x_numerical = self.batch_norm_num(x_numerical) x = torch.cat([x, x_numerical], 1) x = self.layers(x) return x Suppose I have this nn for classification and I create two instances model_1=Model(categorical_embedding_sizes, numerical_data.shape[1], 2, [200,100,50], p=0.4) model_2=Model(categorical_embedding_sizes, numerical_data.shape[1], 2, [200,100,50], p=0.4) And after I trained these two models I saved them with torch.save as model_1.pt and model_2.pt. Is there a way to create a new model with the mean parameters of the two models? Something like model_new.weight=(model_1.weight+model_2.weight)/2 model_new.bias=(model_1.bias+model_2.bias)/2 Thank you in advance
You can easily do this by generating a state dictionary from your two models' state dictionaries: state_1 = model_1.state_dict() state_2 = model_2.state_dict() for layer in state_1: state_1[layer] = (state_1[layer] + state_2[layer])/2 The above will loop through the parameters (weights and biases) of all layers. Then overwrite this new state on either model_1 or a newly instanced model, like so: model_new = Model(categorical_embedding_sizes, numerical_data.shape[1], 2, [200,100,50], p=0.4) model_new.load_state_dict(state_1)
https://stackoverflow.com/questions/65354978/
How to check if a small tensor is inside anthor large tensor
a = torch.tensor([[1,1],[2,2]]) I want to know if tensor([1,1]) is inside a (returning one bool). a.eq(torch.tensor([1,1])) tensor([[ True, True], [False, False]]) -> which should return True in my case. a.eq(torch.tensor([1,2])) tensor([[ True, False], [False, True]]) -> which should return False in my case. Any suggestions?
You could convert it to a list and then check via the in operator: p = torch.arange(1, 5).reshape((2,2)) k = torch.tensor([1,2]) v = torch.tensor([1,3]) >> k.tolist() in p.tolist() True >> v.tolist() in p.tolist() False Or if you want to do it via torch only (with .any() collapsing the per-row result into a single bool): (torch.count_nonzero(k == p, dim=1) == len(k)).any()
https://stackoverflow.com/questions/65358256/
Pytorch how to reshape/reduce the number of filters without altering the shape of the individual filters
With a 3D tensor of shape (number of filters, height, width), how can one reduce the number of filters with a reshape which keeps the original filters together as whole blocks? Assume the new size has dimensions chosen such that a whole number of the original filters can fit side by side in one of the new filters. So an original size of (4, 2, 2) can be reshaped to (2, 2, 4). A visual explanation of the side by side reshape where you see the standard reshape will alter the individual filter shapes: I have tried various pytorch functions such as gather and select_index but not found a way to get to the end result in a general manner (i.e. works for different numbers of filters and different filter sizes). I think it would be easier to rearrange the tensor values after performing the reshape but could not get a tensor of the pytorch reshaped form: [[[1,2,3,4], [5,6,7,8]], [[9,10,11,12], [13,14,15,16]]] to: [[[1,2,5,6], [3,4,7,8]], [[9,10,13,14], [11,12,15,16]]] for completeness, the original tensor before reshaping: [[[1,2], [3,4]], [[5,6], [7,8]], [[9,10], [11,12]], [[13,14], [15,16]]]
Another option is to construct a list of parts and concatenate them x = torch.arange(4).reshape(4, 1, 1).repeat(1, 2, 2) y = torch.cat([x[i::2] for i in range(2)], dim=2) print('Before\n', x) print('After\n', y) which gives Before tensor([[[0, 0], [0, 0]], [[1, 1], [1, 1]], [[2, 2], [2, 2]], [[3, 3], [3, 3]]]) After tensor([[[0, 0, 1, 1], [0, 0, 1, 1]], [[2, 2, 3, 3], [2, 2, 3, 3]]]) Or a little more generally we could write a function that takes groups of neighbors along a source dimension and concatenates them along a destination dimension def group_neighbors(x, group_size, src_dim, dst_dim): assert x.shape[src_dim] % group_size == 0 return torch.cat([x[[slice(None)] * (src_dim) + [slice(i, None, group_size)] + [slice(None)] * (len(x.shape) - (src_dim + 2))] for i in range(group_size)], dim=dst_dim) x = torch.arange(4).reshape(4, 1, 1).repeat(1, 2, 2) # read as "take neighbors in groups of 2 from dimension 0 and concatenate them in dimension 2" y = group_neighbors(x, group_size=2, src_dim=0, dst_dim=2) print('Before\n', x) print('After\n', y)
https://stackoverflow.com/questions/65361912/
PyTorch with yolov5: color channel and result display
I have a script that grabs an application's screenshot and displays it. it works quite nicely on my machine like a video with around 60FPS. Now I want to use a yolov5 object detection model on these frames, with TorchHub, as advised here. The following works: import os os.getcwd() from PIL import ImageGrab import numpy as np import cv2 import pyautogui import win32gui import time from mss import mss from PIL import Image import tempfile os.system('calc') sct = mss() xx=1 tstart = time.time() while xx<10000: hwnd = win32gui.FindWindow(None, 'Calculator') left_x, top_y, right_x, bottom_y = win32gui.GetWindowRect(hwnd) #screen = np.array(ImageGrab.grab( bbox = (left_x, top_y, right_x, bottom_y ) ) ) bbox = {'top': top_y, 'left': left_x, 'width': right_x-left_x, 'height':bottom_y-top_y } screen = sct.grab(bbox) scr = np.array(screen) cv2.imshow('window', scr) if cv2.waitKey(25) & 0xFF == ord('q'): cv2.destroyAllWindows() break xx+=1 cv2.destroyAllWindows() tend = time.time() print(xx/(tend-tstart)) print((tend-tstart)) os.system('taskkill /f /im calculator.exe') Below I try to import torch and use my previously trained model, screen = sct.grab(bbox) scr = np.array(screen) result = model(scr, size=400) result.save("test.png") #this gives a TypeError: save() takes 1 positional argument but 2 were given result.show() #this opens a new Paint instance for every frame instead of keeping the same window. # The shown image is also in a wrong color channel scr = cv2.imread("test.png") # How can I use the `result` as argument to cv2.imshow(), # without saving to disk if possible? My questions: result.show() shows an image with wrong color channel compared to cv2.imshow(), how can I ensure that the image being fed to model is on the correct channel? The performance of classification and detection drastically decrease compared to the training validation, perhaps because of 1? Do you know how I can display the result model image with bounding boxes in a single window like what cv2.imshow() does ? (result.show() opens a new Paint process instance for each frame) ? How can I save this result image to disk and find more documentation on how to interact with model objects in general?
The following worked: result = model(cv2.cvtColor(scr, cv2.COLOR_BGR2RGB), size=400) This solved the accuracy problem. result.save() has pre-defined output names which are not currently changeable; it takes no arguments. result.show() displays the correct colors when it is fed input with the correct color channels.
https://stackoverflow.com/questions/65363565/
What parameters do I change to train a pytorch model from scratch?
I followed this tutorial to train a pytorch model for instance segmentation: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html I would now like to train a model on entirely different data and classes, totally unrelated to COCO. What changes do I need to make to retrain the model? From my reading, I'm guessing that besides having the correct number of classes I just need to change this line: model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) to model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False) But I notice there are other parameters: pretrained_backbone=True, trainable_backbone_layers=None. Should they be changed too?
The function signature is torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=3, **kwargs) Setting pretrained=False tells PyTorch not to download the model pre-trained on COCO train2017. You want that, as you're interested in training. Usually, this is enough if you want to train on a different dataset. When you set pretrained=False but leave pretrained_backbone=True (its default), PyTorch will still download a ResNet50 backbone pretrained on ImageNet, and by default it will freeze the first two blocks, named conv1 and layer1. This is how it was done in the Faster R-CNN paper, which froze the initial layers of the pretrained backbone. (Just print the model to check its structure.) layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers] Now, if you don't even want the first two layers to freeze, you can set trainable_backbone_layers=5 (done automatically when you set pretrained_backbone=False), which will train the entire ResNet backbone from scratch. Check PR#2160.
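Putting that together, a from-scratch setup could look like the sketch below (the num_classes value is just a placeholder for your own dataset):

import torchvision

# no COCO weights, no ImageNet backbone, all 5 backbone blocks trainable
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    pretrained=False,
    pretrained_backbone=False,
    num_classes=3,  # e.g. 2 real classes + background
    trainable_backbone_layers=5,
)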
https://stackoverflow.com/questions/65365588/
(Numpy or PyTorch) Sum array elements for given bins
I would like this problem to be solved using PyTorch tensors. If there is no efficient solution in torch, then feel free to suggest a numpy solution. Let a be a 1-dimensional tensor (or numpy array), and bin_indices be a tensor (np array) of integers between 0 and n excluded. I want to compute the array bins that at position i contains the sum of elements of a[bins_indices == i]. n = 3 a = [1, 4, 3, -2, 5] # Values bins_indices = [0, 0, 1, 2, 0] # Correspondent bin indices bins = [10, 3, -2] # bins[0] = 1 + 4 + 5 etc. bins has 3 elements since n=3 If you can provide also a way of making this work for batches I would be immensely grateful to you!
Here's a one-line NumPy solution I could think of (with a and bins_indices as NumPy arrays): bins = [np.sum(a[np.argwhere(bins_indices == i).flatten()]) for i in range(n)]
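Since the question asked for a PyTorch solution first, here is a torch-only sketch with no Python loop over the bins (variable names follow the question; the batched variant is my own illustration):

import torch

a = torch.tensor([1., 4., 3., -2., 5.])
bins_indices = torch.tensor([0, 0, 1, 2, 0])
n = 3

# 1D case: scatter-add each value into its bin
bins = torch.zeros(n).index_add_(0, bins_indices, a)
# tensor([10., 3., -2.])

# batched case: inputs of shape (batch_size, length)
a_b = a.unsqueeze(0).repeat(2, 1)
idx_b = bins_indices.unsqueeze(0).repeat(2, 1)
bins_b = torch.zeros(2, n).scatter_add_(1, idx_b, a_b)  # shape (batch_size, n)

Both operations are differentiable with respect to a, so gradients flow through the binning.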
https://stackoverflow.com/questions/65373512/
No module named torch
Dear fellow humans and others, I have a question about using and importing PyTorch in my editor. I installed PyTorch through conda, and in my terminal I can use PyTorch (see image), but not in Visual Studio Code or PyCharm! I get a "No module named torch" error in VSCode. Why is that? Very sorry for the basic question and thank you for sparing some time! Should I use pip?
Reason: The environment where the module "torch" is stored is not the environment you currently select in VSCode. For installing and using the module "torch" in VSCode, you could refer to the following: Check the environment. The Python environment displayed in the lower left corner of VSCode is the same as that of the terminal. Install the module. (pip install torch) Run. Check the installation of the module. (pip show torch) Reference: Environment in VSCode.
https://stackoverflow.com/questions/65375934/
How to check if a tensor is on cuda or send it to cuda in Pytorch?
I have a tensor t = torch.zeros((4, 5, 6)) How to check if it is on gpu or not, and send it to gpu and back?
From the pytorch forum use t.is_cuda, t.cuda(), t.cpu() t = torch.randn(2,2) t.is_cuda # returns False t = torch.randn(2,2).cuda() t.is_cuda # returns True t = t.cpu() t.is_cuda # returns False When passing to and from gpu and cpu, new arrays are allocated on the relevant device.
https://stackoverflow.com/questions/65381244/
How can I crop away a tensor’s constant value padding (padding height and width are the same) with an unknown value and size?
How can I crop away a tensor’s constant value padding (padding height and width are the same) with an unknown value and size? I would think that because the padding surrounding my tensor has a constant value, and the same height / width, that it should be possible to know where to crop the tensor to remove the padding. import torch # Test tensor with NCHW dimensions a = torch.randn(1,4,5,5) # Can have any H & W size # Create padding for testing b = torch.nn.functional.pad(a, (2,2,2,2), 'constant', value=1.2) # Value can be any number c = # equal to a, without being able to use the variables a or b (or their argument values) NumPy solutions are acceptable as I can easily convert them to PyTorch. Edit: pad = torch.where(b[0, 0] - b[0, 0, 0, 0] != 0)[0][0] x_pad, y_pad = 0, 0 if (b.size(3) % 2) == 0: x_pad = 1 if (b.size(2) % 2) == 0: y_pad = 1 c = b[:, :, pad : -(pad + y_pad), pad : -(pad + x_pad)] assert c == a
You can get an idea of the content of a feature map by taking its middle row and measuring the padding by looking for the first element change: midrow = b[0, 0, b.shape[3]//2, :] pad = (midrow[:-1] == midrow[:1])[:midrow.shape[0]//2].sum() Alternatively, you could subtract the padding value from one of the feature maps and find the first non-zero position, which would be the padding size: pad = torch.where(b[0,0] - b[0,0,0,0] != 0)[0][0] Having the padding, we can discard the right amount of values around the feature maps for all batch elements and all channels: a = b[:, :, pad:-pad, pad:-pad]
https://stackoverflow.com/questions/65381859/
Issue when Re-implement Matrix Factorization in Pytorch
I try to implement matrix factorization in Pytorch as the data extractor and model. The original model is written in mxnet. Here I try to use the same idea in Pytorch. Here is my code, it can be runned directly in codelab import torch import torch.nn as nn import pandas as pd import numpy as np from torch.utils.data import Dataset, DataLoader import collections from collections import defaultdict from IPython import display import math from matplotlib import pyplot as plt import os import pandas as pd import random import re import shutil import sys import tarfile import time import requests import zipfile import hashlib # ============data obtained, not change the original code DATA_HUB= {} # Defined in file: ./chapter_multilayer-perceptrons/kaggle-house-price.md def download(name, cache_dir=os.path.join('..', 'data')): """Download a file inserted into DATA_HUB, return the local filename.""" assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}." url, sha1_hash = DATA_HUB[name] os.makedirs(cache_dir, exist_ok=True) fname = os.path.join(cache_dir, url.split('/')[-1]) if os.path.exists(fname): sha1 = hashlib.sha1() with open(fname, 'rb') as f: while True: data = f.read(1048576) if not data: break sha1.update(data) if sha1.hexdigest() == sha1_hash: return fname # Hit cache print(f'Downloading {fname} from {url}...') r = requests.get(url, stream=True, verify=True) with open(fname, 'wb') as f: f.write(r.content) return fname # Defined in file: ./chapter_multilayer-perceptrons/kaggle-house-price.md def download_extract(name, folder=None): """Download and extract a zip/tar file.""" fname = download(name) base_dir = os.path.dirname(fname) data_dir, ext = os.path.splitext(fname) if ext == '.zip': fp = zipfile.ZipFile(fname, 'r') elif ext in ('.tar', '.gz'): fp = tarfile.open(fname, 'r') else: assert False, 'Only zip/tar files can be extracted.' fp.extractall(base_dir) return os.path.join(base_dir, folder) if folder else data_dir #1. obtain dataset DATA_HUB['ml-100k'] = ('http://files.grouplens.org/datasets/movielens/ml-100k.zip', 'cd4dcac4241c8a4ad7badc7ca635da8a69dddb83') def read_data_ml100k(): data_dir = download_extract('ml-100k') names = ['user_id', 'item_id', 'rating', 'timestamp'] data = pd.read_csv(os.path.join(data_dir, 'u.data'), '\t', names=names, engine='python') num_users = data.user_id.unique().shape[0] num_items = data.item_id.unique().shape[0] return data, num_users, num_items # 2. 
Split data #@save def split_data_ml100k(data, num_users, num_items, split_mode='random', test_ratio=0.1): """Split the dataset in random mode or seq-aware mode.""" if split_mode == 'seq-aware': train_items, test_items, train_list = {}, {}, [] for line in data.itertuples(): u, i, rating, time = line[1], line[2], line[3], line[4] train_items.setdefault(u, []).append((u, i, rating, time)) if u not in test_items or test_items[u][-1] < time: test_items[u] = (i, rating, time) for u in range(1, num_users + 1): train_list.extend(sorted(train_items[u], key=lambda k: k[3])) test_data = [(key, *value) for key, value in test_items.items()] train_data = [item for item in train_list if item not in test_data] train_data = pd.DataFrame(train_data) test_data = pd.DataFrame(test_data) else: mask = [True if x == 1 else False for x in np.random.uniform( 0, 1, (len(data))) < 1 - test_ratio] neg_mask = [not x for x in mask] train_data, test_data = data[mask], data[neg_mask] return train_data, test_data #@save def load_data_ml100k(data, num_users, num_items, feedback='explicit'): users, items, scores = [], [], [] inter = np.zeros((num_items, num_users)) if feedback == 'explicit' else {} for line in data.itertuples(): user_index, item_index = int(line[1] - 1), int(line[2] - 1) score = int(line[3]) if feedback == 'explicit' else 1 users.append(user_index) items.append(item_index) scores.append(score) if feedback == 'implicit': inter.setdefault(user_index, []).append(item_index) else: inter[item_index, user_index] = score return users, items, scores, inter #@save def split_and_load_ml100k(split_mode='seq-aware', feedback='explicit', test_ratio=0.1, batch_size=256): data, num_users, num_items = read_data_ml100k() train_data, test_data = split_data_ml100k(data, num_users, num_items, split_mode, test_ratio) train_u, train_i, train_r, _ = load_data_ml100k(train_data, num_users, num_items, feedback) test_u, test_i, test_r, _ = load_data_ml100k(test_data, num_users, num_items, feedback) # Create Dataset train_set = MyData(np.array(train_u), np.array(train_i), np.array(train_r)) test_set = MyData(np.array(test_u), np.array(test_i), np.array(test_r)) # Create Dataloader train_iter = DataLoader(train_set, shuffle=True, batch_size=batch_size) test_iter = DataLoader(test_set, batch_size=batch_size) return num_users, num_items, train_iter, test_iter class MyData(Dataset): def __init__(self, user, item, score): self.user = torch.tensor(user) self.item = torch.tensor(item) self.score = torch.tensor(score) def __len__(self): return len(self.user) def __getitem__(self, idx): return self.user[idx], self.item[idx], self.score[idx] # create a nn class (just-for-fun choice :-) class RMSELoss(nn.Module): def __init__(self, eps=1e-6): '''You should be careful with NaN which will appear if the mse=0, adding self.eps''' super().__init__() self.mse = nn.MSELoss() self.eps = eps def forward(self,yhat,y): loss = torch.sqrt(self.mse(yhat,y) + self.eps) return loss class MF(nn.Module): def __init__(self, num_factors, num_users, num_items, **kwargs): super(MF, self).__init__(**kwargs) self.P = nn.Embedding(num_embeddings=num_users, embedding_dim=num_factors) self.Q = nn.Embedding(num_embeddings=num_items, embedding_dim=num_factors) self.user_bias = nn.Embedding(num_users, 1) self.item_bias = nn.Embedding(num_items, 1) def forward(self, user_id, item_id): P_u = self.P(user_id) Q_i = self.Q(item_id) b_u = self.user_bias(user_id) b_i = self.item_bias(item_id) outputs = (P_u * Q_i).sum() + b_u.squeeze() + b_i.squeeze() return outputs # train # 
Device configuration device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Hyper parameters num_epochs = 50 batch_size = 512 lr = 0.001 num_users, num_items, train_iter, test_iter = split_and_load_ml100k(test_ratio=0.1, batch_size=batch_size) model = MF(30, num_users, num_items).to(device) # Loss and Optimizer optimizer = torch.optim.SGD(model.parameters(), lr=lr) criterion = RMSELoss() # Train the Model train_rmse = [] test_rmse = [] for epoch in range(num_epochs): train_loss = 0 num_train = 0 model.train() for users, items, scores in train_iter: users = users.to(device) items = items.to(device) scores = scores.float().to(device) # Forward pass outputs = model(users, items) loss = criterion(outputs, scores) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() train_loss += loss.item() num_train += scores.shape[0] train_rmse.append(train_loss / num_train) model.eval() test_loss = 0 num_test = 0 with torch.no_grad(): for users, items, scores in test_iter: users = users.to(device) items = items.to(device) scores = scores.float().to(device) outputs = model(users, items) loss = criterion(outputs, scores) test_loss += loss.item() num_test += scores.shape[0] test_rmse.append(test_loss / num_test) # plot %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') x = list(range(num_epochs)) fig = plt.figure() ax = plt.axes() plt.plot(x, train_rmse, label='train_rmse'); plt.plot(x, test_rmse, label='test_rmse'); leg = ax.legend(); I got the result The MXNET result is here Why I cannot get a beautiful shape. And my train_rmse is larger than test_rmse.
I modified your code a bit and got a similar result to mxnet's. Here is the code in colab. Model: you missed axis=1 in the summation operation. outputs = (P_u * Q_i).sum(axis=1) + b_u.squeeze() + b_i.squeeze() The default sum operation sums all the elements in a tensor and produces a scalar. It is fine to add a scalar to a tensor, so you didn't get an error. Optimizer: I used the same optimizer as the mxnet implementation - Adam. Similarly, I also added weight decay. optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd) Initialization: initialize the weights with a normal distribution. nn.init.normal_(self.P.weight, std=0.01) nn.init.normal_(self.Q.weight, std=0.01) nn.init.normal_(self.user_bias.weight, std=0.01) nn.init.normal_(self.item_bias.weight, std=0.01) Other: you don't need to increment num_train by the batch size. The loss has already been divided by the batch size in the MSELoss. num_train += 1
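Putting those changes together, here is a minimal sketch of the modified model and optimizer. The class and variable names (num_users, num_items, lr, device) follow the question; the weight-decay value wd is an assumption, not taken from the original code.

import torch
import torch.nn as nn

class MF(nn.Module):
    def __init__(self, num_factors, num_users, num_items):
        super(MF, self).__init__()
        self.P = nn.Embedding(num_users, num_factors)
        self.Q = nn.Embedding(num_items, num_factors)
        self.user_bias = nn.Embedding(num_users, 1)
        self.item_bias = nn.Embedding(num_items, 1)
        # initialize every embedding table with a small normal distribution
        for emb in (self.P, self.Q, self.user_bias, self.item_bias):
            nn.init.normal_(emb.weight, std=0.01)

    def forward(self, user_id, item_id):
        P_u = self.P(user_id)
        Q_i = self.Q(item_id)
        b_u = self.user_bias(user_id)
        b_i = self.item_bias(item_id)
        # sum over the factor dimension only, giving one score per (user, item) pair
        return (P_u * Q_i).sum(axis=1) + b_u.squeeze() + b_i.squeeze()

model = MF(30, num_users, num_items).to(device)
wd = 1e-5  # assumed weight-decay value
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)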
https://stackoverflow.com/questions/65383426/
How to sort a 3d tensor by coordinates in last dimension (pytorch)
I have a tensor with shape [bn, k, 2]. The last dimension are coordinates and I want each batch to be sorted independently depending on the y coordinate ([:, :, 0]). My approach looks something like this: import torch a = torch.randn(2, 5, 2) indices = a[:, :, 0].sort()[1] a_sorted = a[:, indices] print(a) print(a_sorted) So far so good, but I now it sorts both batches according to both index lists, so I get 4 batches in total: a tensor([[[ 0.5160, 0.3257], [-1.2410, -0.8361], [ 1.3826, -1.1308], [ 0.0338, 0.1665], [-0.9375, -0.3081]], [[ 0.4140, -1.0962], [ 0.9847, -0.7231], [-0.0110, 0.6437], [-0.4914, 0.2473], [-0.0938, -0.0722]]]) a_sorted tensor([[[[-1.2410, -0.8361], [-0.9375, -0.3081], [ 0.0338, 0.1665], [ 0.5160, 0.3257], [ 1.3826, -1.1308]], [[ 0.0338, 0.1665], [-0.9375, -0.3081], [ 1.3826, -1.1308], [ 0.5160, 0.3257], [-1.2410, -0.8361]]], [[[ 0.9847, -0.7231], [-0.0938, -0.0722], [-0.4914, 0.2473], [ 0.4140, -1.0962], [-0.0110, 0.6437]], [[-0.4914, 0.2473], [-0.0938, -0.0722], [-0.0110, 0.6437], [ 0.4140, -1.0962], [ 0.9847, -0.7231]]]]) As you can see, I want only the 1st and the 4th batch to be returned. How do I do that?
What you want: concatenation of a[0, indices[0]] and a[1, indices[1]]. What you coded: concatenation of a[0, indices] and a[1, indices]. The issue you are facing is because the indices returned by sort are shaped like the first dimensions, but the values are only indices into the second dimension. When you go to use these, you want to match indices[0] on a[0], but pytorch doesn't do this implicitly (because fancy indexing is very powerful, and needs this syntax for it's power). So, all you have to do is give a parallel list of indices for the first dimension. i.e. You want to use something like: a[[[0], [1]], indices]. To generalise this a bit more, you may use something like: n = a.shape[0] first_indices = torch.arange(n)[:, None] a[first_indices, indices] This is a little tricksy, so here's an example: >>> a = torch.randn(2,4,2) >>> a tensor([[[-0.2050, -0.1651], [ 0.5688, 1.0082], [-1.5964, -0.9236], [ 0.3093, -0.2445]], [[ 1.0586, 1.0048], [ 0.0893, 2.4522], [ 2.1433, -1.2428], [ 0.1591, 2.4945]]]) >>> indices = a[:, :, 0].sort()[1] >>> indices tensor([[2, 0, 3, 1], [1, 3, 0, 2]]) >>> a[:, indices] tensor([[[[-1.5964, -0.9236], [-0.2050, -0.1651], [ 0.3093, -0.2445], [ 0.5688, 1.0082]], [[ 0.5688, 1.0082], [ 0.3093, -0.2445], [-0.2050, -0.1651], [-1.5964, -0.9236]]], [[[ 2.1433, -1.2428], [ 1.0586, 1.0048], [ 0.1591, 2.4945], [ 0.0893, 2.4522]], [[ 0.0893, 2.4522], [ 0.1591, 2.4945], [ 1.0586, 1.0048], [ 2.1433, -1.2428]]]]) >>> a[0, indices] tensor([[[-1.5964, -0.9236], [-0.2050, -0.1651], [ 0.3093, -0.2445], [ 0.5688, 1.0082]], [[ 0.5688, 1.0082], [ 0.3093, -0.2445], [-0.2050, -0.1651], [-1.5964, -0.9236]]]) >>> a[1, indices] tensor([[[ 2.1433, -1.2428], [ 1.0586, 1.0048], [ 0.1591, 2.4945], [ 0.0893, 2.4522]], [[ 0.0893, 2.4522], [ 0.1591, 2.4945], [ 1.0586, 1.0048], [ 2.1433, -1.2428]]]) >>> a[0, indices[0]] tensor([[-1.5964, -0.9236], [-0.2050, -0.1651], [ 0.3093, -0.2445], [ 0.5688, 1.0082]]) >>> a[1, indices[1]] tensor([[ 0.0893, 2.4522], [ 0.1591, 2.4945], [ 1.0586, 1.0048], [ 2.1433, -1.2428]]) >>> a[[[0], [1]], indices] tensor([[[-1.5964, -0.9236], [-0.2050, -0.1651], [ 0.3093, -0.2445], [ 0.5688, 1.0082]], [[ 0.0893, 2.4522], [ 0.1591, 2.4945], [ 1.0586, 1.0048], [ 2.1433, -1.2428]]])
https://stackoverflow.com/questions/65385874/
BERT DataLoader: Difference between shuffle=True vs Sampler?
I trained a DistilBERT model with DistilBertForTokenClassification on ConLL data fro predicting NER. Training seem to have completed with no problems but I have 2 problems during evaluation phase. I'm getting negative loss value During training, I used shuffle=True for DataLoader. But during evaluation, when I do shuffle=True for DataLoader, I get very poor metric results(f_1, accuracy, recall etc). But if I do shuffle = False or use a Sampler instead of shuffling I get pretty good metric results. I'm wondering if there is anything wrong with my code. Here is the evaluation code: print('Prediction started on test data') model.eval() eval_loss = 0 predictions , true_labels = [], [] for batch in val_loader: b_input_ids = batch['input_ids'].to(device) b_input_mask = batch['attention_mask'].to(device) b_labels = batch['labels'].to(device) with torch.no_grad(): outputs = model(b_input_ids, attention_mask=b_input_mask) logits = outputs[0] logits = logits.detach().cpu().numpy() label_ids = b_labels.detach().cpu().numpy() predictions.append(logits) true_labels.append(label_ids) eval_loss += outputs[0].mean().item() print('Prediction completed') eval_loss = eval_loss / len(val_loader) print("Validation loss: {}".format(eval_loss)) out: Prediction started on test data Prediction completed Validation loss: -0.2584906197858579 I believe I'm calculating the loss wrong here. Is it possible to get negative loss values with BERT? For DataLoader, if I use the code snippet below, I have no problems with the metric results. val_sampler = SequentialSampler(val_dataset) val_loader = DataLoader(val_dataset, sampler=val_sampler, batch_size=128) Bu if I do this one I get very poor metric results val_loader = DataLoader(val_dataset, batch_size=128, shuffle=True) Is it normal that I'm getting vastly different results with shuffle=True vs shuffle=False ? code for the metric calculation: metric = load_metric("seqeval") results = metric.compute(predictions=true_predictions, references=true_labels) results out: {'LOCATION': {'f1': 0.9588207767898924, 'number': 2134, 'precision': 0.9574766355140187, 'recall': 0.9601686972820993}, 'MISC': {'f1': 0.8658965344048217, 'number': 995, 'precision': 0.8654618473895582, 'recall': 0.8663316582914573}, 'ORGANIZATION': {'f1': 0.9066332916145182, 'number': 1971, 'precision': 0.8947628458498024, 'recall': 0.9188229325215627}, 'PERSON': {'f1': 0.9632426988922457, 'number': 2015, 'precision': 0.9775166070516096, 'recall': 0.9493796526054591}, 'overall_accuracy': 0.988255561629313, 'overall_f1': 0.9324058459808882, 'overall_precision': 0.9322748349023465, 'overall_recall': 0.932536893886156} The above metrics are printed when I use Sampler or shuffle=False. If I use shuffle=True, I get: {'LOCATION': {'f1': 0.03902284263959391, 'number': 2134, 'precision': 0.029496402877697843, 'recall': 0.057638238050609185}, 'MISC': {'f1': 0.010318142734307824, 'number': 995, 'precision': 0.009015777610818933, 'recall': 0.012060301507537688}, 'ORGANIZATION': {'f1': 0.027420984269014285, 'number': 1971, 'precision': 0.019160951996772892, 'recall': 0.04819888381532217}, 'PERSON': {'f1': 0.02119907254057635, 'number': 2015, 'precision': 0.01590852597564007, 'recall': 0.03176178660049628}, 'overall_accuracy': 0.5651741788003777, 'overall_f1': 0.02722600361161272, 'overall_precision': 0.020301063389034663, 'overall_recall': 0.041321152494729445} UPDATE: I modified loss code for evaluation. There seems to be no problem with this code. 
You can see the new code below: print('Prediction started on test data') model.eval() eval_loss = 0 predictions , true_labels = [], [] for batch in val_loader: b_labels = batch['labels'].to(device) batch = {k:v.type(torch.long).to(device) for k,v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss, logits = outputs[0:2] logits = logits.detach().cpu().numpy() label_ids = b_labels.detach().cpu().numpy() predictions.append(logits) true_labels.append(label_ids) eval_loss += loss print('Prediction completed') eval_loss = eval_loss / len(val_loader) print("Validation loss: {}".format(eval_loss)) Though I still haven't got an answer to the DataLoader question. Also, I just realised that when I do print(model.eval()) I still get dropouts from the model in evaluation mode.
As far as I understand, the answer is pretty simple: "I saw my father do it this way, and his father was also doing it this way, so I'm also doing it this way". I've looked around a lot of notebooks to see how people were loading the data for validation and in every notebook I saw that people were using the Sequential Sampler for validation. Nobody uses shuffling or random sampling during validation. I don't exactly know why, but this is the case. So if anyone visiting this post was wondering the same thing, the answer is basically what I quoted above. Also, I edited the original post for the loss problem I was having. I was calculating it wrong. Apparently BERT returns the loss at index 0 of the output (outputs[0]) if you also feed the model the original labels. In the first code snippet, when I was getting the outputs from the model, I was not feeding the model the original labels, so it was not returning the loss value at index 0, but returning only the logits. Basically what you need to do is: outputs = model(input_ids, mask, labels=labels) loss = outputs[0] logits = outputs[1]
https://stackoverflow.com/questions/65396650/
PyTorch: Shuffle DataLoader
There are several scenarios that make me confused about shuffling the data loader, which are as follows. I set the β€œshuffle” parameter to False on both train_loader and valid_loader. then the results I get are as follows Epoch 1/4 loss=0.8802 val_loss=0.8202 train_acc=0.63 val_acc=0.63 Epoch 2/4 loss=0.6993 val_loss=0.6500 train_acc=0.66 val_acc=0.72 Epoch 3/4 loss=0.5363 val_loss=0.5385 train_acc=0.76 val_acc=0.80 Epoch 4/4 loss=0.4055 val_loss=0.5130 train_acc=0.85 val_acc=0.81 I set the β€œshuffle” parameter to True on train_loader and False to valid_loader. then the results I get are as follows Epoch 1/4 loss=0.8928 val_loss=0.8284 train_acc=0.63 val_acc=0.63 Epoch 2/4 loss=0.7308 val_loss=0.6263 train_acc=0.61 val_acc=0.73 Epoch 3/4 loss=0.5594 val_loss=0.5046 train_acc=0.54 val_acc=0.81 Epoch 4/4 loss=0.4304 val_loss=0.4525 train_acc=0.49 val_acc=0.82 Based on that result, my training accuracy has a worse performance when I shuffle train_loader. And this is a snippet of my code. for epoch in range(n_epochs): model.train() avg_loss = 0. train_preds = np.zeros((len(train_X),len(le.classes_))) for i, (x_batch, y_batch) in enumerate(train_loader): y_pred = model(x_batch) loss = loss_fn(y_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() avg_loss += loss.item() / len(train_loader) train_preds[i * batch_size:(i+1) * batch_size] = F.softmax(y_pred).cpu().detach().numpy() train_accuracy = sum(train_preds.argmax(axis=1) == y_train)/len(y_train) model.eval() avg_val_loss = 0. val_preds = np.zeros((len(x_cv),len(le.classes_))) for i, (x_batch, y_batch) in enumerate(valid_loader): y_pred = model(x_batch).detach() avg_val_loss += loss_fn(y_pred, y_batch).item() / len(valid_loader) val_preds[i * batch_size:(i+1) * batch_size] =F.softmax(y_pred).cpu().numpy() val_accuracy = sum(val_preds.argmax(axis=1)==y_test)/len(y_test) Did I make a mistake calculating the training accuracy? Thanks in advance
You are comparing the shuffled predictions with the un-shuffled labels. To fix that, count the number of accurate predictions in every iteration, and compute the overall accuracy at the end. for epoch in range(n_epochs): model.train() avg_loss = 0. total_correct = 0 total_samples = 0 for i, (x_batch, y_batch) in enumerate(train_loader): y_pred = model(x_batch) loss = loss_fn(y_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() avg_loss += loss.item() / len(train_loader) total_correct += (torch.argmax(y_pred, 1) == y_batch).sum() total_samples += y_batch.shape[0] train_accuracy = total_correct / total_samples (I haven't tested this code)
https://stackoverflow.com/questions/65402802/
Is there a better way to calculate loss for multi-task DNN modeling?
Suppose there are over one thousand tasks in the multi-task deep learning. More than a thousand columns of labels. Each task (column) has a specific weight in this case. It would take such long time to loop over each task to calculate the sum of loss using the following code snippet. criterion = nn.MSELoss() outputs = model(inputs) loss = torch.tensor(0.0).to(device) for j, w in enumerate(weights): # mask keeping labeled molecules for each task mask = labels[:, j] >= 0.0 if len(labels[:, j][mask]) != 0: # the loss is the sum of each task/target loss. # there are labeled samples for this task, so we add it's loss loss += criterion(outputs[j][mask], labels[:, j][mask].view(-1, 1)) * w This dataset was quite small. The dataset has 10K rows and 1024 columns and the labels are a 10K * 160 sparse matrix. Each of those 160 columns is one task. Batch size is 32. Below are the shapes of outputs, labels, weights: len(outputs[0]), len(outputs) (32, 160) weights.shape torch.Size([160]) labels.shape torch.Size([32, 160]) But what I really want to try is one dataset which has over 1M rows and 1024 features and over 10K labels. The labels are sparse of course. **update** Thanks for you suggestions and code, Shai. I modified the code a little bit as follows, but the loss was the same as your code. all_out = torch.cat(outputs).view(len(outputs), -1).T all_mask = labels != -100.0 err = (all_out - labels) ** 2 # raw L2 err = all_mask * err # mask only the relevant entries in the err mask_nums = all_mask.sum(axis=0) err = err * weights[None, :] # weight each task err = err / mask_nums[None, :] err[err != err] = torch.tensor([0.0], requires_grad=True).to(device) # replace nan to 0.0 loss = err.sum() A newly raised question is the loss can't get back propagated. Only the loss of the first batch was calculated. The following batches got a loss of 0.0. Epoch: [1/20], Step: [1/316], Loss: 4.702103614807129 Epoch: [1/20], Step: [2/316], Loss: 0.0 Epoch: [1/20], Step: [3/316], Loss: 0.0 Epoch: [1/20], Step: [4/316], Loss: 0.0 Epoch: [1/20], Step: [5/316], Loss: 0.0 The loss was 0 and outputs was 32* 160 of nan after the first batch.
How is your loss different than: all_out = torch.cat([o_[:, None] for o_ in outputs], dim=1) # all_out has shape 32x160 all_mask = labels >= 0 err = (all_out - labels) ** 2 # raw L2 err = all_mask * err # mask only the relevant entries in the err err = err * weights[None, :] # weight each task err = err.sum() There might be a slight issue here with the summation - you might need to weight by the number of 1s in each column of all_mask.
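One possible way to add that per-column normalization without producing NaNs for tasks that have no labeled samples in a batch - a sketch only, reusing the variable names from above and not tested on the asker's data - is to clamp the per-task counts before dividing:

# all_out: (batch, n_tasks), labels: (batch, n_tasks), weights: (n_tasks,)
all_mask = (labels >= 0).float()
err = (all_out - labels) ** 2                 # raw squared error
err = err * all_mask                          # zero out the unlabeled entries
per_task = err.sum(dim=0) * weights           # weighted error sum per task
counts = all_mask.sum(dim=0).clamp(min=1.0)   # avoid dividing by zero for empty tasks
loss = (per_task / counts).sum()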
https://stackoverflow.com/questions/65403978/
What does calling self within a class do?
I noticed in the documentation for Pytorch Lightning, it was mentioned you can call the forward method from another method in the same class just by calling self(x). I haven't been able to find any info about how this works. I always thought you would call the method using self.forward Evidently, it calls the forward method but how? Is there any python documentation about what's going on? I found this at the following URL: https://pytorch-lightning.readthedocs.io/en/stable/new-project.html The specific code fragment is this: def training_step(self, batch, batch_idx): ... z = self(x)
Generally speaking, in python, when "calling" an object, you are invoking its __call__ method. That is, self(x) is equivalent to self.__call__(x) For pytorch nn.Module (and all derivative classes) __call__ wraps around the module's forward function, therefore, from your perspective self(x) is basically forwarding x through the module self.
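A tiny self-contained example to illustrate this (the Double module is made up purely for demonstration):

import torch
import torch.nn as nn

class Double(nn.Module):
    def forward(self, x):
        return 2 * x

m = Double()
x = torch.tensor([1.0, 2.0])
# all three calls run the same forward computation;
# m(x) and m.__call__(x) additionally run any hooks registered on the module
print(m(x), m.__call__(x), m.forward(x))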
https://stackoverflow.com/questions/65404671/
Unflatten a tensor back to an image
I am working on GANs and I want to visualize the image formed. For this, I was trying def show_images(image_tensor, num_images=9, size=(1, 28, 28)): image_unflat = image_tensor.detach().cpu.view(-1, *size) image_grid = make_grid(image_unflat[:num_images], nrow=3) plt.imshow(image_grid.permute(1, 2, 0).squeeze()) plt.show() but when I am trying to show_image(some_tensor), I am getting an error as image_unflat = image_tensor.detach().cpu.view(-1, *size) AttributeError: 'builtin_function_or_method' object has no attribute 'view' Here, the size of some_tensor is N x 784.
You need to actually call cpu() as a method (note the parentheses) before reshaping with view. image_unflat = image_tensor.detach().cpu().view(-1, *size)
https://stackoverflow.com/questions/65406713/
How to correctly use Cross Entropy Loss vs Softmax for classification?
I want to train a multi-class classifier using Pytorch. Following the official Pytorch doc shows how to use a nn.CrossEntropyLoss() after a last layer of type nn.Linear(84, 10). However, I remember this is what Softmax does. This leaves me confused. How to train a "standard" classification network in the best way? If the network has a final linear layer, how to infer the probabilities per class? If the network has a final softmax layer, how to train the network (which loss, and how)? I found this thread on the Pytorch forum, which likely answers all that, but I couldn't compile it into working and readable Pytorch code. My assumed answers: Like the doc says. Exponentiation of the outputs of the linear layer, which are really logits (log probabilities). I don't understand.
I think that it's important to understand softmax and cross-entropy, at least from a practical point of view. Once you have a grasp on these two concepts then it should be clear how they may be "correctly" used in the context of ML. Cross Entropy H(p, q) Cross-entropy is a function that compares two probability distributions. From a practical standpoint it's probably not worth getting into the formal motivation of cross-entropy, though if you're interested I would recommend Elements of Information Theory by Cover and Thomas as an introductory text. This concept is introduced pretty early on (chapter 2 I believe). This is the intro text I used in grad school and I thought it did a very good job (granted I had a wonderful instructor as well). The key thing to pay attention to is that cross-entropy is a function that takes, as input, two probability distributions: q and p and returns a value that is minimal when q and p are equal. q represents an estimated distribution, and p represents a true distribution. In the context of ML classification we know the actual label of the training data, so the true/target distribution, p, has a probability of 1 for the true label and 0 elsewhere, i.e. p is a one-hot vector. On the other hand, the estimated distribution (output of a model), q, generally contains some uncertainty, so the probability of any class in q will be between 0 and 1. By training a system to minimize cross entropy we are telling the system that we want it to try and make the estimated distribution as close as it can to the true distribution. Therefore, the class that your model thinks is most likely is the class corresponding to the highest value of q. Softmax Again, there are some complicated statistical ways to interpret softmax that we won't discuss here. The key thing from a practical standpoint is that softmax is a function that takes a list of unbounded values as input, and outputs a valid probability mass function with the relative ordering maintained. It's important to stress the second point about relative ordering. This implies that the maximum element in the input to softmax corresponds to the maximum element in the output of softmax. Consider a softmax activated model trained to minimize cross-entropy. In this case, prior to softmax, the model's goal is to produce the highest value possible for the correct label and the lowest value possible for the incorrect label. CrossEntropyLoss in PyTorch The definition of CrossEntropyLoss in PyTorch is a combination of softmax and cross-entropy. Specifically CrossEntropyLoss(x, y) := H(one_hot(y), softmax(x)) Note that one_hot is a function that takes an index y, and expands it into a one-hot vector. Equivalently you can formulate CrossEntropyLoss as a combination of LogSoftmax and negative log-likelihood loss (i.e. NLLLoss in PyTorch) LogSoftmax(x) := ln(softmax(x)) CrossEntropyLoss(x, y) := NLLLoss(LogSoftmax(x), y) Due to the exponentiation in softmax, there are some computational "tricks" that make directly using CrossEntropyLoss more stable (more accurate, less likely to get NaN), than computing it in stages. Conclusion Based on the above discussion the answers to your questions are 1. How to train a "standard" classification network in the best way? Like the doc says. 2. If the network has a final linear layer, how to infer the probabilities per class? Apply softmax to the output of the network to infer the probabilities per class. 
If the goal is to just find the relative ordering or highest probability class then just apply argsort or argmax to the output directly (since softmax maintains relative ordering). 3. If the network has a final softmax layer, how to train the network (which loss, and how)? Generally, you don't want to train a network that outputs softmaxed outputs for stability reasons mentioned above. That said, if you absolutely needed to for some reason, you would take the log of the outputs and provide them to NLLLoss criterion = nn.NLLLoss() ... x = model(data) # assuming the output of the model is softmax activated loss = criterion(torch.log(x), y) which is mathematically equivalent to using CrossEntropyLoss with a model that does not use softmax activation. criterion = nn.CrossEntropyLoss() ... x = model(data) # assuming the output of the model is NOT softmax activated loss = criterion(x, y)
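As a quick numerical sanity check of the equivalence described above (the tensors here are arbitrary example values):

import torch
import torch.nn as nn

x = torch.randn(4, 10)              # raw (unnormalized) logits for a batch of 4 samples, 10 classes
y = torch.tensor([1, 0, 3, 9])      # target class indices

ce = nn.CrossEntropyLoss()(x, y)
nll = nn.NLLLoss()(torch.log_softmax(x, dim=1), y)
print(ce.item(), nll.item())        # the two values should agree up to floating-point precision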
https://stackoverflow.com/questions/65408027/
Calculating gradient from network output in PyTorch gives error
I am trying to use a manually calculate a gradient using the output of my network, I will then use this in a loss function. I have managed to get an example working in keras, but converting it to PyTorch has proven more difficult I have a model like: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(1, 50) self.fc2 = nn.Linear(50, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = F.sigmoid(self.fc1(x)) x = F.sigmoid(self.fc2(x)) x = self.fc3(x) return x and some data: x = torch.unsqueeze(torch.linspace(-1, 1, 101), dim=1) x = Variable(x) I can then try find a gradient like: output = net(x) grad = torch.autograd.grad(outputs=output, inputs=x, retain_graph=True)[0] I want to be able to find the gradient of each point, then do something like: err_sqr = (grad - x)**2 loss = torch.mean(err_sqr)**2 However, at the moment if I try to do this I get the error: grad can be implicitly created only for scalar outputs I have tried changing the shape of my network output to fix this, but if I change it to much it says its not part of the graph. I can get rid of that error by allowing that, but then it says my gradient is None. I've managed to get this working in keras, so I'm confident that its possible here too, I just need a hand! My questions are: Is there a way to "fix" what I have to allow me to calculate the gradient
PyTorch expects an upstream gradient in the grad call. For usual (scalar) loss functions, the upstream gradient is implicitly assumed to be 1. You can do a similar thing by passing ones as the upstream gradient: grad = torch.autograd.grad(outputs=output, inputs=x, grad_outputs=torch.ones_like(output), retain_graph=True)[0]
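Putting it together with the loss from the question - a sketch only, reusing the question's net and x, and assuming x was created with requires_grad=True. Note that create_graph=True is used here instead of retain_graph so the computed gradient itself stays differentiable and loss.backward() can reach the network weights:

output = net(x)
grad = torch.autograd.grad(outputs=output, inputs=x,
                           grad_outputs=torch.ones_like(output),
                           create_graph=True)[0]
err_sqr = (grad - x) ** 2
loss = torch.mean(err_sqr) ** 2
loss.backward()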
https://stackoverflow.com/questions/65409620/
Pytorch 2D Feature Tensor 1D output Same weights across dimension
I am handling a time-series dataset with n timesteps, m features and k objects. As a result my feature vector has a shape of (n,k,m) While my targets shape is (n,m) I want to predict the targets for every timestep and object, but with the same weights for every object. Also my loss function looks like this. average_loss = loss_func(prediction, labels) sum_loss = loss_func(sum(prediction), sum(labels)) loss = loss_weight * average_loss + (1-loss_weight) * sum_loss My plan is to not only make sure that I predict every item as well as possible, but also that the sum of all items gets predicted. loss_weight is a constant. Currently I am doing this kind of ugly solution: features = local_batch.squeeze(dim = 0) labels = torch.unsqueeze(local_labels.squeeze(dim = 0), 1) prediction = net(features) I set my batchsize = 1. And squeeze it to make the k objects my batch. My network looks like this: def __init__(self, n_feature, n_hidden, n_output): super(Net, self).__init__() self.hidden = torch.nn.Linear(n_feature, n_hidden) # hidden layer self.predict = torch.nn.Linear(n_hidden, n_output) # output layer def forward(self, x): x = F.relu(self.hidden(x)) # activation function for hidden layer x = self.predict(x) # linear output return x How do I make sure I do a reasonable convolution over the object dimension in order to keep the same weights for all objects, without committing to batchsize=1? Also, how do I achieve the same loss function, where I compute the loss of the prediction sum vs target sum for any timestamp?
It's not exactly ugly -- I would do the same but generalize it a bit for batch size >1 using view. # Using your notations n, k, m = features.shape features = local_batch.view(n*k, m) prediction = net(features).view(n, k, m) With the prediction in the correct shape (n*k*m), implementing your loss function should not be difficult.
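For the second part of the question (the average-vs-sum loss), a hedged sketch: it assumes prediction and labels have been brought to the same shape and that dim=1 indexes the objects, since the exact target shapes are not fully specified, and it uses an arbitrary value for loss_weight:

loss_weight = 0.5                                                # assumed constant
average_loss = loss_func(prediction, labels)
sum_loss = loss_func(prediction.sum(dim=1), labels.sum(dim=1))   # loss on the per-object sums
loss = loss_weight * average_loss + (1 - loss_weight) * sum_loss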
https://stackoverflow.com/questions/65410468/
How to get the last index of model's prediction?
I am new to PyTorch. I have a variable pred which has a list of a tensor. print(pred) output: [tensor([[176.64380, 193.86154, 273.84702, 306.30405, 0.83492, 2.00000]])] So I wanted to access the last element which is the class. I did that by first converting the list into a tensor. x = torch.stack(pred) output: tensor([[[176.64380, 193.86154, 273.84702, 306.30405, 0.83492, 2.00000]]]) Now, how do I access the last element or is there any better/efficient way of doing this? EDIT: For further reference here's the code that performs classification task. def classify_face(image): device = torch.device("cpu") img = process_image(image) print('Image processed') # img = image.unsqueeze_(0) # img = image.float() pred = model(img)[0] # Apply NMS pred = non_max_suppression(pred, 0.4, 0.5, classes = [0, 1, 2], agnostic = None ) if classify: pred = apply_classifier(pred, modelc, img, im0s) #print(pred) model.eval() model.cpu() print(pred) # output = non_max_suppression(output, 0.4, 0.5, classes = class_names, agnostic = False) #_, predicted = torch.max(output[0], 1) #print(predicted.data[0], "predicted") classification = torch.cat(pred)[:, -1] index = int(classification) print(names[index]) return names[index] During prediction pred consists of x1, y1, x2, y2, conf, and class. e.g. pred = [tensor([[176.64380, 193.86154, 273.84702, 306.30405, 0.83492, 2.00000]])] If there are no predictions made by the model then pred is simply empty. e.g. pred = [tensor([], size=(0, 6))] Presently my program stops prediction if it receives an empty tensor and throws an error: Traceback (most recent call last): File "WEBCAM_DETECT.py", line 168, in <module> label = classify_face(frame) File "WEBCAM_DETECT.py", line 150, in classify_face index = int(classification) ValueError: only one element tensors can be converted to Python scalars EDIT1: It seems to work when I check length of the pred but I get this error when there are two or more rows in tensor. [tensor([[212.38568, 117.47020, 339.35773, 266.00513, 0.74144, 2.00000], [214.60651, 118.50694, 339.90192, 265.91696, 0.44277, 0.00000]])] ################# ################# Traceback (most recent call last): File "WEBCAM_DETECT.py", line 172, in <module> label = classify_face(frame) File "WEBCAM_DETECT.py", line 154, in classify_face index = int(classification) ValueError: only one element tensors can be converted to Python scalars How do I make my program sort of just ignore if there are no predictions made at a certain frame and continue onto the next frame?
You can select the last element with the index notation on the 3rd axis, then flatten it to a 1D tensor: x[:, :, -1].view(-1) However, I would rather use torch.cat on pred; this avoids creating a new axis: torch.cat(pred)[:, -1] Edit - you may check if the tensor is empty beforehand with: if len(pred) == 0: return None
https://stackoverflow.com/questions/65410731/
TypeError: 'DiscreteFactor' object is not subscriptable
I have a Bayesian algorithm program to be executed, I am using python 3 import numpy as np import csv import pandas as pd from pgmpy.models import BayesianModel from pgmpy.estimators import MaximumLikelihoodEstimator from pgmpy.inference import VariableElimination heartDisease = pd.read_csv('heart.csv') heartDisease = heartDisease.replace('?',np.nan) print('Few examples from the dataset are given below') print(heartDisease.head()) model = BayesianModel([('age','trestbps'),('age','fbs'),('sex','trestbps'),('exang','trestbps'),('trestbps','heartdisease'),('fbs','heartdisease'),('heartdisease','restecg'),('heartdisease','thalach'),('heartdisease','chol')]) print('\nLearning CPD using Maximum likelihood estimators') model.fit(heartDisease,estimator=MaximumLikelihoodEstimator) print('\n Inferencing with Bayesian Network:') HeartDisease_infer = VariableElimination(model) print('\n 1. Probability of HeartDisease given Age=28') q=HeartDisease_infer.query(variables=['heartdisease'],evidence={'age':28}) print(q['heartdisease']) print('\n 2. Probability of HeartDisease given cholesterol=100') q=HeartDisease_infer.query(variables=['heartdisease'],evidence={'chol':100}) print(q['heartdisease']) the error that i have recieved when i run my Bayesian network program is: TypeError Traceback (most recent call last) <ipython-input-7-84a6b48627b2> in <module> 23 print('\n 1. Probability of HeartDisease given Age=28') 24 q=HeartDisease_infer.query(variables=['heartdisease'],evidence={'age':28}) ---> 25 print(q['heartdisease']) 26 27 print('\n 2. Probability of HeartDisease given cholesterol=100') TypeError: 'DiscreteFactor' object is not subscriptable So far I haven't seen this exact error here on stackoverflow. Can anyone explain why I am getting this error?
After trying to solve the error I came up with the solution. print('\n 1. Probability of HeartDisease given Age=28') q=HeartDisease_infer.query(variables=['heartdisease'],evidence={'age':28}) print(q['heartdisease']) In the print(q['heartdisease']) line of this snippet I just removed the ['heartdisease'] subscript. The query returns a DiscreteFactor object, which is not subscriptable like an array; it already contains the result as a formatted probability table, so printing q itself gives the required result. print(q) This gets the job done.
https://stackoverflow.com/questions/65419735/
Numpy sum elements in a multi-dimensional array according to indices
I am dealing with a very large multi-dimensional data , but let me take a 2D array for example. Given a value array that is changing every iteration, arr = np.array([[ 1, 2, 3, 4, 5], [5, 6, 7, 8, 9]]) # a*b and an index array that is fixed all the time. idx = np.array([[[0, 1, 1], [-1, -1, -1]], [[5, 1, 3], [1, -1, -1]]]) # n*h*w, where n = a*b, Here -1 means no index will be applied. And I wish to get a result res = np.array([[1+2+2, 0], [5+2+4, 2]]) # h*w In real practice, I am doing with a very large 3D tensor (n ~ trillions), with a very sparse idx (i.e. lots of -1). As idx is fixed, my current solution is to pre-compute a n*(h*w) array index_tensor by filling 0 and 1, and then do tmp = arr.reshape(1, n) res = (tmp @ index_tensor).reshape([h,w]) It works fine but takes a huge memory to store the index_tensor. Is there any approach that I can take the advantage of the sparsity and unchangeableness of idx to reduce the memory cost and keep a fair running speed in python (using numpy or pytorch would be the best)? Thanks in advance!
Ignoring the -1 complication for the moment, the straightforward indexing and summation is: In [58]: arr = np.array([[ 1, 2, 3, 4, 5], [5, 6, 7, 8, 9]]) In [59]: idx = np.array([[[0, 1, 1], [2, 4, 6]], ...: [[5, 1, 3], [1, -1, -1]]]) In [60]: arr.flat[idx] Out[60]: array([[[1, 2, 2], [3, 5, 6]], [[5, 2, 4], [2, 9, 9]]]) In [61]: _.sum(axis=-1) Out[61]: array([[ 5, 14], [11, 20]]) One way (not necessarily fast or memory efficient) of dealing with the -1 is with a masked array: In [62]: mask = idx<0 In [63]: mask Out[63]: array([[[False, False, False], [False, False, False]], [[False, False, False], [False, True, True]]]) In [65]: ma = np.ma.masked_array(Out[60],mask) In [67]: ma Out[67]: masked_array( data=[[[1, 2, 2], [3, 5, 6]], [[5, 2, 4], [2, --, --]]], mask=[[[False, False, False], [False, False, False]], [[False, False, False], [False, True, True]]], fill_value=999999) In [68]: ma.sum(axis=-1) Out[68]: masked_array( data=[[5, 14], [11, 2]], mask=[[False, False], [False, False]], fill_value=999999) Masked arrays deal with operations like the sum by replacing the masked values with something neutral, such as 0 for the case of sums. (I may revisit this in the morning). sum with matrix product In [72]: np.einsum('ijk,ijk->ij',Out[60],~mask) Out[72]: array([[ 5, 14], [11, 2]]) This is more direct, and faster, than the masked array approach. You haven't elaborated on constructing the index_tensor so I won't try to compare it. Another possibility is to pad the array with a 0, and adjust indexing: In [83]: arr1 = np.hstack((0,arr.ravel())) In [84]: arr1 Out[84]: array([0, 1, 2, 3, 4, 5, 5, 6, 7, 8, 9]) In [85]: arr1[idx+1] Out[85]: array([[[1, 2, 2], [3, 5, 6]], [[5, 2, 4], [2, 0, 0]]]) In [86]: arr1[idx+1].sum(axis=-1) Out[86]: array([[ 5, 14], [11, 2]]) sparse A first stab at using a sparse matrix: Reshape idx to 2d: In [141]: idx1 = np.reshape(idx,(4,3)) make a sparse tensor from that. For a start I'll go the iterative lil approach, though usually constructing coo (or even csr) inputs directly is faster: In [142]: M = sparse.lil_matrix((4,10),dtype=int) ...: for i in range(4): ...: for j in range(3): ...: v = idx1[i,j] ...: if v>=0: ...: M[i,v] = 1 ...: In [143]: M Out[143]: <4x10 sparse matrix of type '<class 'numpy.int64'>' with 9 stored elements in List of Lists format> In [144]: M.A Out[144]: array([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 1, 0, 1, 0, 0, 0], [0, 1, 0, 1, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]]) This can then be used for a sum of products: In [145]: M@arr.ravel() Out[145]: array([ 3, 14, 11, 2]) Using M@arr.ravel() is essentially what you do. While M is sparse, arr is not. For this case M.A@ is faster than M@.
https://stackoverflow.com/questions/65420433/
Pytorch's nn.TransformerEncoder "src_key_padding_mask" not functioning as expected
Im working with Pytorch's nn.TransformerEncoder module. I got input samples with (as normal) the shape (batch-size, seq-len, emb-dim). All samples in one batch have been zero-padded to the size of the biggest sample in this batch. Therefore I want the attention of the all zero values to be ignored. The documentation says, to add an argument src_key_padding_mask to the forward function of the nn.TransformerEncoder module. This mask should be a tensor with shape (batch-size, seq-len) and have for each index either True for the pad-zeros or False for anything else. I achieved that by doing: . . . def forward(self, x): # x.size -> i.e.: (200, 28, 200) mask = (x == 0).cuda().reshape(x.shape[0], x.shape[1]) # mask.size -> i.e.: (200, 20) x = self.embed(x.type(torch.LongTensor).to(device=device)) x = self.pe(x) x = self.transformer_encoder(x, src_key_padding_mask=mask) . . . Everything works good when I dont set the src_key_padding_mask. But the error I get when I do is the following: File "/home/me/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/functional.py", line 4282, in multi_head_attention_forward assert key_padding_mask.size(0) == bsz AssertionError Seems seems like it is comparing the first dimension of the mask, which is the batch-size, with bsz which probably stands for batch-size. But why is it failing then? Help very much appreciated!
I got the same issue, which is not a bug: pytorch's Transformer implementation requires the input x to be (seq-len x batch-size x emb-dim) while yours seems to be (batch-size x seq-len x emb-dim).
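A minimal sketch of a fix, keeping the rest of the forward from the question and assuming, as there, that x holds token indices of shape (batch_size, seq_len) and that embed/pe produce (batch_size, seq_len, emb_dim): permute batch and sequence before the encoder and permute back afterwards, while the padding mask stays (batch_size, seq_len):

def forward(self, x):
    mask = (x == 0)                          # (batch_size, seq_len), True at padded positions
    x = self.embed(x.long())
    x = self.pe(x)
    x = x.permute(1, 0, 2)                   # -> (seq_len, batch_size, emb_dim), as the encoder expects
    x = self.transformer_encoder(x, src_key_padding_mask=mask)
    x = x.permute(1, 0, 2)                   # back to (batch_size, seq_len, emb_dim)
    return x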
https://stackoverflow.com/questions/65424676/
How to convert one-hot vector to label index and back in Pytorch?
How to transform vectors of labels to one-hot encoding and back in Pytorch? The solution to the question was copied to here after having to go through the entire forum discussion, instead of just finding an easy one from googling.
From the Pytorch forums import torch import numpy as np labels = torch.randint(0, 10, (10,)) # labels --> one-hot one_hot = torch.nn.functional.one_hot(labels) # one-hot --> labels labels_again = torch.argmax(one_hot, dim=1) np.testing.assert_array_equal(labels.numpy(), labels_again.numpy())
https://stackoverflow.com/questions/65424771/
Why define a backward method for a custom layer in Pytorch?
I am currently constructing a model in Pytorch that requires multiple custom layers. I have only been defining the forward method and thus do not define a backward method. The model seems to run well, and the optimizer is able to update using the gradients from the layers. However, I see many people defining backward methods, and I wonder if I am missing something. Why might you need to define a backwards pass?
In very few cases should you be implementing your own backward function in PyTorch. This is because PyTorch's autograd functionality takes care of computing gradients for the vast majority of operations. The most obvious exceptions are You have a function that cannot be expressed as a finite combination of other differentiable functions (for example, if you needed the incomplete gamma function, you might want to write your own forward and backward which used numpy and/or lookup tables). You're looking to speed up the computation of a particularly complicated expression for which the gradient could be drastically simplified after applying the chain rule.
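For reference, this is the general skeleton such a custom function follows (the operation here, a plain exp, is only a placeholder to show the structure):

import torch

class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        out = torch.exp(x)
        ctx.save_for_backward(out)    # stash whatever backward will need
        return out

    @staticmethod
    def backward(ctx, grad_output):
        out, = ctx.saved_tensors
        return grad_output * out      # d/dx exp(x) = exp(x), times the upstream gradient

x = torch.randn(3, requires_grad=True)
y = MyExp.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, torch.exp(x)))   # matches what autograd computes on its own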
https://stackoverflow.com/questions/65425429/
Defining a Torch Class in R package "torch"
this post is related to my earlier How to define a Python Class which uses R code, but called from rTorch? . I came across the torch package in R (https://torch.mlverse.org/docs/index.html) which allows to Define a DataSet class definition. Yet, I also need to be able to define a model class like class MyModelClass(torch.nn.Module) in Python. Is this possible in the torch package in R? When I tried to do it with reticulate it did not work - there were conflicts like ImportError: /User/homes/mreichstein/miniconda3/envs/r-torch/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: _ZTINSt6thread6_StateE It also would not make much sense, since torch isn't wrapping Python. But it is loosing at lot of flexibility, which rTorch has (but see my problem in the upper post). Thanks for any help! Markus
You can do that directly using R's torch package which seems quite comprehensive at least for the basic tasks. Neural networks Here is an example of how to create nn.Sequential like this: library(torch) model <- nn_sequential( nn_linear(D_in, H), nn_relu(), nn_linear(H, D_out) ) Below is a custom nn_module (a.k.a. torch.nn.Module) which is a simple dense (torch.nn.Linear) layer (source): library(torch) # creates example tensors. x requires_grad = TRUE tells that # we are going to take derivatives over it. dense <- nn_module( clasname = "dense", # the initialize function tuns whenever we instantiate the model initialize = function(in_features, out_features) { # just for you to see when this function is called cat("Calling initialize!") # we use nn_parameter to indicate that those tensors are special # and should be treated as parameters by `nn_module`. self$w <- nn_parameter(torch_randn(in_features, out_features)) self$b <- nn_parameter(torch_zeros(out_features)) }, # this function is called whenever we call our model on input. forward = function(x) { cat("Calling forward!") torch_mm(x, self$w) + self$b } ) model <- dense(3, 1) Another example, using torch.nn.Linear layers to create a neural network this time (source): two_layer_net <- nn_module( "two_layer_net", initialize = function(D_in, H, D_out) { self$linear1 <- nn_linear(D_in, H) self$linear2 <- nn_linear(H, D_out) }, forward = function(x) { x %>% self$linear1() %>% nnf_relu() %>% self$linear2() } ) Also there are other resources like here (using flow control and weight sharing). Other Looking at the reference it seems most of the layers are already provided (didn't notice transformer layers at a quick glance, but this is minor). As far as I can tell basic blocks for neural networks, their training etc. are in-place (even JIT so sharing between languages should be possible).
https://stackoverflow.com/questions/65427461/
Repeat 3d tensor's rows in pytorch
I have a BxCxd tensor of coordinates and want to repeat each row in the following way: [[[1,0,0],[0,1,0],[0,0,1]]] -> [[[1,0,0],[1,0,0],[0,1,0],[0,1,0],[0,0,1],[0,0,1]]] In the above example each row is repeated 2 times. What's especially important is the ordering. Each row in the first tensor should appear k times in the second one before the next row appears. I tried the following code: print(x.size()) params = x.repeat_interleave(self.k, dim=-1).permute(0,2,1) In the above snippet, x is of size 32x128x4 before repeat_interleave. With self.k = 64 I would expect the result to be a 32x8192x4 tensor, however the result I am getting is 32x256x128 which does not make sense to me. What am I missing here?
I think you want: t.repeat_interleave(2, dim=1) Output: tensor([[[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]]])
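For the shapes in the question (a 32x128x4 tensor with each row repeated 64 times), the same call should, as far as I can tell, just use dim=1 instead of dim=-1 (a sketch with random data, not the original tensors):

import torch

x = torch.randn(32, 128, 4)
k = 64
params = x.repeat_interleave(k, dim=1)   # each row repeated k times consecutively
print(params.shape)                      # torch.Size([32, 8192, 4])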
https://stackoverflow.com/questions/65428042/
Error when loading torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') on AWS EC2
I'm trying to run some code using Torch (and Roberta language model) on an EC2 instance on AWS. The compilation seems to fail, does anyone have a pointer to fix? Confirm that Torch is correctly installed import torch a = torch.rand(5,3) print (a) Return this: tensor([[0.7494, 0.5213, 0.8622],... Attempt to load Roberta roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') Using cache found in /home/ubuntu/.cache/torch/hub/pytorch_fairseq_master /home/ubuntu/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 fatal: not a git repository (or any of the parent directories): .git running build_ext /home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py:352: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend. warnings.warn(msg.format('we could not find ninja.')) skipping 'fairseq/data/data_utils_fast.cpp' Cython extension (up-to-date) skipping 'fairseq/data/token_block_utils_fast.cpp' Cython extension (up-to-date) building 'fairseq.libnat' extension x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/TH -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c fairseq/clib/libnat/edit_dist.cpp -o build/temp.linux-x86_64-3.8/fairseq/clib/libnat/edit_dist.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=libnat -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 In file included from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149, from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3, from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12, from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from fairseq/clib/libnat/edit_dist.cpp:9: /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas] 84 | #pragma omp parallel for if ((end - begin) >= grain_size) It then ends, after a long while. x86_64-linux-gnu-gcc: fatal error: Killed signal terminated program cc1plus compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Got it to work by loading the pretrained model locally instead of from the hub. from fairseq.models.roberta import RobertaModel roberta = RobertaModel.from_pretrained('roberta.large.mnli', 'model.pt', '/home/ubuntu/deployedapp/roberta.large') roberta.eval() Note that I had to go for a XLarge EC2 instance to run this, otherwise process would be killed due to low memory.
https://stackoverflow.com/questions/65430580/
Pytorch doesn't work with CUDA in PyCharm/IntelliJ
I have just downloaded PyTorch with CUDA via Anaconda and when I type into the Anaconda terminal: import torch if torch.cuda.is_available(): print('it works') it prints 'it works', which means CUDA is available and works with PyTorch there. But when I go to my IDE (PyCharm and IntelliJ) and write the same code, it doesn't output anything. Could someone please explain to me how I can somehow get this to work in the IDE?
It was driving me mad as well... What finally helped me was the first link that says to use PyCharm "Terminal" to run the pip install command (from the PyTorch website). That fixed all my problems. (I had installed pytorch 3 times by that time and tried different interpreters...) https://www.datasciencelearner.com/how-to-install-pytorch-in-pycharm/ pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html I hope this helps save someone hours of headache. :)
https://stackoverflow.com/questions/65439154/
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[4, 32, 6, 7] to have 3 channels, but got 32 channels instead
I am trying to implement such CNN. This is my implementation: class Net(BaseFeaturesExtractor): def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256): super(Net, self).__init__(observation_space, features_dim) n_input_channels = observation_space.shape[0] print("Observation space shape:"+str(observation_space.shape)) print("Number of channels:" + str(n_input_channels)) self.cnn = nn.Sequential( nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Flatten(), nn.Linear(in_features=128,out_features=64), nn.ReLU(), nn.Linear(in_features=64,out_features=7), nn.Sigmoid() ) def forward(self, observations: th.Tensor) -> th.Tensor: print("Observation shape:"+str(observations[0].shape)) return self.cnn(observations) When I tried to run the code which uses this CNN, I am getting following log: Observation space shape:(3, 6, 7) Number of channels:3 Observation shape:torch.Size([3, 6, 7]) Traceback (most recent call last): File "/Users/joe/Documents/JUPYTER/ConnectX/training3.py", line 250, in <module> learner.learn(total_timesteps=iterations, callback=eval_callback) ... RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[4, 32, 6, 7] to have 3 channels, but got 32 channels instead What is the problem here? How can I solve it?
in_channels of a conv layer should be equal to out_channels of the previous layer. In your case, in_channels of the 2nd and 3rd conv layers don't have the correct values. They should be like below, self.cnn = nn.Sequential( nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(), ... ) Also, you should check in_features of the 1st Linear layer. It depends on the input shape and should be equal to last_conv_out_channels * last_conv_output_height * last_conv_output_width. For example, for an input=torch.randn(1, 3, 256, 256) last conv layer's output shape would be ([1, 32, 64, 64]), in that case the 1st Linear layer should be, nn.Linear(in_features=32*64*64,out_features=64) ---- Update after the comment: Output shape of a conv layer is calculated through the formula here (see under "Shape:" section). Using input = torch.randn(1, 3, 256, 256) as input to the network, here are outputs of each conv layer (I skipped the ReLUs since they don't change the shape), conv1: (1, 3, 256, 256) -> (1, 32, 256, 256) conv2: (1, 32, 256, 256) -> (1, 32, 128, 128) conv3: (1, 32, 128, 128) -> (1, 32, 64, 64) So how did last_conv_output_height and last_conv_output_width became 64 ? The last conv layer is defined as follows, nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1) Data is processed as (num_samples, num_channels, height​, width​) in PyTorch and the default value for dilation is stated as 1 in the conv2d doc. So, for the last conv layer, H_in is 128, padding[0] is 1, dilation[0] is 1, kernel_size[0] is 3 and stride[0] is 2. Therefore, height of its output becomes, H_out = ⌊(128 + 2 * 1 - 1 * (3 - 1) - 1) / 2βŒ‹ + 1 H_out = 64 Since square-size kernels and equal-size stride, padding and dilation are used, W_out also becomes 64 for the last conv layer. I think the easiest way to compute in_features for the 1st Linear layer would be run the model for the desired size input until that layer. An example for your architecture, inp = torch.randn(1, 3, 256, 256) arch = nn.Sequential( nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1) ) outp = arch(inp) print('outp.shape:', outp.shape) This prints, outp.shape: torch.Size([1, 32, 64, 64]) Finally, last_conv_out_channels is out_channels of the last conv layer. The last conv layer in your architecture is nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1). Here out_channels is the 2nd parameter, so last_conv_out_channels is 32.
https://stackoverflow.com/questions/65440443/
pytorch custom loss function on minimizing the angle between vectors
The goal is to minimize the angle between the actual and predicted vectors in a neural network setting. Can someone please check if the following execution is correct? criterion = nn.CosineSimilarity() loss = torch.mean(torch.abs(criterion(actual_vectors,predicted_vectors))) #back-propagation on the above *loss* will try cos(angle) = 0. But I want angle between the vectors to be 0 or cos(angle) = 1. loss = 1 - loss #To me, the above does not seem right. Isn't back-propagation on the above 'loss' similar to minimizing the negative of 'loss' from line 2? #Does '1' have no role to play here when back-propagation is applied? loss.backward()
Theoretically that makes sense. The goal of back-propagation is to minimize the loss. If the loss is 1 - cos(A) (where A is the angle difference between the two) then that is equivalent to saying that the goal is to maximize cos(A), which in turn is equivalent to minimizing the Angle between the two vectors. A simple example would be the goal of minimizing X^2 + 4 the answer to that optimization problem is the same as the answer to the goal of maximizing -(X^2 + 4). Sticking a minus on the whole equation and swapping min with max would make the statements equivalent. So if you have a function you want to MAXIMIZE and your optimization model can only MINIMIZE then just slap a minus sign on your function and call it a day. Another question you might ask is "what is significant about the 1? Could we have just said loss = -loss" and the answer is... it depends. Theoretically yes that is equivalent and the one doesn't play a role in the backward propagation (since it disappears with the derivative). However, once we start talking about actual optimization with numerical errors and complicated optimizers/update rules then the constant 1 might play a role. Another reason to have the 1 is so that your loss is nicely defined between 0 and 1 which is a nice property to have. So yes, minimizing the loss of 1 - cos(A) through back-propagation is equivalent to minimizing the angle between the vectors.
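Concretely, a training-step sketch could look like the following (the dim argument to CosineSimilarity and the tensor shapes are assumptions; note it also drops the torch.abs from the question, since |cos| would be maximal for exactly opposite vectors as well):

import torch
import torch.nn as nn

criterion = nn.CosineSimilarity(dim=1)            # assumes the vectors live along dim 1
actual = torch.randn(8, 16)                       # example batch of target vectors
predicted = torch.randn(8, 16, requires_grad=True)

cos = criterion(predicted, actual)                # one cosine value per sample, in [-1, 1]
loss = 1 - cos.mean()                             # 0 when the vectors are perfectly aligned
loss.backward()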
https://stackoverflow.com/questions/65442946/
what is the equivalent syntax for `Tensor.grad` in Tensorflow 2.0
In PyTorch, we can access the gradient of a variable z by z.grad What is the equivalent syntax in Tensorflow 2? My goal is to clip the gradient. Here is the Pytorch code if z.grad > 1000: z.grad = 10 Can Tensorflow 2 apply the same functions? Thanks
So in TF2, assume we define following variables and optimizer: import tensorflow as tf from tensorflow import keras opt = tf.keras.optimizers.Adam(learning_rate=0.1) x = tf.Variable([3.0, 4.0]) y = tf.Variable([1.0, 1.0, 1.0, 1.0]) var_list = [x, y] Then we can get gradients by using tf.GradientTape(): with tf.GradientTape() as tape: loss = tf.reduce_sum(x ** 2) + tf.reduce_sum(y) grads = tape.gradient(loss, var_list) Finally we could process the gradients by custom function: def clip_grad(grad): if grad > 1000: grad = 10 return grad processed_grads = [tf.map_fn(clip_grad, g) for g in grads] opt.apply_gradients(zip(processed_grads, var_list)) Note you may find the keras optimizers have get_gradients method, but it won't work with eager execution enabled which is default in TF2, if you want use that, then you may have to write code in TF1 fashion
https://stackoverflow.com/questions/65443602/
What is the difference between an Embedding Layer with a bias immediately afterwards and a Linear Layer in PyTorch
I am reading the "Deep Learning for Coders with fastai & PyTorch" book. I'm still a bit confused as to what the Embedding module does. It seems like a short and simple network, except I can't seem to wrap my head around what Embedding does differently than Linear without a bias. I know it does some faster computational version of a dot product where one of the matrices is a one-hot encoded matrix and the other is the embedding matrix. It does this to in effect select a piece of data? Please point out where I am wrong. Here is one of the simple networks shown in the book.

class DotProduct(Module):
    def __init__(self, n_users, n_movies, n_factors):
        self.user_factors = Embedding(n_users, n_factors)
        self.movie_factors = Embedding(n_movies, n_factors)

    def forward(self, x):
        users = self.user_factors(x[:,0])
        movies = self.movie_factors(x[:,1])
        return (users * movies).sum(dim=1)
Embedding

[...] what Embedding does differently than Linear without a bias.

Essentially everything. torch.nn.Embedding is a lookup table; it works the same as torch.Tensor but with a few twists (like the possibility to use sparse embeddings or a default value at a specified index).

For example:

import torch

embedding = torch.nn.Embedding(3, 4)

print(embedding.weight)
print(embedding(torch.tensor([1])))

Would output:

Parameter containing:
tensor([[ 0.1420, -0.1886,  0.6524,  0.3079],
        [ 0.2620,  0.4661,  0.7936, -1.6946],
        [ 0.0931,  0.3512,  0.3210, -0.5828]], requires_grad=True)
tensor([[ 0.2620,  0.4661,  0.7936, -1.6946]], grad_fn=<EmbeddingBackward>)

So we took the row at index 1 of the embedding. It does nothing more than that.

Where is it used? Usually when we want to encode some meaning (like word2vec) for each row (e.g. words that are semantically close are close in euclidean space) and possibly train them.

Linear

torch.nn.Linear (without bias) is also a torch.Tensor (weight) but it performs an operation on it (and the input) which is essentially:

output = input.matmul(weight.t())

every time you call the layer (see the source code and functional definition of this layer).

Code snippet

The layer in your code snippet does this:

- creates two lookup tables in __init__
- the layer is called with input of shape (batch_size, 2):
  - the first column contains indices of user embeddings
  - the second column contains indices of movie embeddings
- these embeddings are multiplied and summed, returning (batch_size,) (so it's different from nn.Linear, which would return (batch_size, out_features) and perform a dot product instead of element-wise multiplication followed by summation like here)

This is probably used to train both representations (of users and movies) for some recommender-like system.

Other stuff

I know it does some faster computational version of a dot product where one of the matrices is a one-hot encoded matrix and the other is the embedding matrix.

No, it doesn't. torch.nn.Embedding can be one-hot encoded and might also be sparse, but depending on the algorithms (and whether those support sparsity) there might be a performance boost or not.
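To make the contrast concrete, here is a small sketch of my own (not from the answer above) comparing the two modules directly; the shapes and names are illustrative only:

import torch

embedding = torch.nn.Embedding(3, 4)
linear = torch.nn.Linear(4, 3, bias=False)

idx = torch.tensor([1])

# Embedding is a pure lookup: it returns a row of its weight matrix.
assert torch.equal(embedding(idx), embedding.weight[idx])

# Linear is a matrix multiplication with the (transposed) weight.
x = torch.randn(2, 4)
assert torch.allclose(linear(x), x @ linear.weight.t())

# The lookup yields the same numbers as multiplying a one-hot vector with the
# weight matrix, but no multiplication actually happens inside Embedding.
one_hot = torch.nn.functional.one_hot(idx, num_classes=3).float()
assert torch.allclose(embedding(idx), one_hot @ embedding.weight)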
https://stackoverflow.com/questions/65445174/
How To Get The Pixel Count Of A Segmented Area in an Image (I used Vgg16 for Segmentation)
I am new to deep learning but have succeeded in semantic segmentation of the image. I am trying to get the pixel count of each class in the label. As an example, in the image I want to get the pixel count of the carpet, or the chandelier, or the light stand. How do I go about it? Thanks, any suggestions will help.
Edit: In what format are the regions returned? Do you have only the final image, or are the regions given as contours? If you have them as contours (lists of coordinates), you can apply cv2.contourArea directly on that structure.

If you can receive/sample the regions one by one in an image (but do not have the contour), you can sequentially paint each of the colors/classes in a clear image, either convert it to grayscale or directly paint it in grayscale or binary, or binarize with a threshold; then numberPixels = len(cv2.findNonZero(bwImage)). cv2.findContours and cv2.contourArea should do the same.

Instead of rendering each class in a separate image, if your program receives only the final segmentation and not per-class contours, you can filter/mask the regions by color ranges on that image. I built that and it seemed to do the job, 14861 pixels for the pink carpet:

import cv2
import numpy as np

# rgb 229, 0, 178  # the purple carpet in RGB (sampled with IrfanView)
# b,g,r = 178, 0, 229  # cv2 uses BGR
class_color = [178, 0, 229]

multiclassImage = cv2.imread("segmented.png")
cv2.imshow("MULTI", multiclassImage)

filteredImage = multiclassImage.copy()
low = np.array(class_color)
mask = cv2.inRange(filteredImage, low, low)
filteredImage[mask == 0] = [0, 0, 0]
filteredImage[mask != 0] = [255, 255, 255]
cv2.imshow("FILTER", filteredImage)

# numberPixelsFancier = len(cv2.findNonZero(filteredImage[..., 0]))
# That also works and returns 14861 - without conversion, taking one color channel
bwImage = cv2.cvtColor(filteredImage, cv2.COLOR_BGR2GRAY)
cv2.imshow("BW", bwImage)

numberPixels = len(cv2.findNonZero(bwImage))
print(numberPixels)

cv2.waitKey(0)

If you don't have the values of the colors given and/or can't control them, you can use numpy.unique(): https://numpy.org/doc/stable/reference/generated/numpy.unique.html It will return the unique colors, which can then be applied in the algorithm above.

Edit 2: BTW, another way to compute or verify such counts is by calculating histograms. That's with IrfanView on the black-and-white image.
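As a complement (my own sketch, not from the answer above), numpy.unique can also count the pixels of every class in one pass, without knowing any color value in advance; it reuses the "segmented.png" file name from the code above:

import cv2
import numpy as np

multiclassImage = cv2.imread("segmented.png")

# Treat every pixel as one BGR triple and count how often each triple occurs.
pixels = multiclassImage.reshape(-1, 3)
colors, counts = np.unique(pixels, axis=0, return_counts=True)

for color, count in zip(colors, counts):
    print(f"BGR {color.tolist()}: {count} pixels")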
https://stackoverflow.com/questions/65445213/
Predictions using Logistic Regression in Pytorch return infinity
I started watching a tutorial on PyTorch and I am learning the concept of logistic regression. I tried it using some stock data that I had. I have inputs, which contain two parameters, trade_quantity and trade_value, and targets, which holds the corresponding stock price.

inputs = torch.tensor([[182723838.00, 2375432.00],
                       [185968153.00, 2415558.00],
                       [181970093.00, 2369140.00],
                       [221676832.00, 2811589.00],
                       [339785916.00, 4291782.00],
                       [225855390.00, 2821301.00],
                       [151430199.00, 1889032.00],
                       [122645372.00, 1552998.00],
                       [129015052.00, 1617158.00],
                       [121207837.00, 1532166.00],
                       [139554705.00, 1789392.00]])

targets = torch.tensor([[76.90],
                        [76.90],
                        [76.90],
                        [80.70],
                        [78.95],
                        [79.60],
                        [80.05],
                        [78.90],
                        [79.40],
                        [78.95],
                        [77.80]])

I defined the model function, the loss as the mean square error, and tried to run it a few times to get some predictions. Here's the code:

def model(x):
    return x @ w.t() + b

def mse(t1, t2):
    diff = t1 - t2
    return torch.sum(diff * diff) / diff.numel()

preds = model(inputs)
loss = mse(preds, targets)
loss.backward()

with torch.no_grad():
    w -= w.grad * 1e-5
    b -= b.grad * 1e-5
    w.grad.zero_()
    b.grad.zero_()

I am using Jupyter for this and ran the last part of the code a few times, after which the predictions come out as:

tensor([[inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf]], grad_fn=<AddBackward0>)

If I run it a few more times the predictions become nan. Can you please tell me why this is happening?
To me, this looks more like linear regression than logistic regression. You are trying to fit a linear model onto your data. It's different from a binary classification task, where you would need a special kind of activation function (a sigmoid for instance) so that the output is either 0 or 1.

In this particular instance you want to solve a 2D linear problem given input x of shape (batch, 2) (where the two columns x1 and x2 are trade_quantity and trade_value) and a target of shape (batch, 1) (y being the stock_price). So the objective is to find the best w and b matrices (weight matrix and bias column) so that x@w + b is as close to y as possible, according to your criterion, the mean square error.

I would recommend normalizing your data so it stays in a [0, 1] range. You can do so by measuring the minimum and maximum of inputs and targets.

inputs_min, inputs_max = inputs.min(axis=0).values, inputs.max(axis=0).values
targets_min, targets_max = targets.min(axis=0).values, targets.max(axis=0).values

Then applying the transformation:

x = (inputs - inputs_min)/(inputs_max - inputs_min)
y = (targets - targets_min)/(targets_max - targets_min)

Try changing your learning rate and have it run for multiple epochs.

lr = 1e-2
for epochs in range(100):
    preds = model(x)
    loss = mse(preds, y)
    loss.backward()
    with torch.no_grad():
        w -= lr*w.grad
        b -= lr*b.grad
        w.grad.zero_()
        b.grad.zero_()

I use a (1, 2) randomly initialized matrix for w (and a (1,) matrix for b):

w = torch.rand(1, 2)
w.requires_grad = True
b = torch.rand(1)
b.requires_grad = True

And got the following train loss over 100 epochs (loss curve plot omitted here).

To find the right hyperparameters, it's better to have a validation set. This set will get normalized with the min and max from the train set. It will be used to evaluate the performance at the end of each epoch on data that is 'unknown' to the model. The same goes for your test set, if you have one.
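One small follow-up sketch (my own addition, reusing the variable names from the answer above): once training is done on the normalized data, predictions can be mapped back to the original price scale before reporting them.

with torch.no_grad():
    preds = model(x)  # predictions in the normalized [0, 1] space
    # Invert the min-max transformation to get prices back in the original units.
    preds_in_original_scale = preds * (targets_max - targets_min) + targets_min
    print(preds_in_original_scale)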
https://stackoverflow.com/questions/65445585/