st183000 | The tensor size is [1124823 x 13]. So I want to slice each raw from its center and take 10 elements, 5 from the right and 5 from the left. So I need to iterate through every raw. |
st183001 | What do you mean by “raw”? row?
If you have a Tensor of size [ 1124823 x 13 ], a single row is a 1D Tensor of size 13.
So you want to do: feat.narrow(1, feat.size(1)//2 -5, 10). |
st183002 | Yes, sorry for this typo, it is row.
But do I need a for loop to iterate over each row? |
st183003 | No, the operation above will give a Tensor of size [ 1124823 x 10 ] containing the result for every row. |
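For illustration, a minimal hedged sketch of that one-liner (the tensor here is random placeholder data, not the real features):

import torch
feat = torch.randn(1124823, 13)                     # stand-in for the real feature matrix
center = feat.narrow(1, feat.size(1) // 2 - 5, 10)  # -> [1124823, 10], no Python loop needed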
st183004 | Hi, I’m new to audio signal processing and to pytorch and I’m having some trouble understanding this part of the docs of the torchaudio load function:
normalization (bool, number, or callable, optional) – If boolean True, then output is divided by 1 << 31 (assumes signed 32-bit audio), and normalizes to [-1, 1]. If number, then output is divided by that number If callable, then the output is passed as a parameter to the given function, then the output is divided by the result. (Default: True)
From what I understand, the function assumes the file to have a bit depth of 32 bits; however, that bit depth is rather rare. Does 32-bit audio indeed mean bit depth, or something else?
Also, I don't understand the meaning of "output is divided by 1 << 31". What is meant by "output" and what is meant by "1 << 31"?
Thanks for your help |
st183005 | Solved by ptrblck in post #2 |
st183006 | I also assume 32-bit audio corresponds to the bits per sample.
1 << 31 is a left shift by 31 positions, so it translates to 1 << 31 == 2**31 == 2147483648, which would be the max value of each sample.
If I’m not mistaken, 32bit audio would have the range [−2,147,483,648, 2,147,483,647], so you would get a minimal error for the max positive value. |
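To make the arithmetic concrete, a small hedged sketch (the commented load call uses the old `normalization` keyword quoted from the docs above; the file name is just a placeholder):

# 1 << 31 is a bitwise left shift: 1 shifted left by 31 positions
assert (1 << 31) == 2 ** 31 == 2147483648
# with normalization=True, the int32 samples are divided by 2**31,
# which maps the raw values into roughly [-1, 1]
# waveform, sample_rate = torchaudio.load('example.wav', normalization=True)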
st183007 | thank you very much for the reply,
it must be true indeed that
I also assume 32-bit audio corresponds to the bits per sample.
as if I normalize with True I get a tensor with max and min values within [-1, 1]:
max: 0.0881
min: -0.1289
while if I use normalization=16:
max: 11821056
min: -17301504
and for normalization=False:
max: 1.8914e+08
min: -2.7682e+08
indeed the std and avg of the data loaded using normalization True are between 0 and 1 so it seems like the correct normalization for the data I’m working with.
Thank you very much again |
st183008 | import torch
from torch.autograd import Variable
import torch.nn as nn
from fela import feat, labels
from Dataloader import train_loader, test_loader, X_train, X_test, X_val, y_train, y_test, y_val

input_size = 13
hidden1_size = 13
hidden2_size = 64
hidden3_size = 128
hidden4_size = 256
hidden5_size = 1024
output_size = 3988

class DNN(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size, output_size):
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.drp1 = nn.Dropout(p=0.2, inplace=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.drp2 = nn.Dropout(p=0.2, inplace=False)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden2_size, hidden3_size)
        self.drp3 = nn.Dropout(p=0.2, inplace=False)
        self.relu3 = nn.ReLU()
        self.fc4 = nn.Linear(hidden3_size, hidden4_size)
        self.drp4 = nn.Dropout(p=0.2, inplace=False)
        self.relu4 = nn.ReLU()
        self.fc5 = nn.Linear(hidden4_size, hidden5_size)
        self.drp5 = nn.Dropout(p=0.2, inplace=False)
        self.relu5 = nn.ReLU()
        self.fc6 = nn.Linear(hidden5_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.drp1(out)
        out = self.relu1(out)
        out = self.fc2(out)
        out = self.drp2(out)
        out = self.relu2(out)
        out = self.fc3(out)
        out = self.drp3(out)
        out = self.relu3(out)
        out = self.fc4(out)
        out = self.drp4(out)
        out = self.relu4(out)
        out = self.fc5(out)
        out = self.drp5(out)
        out = self.relu5(out)
        out = self.fc6(out)
        return out

batch_size = 10
n_iterations = 50
no_eps = n_iterations / (13 / batch_size)
no_epochs = int(no_eps)

model = DNN(input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size, output_size)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.0001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

iter = 0
for epoch in range(no_epochs):
    for i, (X_train, y_train) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(Variable(X_train))
        loss = criterion(outputs, Variable(y_train))
        print('Iter %d --> loss %f' % (i, loss.item()))
        loss.backward()
        optimizer.step()

    correct = 0
    total = 0
    for X_test, y_test in test_loader:
        outputs = model(Variable(X_test))
        pred = outputs.argmax(dim=1, keepdim=True)
        total += y_test.size(0)
        correct += (pred.squeeze() == y_test).sum()  # pred.eq(y_test.view_as(pred)).sum().item()
    accuracy = 100 * correct / total
    print('Iteration: {}. Accuracy: {}'.format(epoch, accuracy))
After some iterations, it gives me this error: IndexError: Target 3988 is out of bounds.
(Hint: I am using a CPU, not a GPU.) |
st183009 | Hi,
Can you share the full stack trace showing where the error happens?
If I had to bet, I would say that one of your y_train has an invalid value |
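A quick hedged way to confirm that guess, assuming the imported labels tensor holds the class indices that feed y_train: nn.CrossEntropyLoss with output_size = 3988 only accepts targets in [0, 3987], so a label equal to 3988 produces exactly this error.

# sketch: inspect the label range before training
print(labels.min().item(), labels.max().item())
# if the max is 3988, either remap the labels to start at 0
# or set output_size = int(labels.max()) + 1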
st183010 | input_size = 13
hidden1_size = 13
hidden2_size = 128
hidden3_size = 64
output_size = 30

class DNN(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, hidden3_size, output_size):
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden2_size, hidden3_size)
        self.relu3 = nn.ReLU()
        self.fc4 = nn.Linear(hidden3_size, output_size)
        self.relu4 = nn.ReLU()

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu1(out)
        out = self.fc2(out)
        out = self.relu2(out)
        out = self.fc3(out)
        out = self.relu3(out)
        out = self.fc4(out)
        out = self.relu4(out)
        return out

batch_size = 10
n_iterations = 50
no_eps = n_iterations / (13 / batch_size)
no_epochs = int(no_eps)

model = DNN(input_size, hidden1_size, hidden2_size, hidden3_size, output_size)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

iter = 0
for epoch in range(no_epochs):
    for i, (X_train, y_train) in enumerate(train_loader):
        y_train = torch.empty(batch_size, dtype=torch.long).random_(output_size)
        optimizer.zero_grad()
        outputs = model(X_train)
        loss = criterion(outputs, y_train)
        loss.backward()
        optimizer.step()
        iter += 1
        if iter % 500 == 0:
            correct = 0
            total = 0
            for X_test, y_test in test_loader:
                outputs = model(X_test)
                pred = outputs.argmax(dim=1, keepdim=True)
                total += y_test.size(0)
                correct += pred.eq(y_test.view_as(pred)).sum().item()
            accuracy = 100 * correct / total
            print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
The output starts with Iteration 500, not 1. Why?!
Iteration: 500. Loss: 3.403779983520508. Accuracy: 0.2761323337093925
Iteration: 1000. Loss: 3.4060192108154297. Accuracy: 0.3276070990724763
Iteration: 1500. Loss: 3.416713237762451. Accuracy: 0.4173101012337052
Iteration: 2000. Loss: 3.402294635772705. Accuracy: 0.34867708074959347
Iteration: 2500. Loss: 3.3952858448028564. Accuracy: 0.2755100135754692
Iteration: 3000. Loss: 3.4023067951202393. Accuracy: 0.3158719194042085 |
st183011 | You are increasing iter directly before the if condition, so iter % 500 == 0 only holds once iter reaches multiples of 500.
If you want to print the very first iteration, increase iter after the condition. |
st183012 | I am trying to read multiple .gz files and return their contents in one tensor as follows:
value_list = []
with ReadHelper('ark: gunzip -c /home/mnabih/kaldi/egs/timit/s5/exp/mono_ali/*.gz|') as reader:
    for i, b in enumerate(reader):
        value = numpy.asarray(b[1])
        value_list.append(value)
        value = torch.from_numpy(value)
        print(type(value))
This gives me multiple tensors. I tried this command to concatenate them:
values = torch.cat(value, dim=0, out=None)
But it gives me:
cat() received an invalid combination of arguments - got (Tensor, out=NoneType, dim=int), but expected one of:
 (tuple of Tensors tensors, name dim, Tensor out) |
st183013 | Could you append tensors to the list and call torch.tensor(value_list)?
This should work:
value = np.random.randn(10, 10)
value_list = []
for _ in range(3):
    value_list.append(value)
res = torch.tensor(value_list)
print(res.shape)
> torch.Size([3, 10, 10]) |
st183014 | class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.cnn1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=7, stride=4, padding=3)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size=2)
        self.cnn2 = nn.Conv2d(in_channels=16, out_channels=64, kernel_size=5, stride=1, padding=2)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)
        self.fc1 = nn.Linear(3136, 10)
        self.fcrelu = nn.ReLU()

    def forward(self, x):
        out = self.cnn1(x)
        out = self.relu1(out)
        out = self.maxpool1(out)
        out = self.cnn2(out)
        out = self.relu2(out)
        out = self.maxpool2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.fcrelu(out)
        return out |
st183015 | A 2D layer, e.g. nn.Conv2d, expects the input to have the shape [batch_size, channels, height, width].
The in_channels of the first conv layer correspond to the channels in your input.
Based on the definition of self.cnn1 it seems you want to pass an input with a single channel, so you might want to call x = x.unsqueeze(1) to create the channel dimension.
This would only work, if your current input is defined as [batch_size, height, width]. |
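A minimal sketch of that suggestion (the batch size and 28x28 spatial size are placeholders, not values from the original post, and torch is assumed to be imported):

x = torch.randn(8, 28, 28)   # hypothetical [batch_size, height, width] input
x = x.unsqueeze(1)           # -> [8, 1, 28, 28], matching in_channels=1 of self.cnn1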
st183016 | Hi
I am using this command to read the ark files:
train_feat = torchaudio.kaldi_io.read_mat_ark('/home/mnabih/kaldi/egs/timit/s5/mfcc/cmvn_train.ark')
The output type is Generator[(str, torch.Tensor)].
How can I extract the tensor part only to train my network?
Also,
d = {u: d for u, d in torchaudio.kaldi_io.read_mat_ark('/home/mnabih/kaldi/egs/timit/s5/mfcc/cmvn_train.ark')}
gives me a dict with key: string and value: torch.Tensor.
I want to remove the key and use the tensor to train my network. |
st183017 | Have you tried?
g = torchaudio.kaldi_io.read_mat_ark('/home/mnabih/kaldi/egs/timit/s5/mfcc/cmvn_train.ark')
d = {u: d for u, d in g}  # dictionary
t = [d for _, d in g]  # list of tensors (re-create g first if the dict comprehension already consumed the generator) |
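A hedged sketch of what the question is after, dropping the utterance keys and keeping just the feature matrices (torch and torchaudio are assumed to be imported; whether you can batch the results directly depends on the utterance lengths, which usually differ):

feats = [mat for _, mat in torchaudio.kaldi_io.read_mat_ark('/home/mnabih/kaldi/egs/timit/s5/mfcc/cmvn_train.ark')]
all_frames = torch.cat(feats, dim=0)  # concatenate frames across utterances -> [total_frames, feat_dim]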
st183018 | Hi everyone,
I’m an intermediate PyTorch user (only did a vision project before) who wants to use torchaudio for something new.
I'm currently making a dataset loader for the NSynth dataset (16bit 16kHz PCM wav files) specific to my application.
As I’ll be training on an RTX GPU, and the data is originally 16bit, I was wondering if it would be smart to use float16 in this case. I’m not exactly sure about this though, given my limited knowledge of signal processing. torchaudio.load returns 32bit floats by default and does not give the option to load float16, so I was wondering whether there exists a theoretical reason. |
st183019 | I don’t think the data loading should be performed in FP16, as you might end up with some quantization noise.
If I’m not mistaken, the 16bit audio files represent 65536 different levels.
Since FP16 cannot represent all integers >2048 (Wikipedia - FP16 1), you’ll lose some information.
That being said, once you’ve loaded and preprocessed the data, you could still use FP16 for the model training. Have a look at apex/amp 2 for an automatic mixed-precision approach. |
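A quick way to see that loss of integer precision in practice (a minimal sketch; the exact printed values depend on rounding, but not every integer above 2048 is representable in FP16):

import torch
x = torch.tensor([2047, 2049, 32767], dtype=torch.float16)
print(x)  # 2047 survives exactly, while 2049 and 32767 get rounded to nearby representable values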
st183020 | Actually, 64bit float can maintain the original information without any loss. According to my experience, the amount of information lost from using float32 is actually quite small. |
st183021 | FP32 is able to represent all integers in [-16777216, 16777216], which should thus work for these audio files. Why would you lose information in this use case, or did I misunderstand your explanation? |
st183022 | I did some googling and found that I was indeed wrong. My original conclusion was based on the fact that when I used librosa to load wav files and set the precision to fp32, some values were rounded, but in the case of fp64 the exact values were shown. I guess perhaps that was a display issue? I'm not sure what the mechanism behind that is. |
st183023 | Thanks! Makes a lot of sense. I think I’ll stay away from apex for now, I don’t want to overcomplicate things. I guess I can always start with FP32 and move to FP16 later and compare. |
st183024 | Hello folks~
I made an PyTorch package that included some classic spectrogram inversion algorithms (like Griffin-Lim) to recover phase information given only the magnitude response of audio. I would like to invite everyone to take a look and use it.
You can find its repository here: https://github.com/yoyololicon/spectrogram-inversion 29
The docs: https://spectrogram-inversion.readthedocs.io/ 16.
Any issues and contributions are welcome.
Cheers |
st183025 | Thanks for sharing your code!
May I ask the naive question, when this would be used?
Could we e.g. create a spectrum using a GAN and use your inversion algos to create the waveform? |
st183026 | Exactly, as long as the spectrum follows the regular Fourier representation.
We assume the target magnitude spectrum used in training is obtained using torch.stft. |
st183027 | Thanks for sharing your code indeed! Are there additions you’d also like to see in the implementation available in the master branch of torchaudio 15? |
st183028 | Dear All;
I am running the Kaldi ASR toolkit and extracted MFCC features from a speech dataset, stored in .ark, .scp and CMVN files. How can I train my network based on these files?
Thanks |
st183029 | You could use a library like kaldiio 25 to load these samples and create a custom Dataset and pass it to a DataLoader as explained in this tutorial 49.
Once you have the Dataset ready, you could continue working with the architecture or your model.
I’m not completely sure how the data is stored, but since you are dealing with MFCC data, I assume you could treat it as “image” data? |
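A hedged sketch of such a custom Dataset, assuming kaldiio is installed and you have a feats.scp plus a utt_id-to-label mapping (the names here are illustrative, not from the original post):

import kaldiio
import torch
from torch.utils.data import Dataset

class KaldiFeatDataset(Dataset):
    def __init__(self, scp_path, labels):            # labels: dict utt_id -> int
        self.feats = kaldiio.load_scp(scp_path)      # lazy dict-like: utt_id -> ndarray
        self.utt_ids = list(self.feats.keys())
        self.labels = labels

    def __len__(self):
        return len(self.utt_ids)

    def __getitem__(self, idx):
        utt = self.utt_ids[idx]
        x = torch.from_numpy(self.feats[utt]).float()   # [frames, n_mfcc]
        y = self.labels[utt]
        return x, y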
st183030 | torchaudio 20 supports ark and scp 22, offers MFCC 3, and also has a template for datasets with DataLoader 36 |
st183031 | Hello! I am constantly running into a RuntimeError that I cannot seem to understand, and I would be very grateful if someone can explain what is happening and how I can fix it.
I am creating a dataset class and dataloader for music genre classification. The main issue arises when I include a mel-spectrogram transform from torchaudio.
Here is a script that shows the error I have.
import os
import torch
from torch.utils.data import Dataset
from torch.utils import data
import torchaudio
import pandas as pd

sr = 22050

class Mp3Dataset(Dataset):
    """
    Mp3 dataset class to work with the FMA dataset.

    Input:
        df - pandas dataframe containing track_id and genre.
        audio_path - directory with mp3 files
        duration - how much of the songs to sample
    """
    def __init__(self, df: pd.DataFrame, audio_path: str, duration: float):
        self.audio_path = audio_path
        self.IDs = df['track_id'].astype(str).to_list()
        self.genre_list = df.genre.to_list()
        self.duration = duration

        self.E = torchaudio.sox_effects.SoxEffectsChain()
        self.E.append_effect_to_chain("trim", [0, self.duration])
        self.E.append_effect_to_chain("rate", [sr])
        self.E.append_effect_to_chain("channels", ["1"])

        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr)

    def __len__(self):
        return len(self.IDs)

    def __getitem__(self, index):
        ID = self.IDs[index]
        genre = self.genre_list[index]

        # sox: set input file
        self.E.set_input_file(self.get_path_from_ID(ID))

        # use sox to read in the file using my effects
        waveform, _ = self.E.sox_build_flow_effects()  # size: [1, len * sr]
        melspec = self.mel(waveform)

        return melspec, genre

    def get_path_from_ID(self, ID):
        """
        Gets the audio path from the ID using the FMA dataset format
        """
        track_id = ID.zfill(6)
        return os.path.join(self.audio_path, track_id[:3], track_id + '.mp3')

if __name__ == '__main__':
    # my path to audio files
    audio_path = os.path.join('data', 'fma_small')
    # my dataframe that has track_id and genre info
    df = pd.read_csv('data/fma_metadata/small_track_info.csv')

    torchaudio.initialize_sox()

    dataset = Mp3Dataset(df, audio_path, 1.0)
    params = {'batch_size': 8, 'shuffle': True, 'num_workers': 2}
    dataset_loader = data.DataLoader(dataset, **params)

    print(next(iter(dataset_loader)))

    torchaudio.shutdown_sox()
When I run this code, I get the following errors thrown at me:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~/Documents/data_sci.nosync/fma/min_ex.py in <module>
71 dataset_loader = data.DataLoader(dataset, **params)
72
---> 73 print(next(iter(dataset_loader)))
74
75 torchaudio.shutdown_sox()
/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
817 else:
818 del self.task_info[idx]
--> 819 return self._process_data(data)
820
821 next = __next__ # Python 2 compatibility
/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
844 self._try_put_index()
845 if isinstance(data, ExceptionWrapper):
--> 846 data.reraise()
847 return data
848
/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
367 # (https://bugs.python.org/issue2651), so we work around it.
368 msg = KeyErrorMessage(msg)
--> 369 raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 80, in default_collate
return [default_collate(samples) for samples in transposed]
File "/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 80, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/usr/local/anaconda3/envs/pytorch_fma/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad.
Two things I observe with this script:
Removing the mel-spectrogram transform and outputting the waveform instead removes this error.
Changing num_workers to 0 removes this error as well.
To be clear, these observations are independent. Performing only one of these modifications removes the error. However, I would like both workers and a melspectrogram transform
I am using PyTorch 1.2.0 and Torchaudio 0.3.0+bf88aef, as well as python 3.7.4. |
st183032 | I have the exact same problem.
Until now, I did not find a solution.
Edit, posting a possible solution:
Using Python 3.6.9 and updating to torch 1.3.1 and torchaudio 0.4.0 did the trick for me. |
st183033 | Hello,
I am trying to generate pictures from audio spectrograms. I couldn't find specific examples on the internet and I attempted to put together a solution myself.
I managed to implement an algorithm that can generate pictures from files encoded as mp3 or wav.
At a high level everything seems to work ok for wav files, but for mp3 I seem to generate a picture where the spectrum is faint (compared to the one generated from the wav file).
I generated the files using Audacity and saved the track to mp3 or wav.
Below is the code. I am using JupyterLab on SageMaker for the runtime environment.
This seems to be a standard use case in audio classification modelling. Can somebody help explain the reason behind this, and whether there is any resource with code that converts audio to RGB pictures for ResNet ingestion?
import torch
import torchaudio
import matplotlib.pyplot as plt

def normalize_input(tensor):
    # Subtract the mean, and scale to the interval [-1,1]
    tensor_minusmean = tensor - tensor.mean()
    return tensor_minusmean / tensor_minusmean.abs().max()

filename = "/home/ec2-user/SageMaker/GiuseppeProjects/GiuseppeTest.mp3"
waveform, sample_rate = torchaudio.load(filename)
#waveform = np.delete(waveform, (1), axis=0)
waveform = normalize_input(waveform)
print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))
#print(waveform)
plt.figure()
plt.plot(waveform.t().numpy())
plt.show()

import sys
from torchvision import transforms
import torchvision
from skimage.util import img_as_ubyte
from skimage import exposure
from sklearn import preprocessing
from PIL import Image
import numpy as np

transform_spectra = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.RandomVerticalFlip(1)
])

def make0min(tensornd):
    tensor = tensornd.numpy()
    res = np.where(tensor == 0, 1E-19, tensor)
    return torch.from_numpy(res)

def normalize(tensor):
    tensor_minusmean = tensor - tensor.mean()
    return tensor_minusmean / tensor_minusmean.abs().max()

def normalize_nd(tensor):
    tensor_minusmean = tensor - tensor.mean()
    return tensor_minusmean / np.absolute(tensor_minusmean).max()

def spectrogrameToImage(waveform):
    specgram = torchaudio.transforms.Spectrogram(n_fft=400, win_length=None, hop_length=None, pad=0,
                                                 window_fn=torch.hann_window, power=2, normalized=True,
                                                 wkwargs=None)(waveform)
    specgram = make0min(specgram)
    specgram = specgram.log2()[0, :, :].numpy()
    np.set_printoptions(linewidth=300)
    np.set_printoptions(threshold=sys.maxsize)
    specgram = normalize_nd(specgram)
    specgram = img_as_ubyte(specgram)
    specgramImage = transform_spectra(specgram)
    return specgramImage

def print_spec(spec):
    torch.set_printoptions(linewidth=150)
    torch.set_printoptions(profile="full")
    #spec = torchvision.transforms.ToTensor()(spec)
    spec = np.array(spec)
    #print (spec)

waveform = normalize(waveform)
spec = spectrogrameToImage(waveform)
spec = spec.convert('RGB')

plt.figure()
plt.imshow(spec) |
st183034 | I seem to have achieved better results by eliminating the first 10 data points of the spectrogram. It looks like there is a clear difference in the spectrogram for the first data points which causes this problem when normalizing the picture.
I assume this is something to do with the difference between MP3 and WAV rather than anything to do with the Spectrogram? |
st183035 | Have you printed a sample from each waveform? Printing some statistics like mean, std dev, range, of each, and some information about the difference would also help. I expect the difference to come from the waveform and not the spectrogram. |
st183036 | Hello,
I followed the tutorial at https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html 2
and applied log2 to the spectrogram. However, for one of my mp3 files the Spectrogram returns values equal to 0, which makes log2 return -inf.
Is this expected? Or is there something that can be done to avoid this scenario?
Thanks. |
st183037 | Solved by vincentqb in post #2 |
st183038 | This is expected. Do you need to apply the log transformation? If so, you can also add a little value before taking log:
epsilon = 1e-6
log2(epsilon + specgram) |
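In context, that would look roughly like this (a hedged sketch; the Spectrogram settings are whatever the tutorial used, and torch/torchaudio are assumed to be imported):

specgram = torchaudio.transforms.Spectrogram()(waveform)
log_specgram = torch.log2(specgram + 1e-6)   # the small epsilon avoids -inf where the spectrogram is exactly 0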
st183039 | vincentqb:
epsilon = 1e-6
Thank you, will need to do something like this.
The odd problem I have is the following:
I am converting / comparing two sound files: one WAV and the other the MP3 of the same recording. I then create the spectrogram and convert it to an image.
I get the issue with the MP3 version of the file, while everything is ok for the WAV (I noticed this consistently with other audios).
The other key difference is that, because the first data points of each row are very small compared to the others, the image is very faint when I normalize the pictures to RGB (0-255), while the pictures generated from a WAV file look good. I normalize the waveform, the spectrum and the image.
Are there techniques to avoid this problem? Any particular reason why the MP3 version behaves differently?
Thanks, |
st183040 | Can you provide a minimal code and files to reproduce? Is the mp3 converted from the wav or vice-versa? |
st183041 | Hi vincentqb,
I created a new topic as the problem is now different to the question on this topic spectrogram-to-rgb-pictures-for-resnet-faint-image-with-mp3-files 8 |
st183042 | I am trying to install the latest torchaudio version from this link 18 using the following pip command:
pip install torchaudio_nightly -f https://download.pytorch.org/whl/nightly/torch_nightly.html
This downloads and installs torchaudio version 0.4.0.dev20190801. However, on the given link there are multiple wheels available for a torchaudio 0.4.0 dev version with a later date (all the way up to 20200101). How can I install one of the later versions (preferably the latest one) using pip? |
st183043 | Do you have some dependency that fix your torchaudio version to this one? like torch_nightly? |
st183044 | No, when using a completely empty environment and installing torchaudio_nightly using the command above it the latest version pip downloads is 0.4.0-dev20190801, see this Colab notebook 8. There clearly are torchaudio nightly versions available from a newer data as can be seen on the html nightly link. Conda has a newer version available from this link 7 (v0.4.0.dev20200102). |
st183045 | I can't import the torchaudio module in Python 3 (base conda). Does it support Windows? If it does, how do I use it? I am new to PyTorch. |
st183046 | Thanks for asking! We don’t officially offer binaries yet for torchaudio in windows, see open issue 148. You can try compiling from source using the instructions here 237. Please do post about issues you may run into while doing so |
st183047 | Hi,
I’m using torchaudio v0.3.1 and I’m getting this error while I’m running torchaudio official example in google colab,
module 'torchaudio.transforms' has no attribute 'DownmixMono'
https://pytorch.org/tutorials/beginner/audio_classifier_tutorial.html?highlight=audio 20
https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/audio_classifier_tutorial.ipynb 15 |
st183048 | Solved by vincentqb in post #2
DownmixMono has been deprecated, see breaking changes. |
st183049 | If you are open to downgrading, this works for me.
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 torchaudio=0.2.0 -c pytorch |
st183050 | I have an error when trying to import torchaudio:
>>> import torchaudio
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-4cf0a64f61c0> in <module>
----> 1 import torchaudio
/opt/conda/lib/python3.6/site-packages/torchaudio-0.4.0a0+256458f-py3.6-linux-x86_64.egg/torchaudio/__init__.py in <module>
5 import _torch_sox
6
----> 7 from torchaudio import transforms, datasets, kaldi_io, sox_effects, compliance, _docs
8
9 try:
/opt/conda/lib/python3.6/site-packages/torchaudio-0.4.0a0+256458f-py3.6-linux-x86_64.egg/torchaudio/transforms.py in <module>
4 import torch
5 from typing import Optional
----> 6 from . import functional as F
7 from .compliance import kaldi
8
/opt/conda/lib/python3.6/site-packages/torchaudio-0.4.0a0+256458f-py3.6-linux-x86_64.egg/torchaudio/functional.py in <module>
637
638 @torch.jit.script
--> 639 def highpass_biquad(waveform, sample_rate, cutoff_freq, Q=0.707):
640 # type: (Tensor, int, float, float) -> Tensor
641 r"""Designs biquad highpass filter and performs filtering. Similar to SoX implementation.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1209 if _rcb is None:
1210 _rcb = _gen_rcb(obj, _frames_up)
-> 1211 fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
1212 # Forward docstrings
1213 fn.__doc__ = obj.__doc__
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:
aten::div.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'float'.
aten::div.Tensor(Tensor self, Tensor other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'float'.
aten::div.Scalar(Tensor self, Scalar other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'float'.
aten::div(int a, int b) -> (float):
Expected a value of type 'int' for argument 'a' but instead found type 'float'.
aten::div(float a, float b) -> (float):
Expected a value of type 'float' for argument 'b' but instead found type 'int'.
div(float a, Tensor b) -> (Tensor):
Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.
div(int a, Tensor b) -> (Tensor):
Expected a value of type 'int' for argument 'a' but instead found type 'float'.
The original call is:
at /opt/conda/lib/python3.6/site-packages/torchaudio-0.4.0a0+256458f-py3.6-linux-x86_64.egg/torchaudio/functional.py:654:9
sample_rate (int): sampling rate of the waveform, e.g. 44100 (Hz)
cutoff_freq (float): filter cutoff frequency
Q (float): https://en.wikipedia.org/wiki/Q_factor
Returns:
output_waveform (torch.Tensor): Dimension of `(channel, time)`
"""
GAIN = 1.
w0 = 2 * math.pi * cutoff_freq / sample_rate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
A = math.exp(GAIN / 40.0 * math.log(10))
alpha = math.sin(w0) / 2. / Q
mult = _dB2Linear(max(GAIN, 0))
b0 = (1 + math.cos(w0)) / 2
b1 = -1 - math.cos(w0)
b2 = b0
a0 = 1 + alpha
a1 = -2 * math.cos(w0)
My setup: Ubuntu running from Docker, pytorch 1.3.0a0+24ae9b5, torchaudio 0.4.0a0+256458f |
st183051 | Solved by vincentqb in post #2
Since you are compiling from source, please make sure you have the latest version (of pytorch and torchaudio) from master. This may have been fixed by https://github.com/pytorch/audio/pull/326.
Note to self: This should not be related to https://github.com/pytorch/audio/pull/339. |
st183052 | Since you are compiling from source, please make sure you have the latest version (of pytorch and torchaudio) from master. This may have been fixed by https://github.com/pytorch/audio/pull/326 18.
Note to self: This should not be related to https://github.com/pytorch/audio/pull/339 11. |
st183053 | Hi! I'm a beginner with PyTorch!
I'm trying to use packed padded sequences with torch.nn.LSTMCell,
but there are no tutorials about feeding packed padded sequences to torch.nn.LSTMCell (not torch.nn.LSTM).
Is there any way to use packed padded sequences with torch.nn.LSTMCell? |
st183054 | In Anaconda Python 3.6.7 with PyTorch installed, on Windows 10, I do this sequence:
conda install -c conda-forge librosa
conda install -c groakat sox
then in a fresh download from https://github.com/pytorch/audio 39 I do
python setup.py install
and it runs for a while and ends like this:
torchaudio/torch_sox.cpp(3): fatal error C1083: Cannot open include file: 'sox.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.15.26726\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
I think I’m almost there and this is doable. Please help!
Note this is also an open GitHub issue of long standing: https://github.com/pytorch/audio/issues/50 19 |
st183055 | Not a windows user. But I think the problem is with the environment setup. It is not able to locate the sox.h file, which is installed in your conda environment. You are facing this problem as you are currently not in the folder in which you installed conda. Try moving to the folder of your environment and run the command again. |
st183056 | Kushaj, I also cross-posted on StackOverflow and I got an answer over there which looks like a firm and definitive negative for Windows: https://stackoverflow.com/questions/54872876/how-to-install-torch-audio-on-windows-10-conda 195 |
st183057 | Can you do a check for me. Open terminal and import numpy or any conda package. Tell me if it works or not. |
st183058 | It works fine:
>python
Python 3.6.7 (default, Feb 24 2019, 05:34:16) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
This is an open, logged issue for Windows. I’m hoping a PyTorch guru will take pity on Windows folks and fix it. |
st183059 | I checked the problem in some detail, I think for now you will not be able to do so on Windows. But if you want a trick, you can run an Ubuntu VM with the libraries installed and try to communicate between Windows and the VM. |
st183060 | Other people have suggested WSL Linux on Windows but there are still problems accessing GPU with that: http://www.erogol.com/using-windows-wsl-for-deep-learning-development/ 8
For my purposes (training speech recognition in Swahili), I am giving up on PyTorch in favor of this Tensorflow-based project done for Udacity which has no missing pieces on Windows: https://github.com/simoninithomas/DNN-Speech-Recognizer 13 |
st183061 | For the record, I was trying to reproduce this OpenNMT-py speech training demo on Windows: http://opennmt.net/OpenNMT-py/speech2text.html 12 |
st183062 | Hi,
I managed to compile torchaudio with sox on Windows 10, but it is a bit tricky.
Unfortunately the sox_effects are not usable, this error shows up:
RuntimeError: Error opening output memstream/temporary file
But you can use the other torchaudio functionalities.
The steps I followed for Windows 10 64bit are:
#############################
TORCHAUDIO WINDOWS10 64bit
#############################
Note: I mix some command lines unix-like syntax, you can use file explorer or whatever
preliminar arrangements
Download sox sources
$ git clone git://git.code.sf.net/p/sox/code sox
Download other sox source to get lpc10
$ git clone https://github.com/chirlu/sox/tree/master/lpc10 sox2
$ cp -R sox2/lpc10 sox
IMPORTANT get VisualStudio2019 and BuildTools installed
lpc10 lib
4.0. Create a VisualStudio CMake project for lpc10 and build it
Start window -> open local folder -> sox/lpc10
(it reads CMakeLists.txt automatically)
Build->build All
4.2. Copy lpc10.lib to sox
$ mkdir -p sox/src/out/build/x64-Debug
$ cp sox/lpc10/out/build/x64-Debug/lpc10.lib sox/src/out/build/x64-Debug
gsm lib
5.0. Create a CMake project for libgsm and compile it as before with lpc10
5.1. Copy gsm.lib to sox
$ mkdir -p sox/src/out/build/x64-Debug
$ cp sox/libgsm/out/build/x64-Debug/gsm.lib sox/src/out/build/x64-Debug
sox lib
6.0. Create a CMake project for sox in VS
6.1. Edit some files:
CMakeLists.txt: (add at the very beginning)
project(sox)
sox_i.h: (add under stdlib.h include line)
#include <wchar.h> /* For off_t not found in stdio.h */
#define UINT16_MAX ((int16_t)-1)
#define INT32_MAX ((int32_t)-1)
sox.c: (add under time.h include line)
`#include <sys/timeb.h>`
6.2. Build sox with VisualStudio
6.3. Copy the libraries where python will find them, I use a conda environment:
$ cp sox/src/out/build/x64-Debug/libsox.lib envs\<envname>\libs\sox.lib
$ cp sox/src/out/build/x64-Debug/gsm.lib envs\<envname>\libs
$ cp sox/src/out/build/x64-Debug/lpc10.lib envs\<envname>\libs
torchaudio
$ activate <envname>
7.0. Download torchaudio from github
$ git clone https://github.com/pytorch/audio thaudio
7.1. Update setup.py, after the “else:” statement of “if IS_WHEEL…”
$ vi thaudio/setup.py
#if IS_WHEEL…
else:
audio_path = os.path.dirname(os.path.abspath(__file__))
# Add include path for sox.h, I tried both with the same outcome
include_dirs += [os.path.join(audio_path, '../sox/src')]
#include_dirs += [os.path.join(audio_path, 'torchaudio/sox')]
# Add more libraries
#libraries += ['sox']
libraries += ['sox','gsm','lpc10']
7.2. Edit sox.cpp from torchaudio because dynamic arrays are not allowed:
$ vi thaudio/torchaudio/torch_sox.cpp
//char* sox_args[max_num_eopts];
char* sox_args[20]; //Value of MAX_EFFECT_OPTS
7.3. Build and install
$ cd thaudio
$ python setup.py install
It will print out tons of warnings about type conversion and some library conflict with MSVCRTD but “works”.
And thats all. |
st183063 | What are channels in MFCC? When I use transforms.MFCC I get the output as [2, n_mfcc, time]. My question is: what are the channels, is the number of channels consistent for similar types of audio (basically within a dataset), and how do I use the channels in models: concatenate the MFCCs end-to-end? |
st183064 | Solved by vincentqb in post #2 |
st183065 | The conventions we use for dimensions are given in the README 3. In particular, a waveform is (channel, time), and MFCC : (channel, time) -> (channel, mfcc, time), and so MFCC is applied per channel. Your original waveform must therefore have had 2 channels.
In the datasets we provide, the number of channels are the same across all waveforms.
Since the output of MFCC is just a tensor, you can use torch.cat to concatenate two MFCCs along a given axis.
Is that what you were asking? |
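For instance, a minimal sketch (the sample rate, n_mfcc and shapes are illustrative, not from the thread; torch and torchaudio are assumed to be imported):

mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=20)(waveform)  # [2, 20, time] for stereo input
stacked = torch.cat([mfcc[0], mfcc[1]], dim=0)                             # both channels end-to-end -> [40, time]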
st183066 | It's pretty much what I asked, thank you. I still want to know what the channels actually are. And if you use log2 with MFCC, which I think is used, how do you handle the NaN values? Currently I have replaced them with 0s. |
st183067 | Hi everyone, recently I started to read about Siamese Nets and I wanted to try this type of model on a gender recognition task. My data is a .csv dataset containing ~3000 n-vectors of audio features (n=20). The loss function I’m using is the Contrastive Loss 9. Here is my model:
class ContrastiveLoss(torch.nn.Module):
    """
    Contrastive loss function.
    Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    """
    def __init__(self, margin=1.0):
        super(ContrastiveLoss, self).__init__()
        self.margin = margin

    def forward(self, output1, output2, label):
        euclidean_distance = F.pairwise_distance(output1, output2)
        loss_contrastive = torch.mean((1 - label) * torch.pow(euclidean_distance.double(), 2) +
                                      (label) * torch.pow(torch.clamp(self.margin - euclidean_distance.double(), min=0.0), 2))
        return loss_contrastive

class SiameseMLP(nn.Module):
    def __init__(self):
        super(SiameseMLP, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(20, 256),
            nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(256, 2)
        )

    def forward_once(self, x):
        x = x.view(-1, 20)
        x = self.layers(x)
        return x

    def forward(self, x_1, x_2):
        y_1 = self.forward_once(x_1)
        y_2 = self.forward_once(x_2)
        return y_1, y_2
I started to do some tests and for now I can't reach a good value for the training loss; it stays around ~0.33. When I calculate the dissimilarity between pairs, I get random values, so I think the model is not really learning. Do you have any suggestions? Thanks. |
st183068 | I am not sure of the exact reason.
Below are some points that I could think of:
You do not need to square the euclidean distance, as pairwise distance already gives that.
dengio:
torch.pow(euclidean_distance.double(), 2)
Check if your dataset is balanced/imbalanced.
I see that you have used only 2-dimensions in the final layer. Try using more (64, 128, 256 etc.,)
I am not sure if reducing 20 dimensions to 2 dimensions is not generalizing well.
dengio:
nn.Linear(256, 2)
Try normalizing the final descriptor (F.normalize())
Play-around with the margin parameter of siamese (contrastive) loss.
Also, try triplet loss. |
st183069 | Hey there,
I can’t figure out how to run the MelSpectrogram (from torchaudio) on batches of data.
I checked the torchaudio docs:
https://pytorch.org/audio/_modules/torchaudio/transforms.html#MelSpectrogram 2
and the waveform needs to be in this format: (channel, time)
which makes sense for one wave, but what if we have batches of data?
Thanks! |
st183070 | This transformation might only work on a single sample (like torchvision.transforms), as it’s usual use case would probably be to apply it in the __getitem__ method on each data sample. |
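A hedged sketch of that per-sample usage, with a hypothetical file-list dataset (the file list and sample rate are illustrative):

import torchaudio
from torch.utils.data import Dataset

class AudioDataset(Dataset):
    def __init__(self, files):
        self.files = files
        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000)

    def __getitem__(self, idx):
        waveform, sample_rate = torchaudio.load(self.files[idx])   # [channel, time]
        return self.mel(waveform)                                  # [channel, n_mels, time]

    def __len__(self):
        return len(self.files)

The DataLoader then batches the already-transformed spectrograms, so the transform itself never sees a batch dimension.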
st183071 | Hello!
I am trying to go through VCTK dataset in this way:
train_set = datasets.VCTK(root='processed/training.pt', download=True, transform=transforms.PadTrim(max_len=30000))
training_data_loader = DataLoader(dataset=train_set, num_workers=opt.nThreads, batch_size=opt.batchSize, shuffle=True)
for batch_idx, batch in enumerate(training_data_loader):
    print(batch_idx)
    ........
However, it prints only 0 and shows the following error:
0
Traceback (most recent call last):
File "main_audio.py", line 153, in <module>
for batch_idx, batch in enumerate(training_data_loader):
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 281, in __next__
return self._process_next_batch(batch)
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
IndexError: Traceback (most recent call last):
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "build/bdist.linux-x86_64/egg/torchaudio/datasets/vctk.py", line 126, in __getitem__
audio, target = self.data[index], self.labels[index]
IndexError: tuple index out of range
How can I solve this problem? |
st183072 | I assume you are using the torchaudio library. The root should be a folder not a file. Otherwise, you’ll have to make a custom torch.utils.data.Dataset. Also, did you check to see that it actually downloads the files? The VCTK dataset is very large and takes a long time to download. You should probably just run the first line in a REPL and see if the dataset gets downloaded correctly. |
st183073 | Thanks for your reply! I fixed root and all data was downloaded. However, there is still a problem.
train_set = datasets.VCTK(root='.', download=True, transform=transforms.PadTrim(max_len=30000))
training_data_loader = DataLoader(dataset=train_set, num_workers=opt.nThreads, batch_size=2, shuffle=True)
for batch_idx, batch in enumerate(training_data_loader, 0):
    print(batch_idx)
The length of the training set is 44257. When I run the code it prints integers from 1 to 20 (it is supposed to print up to 22129) and shows a similar error.
for batch_idx, batch in enumerate(training_data_loader, 0):
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 281, in __next__
return self._process_next_batch(batch)
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
IndexError: Traceback (most recent call last):
File "/mnt/home/20140941/.conda/envs/opt_anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "build/bdist.linux-x86_64/egg/torchaudio/datasets/vctk.py", line 126, in __getitem__
audio, target = self.data[index], self.labels[index]
IndexError: tuple index out of range
Please, help! |
st183074 | Did you ever get an answer for this? I’m running into the exact same issue now…
It seems to work as expected when I just iterate through the dataset directly though (rather than using the loader). |
st183075 | Hi,
Is there any plan to provide torchaudio with a new feature of calculating the LPC analysis parameters for speech signals? Or at least converting MFCC to LPC? |
st183076 | I installed torchaudio according to the instruction by https://github.com/pytorch/audio 37. Nevertheless, when I tried to import torchaudio, the following error message popped up:
Traceback (most recent call last):
File “test.py”, line 1, in
import torchaudio
File “/Users/q7/Desktop/Wavenet/audio-master/torchaudio/init.py”, line 7, in
from torchaudio import transforms, datasets, kaldi_io, sox_effects, legacy, compliance
File “/Users/q7/Desktop/Wavenet/audio-master/torchaudio/transforms.py”, line 6, in
from . import functional as F
File “/Users/q7/Desktop/Wavenet/audio-master/torchaudio/functional.py”, line 108, in
@torch.jit.script
File “/anaconda3/lib/python3.7/site-packages/torch/jit/init.py”, line 824, in script
fn = torch._C._jit_script_compile(ast, _rcb, get_default_args(obj))
File “/anaconda3/lib/python3.7/site-packages/torch/jit/annotations.py”, line 55, in get_signature
return parse_type_line(type_line)
File “/anaconda3/lib/python3.7/site-packages/torch/jit/annotations.py”, line 97, in parse_type_line
raise RuntimeError(“Failed to parse the argument list of a type annotation: {}”.format(str(e)))
RuntimeError: Failed to parse the argument list of a type annotation: name ‘Optional’ is not defined
Can someone propose a solution? Thanks a lot. |
st183077 | There was a bad commit in torchaudio. Solution: use an old version with PyTorch 1.0.
pip3 install git+https://github.com/pytorch/audio@d92de5b |
st183078 | Maksim_Pershin:
pip3 install git+https://github.com/pytorch/audio@d92de5b
Any other alternative?
This fails on current version
transform = torchaudio.transforms.DownmixMono(channels_first=True)
__init__() got an unexpected keyword argument 'channels_first'
The __init__ method just contains a pass statement. |
st183079 | I also have the same issue.
edit:
Used Maksim_Pershin’s old version successfully for now. |
st183080 | Can you try this?
import sys
sys.version
'3.6.8 (default, Jan 14 2019, 11:02:34) \n[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]'
!cat /usr/local/cuda/version.txt
CUDA Version 10.0.130
# Install dependencies
!apt-get install sox libsox-dev libsox-fmt-all
Go to the following link and check for links corresponding to your python and CUDA versions 16
!pip3 install https://download.pytorch.org/whl/cu100/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
!pip3 install https://download.pytorch.org/whl/cu100/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl
# install torchaudio
!pip install git+https://github.com/pytorch/[email protected]
# Run torchaudio
import torchaudio
print(torchaudio.__version__) # prints 0.2.0a0+7d7342f |
st183081 | Screen Shot 2019-06-19 at 3.39.31 PM.png2296×946 260 KB
Hi, I tried to install torchaudio by using the command “conda install -c derickl torchaudio” from https://anaconda.org/derickl/torchaudio 11, but when I tried to import torchaudio, the error message in the screenshot popped up. Can someone help to solve this? Thanks in advance! |
st183082 | I ran into an issue like that before. I think upgrading the pytorch version helped but can’t remember exactly |
st183083 | The error message indicates a mismatch between the PyTorch version expected by torchaudio and the one installed.
If possible, following the instructions on the web page would likely work best: https://github.com/pytorch/audio/#dependencies 144
Generally, compiling the auxiliary libraries (torchvision, torchaudio) is much easier and faster than it is PyTorch itself, but I don’t know about mac specifics.
Best regards
Thomas |
st183084 | it looks like the package is unofficial one and they did not declare minimum and maximum pytorch version properly.
@jamarshon is soon uploading official torchaudio packages to conda channel |
st183085 | Does rnn = nn.LSTM(input_size, hidden_size) mean that it has hidden_size nn.LSTMCell() cells inside? |
st183086 | No, hidden_size is the hidden dimension of the LSTM cell.
LSTM is functionally equivalent to writing a loop over your input and feeding it to LSTMCell (modulo the API differences). LSTM exists because the cuDNN library provides a more efficient implementation when you know the full input at the start of the computation. Also, LSTM is more efficient when you are using more than one layer of LSTM cells. |
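A hedged sketch of that equivalence for a single layer (the sizes and the input tensor x are placeholders; nn.LSTM additionally handles multiple layers, bidirectionality and cuDNN fusion):

import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 5, 3, 10, 20
x = torch.randn(seq_len, batch, input_size)

cell = nn.LSTMCell(input_size, hidden_size)
h = torch.zeros(batch, hidden_size)
c = torch.zeros(batch, hidden_size)

outputs = []
for t in range(seq_len):          # the explicit loop that nn.LSTM runs for you
    h, c = cell(x[t], (h, c))
    outputs.append(h)
out = torch.stack(outputs)        # [seq_len, batch, hidden_size], like nn.LSTM's output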
st183087 | @jlquinn
Thanks for replying.
So does it mean it’s just one LSTMCell? (in this case) |
st183088 | Hello,
I am getting confused when I use torchaudio.transforms.DownmixMono.
First, I load my data with sound = torchaudio.load(). sound[0] is two-channel data with torch.Size([2, 132300]) and sound[1] = 22050, which is the sample rate. Then I use soundData = torchaudio.transforms.DownmixMono(sound[0]) to downmix. But the result looks weird, with torch.Size([2, 1]). If I understand it correctly, I should get soundData with only one channel? What's wrong here?
I checked the documentation as well; the input format should be: tensor (Tensor): Tensor of audio of size (c x n) or (n x c). What does (c x n) mean? |
st183089 | It looks like you are passing your data as [channels, length], so you should pass channels_first=True. While the docs says:
channels_first (bool): Downmix across channels dimension. Default: True
The default seems to be in fact None, which results in dim1 as the default channel dimension:
channels_first = None
ch_dim = int(not channels_first)
print(ch_dim)
> 1
c x n should correspond to [channels, length].
Thanks for reporting this problem! I’ve created an issue here 14. |
st183090 | Hi @ptrblck,
How should I pass the channels_first keyword to the DownmixMono function?
I get the following error.
torchaudio.transforms.DownmixMono()(sound[0],channels_first = True)
__init__() got an unexpected keyword argument 'channels_first'
However this works but I get wrong dimensions as mentioned by OP:
torchaudio.transforms.DownmixMono()(sound[0]) |
st183091 | You should pass it while initializing the transformation:
transform = torchaudio.transforms.DownmixMono(channels_first=True) |
st183092 | @ptrblck Thanks for the reply;
I tried that too… but can't get it to work:
transform = torchaudio.transforms.DownmixMono(channels_first=True)
__init__() got an unexpected keyword argument 'channels_first'
Just to add; I downloaded torchaudio as follows:
!git clone https://github.com/pytorch/audio.git
os.chdir("audio")
!git checkout 301e2e9
!python setup.py install |
st183093 | This argument was introduced after your specified commit hash.
If you look at the file, you’ll see that the __init__ method just contains a pass statement. |
st183094 | @ptrblck Tried all possible hashes but didn’t succeed.
Could you guide to a stable version/hash? I am on Google Colab. |
st183095 | No,
Here’s what I tried:
!apt-get install sox libsox-dev libsox-fmt-all
!git clone https://github.com/pytorch/audio.git
import os
os.chdir("audio")
!git checkout 5c9d33d #301e2e9 d92de5b
!python setup.py install
import torchaudio
Gives me following error:
RuntimeError: Failed to parse the argument list of a type annotation: name 'Optional' is not defined |
st183096 | I just installed the current master without a problem and it seems your error might be related to this issue 29. |
st183097 | Hi! I was working on an ASR model in TensorFlow Keras, but now I want to switch to PyTorch. I'm trying to reimplement the Keras model in PyTorch, but I think I made a mistake, because the same model on the same data does not learn in PyTorch.
Here is a full Jupyter notebook of my problem: notebook on github
As You can see, the TF model overfits the random data, as expected, but the pytorch model does not learn anything.
I’m using pytorch 1.1.0 with CUDA, and Tensorflow 2.0.0-beta1 |
st183098 | Solved by a3VonG in post #5
Could be that I missed it but it seems like a possible reason is that you forgot to zero the gradients before/after running a batch. You only seem to do it at the start. Try adding the following INSIDE your training loop:
optimizer.zero_grad()
Does this solve your issue?
See here for an example o… |
st183099 | Initialization of weights will matter here.
For the nn.Linear layers that you created, initialize their weights to something other than the default initialization and see if it makes a difference. You can use https://pytorch.org/docs/stable/nn.html#torch-nn-init for convenience to try different initializations. |
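A minimal sketch of what that could look like, assuming model is the PyTorch module from the notebook (the choice of Xavier uniform here is just one option from torch.nn.init):

import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model.apply(init_weights)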