st182500 | But when constructing audio from data like a spectrogram, I face the problem of output size. I have audio data of different sizes, so the input and output (which will be a fully connected layer) will be of different sizes. How do I deal with that?
Also, why is it so hard to reconstruct raw audio data? |
st182501 | I think you should look into how to generate variable-length speech data. Look into Tacotron and the other ideas there; there is VAE-Tail 2, which gives an idea of how to do it with just a VAE.
Also, why is it so hard to reconstruct raw audio data?
Audio has a lot of information in other frequency domains. These features are not captured directly in the audio waveform; you need better feature representations for it. |
st182502 | So then how is Jukebox able to work so well? I don’t think it uses any frequency-domain data for audio reconstruction. |
st182503 | I am not familiar with it, but my high-level understanding from what I see is that they do use some other feature representation for reconstruction; for example, they used Transformers for text representations and conditioned the synthesis on that as well. |
st182504 | So this is a good explanation of Jukebox, and you can see in the architecture that they are not using any spectral information to recreate the original signal. |
st182505 | I don’t know! If it is still of interest to you, another major challenge that I forgot to mention regarding why people often avoid synthesising waveforms is the sampling rate of the raw audio waveform. Generally it is >= 16000 Hz, which means that for one second of audio you need to synthesise 16000 data points, which, unless done by some parallel process, is a lot of data points to synthesise. Therefore people often synthesise spectrograms instead and then transform them into waveforms with the help of vocoders. |
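A minimal sketch of that spectrogram-plus-vocoder route, using torchaudio's Griffin-Lim transform as a stand-in vocoder (the sample rate, n_fft and the random waveform below are illustrative assumptions, not values from the thread):
import torch
import torchaudio.transforms as T

sample_rate = 16000
waveform = torch.randn(1, sample_rate)                        # placeholder for a real 1-second clip

n_fft = 400
spectrogram = T.Spectrogram(n_fft=n_fft, power=2)(waveform)   # magnitude-squared spectrogram
griffin_lim = T.GriffinLim(n_fft=n_fft, power=2)              # iterative phase estimation as a simple "vocoder"
reconstructed = griffin_lim(spectrogram)
print(spectrogram.shape, reconstructed.shape)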
st182506 | Posting for the first time, please tell me if I made a mistake.
Hi, I am working on a speech enhancement problem, with a STFT → modification in the frequency domain → iSTFT workflow.
My problem is, I have only managed to reconstruct the full signal that I passed into torch.stft when using the center=True option. Using the librosa implementation it seems like it could work. Maybe I am missing something about how I can achieve the same behavior in pytorch.
The following is a dummy setup to transform a sine wave and transform it back.
import torch
import matplotlib.pyplot as plt
import numpy as np
import librosa
n_fft = 32
# Example signal:
signal = (torch.linspace(0, 2*n_fft, 2*n_fft) * 2 * np.pi).sin()
# Parameters
# stft parameters with and without center
centered = {
'n_fft': n_fft,
'hop_length': n_fft // 2,
'win_length': n_fft,
'window': torch.hann_window(n_fft),
'center': True,
'return_complex': True,
}
uncentered = centered.copy()
uncentered['center'] = False
# parameters for librosa stft for comparison
lr_centered = {
'n_fft': n_fft,
'hop_length': n_fft // 2,
'win_length': n_fft,
'window': 'hann',
'center': True,
}
lr_uncentered = lr_centered.copy()
lr_uncentered['center'] = False
# parameters for the istft
i_centered = {
'n_fft': n_fft,
'hop_length': n_fft // 2,
'win_length': n_fft,
'window': torch.hann_window(n_fft),
'center': True,
'return_complex': False,
# 'length': len(signal),
}
i_uncentered = i_centered.copy()
i_uncentered['center'] = False
i_lr_centered = {
'hop_length': n_fft // 2,
'win_length': n_fft,
'window': 'hann',
'center': True,
}
i_lr_uncentered = i_lr_centered.copy()
i_lr_uncentered['center'] = False
# stfts
stft_centered = signal.stft(**centered)
stft_uncentered = signal.stft(**uncentered)
stft_lr_centered = librosa.stft(signal.numpy(), **lr_centered)
stft_lr_uncentered = librosa.stft(signal.numpy(), **lr_uncentered)
# istfts
i_stft_centered = stft_centered.istft(**i_centered)
# i_stft_uncentered = stft_uncentered.istft(**i_uncentered) # ! this causes an error!
# RuntimeError: istft(CPUComplexFloatType[17, 3], n_fft=32, hop_length=16, win_length=32,
# window=torch.FloatTensor{[32]}, center=0, normalized=0, onesided=None, length=None,
# return_complex=0) window overlap add min: 0
i_stft_lr_centered = librosa.istft(stft_lr_centered, **i_lr_centered)
i_stft_lr_uncentered = librosa.istft(stft_lr_uncentered, **i_lr_uncentered)
# I used the centered parameters to see what happens:
i_stft_uncentered = stft_uncentered.istft(**i_centered)
With the code above, I get the following results:
The input Signal:
[Had to delete this picture, because new users can post only one]
The stfts:
[Had to delete this picture, because new users can post only one]
And the reconstructed signals:
As we can see, the librosa implementation can handle both centered and uncentered stfts with complete reconstruction, but the pytorch implementation fails at i_stft_uncentered = stft_uncentered.istft(**i_uncentered) (see above).
Is this a limitation of the pytorch implementation, or am I missing a way to get the same behavior in pytorch? |
st182507 | The STFTs:
I hope I am not breaking any rules by circumventing the 1 media element restriction like this |
st182508 | I am also curious about this:
Z = torch.stft(X, n_fft=512, center=False, window=torch.hann_window(512))
X2 = torch.istft(Z, n_fft=512, center=False, window=torch.hann_window(512))
gives:
~/opt/miniconda3/lib/python3.8/site-packages/torch/functional.py in istft(input, n_fft, hop_length, win_length, window, center, normalized, onesided, length, return_complex)
652 length=length, return_complex=return_complex)
653
--> 654 return _VF.istft(input, n_fft, hop_length, win_length, window, center, # type: ignore
655 normalized, onesided, length, return_complex)
656
RuntimeError: istft(torch.DoubleTensor[257, 686, 2], n_fft=512, hop_length=128, win_length=512, window=torch.FloatTensor{[512]}, center=0, normalized=0, onesided=None, length=None, return_complex=0) window overlap add min: 0 |
st182509 | I’m experiencing the same behavior.
I’m providing a workaround at pytorch/pytorch#62323; it consists of using only the non-zero portion of the window.
Surprisingly enough, torch and scipy windows are constructed differently, which makes any comparison between torch.stft and librosa.stft even more difficult. |
st182510 | I have saved a model as TorchScript to calculate mel spectrograms. Saving the model and using it in Python gives no error and works as expected. However, when I try to use the model in C++ with libtorch, I receive the following error, which is unintuitive to me. I would be happy if someone could explain what is wrong and how to fix it. Thanks
RuntimeError: stft(CUDAFloatType[3708000, 1], n_fft=400, hop_length=80, win_length=400, window=CUDAFloatType{[400]}, normalized=0, onesided=1, return_complex=1) : expected 0 < n_fft < 1, but got n_fft=400
] |
st182511 | Solved by tom in post #2 |
st182512 | It would appear that the input might not have the right shape (should it be 1x3708000 or even 1x1x3708000 or something like this? If so, you need to use torch::Tensor::permute or somesuch).
For me, making a habit of checking the input shapes and possibly a small portion of the tensor (say slicing the first 5 elements in each dimension) has reduced a great deal of confusion with the outputs when moving between Python and C++.
Best regards
Thomas |
st182513 | torchaudio: how to frame and append long wav files using different durations?
e.g., I have 40 GB of 100,000 wav files (5 labels) with different lengths.
I wish to use frame durations of 2, 3 and 4 among these labels and append the frames (I do not want to miss any file).
How can I append them without saving >> 100,000 files to my disk? |
st182514 | Hey, if by framing you mean slicing audio into a series of optionally overlapping frames, see librosa.util.frame() or soundfile.blocks(). |
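If you want to stay in torch tensors after torchaudio.load, a minimal sketch of the same idea with Tensor.unfold; the file path, the 2-second frame length and the no-overlap hop are placeholders, and the clip is assumed to be at least one frame long:
import torchaudio

waveform, sample_rate = torchaudio.load("example.wav")
frame_length = 2 * sample_rate          # e.g. 2-second frames
hop_length = frame_length               # no overlap; use a smaller hop for overlapping frames

frames = waveform.unfold(-1, frame_length, hop_length)   # (channels, n_frames, frame_length)
print(frames.shape)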
st182515 | Thanks @JamesWright. This sounds very clear to me now. I will try to use soundfile.blocks with torchaudio. |
st182516 | Hi,
I am converting Hubert model to onnx format with this script:
import torch
import torchaudio
import numpy as np
import soundfile as sf
import torch.nn.functional as F
import onnx
import onnxruntime
device="cpu"
# https://pytorch.org/audio/stable/pipelines.html#hubert-large
bundle = torchaudio.pipelines.HUBERT_LARGE
model = bundle.get_model().to(device)
audio_file = "sample.wav" # shape: torch.Size([1, 101467])
x, sr = sf.read(audio_file, dtype='float32')
x = torch.Tensor(x).unsqueeze(0).cpu()
x = F.layer_norm(x, x.shape)
model_path = "torchaudio_hubert_large.onnx"
torch.onnx.export(model, x, 'torchaudio_hubert_large.onnx', input_names=['input'], output_names=['output'])
model = onnx.load(model_path)
model.graph.input[0].type.tensor_type.shape.dim[1].dim_param = '?'
onnx.save(model, model_path.replace(".onnx", "_dyn.onnx"))
Then I am trying to infer a sample with this code:
model_path = "torchaudio_hubert_large_dyn.onnx"
ort_session = onnxruntime.InferenceSession(model_path)
feat = ort_session.run(None, {'input': x.numpy().astype(np.float32)})
In this step, this error occurs:
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
/tmp/ipykernel_13203/1463404001.py in <module>
1 print(x.shape)
----> 2 feat = ort_session.run(None, {'input': x.numpy().astype(np.float32)})[0]
3 print(feat.shape)
/path/to/miniconda3/envs/onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
190 output_names = [output.name for output in self._outputs_meta]
191 try:
--> 192 return self._sess.run(output_names, input_feed, run_options)
193 except C.EPFail as err:
194 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_208' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,316,1024}, requested shape:{1,49,16,64}
How can I solve this? |
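As an aside, torch.onnx.export also accepts a dynamic_axes argument for marking the time dimension as variable at export time, which is usually preferable to editing dim_param by hand. A hedged sketch reusing the model and x from the script above (whether this alone resolves the Reshape error for this particular model is not guaranteed):
torch.onnx.export(
    model,
    x,
    "torchaudio_hubert_large_dyn.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {1: "num_samples"}, "output": {1: "num_frames"}},
)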
st182517 | Hi @yunusemre, thanks for raising this issue. I have tried the code on my local machine and I can successfully run your script.
Could you post your onnx, torch, and Python versions as well? You can get the environment information with this script: https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
Thanks! |
st182518 | My environment is:
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 11.0
Python version: 3.9.2
Is CUDA available: True
CUDA runtime version: 9.1.85
Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] pytorch-lightning==1.1.6
[pip3] torch==1.7.1
[pip3] torch-lr-finder==0.2.1
[pip3] torch-tb-profiler==0.3.1
[pip3] torchaudio==0.7.0a0+a853dff
[pip3] torchmetrics==0.3.2
[pip3] torchvision==0.8.2
[conda] blas 1.0 mkl anaconda
[conda] cudatoolkit 11.0.3 h15472ef_8 conda-forge
[conda] mkl 2020.2 256 anaconda
[conda] numpy 1.21.0 pypi_0 pypi
[conda] pytorch 1.7.1 py3.9_cuda11.0.221_cudnn8.0.5_0 pytorch
[conda] pytorch-lightning 1.1.6 pypi_0 pypi
[conda] torch-lr-finder 0.2.1 pypi_0 pypi
[conda] torch-tb-profiler 0.3.1 pypi_0 pypi
[conda] torchaudio 0.7.2 py39 pytorch
[conda] torchmetrics 0.3.2 pypi_0 pypi
[conda] torchvision 0.8.2 py39_cu110 pytorch
onnx: ‘1.10.2’
onnxruntime: ‘1.10.0’ |
st182519 | Thanks. Looks like torchaudio.pipelines is introduced in 0.10.0 version. Could you update PyTorch and torchaudio and re-test the script? |
st182520 | Sorry for the late reply. I reported a different version of my environment earlier, sorry for that, but I have updated the torch packages to match torchaudio==0.10.1; however, nothing changed.
Now, my environment is :
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.17.4 pypi_0 pypi
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] pytorch 1.10.1 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.7.1 pypi_0 pypi
[conda] torchaudio 0.10.1 py38_cu113 pytorch
[conda] torchvision 0.11.2 py38_cu113 pytorch
Can you share your env info, so I can change packages one by one according to yours? |
st182521 | I’ve read the tutorial here, and tried to understand the sentence:
“using kaiser_window results in longer computation times than the default sinc_interpolation because it is more complex to compute the intermediate window values - a large GCD between the sample and resample rate will result in a simplification that allows for a smaller kernel and faster kernel computation.”
The results down there say that
“16 → 8 kHz” is slower than “48 → 44.1 kHz”. But I guess the GCD of 16 and 8 is 8 and that of 48 and 44.1 is around 4, so the GCD of 16 and 8 is larger and it should be faster according to the tutorial. Can anyone explain this? |
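For reference, the GCDs in question computed in Hz (rather than kHz) are shown below; 16000/8000 reduces to a ratio of 2/1 while 48000/44100 reduces to 160/147:
import math

print(math.gcd(16000, 8000))    # 8000
print(math.gcd(48000, 44100))   # 300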
st182522 | Hi
Is there an integrated solution to compose torchaudio transforms in the same style as torchvision do? |
st182523 | Solved by vincentqb in post #2 |
st182524 | torchaudio doesn’t provide a dedicated compose transformation since 0.3.0 (see the release notes). Instead, one can simply apply them one after the other, x = transform1(x); x = transform2(x), or use nn.Sequential(transform1, transform2). |
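A minimal sketch of the nn.Sequential approach; the transforms and parameter values are just illustrative:
import torch
import torch.nn as nn
import torchaudio.transforms as T

pipeline = nn.Sequential(
    T.MelSpectrogram(sample_rate=16000, n_fft=400, n_mels=64),
    T.AmplitudeToDB(),
)

waveform = torch.randn(1, 16000)    # stand-in for a loaded clip
mel_db = pipeline(waveform)
print(mel_db.shape)                 # torch.Size([1, 64, 81]) with these settings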
st182525 | Is there any specific reason why there is no compose transformation in torchaudio anymore? |
st182526 | Hi.
I have the following setup:
[49, 49] matrix, where each row is a probability vector (obtained from softmax over logits). Overall it has 49 probability vectors, each with 49 values.
[49, x, y] matrix, containing 49 spectrograms of size [x, y] each.
I am trying to obtain 49 different weighted spectrograms from the 49 probability vectors and the 49 spectrograms.
Output size shall be [49, x, y].
I tried my best to search the net, and tried many configurations of torch matmul, bmm, etc… |
st182527 | Noy_Uzrad:
Output size shall be [49, x, y].
If I understand correctly, since you want differently weighted spectrograms with each of the 49 probability vectors and 49 spectrograms, the output size should be [49, 49, x, y]. No?
i.e., In the spectrogram size [49, x, y], batch size = 49. The same batch size as in the probability matrix of [49, 49]. For a particular spectrogram, you have 49 probability values corresponding to different classes. |
st182528 | Ok I managed to figure this out
with the following:
torch.einsum("ik,klm->ilm", probabilities, spectrograms)
@InnovArul you understood me wrong. Each of the rows in the probabilities matrix is a probability vector with 49 values. I multiply it in an inner-product manner with the 49 spectrograms to get a weighted spectrogram of size [x, y]. Since the probability matrix has 49 rows, I get 49 weighted spectrograms overall, so the output size is [49, x, y].
I just discovered einsum and it is a SUPER powerful tool! I highly recommend any deep learning practitioner to use it |
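A self-contained sketch of that weighting, with made-up sizes for x and y:
import torch

probabilities = torch.softmax(torch.randn(49, 49), dim=1)   # 49 probability vectors of length 49
spectrograms = torch.randn(49, 80, 100)                      # 49 spectrograms of size [x, y]

weighted = torch.einsum("ik,klm->ilm", probabilities, spectrograms)
print(weighted.shape)   # torch.Size([49, 80, 100])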
st182529 | Hi,
I’ve found Spectrogram and MelSpectrogram in torchaudio.transforms, but there seems to be no implementation of log scale frequency spectrogram.
Why is that? |
st182530 | Hi. Look at AmplitudeToDB: https://pytorch.org/audio/stable/transforms.html#amplitudetodb (torchaudio.transforms — Torchaudio 0.10.0 documentation) |
st182531 | Hi, I am brand new to PyTorch and am currently working on the tutorials. I am currently working on this one: Speech Command Classification with torchaudio — PyTorch Tutorials 1.10.0+cu102 documentation 2
Basically, I am trying to get the model to predict spoken words; however, all the predictions are wrong. What exactly am I doing wrong? I have run the model for 20 epochs and also tried running it for 30 epochs; the results are the same. I used Google Colab to run the tutorial:
Google Colab 1.
My Output is the following:
Expected: backward. Predicted: right.
Expected: bed. Predicted: backward.
Expected: bird. Predicted: backward.
Expected: cat. Predicted: backward.
Expected: dog. Predicted: backward.
Expected: down. Predicted: backward.
Expected: eight. Predicted: backward.
Expected: five. Predicted: backward.
Expected: follow. Predicted: backward.
Expected: forward. Predicted: backward.
Expected: four. Predicted: backward.
Expected: go. Predicted: backward.
Expected: happy. Predicted: backward.
Expected: house. Predicted: one.
Expected: learn. Predicted: backward.
Expected: left. Predicted: backward.
Expected: marvin. Predicted: one.
Expected: nine. Predicted: backward.
Expected: no. Predicted: backward.
Expected: off. Predicted: backward.
Expected: on. Predicted: backward.
Expected: one. Predicted: backward.
Expected: right. Predicted: backward.
Expected: seven. Predicted: backward.
Expected: sheila. Predicted: backward.
Expected: six. Predicted: backward.
Expected: stop. Predicted: backward.
Expected: three. Predicted: backward.
Expected: tree. Predicted: backward.
Expected: two. Predicted: backward.
Expected: up. Predicted: backward.
Expected: visual. Predicted: backward.
Expected: wow. Predicted: backward.
Expected: yes. Predicted: backward.
Expected: zero. Predicted: backward. |
st182532 | I am doing some DSP work and I need to convert a frame into a single bin and then restore the frame from that bin. Here’s a minimal example:
import torch
import matplotlib.pyplot as plt
frame = torch.rand(1536)
bin = torch.stft(frame, n_fft=frame.shape[0], hop_length=frame.shape[0], center=False,
return_complex=True, window=torch.hann_window(frame.shape[0]))
rest_frame = torch.istft(bin, n_fft=frame.shape[0], hop_length=frame.shape[0], center=False,
return_complex=True, window=torch.hann_window(frame.shape[0]))
plt.plot(rest_frame-frame)
plt.show()
I expected to get the same frame, but instead I got
RuntimeError: Cannot have onesided output if window or input is complex
in istft. torch.stft produces the bin that I need, but I still can’t make istft work. What am I doing wrong? |
st182533 | Hi @jBloodless, thanks for sharing. In this case, I suggest changing return_complex to False in torch.istft.
Also, the n_fft, hop_length and window length need to be carefully selected, as torch.istft requires the settings to pass the NOLA check so that the signal is invertible. Check this doc for more information: Invertibility of overlap-add processing |
st182534 | Hi,
I am trying to install the torchaudio library in google Colaboratory notebook. However I get this dependency error:
running install
running bdist_egg
running egg_info
creating torchaudio.egg-info
writing torchaudio.egg-info/PKG-INFO
writing dependency_links to torchaudio.egg-info/dependency_links.txt
writing top-level names to torchaudio.egg-info/top_level.txt
writing manifest file 'torchaudio.egg-info/SOURCES.txt'
reading manifest file 'torchaudio.egg-info/SOURCES.txt'
writing manifest file 'torchaudio.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building '_torch_sox' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/torchaudio
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fdebug-prefix-map=/build/python3.6-sXpGnM/python3.6-3.6.3=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/lib/include -I/usr/local/lib/python3.6/dist-packages/torch/lib/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/lib/include/THC -I/usr/include/python3.6m -c torchaudio/torch_sox.cpp -o build/temp.linux-x86_64-3.6/torchaudio/torch_sox.o -DTORCH_EXTENSION_NAME=_torch_sox -std=c++11
x86_64-linux-gnu-gcc: error: torchaudio/torch_sox.cpp: No such file or directory
x86_64-linux-gnu-gcc: fatal error: no input files
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Is there any workaround to fix this issue ?
Best Regards |
st182535 | I just installed it via pip on Google Colab directly from the GitHub repo and it works fine.
pip install git+git://github.com/pytorch/audio
Note that you need to install the required dependencies via apt-get first as mentioned in the GitHub repo. Also you need to have PyTorch installed, otherwise you will get an error during the install.
EDIT:
The dependencies I mentioned above are the ones listed for Ubuntu Linux (https://github.com/pytorch/audio#dependencies 33). Worked fine for me in the Colaboratory Notebook:
apt-get install sox libsox-dev libsox-fmt-all |
st182536 | Actually I was installing it manually and found that I should have installed ‘cffi’ first as an additional dependency. However, the direct installation via pip you have provided is awesome. Thanks |
st182537 | Tried the following:
!sudo apt-get install sox libsox-dev libsox-fmt-all
!pip install git+git://github.com/pytorch/audio
Got the following message:
Successfully built torchaudio
However, got the following error while importing torchaudio:
import torchaudio
RuntimeError: Failed to parse the argument list of a type annotation: name 'Optional' is not defined
Am I missing something? |
st182538 | This worked finally:
!git clone https://github.com/pytorch/audio.git
os.chdir("audio")
!git checkout 301e2e9
!python setup.py install
Source : https://github.com/pytorch/audio/issues/71 19 |
st182539 | If you stumble upon this thread in 2021, you can install the latest pip wheel in Colab as follows:
!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
and then restart the runtime.
(Check GitHub - pytorch/audio: Data manipulation and transformation for audio signal processing, powered by PyTorch 7) |
st182540 | If you are facing a problem in Nov 2021, you can try
!pip uninstall -y torchaudio
!pip3 install torchaudio==0.10.0 -f https://download.pytorch.org/whl/cu111/torch_stable.html
this will first uninstall the current version (if it exists) and then download the stable version |
st182541 | We used PyTorch version 1.7.0+cu101 to create the TorchScript model, and I successfully loaded the TorchScript model for inference with torch 1.7.0+cu101, but when I use PyTorch version 1.8.1+cu101 for inference, I get the following error:
python3 torch_test_jit_1109.py
Traceback (most recent call last):
  File "torch_test_jit_1109.py", line 35, in <module>
    main(sys.argv[1], sys.argv[2])
  File "torch_test_jit_1109.py", line 17, in main
    e2e_model = torch.jit.load(script_model)
  File "/data3/users/suziyi/Miniconda3/lib/python3.8/site-packages/torch/jit/_serialization.py", line 161, in load
    cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: Class Namespace cannot be used as a value:
Serialized File "code/__torch__/torchaudio/sox_effects/sox_effects.py", line 5
    effects: List[List[str]], channels_first: bool=True) -> Tuple[Tensor, int]:
    in_signal = __torch__.torch.classes.torchaudio.TensorSignal.__new__(__torch__.torch.classes.torchaudio.TensorSignal)
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _0 = (in_signal).__init__(tensor, sample_rate, channels_first, )
    out_signal = ops.torchaudio.sox_effects_apply_effects_tensor(in_signal, effects)
'apply_effects_tensor' is being compiled since it was called from 'SynthesizerTrn.forward'
Serialized File "code/__torch__/models_torchaudio_torchscript.py", line 41
    sr: Tensor, vol: Tensor) -> Tensor:
    _0 = __torch__.torchaudio.sox_effects.sox_effects.apply_effects_tensor
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _1 = torch.tensor([(torch.size(x))[-1]], dtype=None, device=None, requires_grad=False)
    x_lengths = torch.to(_1, 4, False, False, None)
We use torchaudio to change the audio speed, volume and so on. Please help me.
When we created the TorchScript model, the PyTorch version was 1.7.0+cu101 and the torchaudio version was 0.7.0. |
st182542 | Hi @c9412600, thanks for sharing the issue. Could you share your script that uses sox_effect?
Also you can post it on torchaudio’s GitHub page. This helps other people who also meet this problem. |
st182543 | Thanks for your reply, that’s my script, please check:
import torch
import torchaudio.sox_effects as sox_effects
def forward(self, audio: torch.Tensor, speed: torch.Tensor, pitch: torch.Tensor, sr: torch.Tensor, vol: torch.Tensor):
speed = str(speed.item())
pitch = str(pitch.item())
sr = str(int(sr.item()))
vol = str(vol.item())
effects = [['tempo','-s', speed], ['pitch', pitch],['rate', sr], ['vol', vol]]
waveform, _ = sox_effects.apply_effects_tensor(audio, int(sr), effects)
return waveform |
st182544 | Is n_frames equivalent to librosa’s duration?
pytorch_version: 1.9.0+cu111
torchaudio_version: 0.9.0
I’m currently working on an audio classification task, where I found that if the audio file is loaded entirely in the load function, it provides the stereo output of the sampled version with the default sampling_rate.
But if I provide offset and n_frames to the function, it doesn’t resample it.
Actually, my doubt is whether n_frames is equivalent to librosa’s duration.
In my case the offset is 30 and duration is 10 seconds
reading the audio file with offset and n_frames
[screenshot of the code and its output] |
st182545 | Hi @ThiruRJST, thanks for sharing it. In torchaudio, the frame_offset and num_frames are in frames or samples, thus it’s different from librosa’s duration argument.
If you want to extract 10 seconds of audio from the 30th second, you can call the method torchaudio.load(file_path, frame_offset=30*sample_rate, num_frames=10*sample_rate). |
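A small sketch of that call, with a placeholder file path and the sample rate read from the file's metadata:
import torchaudio

path = "example.wav"
sample_rate = torchaudio.info(path).sample_rate

# Both arguments are in samples, not seconds.
waveform, sr = torchaudio.load(path, frame_offset=30 * sample_rate, num_frames=10 * sample_rate)
print(waveform.shape, sr)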
st182546 | is that:
creating a mel spectrogram
feed it to the neural net
backprop(and by the way what loss function should I use?)
repeat until the loss decreases
or should I pass in batches or something? i’m super newbie and its my first project, so I dont know how to create an audio dataser and pass in batches… the tutorials that pytorch provides cant help me unfortunately… maybe can someone explain the steps in the comments or provide a good tutorial? |
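A minimal sketch of the loop described above, with random tensors standing in for a real audio dataset; the shapes, the tiny linear classifier and all hyperparameters are illustrative assumptions, and cross entropy is the usual loss for this kind of single-label classification:
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import torchaudio.transforms as T

waveforms = torch.randn(100, 1, 16000)                       # 100 fake one-second clips
labels = torch.randint(0, 10, (100,))                        # 10 fake classes
loader = DataLoader(TensorDataset(waveforms, labels), batch_size=16, shuffle=True)

to_mel = T.MelSpectrogram(sample_rate=16000, n_mels=64)      # waveform -> mel spectrogram
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 81, 10))  # 81 frames with the default hop length
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for batch, target in loader:
        mel = to_mel(batch)                                  # [B, 1, 64, 81]
        loss = criterion(model(mel), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()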
st182547 | There are also models working directly on waveforms, e.g. wav2letter.
For an even simpler entry point, you might look at the Speech Commands tutorial, where the waveform is translated into class predictions for keywords (a much simpler problem and relatively fast to train).
Best regards
Thomas |
st182548 | Hello, I am trying to create a neural network for audio recognition. I want to recognize 6 types of speech. I have several problems; maybe they are related. I copied and slightly modified my nn from here. I want to put 8000 samples into the neural network and get a number from 0 to 5.
The problem is that I can’t pass in an array of size [64, 8000] and I need to pass in [64, 1, 8000]. Why is this? I think I understand that 64 is my batch size and 8000 is the number of samples, but why do I need a 3rd dim?
The NN returns an array (predicted) of size [64, 6] (64 is the batch and the 6 entries are full of zeros). The second dim should be only 1 and contain a number from 0 to 5 (my categories).
I am new to PyTorch; before that I used Matlab, which is from my view much easier.
# My model
class NeuralNetwork(nn.Module):
def __init__(self, n_input=1, n_output=6, stride=16, n_channel=32): # nchalnnesl 32
super().__init__()
self.conv1 = nn.Conv1d(n_input, n_channel, kernel_size=80, stride=stride)
self.bn1 = nn.BatchNorm1d(n_channel)
self.pool1 = nn.MaxPool1d(4)
self.conv2 = nn.Conv1d(n_channel, n_channel, kernel_size=3)
self.bn2 = nn.BatchNorm1d(n_channel)
self.pool2 = nn.MaxPool1d(4)
self.conv3 = nn.Conv1d(n_channel, 2 * n_channel, kernel_size=3)
self.bn3 = nn.BatchNorm1d(2 * n_channel)
self.pool3 = nn.MaxPool1d(4)
self.conv4 = nn.Conv1d(2 * n_channel, 2 * n_channel, kernel_size=3)
self.bn4 = nn.BatchNorm1d(2 * n_channel)
self.pool4 = nn.MaxPool1d(4)
self.fc1 = nn.Linear(2 * n_channel, n_output)
def forward(self, x):
x = self.conv1(x)
x = F.relu(self.bn1(x))
x = self.pool1(x)
x = self.conv2(x)
x = F.relu(self.bn2(x))
x = self.pool2(x)
x = self.conv3(x)
x = F.relu(self.bn3(x))
x = self.pool3(x)
x = self.conv4(x)
x = F.relu(self.bn4(x))
x = self.pool4(x)
x = F.avg_pool1d(x, x.shape[-1])
x = x.permute(0, 2, 1)
x = self.fc1(x)
return F.log_softmax(x, dim=2)
# My training
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
n_iterations = int(len(train_dataset)/batch_size)
for epoch in range(num_epochs):
running_loss = 0.0
for i, (inputs, labels) in enumerate(train_loader):
inputs, labels = inputs.cuda(), labels.cuda()
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs.squeeze(), labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if (i+1) % 5 == 0:
print(f'epoch {epoch+1}/{num_epochs}, step {i+1}/{n_iterations}, input {inputs.shape}')
correct = 0
total = 0
model = NeuralNetwork()
model.load_state_dict(torch.load("./model.pth"))
correct_pred = {classname: 0 for classname in v.LABELS}
total_pred = {classname: 0 for classname in v.LABELS}
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for data in test_loader:
inputs, labels = data
# calculate outputs by running images through the network
outputs = model(inputs)
# the class with the highest energy is what we choose as prediction
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted[:, 0] == labels).sum().item()
for label, predict in zip(labels, predicted[:,0]):
if label == predict:
correct_pred[v.LABELS[label]] += 1
total_pred[v.LABELS[label]] += 1
print('Accuracy: %d %%' % (100 * correct / total))
# print accuracy for each class
for classname, correct_count in correct_pred.items():
accuracy = 100 * float(correct_count) / total_pred[classname]
print("Accuracy for class {:5s} is: {:.1f} %".format(classname,
accuracy)) |
st182549 | Hi @martin.hajek1
The problem is that I can’t pass in an array of size [64, 8000] and I need to pass in [64, 1, 8000]. Why is this?
So your model uses nn.Conv1d layers, which require in_channels and out_channels arguments. 64 is the batch size and 8000 is the number of audio samples in your batch. The input channel is 1, as there is only one channel of audio. That’s why you need your input to be of size [64, 1, 8000]. |
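A one-line illustration of adding that channel dimension before the Conv1d stack:
import torch

batch = torch.randn(64, 8000)    # [batch, samples]
batch = batch.unsqueeze(1)       # [batch, channels, samples], as nn.Conv1d expects
print(batch.shape)               # torch.Size([64, 1, 8000])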
st182550 | I have the same question. I think it’s important for tasks like stereo matching. |
st182551 | Here is my implementation of grid sample 1d. And I also implement grid sample 1d with grid_sample in Pytorch by regarding 1d as a special case of 2d. Overall, my implementation is 2~3x faster in forward pass. Hope it helps~
GitHub
GitHub - luo3300612/grid_sample1d: pytorch cuda extension of grid_sample1d 19
pytorch cuda extension of grid_sample1d. Contribute to luo3300612/grid_sample1d development by creating an account on GitHub. |
st182552 | Hi,
I’m trying to solve a multi-class classification problem. In detail, I’m trying to determine whether the speech contains some specific noise or no noise.
After training my model for 100 epochs, the CUDA utilization suddenly drops to 0-2%. In the previous epochs, it was about 60-70%.
Even though the error message reports a format issue, I don’t think that’s the file format issue because in the previous epochs I have already successfully loaded the same file with the same code.
I guess this issue just pops out suddenly.
Does anyone have some thoughts on that?
I’ll really appreciate it!
[screenshot of the error message] |
st182553 | Rui_Liu:
Even though the error message reports a format issue, I don’t think that’s the file format issue because in the previous epochs I have already successfully loaded the same file with the same code.
I guess this issue just pops out suddenly.
Does anyone have some thoughts on that?
One thing you could check is whether it is a memory issue. My thinking around this is that “unknown format” might really be something like “something went wrong while decoding”, and if your training leaks memory, it might end up not being able to load whatever it needs for decoding.
It’s just a guess, though.
Best regards
Thomas |
st182554 | Thanks for your idea!
I’m not familiar with “training leaks memory”, but it does make sense to me.
I will search more on that. Thanks for your help anyway! |
st182555 | So there will be better ways to monitor this, but my very simple recipe for getting a first impression:
Start the training,
Every now and then I run the program “top” in a console window and after hitting “m” it will sort processes by memory use. The column RSS is the one I look at.
If it keeps increasing between the 5th and 10th epoch (or so) and continues to increase between 10th and 20th, you might have something that keeps using memory and never returns it (i.e. a memory “leak”).
Best regards
Thomas |
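If you prefer logging from inside the training script rather than watching top, a small sketch using the standard-library resource module (Unix only; on Linux ru_maxrss is reported in kilobytes):
import resource

def log_peak_rss(epoch):
    # Peak resident set size of this process so far; a value that keeps growing
    # epoch after epoch is a hint of the kind of leak described above.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"epoch {epoch}: peak RSS ~ {peak_kb} kB")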
st182556 | I’m using the WavAugment 3 package, which imports torchaudio. I’m receiving the following warning:
UserWarning: "sox" backend is being deprecated. The default backend will be changed to "sox_io" backend in 0.8.0 and "sox" backend will be removed in 0.9.0. Please migrate to "sox_io" backend. Please refer to https://github.com/pytorch/audio/issues/903 for the detail.
'"sox" backend is being deprecated. '
I tried adding the following line as recommended:
torchaudio.set_audio_backend("sox_io")
But I still get the warning. What am I doing wrong? |
st182557 | Solved by nateanl in post #2 |
st182558 | Hi @Tzeviya, what’s your version of torchaudio? Upgrading to 0.9.0 should resolve this issue. |
st182559 | I created a music dataset including vocals and instruments, and I tried to use Conv-TasNet to separate the vocals and instruments. I first trained successfully on other people’s datasets, but when I use my own dataset to train, it keeps failing and I can’t find the problem. Here are the error and my datasets. Did I miss something?
[screenshots of the error and the dataset code] |
st182560 | The code in your screenshots is quite hard to read, so please post code snippets by wrapping them into three backticks ```, as it’s easier to read/debug, and the search engine would be able to index it as well if other users hit similar issues.
Based on what I can see it seems that a local variable is references before its assignment, so you would need to check where its definition is. |
st182561 | Thank you for your reply. I am sorry that I forgot to give the complete code.
“i” is the index of “data”, and there is no other variable with the same name outside the def.
[screenshot of the complete code] |
st182562 | If the data_loader is empty, i will be undefined:
def fun():
for i in range(0):
print('inside loop')
return 1 / i
fun()
> UnboundLocalError: local variable 'i' referenced before assignment
PS: I would still recommend to post code snippets directly instead of screenshots. |
st182563 | Hi @userTsai, there is a training recipe of Conv-TasNet in torchaudio. You can follow the implementation here: audio/lightning_train.py at main · pytorch/audio · GitHub 2 for your own dataset. |
st182564 | Hello peeps,
Sorry for the noob question.
I am doing a multi-label classification for audio data. My input data is features from audio frames. The label classes are 34 phonemes, and each class has many labels (descriptors of that particular phoneme/class). The total number of descriptors is 10. The classes and labels look like this:
voiced b, d, g, j, l , v, z, Z,
unvoiced f, k, p, s, t
labial p, b, m, f, v
fricative f, s, S, x,
(...10)
My understanding is that this is a multi-label classification and I need to map each class/phoneme to its labels/descriptors, so that I have a matrix of (classes, labels) where each class is a row that contains 1 where a label is present and 0 where it is not.
Internet posts suggest this could be done with sklearn.preprocessing.OneHotEncoder,
but I would like to know if there is a more proper pytorch way to create the mapping matrix, and if it’s even the best way to go for this kind of task.
Also, what would the input shape look like?
Thanks |
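A plain-PyTorch sketch of building such a (classes, descriptors) multi-hot matrix without sklearn; the phoneme-to-descriptor indices below are made up for illustration:
import torch

num_phonemes, num_descriptors = 34, 10
phoneme_to_descriptors = {0: [0, 2], 1: [1, 3, 4]}   # hypothetical: phoneme index -> descriptor indices

mapping = torch.zeros(num_phonemes, num_descriptors)
for phoneme, descriptors in phoneme_to_descriptors.items():
    mapping[phoneme, descriptors] = 1.0

print(mapping[:2])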
st182565 | I am running the same code on two machines running Ubuntu 20.04. On one machine the code runs fine; on the other I get the error reported below. The only difference I can see is the graphics cards (NVIDIA Titan Xp in the machine that works and NVIDIA GeForce RTX 3090 in the one that doesn’t). The versions of torch and torchaudio are 1.5.0 and 0.5.0 (these are required for the experiment to be reproducible). The machine that works runs python 3.7.9, the one that doesn’t runs python 3.8.10. Both machines have cuda V10.1.243.
Let me know if there is other information that I missed and that is relevant to this question.
Do you have any suggestion on what to look for to solve this problem?
Thank you!
Giampiero
Using device ‘cuda:0’
Creating Datasets
Calculating mean and std from dataset
Traceback (most recent call last):
File “main.py”, line 665, in
main(args)
File “main.py”, line 621, in main
task_and_more, dataloaders, model_and_more = prepare_for_task(args)
File “main.py”, line 555, in prepare_for_task
datasets = create_datasets(args, paths[‘data_path’], flags[‘load_dataset_extra_stats’])
File “main.py”, line 252, in create_datasets
datasets = set_transforms_on_datasets(args, datasets, transforms_device)
File “main.py”, line 303, in set_transforms_on_datasets
stats = get_dataset_stats_and_write(datasets[‘train’], args[‘device’],
File “…/preprocessing.py”, line 79, in get_dataset_stats_and_write
mean, std, min_val, max_val = calc_dataset_stats(dataloader, device=device)
File “…/processing.py”, line 72, in calc_dataset_stats
input_channel = dataloader.dataset[0][‘image’].shape[0]
File “…/CLEAR_dataset.py”, line 433, in getitem
game_with_image = self.transforms(game_with_image)
File “…/venv/lib/python3.8/site-packages/torchvision/transforms/transforms.py”, line 61, in call
img = t(img)
File “…/data_interfaces/transforms.py”, line 36, in call
specgram = self.spectrogram_transform(sample[‘image’])[0, :, :]
File “…/venv/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 550, in call
result = self.forward(*input, **kwargs)
File “…/venv/lib/python3.8/site-packages/torchaudio/transforms.py”, line 81, in forward
return F.spectrogram(waveform, self.pad, self.window, self.n_fft, self.hop_length,
File “…/venv/lib/python3.8/site-packages/torchaudio/functional.py”, line 276, in spectrogram
spec_f /= window.pow(2.).sum().sqrt()
RuntimeError: CUDA error: invalid device function |
st182566 | Your Ampere GPU (RTX 3090) needs CUDA>=11.0 and I don’t know which exact PyTorch binary (or source build) you’ve installed.
In any case, the CUDA11 support came on PyTorch ~1.7 so your 1.5.0 installation will most likely be using CUDA10, which is causing the error.
giampierosalvi:
Both machines have cuda V10.1.243.
The pip wheels and conda binaries ship with their own CUDA runtime and your local CUDA toolkit is used for a source build or to build custom CUDA extensions. |
st182567 | Thank you for the quick reply. You are right, I got it to work with
pip uninstall torch torchvision torchaudio
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
Best |
st182568 | I’m using a custom loop to get the STFT of an audio signal, while doing some processing on a frame by frame basis. The number of resulting frames corresponds to the formula
number_of_samples = 1440000 + winsize   (I pad either end of the signal with winsize // 2)
number_of_frames = number_of_samples // (window_size - window_size // 2)
                 = (1440000 + 1024) // (1024 - 512)
                 = 2814
During the loop, to prevent aliasing, I zero-pad the end of both the window and the input frame by length winsize, but my hop size still remains at 512. This extra zero padding shouldn’t affect the number of frames, as it’s done on a frame-by-frame basis to the current time-domain segment under analysis.
I then want to use torchaudio.transforms.MelSpectrogram on the original audio signal - this would result in me having two STFT signals, one with linearly spaced frequency bins and one with mel-spaced bins. However, the number of frames output from the transform is not as expected, depending on the value of n_fft. With n_fft = winsize and center=True it outputs 2816 frames, and with center=False it outputs the expected 2814. However, if n_fft = 2048 and winsize = 1024 it outputs 2812 frames. I can’t work out why n_fft would affect the total number of frames if frames are based on signal length, window size, and hop size. |
st182569 | Hi @Mole_m7b5, thanks for posting the question. The number of frames is hard coded in torch.stft
int64_t n_frames = 1 + (len - n_fft) / hop_length;
// time2col
input = input.as_strided(
{batch, n_frames, n_fft},
{input.stride(0), hop_length * input.stride(1), input.stride(1)}
);
if (window_.defined()) {
input = input.mul(window_);
}
If you increase the value of n_fft, the number of frames will be decreased. |
st182570 | Hi @nateanl, thanks for the response.
How does that interact when having a value n_fft > win_length? Does it mean the FFT window actually extends outside of the segment covered by your (for example) Hann window? Or is the win_length overridden to match n_fft? |
st182571 | if win_length is shorter than n_fft, it will be zero padded on both sides to match the value of n_fft. When computing stft, only the samples multiplying the non-zero window values are used.
In other words, if your win_length and hop_length stay the same and n_fft is increased, the frequency axis will have higher resolution (be up-sampled). |
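A quick numerical check of how n_fft, rather than win_length, drives the frame count when center=False; the signal length here is an arbitrary stand-in:
import torch

signal = torch.randn(16000)
win_length, hop_length = 1024, 512

for n_fft in (1024, 2048):
    spec = torch.stft(signal, n_fft=n_fft, hop_length=hop_length, win_length=win_length,
                      window=torch.hann_window(win_length), center=False, return_complex=True)
    expected = 1 + (signal.numel() - n_fft) // hop_length
    print(n_fft, spec.shape[-1], expected)   # 1024 -> 30 frames, 2048 -> 28 frames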
st182572 | That makes sense.
But if the n_fft value is matched by padding on a frame by frame basis after the windowed section has been grabbed, and only the original windowed section of the signal is transformed (but at a higher fft resolution due to the zero-padding), would it make more sense for the number of output frames to be hardcoded according to win_length and hop_length? Or is there a particular reason it’s hardcoded according to the n_fft? |
st182573 | The design follows that of librosa’s stft, which also uses n_fft to determine the number of frames.
There’s also a discussion about win_length and n_fft you might be interested in: Semantics of n_fft, window length, and frame length · Issue #695 · librosa/librosa · GitHub |
st182574 | Hello peeps,
I have some audio data for which I computed the audio features. The label for each input element is 20 binary classes, i.e. for each input element there are 20 classes (each 1 or 0).
My initial understanding is that this is a multi-label classification that can be addressed using nn.BCELoss.
Can you guys let me know if I got this right and explain how nn.BCELoss works?
Thank you |
st182575 | Solved by xdwang0726 in post #6 |
st182576 | Hi @lima, nn.BCELoss 1 is designed for binary classification task. The prediction and label are both of shape (batch, ...).
In your case, you have 20 classes, which is a multi-class classification task. You can use nn.CrossEntropyLoss where the prediction is a float tensor of shape (batch, class, ...) and the label is a long tensor of shape (batch, ...). |
st182577 | Hi @nateanl thank you for your answer. but what about the fact that each class should either be 1 or 0? |
st182578 | I see, if the input has more than one classes that are labeled as 1, then you should use nn.BCELoss and make your prediction and label both of shape (batch, class). |
st182579 | Hi @nateanl The idea is the following. I have computed the mfcc features for some audio data, and I have trained my model to do phoneme classification.
Now instead of the phoneme classification, I would like to do a phonetic feature classification (i.e. classify the individual phonetic features (total 20) for each audio frame). I believe nn.BCELoss can do the job, but what about the prediction and labels shape? |
st182580 | If you are dealing with multi-label classification, sigmoid + nn.BCELoss should be what you are looking for; and if you are looking for multi-class classification (each of your samples will belong to only one of the 20 classes), then softmax + nn.CrossEntropyLoss is the way to go. |
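A minimal multi-label sketch along those lines; nn.BCEWithLogitsLoss simply fuses the sigmoid with BCE for numerical stability, and the batch size here is arbitrary:
import torch
import torch.nn as nn

batch, num_features = 8, 20
logits = torch.randn(batch, num_features)                      # raw model outputs per frame
targets = torch.randint(0, 2, (batch, num_features)).float()   # each of the 20 features is 0 or 1

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, targets)
print(loss.item())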
st182581 | Hello,
import torchaudio.sox_effects as sox_effects
wav, sr = sox_effects.apply_effects_file(
wav_file,
[['speed', str(speed)], ['pitch', '0'], ['rate', str(sample_rate)]])
This method apply_effects_file is used to change the speed of speech, but the pitch is changed even though I set the pitch shift to zero. I want to know how to change the speed of the audio without changing the pitch.
In addition, I found another method, torchaudio.transforms.TimeStretch, which stretches the STFT and keeps the pitch unchanged. Can torchaudio.transforms.TimeStretch change the speed while keeping the pitch unchanged?
Thanks very much for your help. |
st182582 | Hi @Alva-2020, you can apply the tempo effect to change the speed while keeping the pitch, as described in the sox documentation.
Yes, torchaudio.transforms.TimeStretch can change the speed while keeping the pitch. Note that the method applied is different from sox’s, so the values may not be fully identical. |
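A hedged sketch of the tempo route with apply_effects_file; the file name and the 1.2 factor are placeholders:
import torchaudio.sox_effects as sox_effects

effects = [
    ["tempo", "1.2"],    # 20% faster, pitch preserved (unlike the "speed" effect)
    ["rate", "16000"],
]
waveform, sample_rate = sox_effects.apply_effects_file("sample.wav", effects)
print(waveform.shape, sample_rate)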
st182583 | Hi, I noticed there is a difference in the values from an mp3 file when loaded using torchaudio.load vs librosa.load. Also, the shapes of the tensors are different. I am loading an mp3 file with a 44.1 kHz sampling frequency and 1 sec duration, and I am getting the following output.
librosa_audio, sr_librosa = librosa.load(os.path.join(root, path), sr=44100)
torch_audio, sr_torch = torchaudio.load(os.path.join(root, path))
print(librosa_audio.shape, sr_librosa)
print(torch_audio.shape, sr_torch)
# (44100,) 44100
# torch.Size([1, 46040]) 44100
I am loading a one-second audio file and I expect the shape to be 44100 in both cases. Can someone please explain what is happening here?
Thanks. |
st182584 | I think you might be hitting this issue, so feel free to comment on it with your use case and a description of the difference. |
st182585 | Thanks. Yes, I am indeed hitting the same issues. It seems there is some issue with mp3 loading. |
st182586 | I’m trying to build a speech-to-text system. My data consists of 4-10 second audio wave files and their transcriptions (the preprocessing steps are char-level encoding of the transcriptions and extracting mel spectrograms from the audio files).
This is my model architecture: 3 conv1d layers with positional encoding for the audio, embedding and positional encoding for the encoded transcription, those used as input to a transformer model, and lastly a dense layer.
The loss function is cross entropy and the optimizer is Adam.
The problem is that the loss always gets stuck at some point: it starts around 3.8 (I have 46 classes), after some batches it decreases (e.g. to 2.8), and then it gets stuck there. It bounces around that value and never decreases again.
I tried changing the parameters of the model, and I’ve changed the optimizer and learning rate, but it always results in the same problem.
I don’t understand what I’m doing wrong.
[screenshot of the training log] |
st182587 | Hi @Abdulrahman_Adel, what’s the format of your label? Usually the length of the text label is not the same as the feature sequence length, hence there is the CTC loss for addressing the issue. The output of the model contains a blank symbol, and it’s excluded when evaluating the word error rate. |
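A minimal nn.CTCLoss sketch for that length mismatch; all sizes are made up, and index 0 is reserved for the blank symbol:
import torch
import torch.nn as nn

num_classes = 46 + 1                                   # 46 characters + blank
n_steps, batch, max_label_len = 200, 4, 30
log_probs = torch.randn(n_steps, batch, num_classes).log_softmax(dim=-1)
targets = torch.randint(1, num_classes, (batch, max_label_len))
input_lengths = torch.full((batch,), n_steps, dtype=torch.long)
target_lengths = torch.randint(10, max_label_len + 1, (batch,), dtype=torch.long)

ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
print(loss.item())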
st182588 | Hi,
I am using torchaudio to load and save audio files but the number of samples seems to be wrong.
Here is my code:
path_audio = 'example.mp3'
save_path = 'example_new.mp3'
#show info about file
print(torchaudio.info(path_audio))
Output: AudioMetaData(sample_rate=16000, num_frames=90432, num_channels=1, bits_per_sample=0, encoding=MP3)
#load example file
audio_tuple = torchaudio.load(path_audio)
audio = audio_tuple[0]
samplerate = audio_tuple[1]
print(audio.shape[1])
Output: 89856 #why is this different from info?
#cut audio to 1 second @ 16kHz
audio = audio[:,0:16000]
#check
print(audio.shape[1])
Output: 16000
#save audio and get info of saved file
torchaudio.save(save_path, audio, samplerate)
print(torchaudio.info(save_path))
Output: AudioMetaData(sample_rate=16000, num_frames=17280, num_channels=1, bits_per_sample=0, encoding=MP3) #why is this not 16000?
#load saved file:
reloaded_audio_tuple = torchaudio.load(save_path)
reloaded_audio = reloaded_audio_tuple[0]
print(reloaded_audio.shape)
Output: torch.Size([1, 16704]) #this should be 16000!
As you can see the output file has too many samples. They have not been added to the end of the file so i can’t just cut the file again. Can someone help me?
Thx! |
st182589 | Hi everyone,
I would really appreciate it if someone could let me know how to replicate the compliance.kaldi.fbank() function in librosa. I’ve gone through a lot of literature and forums but haven’t really found a way to replicate its parameters. Any help would be appreciated. |
st182590 | I have an audio file containing too much silent segment, and i want to filter the silent segments with torchaudio.functional.vad(). But the funciton can only trim the front silent part of audio, it still remains silent segment in the mid and back. Can someone tell me why? Thanks!
Besides, I want to know the meaning of each parameter in vad(). Beacuse I try to split the audio into many segments with shorter duration, like 0.03s, process each with vad, and concat the process segment together finally, but i don’t know how to set the parameters.
Thanks a lot ! |
st182591 | I am trying to use the wav2vec2_base model for audio feature extraction. I am getting the error in the title for this code:
import torch
import torch.nn as nn
from overrides import overrides
from scipy.io import wavfile
from torchaudio.models import wav2vec2_base

class Identity(nn.Module):
@overrides
def forward(self, input_):
return input_
model = wav2vec2_base(num_out=32)
model.load_state_dict(torch.load("wav2vec2-base-960h.pt"))
model.encoder = Identity()
sample_rate, samples = wavfile.read(wav_file)
samples = samples.reshape((1,samples.shape[0]))
samples = torch.from_numpy(samples)
#samples = samples.type(torch.short)
print("Start")
samples = samples.float()
output = model(samples.float())
I have tried to select the related parts in the code but there is not much left besides these. What is wrong with my forward function? Thanks for any help! |
st182592 | Could you post the entire model definition as well as the error message including the stack trace? |
st182593 | I made a neural network with the urban sound dataset according to a tutorial, but now I want to create my own dataset and network, which will (hopefully) recognize a wake word for a voice assistant. Where should I start? |
st182594 | There is a voice activity detection example in torchaudio. I used that for inspiration when doing my raspberry lightbulb control.
Best regards
Thomas |
st182595 | pytorch.org
Is it this one? |
st182596 | Hi,
I’ve been looking into using a Constant-Q Transform in my pipeline, which I’m currently doing with librosa. I would like to rewrite this function so that I only need to use pytorch/torchaudio for my application, and also so that it can be written in C++ like torch.stft. I am, however, unsure how to get started. Where is the C++ part of torch.stft defined, so that I can get a sense of how to proceed with writing a VQT function? (VQT and CQT are essentially the same.)
Thanks! |
st182597 | Hi @lewiswolf! Thanks for the proposal. It’ll be great to add the VQT and CQT functions. Here is the reference for torch.stft.
I also recommend adding the functions to the torchaudio repo; it already supports several audio processing functions like MFCC, PitchShift, and so on. Feel free to create an issue for the feature proposal and add your PR there. |
st182598 | @lewiswolf
You can try nnAudio 10, which is built upon pytorch and would probably fit your needs. |
st182599 | @yoyololicon I checked the implementation in nnAudio. I think it can be refactored by using the native torch complex dtype. Do you think it’s worth implementing it in C++ like torch.stft, or is it good enough to implement in Python, like LFCC/MFCC? |