431
This has been one of the holes in my cheddar cheese block of understanding DSP, so what is the physical interpretation of having a negative frequency? If you have a physical tone at some frequency and it is DFT'd, you get a result in both the positive and negative frequencies - why and how does this occur? What does it mean? Edit: Oct 18th 2011. I have provided my own answer, but expanded the question to include the roots of why negative frequencies MUST exist.
Negative frequency doesn't make much sense for sinusoids, but the Fourier transform doesn't break up a signal into sinusoids, it breaks it up into complex exponentials (also called "complex sinusoids" or "cisoids"): $$F(\omega) = \int_{-\infty}^{\infty} f(t) \color{Red}{e^{- j\omega t}}\,dt$$ These are actually spirals, spinning around in the complex plane (source: Richard Lyons). Spirals can be either left-handed or right-handed (rotating clockwise or counterclockwise), which is where the concept of negative frequency comes from. You can also think of it as the phase angle going forward or backward in time. In the case of real signals, there are always two equal-amplitude complex exponentials, rotating in opposite directions, so that their real parts combine and imaginary parts cancel out, leaving only a real sinusoid as the result. This is why the spectrum of a sine wave always has two spikes, one at a positive frequency and one at a negative frequency. Depending on the relative phase of the two spirals, their sum could be a purely real sine wave, a real cosine wave, a purely imaginary sinusoid, etc. The negative and positive frequency components are both necessary to produce the real signal, but if you already know that it's a real signal, the other side of the spectrum doesn't provide any extra information, so it's often hand-waved and ignored. For the general case of complex signals, you need to know both sides of the frequency spectrum.
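A quick numerical check of the two-spike picture (not part of the original answer; a NumPy sketch with an arbitrary 50 Hz tone and 1 kHz sample rate):

```python
import numpy as np

fs = 1000                        # sample rate, Hz (arbitrary choice)
t = np.arange(fs) / fs           # 1 second of samples
f0 = 50                          # tone frequency, Hz
x = np.cos(2 * np.pi * f0 * t)   # a real tone

X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1/fs)

# The two largest bins sit at +f0 and -f0 with equal magnitude.
peaks = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(peaks))             # -> [-50.0, 50.0]

# The real cosine is literally the sum of two counter-rotating spirals:
spiral_sum = 0.5 * (np.exp(2j*np.pi*f0*t) + np.exp(-2j*np.pi*f0*t))
print(np.allclose(spiral_sum, x))   # -> True
```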
{ "source": [ "https://dsp.stackexchange.com/questions/431", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/293/" ] }
470
I'm familiar with the Radon transform from learning about CT scans, but not the Hough transform. Wikipedia says The (r,θ) plane is sometimes referred to as Hough space for the set of straight lines in two dimensions. This representation makes the Hough transform conceptually very close to the two-dimensional Radon transform. (They can be seen as different ways of looking at the same transform.[5]) Their output looks the same to me: Wolfram Alpha: Radon Wolfram Alpha: Hough So I don't understand what the difference is. Are they just the same thing seen in different ways? What are the benefits of each different view? Why aren't they combined into "the Hough-Radon transform"?
The Hough transform and the Radon transform are indeed very similar to each other and their relation can be loosely defined as the former being a discretized form of the latter. The Radon transform is a mathematical integral transform, defined for continuous functions on $\mathbb{R}^n$ on hyperplanes in $\mathbb{R}^n$. The Hough transform, on the other hand, is inherently a discrete algorithm that detects lines (extendable to other shapes) in an image by polling and binning (or voting). I think a reasonable analogy for the difference between the two would be like the difference between calculating the characteristic function of a random variable as the Fourier transform of its probability density function (PDF) and generating a random sequence, calculating its empirical PDF by histogram binning and then transforming it appropriately. However, the Hough transform is a quick algorithm that can be prone to certain artifacts. Radon, being more mathematically sound, is more accurate but slower. You can in fact see the artifacts in your Hough transform example as vertical striations. Here's another quick example in Mathematica:

img = Import["http://i.stack.imgur.com/mODZj.gif"];
radon = Radon[img, Method -> "Radon"];
hough = Radon[img, Method -> "Hough"];
GraphicsRow[{#1, #2, ColorNegate@ImageDifference[#1, #2]} & @@ {radon, hough}]

The last image is really faint, even though I negated it to show the striations in dark color, but it is there. Tilting the monitor will help. You can click all figures for a larger image. Part of the reason why the similarity between the two is not very well known is because different fields of science & engineering have historically used only one of these two for their needs. For example, in tomography (medical, seismic, etc.), microscopy, etc., the Radon transform is perhaps used exclusively. I think the reason for this is that keeping artifacts to a minimum is of utmost importance (an artifact could be a misdiagnosed tumor). On the other hand, in image processing, computer vision, etc., it is the Hough transform that is used because speed is primary. You might find this article quite interesting and topical: M. van Ginkel, C. L. Luengo Hendriks and L. J. van Vliet, "A short introduction to the Radon and Hough transforms and how they relate to each other", Quantitative Imaging Group, Imaging Science & Technology Department, TU Delft. The authors argue that although the two are very closely related (in their original definitions) and equivalent if you write the Hough transform as a continuous transform, the Radon transform has the advantage of being more intuitive and having a solid mathematical basis. There is also the generalized Radon transform, similar to the generalized Hough transform, which works with parametrized curves instead of lines. Here is a reference that deals with it: Toft, P. A., "Using the generalized Radon transform for detection of curves in noisy images", IEEE ICASSP-96, Vol. 4, 2219-2222 (1996).
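To make the "polling and binning" description concrete, here is a toy Hough line transform in NumPy (not from the original answer, and not how Mathematica or OpenCV implement it; it just shows the voting accumulator in $(r, \theta)$ space):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=200):
    """Minimal (r, theta) voting accumulator for line detection."""
    h, w = edge_img.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_img)                      # edge pixels cast the votes
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)    # r for every theta
        r_idx = np.digitize(r, rhos) - 1               # bin the votes
        acc[r_idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Synthetic test: a single diagonal line should give one dominant peak.
img = np.zeros((100, 100), dtype=np.uint8)
for i in range(100):
    img[i, i] = 1
acc, rhos, thetas = hough_lines(img)
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[peak[0]], np.degrees(thetas[peak[1]]))      # r ~ 0, theta ~ 135 degrees
```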
{ "source": [ "https://dsp.stackexchange.com/questions/470", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/29/" ] }
471
Are there some good sites or blogs where I can keep myself updated on the latest news and papers about image and signal processing research, or I should just check out "classical" providers like IEEE Transactions, Elsevier, etc?
There are many for different subjects:
Efg's algorithm collection: http://www.efg2.com/Lab/Library/ImageProcessing/index.html
DSP Forum: http://www.dsprelated.com/
Data compression: http://datacompression.info/
About rendering: http://www.realtimerendering.com/portal.html
For all research papers (IEEE Xplore): http://ieeexplore.ieee.org/Xplore/guesthome.jsp
Resources on MP3 and audio: http://www.mp3-tech.org/programmer/docs/index.php
Steve on Image Processing: http://blogs.mathworks.com/steve/
Image Processing and Retrieval: http://savvash.blogspot.com/
Accelerated Image Processing: http://visionexperts.blogspot.com/
The Digital Signal Processing Blog: http://centerk.net/dspblog/
Noise & Vibration Measurement Blog: http://blog.prosig.com/
Image Processing with Matlab, Open Blog: http://imageprocessingblog.com/
{ "source": [ "https://dsp.stackexchange.com/questions/471", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/119/" ] }
509
If I have a signal that is time limited, say a sinusoid that only lasts for $T$ seconds, and I take the FFT of that signal, I see the frequency response. In the example this would be a spike at the sinusoid's main frequency. Now, say I take that same time signal and delay it by some time constant and then take the FFT, how do things change? Is the FFT able to represent that time delay? I recognize that a time delay represents a $\exp(-j\omega t)$ change in the frequency domain, but I'm having a hard time determining what that actually means . Practically speaking, is the frequency domain an appropriate place to determine the time delay between various signals?
The discrete Fourier transform (DFT), commonly implemented by the fast Fourier transform (FFT), maps a finite-length sequence of discrete time-domain samples into an equal-length sequence of frequency-domain samples. The samples in the frequency domain are in general complex numbers; they represent coefficients that can be used in a weighted sum of complex exponential functions in the time domain to reconstruct the original time-domain signal. These complex numbers represent an amplitude and phase that is associated with each exponential function. Thus, each number in the FFT output sequence can be interpreted as: $$ X[k] = \sum_{n=0}^{N-1} x[n] e^{\frac{-j 2 \pi n k}{N}} = A_k e^{j \phi_k} $$ You can interpret this as follows: if you want to reconstruct $x[n]$, the signal that you started with, you can take a bunch of complex exponential functions $e^{\frac{j 2 \pi n k}{N}}, k = 0, 1, \ldots, N-1$, weight each one by $X[k] = A_k e^{j \phi_k}$, and sum them. The result is exactly equal (within numerical precision) to $x[n]$. This is just a word-based definition of the inverse DFT. So, speaking to your question, the various flavors of the Fourier transform have the property that a delay in the time domain maps to a phase shift in the frequency domain. For the DFT, this property is: $$ x[n] \leftrightarrow X[k] $$ $$ x[n-D] \leftrightarrow e^{\frac{-j2 \pi k D}{N}}X[k] $$ That is, if you delay your input signal by $D$ samples, then each complex value in the FFT of the signal is multiplied by the constant $e^{\frac{-j2 \pi k D}{N}}$. It's common for people to not realize that the outputs of the DFT/FFT are complex values, because they are often visualized as magnitudes only (or sometimes as magnitude and phase). Edit: I want to point out that there are some subtleties to this rule for the DFT due to its finiteness in time coverage. Specifically, the shift in your signal must be circular for the relation to hold; that is, when you delay $x[n]$ by $D$ samples, you need to wrap the last $D$ samples that were at the end of $x[n]$ to the front of the delayed signal. This wouldn't really match what you would see in a real situation where the signal just doesn't start until after the beginning of the DFT aperture (and is preceded by zeros, for example). You can always get around this by zero-padding the original signal $x[n]$ so that when you delay by $D$ samples, you just wrap around zeros to the front anyway. This circular-shift subtlety applies only to the DFT, because it is finite in time; the classic (continuous-time) Fourier transform and the discrete-time Fourier transform have ordinary, non-circular shift properties.
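Not from the original answer, but a short NumPy check of the circular-shift property described above (np.roll performs exactly the circular shift the answer mentions):

```python
import numpy as np

N = 64
x = np.random.randn(N)
D = 5                                    # delay in samples

x_shift = np.roll(x, D)                  # circular shift by D samples
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * D / N)  # predicted phase ramp

print(np.allclose(np.fft.fft(x_shift), phase * np.fft.fft(x)))  # -> True
```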
{ "source": [ "https://dsp.stackexchange.com/questions/509", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/461/" ] }
536
Can anyone state the difference between frequency response and impulse response in simple English?
The impulse response and frequency response are two attributes that are useful for characterizing linear time-invariant (LTI) systems. They provide two different ways of calculating what an LTI system's output will be for a given input signal. A continuous-time LTI system is usually illustrated like this: In general, the system $H$ maps its input signal $x(t)$ to a corresponding output signal $y(t)$. There are many types of LTI systems that can apply very different transformations to the signals that pass through them. But they all share two key characteristics: The system is linear, so it obeys the principle of superposition. Stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. That is, if $x_1(t)$ maps to an output of $y_1(t)$ and $x_2(t)$ maps to an output of $y_2(t)$, then for all values of $a_1$ and $a_2$, $$ H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 y_1(t) + a_2 y_2(t) $$ The system is time-invariant, so its characteristics do not change with time. If you add a delay to the input signal, then you simply add the same delay to the output. For an input signal $x(t)$ that maps to an output signal $y(t)$, then for all values of $\tau$, $$ H\{x(t - \tau)\} = y(t - \tau) $$ Discrete-time LTI systems have the same properties; the notation is different because of the discrete-versus-continuous difference, but they are a lot alike. These characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. They provide two perspectives on the system that can be used in different contexts. Impulse Response: The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal. For continuous-time systems, this is the Dirac delta function $\delta(t)$, while for discrete-time systems, the Kronecker delta function $\delta[n]$ is typically used. A system's impulse response (often annotated as $h(t)$ for continuous-time systems or $h[n]$ for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input. Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way. For discrete-time systems, this is possible, because you can write any signal $x[n]$ as a sum of scaled and time-shifted Kronecker delta functions: $$ x[n] = \sum_{k=0}^{\infty} x[k] \delta[n - k] $$ Each term in the sum is an impulse scaled by the value of $x[n]$ at that time instant. What would we get if we passed $x[n]$ through an LTI system to yield $y[n]$? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is: $$ y[n] = \sum_{k=0}^{\infty} x[k] h[n-k] $$ where $h[n]$ is the system's impulse response. The above equation is the convolution sum for discrete-time LTI systems.
That is, for any signal $x[n]$ that is input to an LTI system, the system's output $y[n]$ is equal to the discrete convolution of the input signal and the system's impulse response. For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems: $$ y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) d\tau $$ where, again, $h(t)$ is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the $L^2$ Hilbert space, noting that you can use the delta function's sifting property to project any function in $L^2$ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here . In summary: For both discrete- and continuous-time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal; the output is simply the input signal convolved with the impulse response function. Frequency response: An LTI system's frequency response provides a similar function: it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain . Recall the definition of the Fourier transform : $$ X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi ft} dt $$ More importantly for the sake of this illustration, look at its inverse: $$ x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi ft} df $$ In essence, this relation tells us that any time-domain signal $x(t)$ can be broken up into a linear combination of many complex exponential functions at varying frequencies (there is an analogous relationship for discrete-time signals called the discrete-time Fourier transform ; I only treat the continuous-time case below for simplicity). For a time-domain signal $x(t)$, the Fourier transform yields a corresponding function $X(f)$ that specifies, for each frequency $f$, the scaling factor to apply to the complex exponential at frequency $f$ in the aforementioned linear combination. These scaling factors are, in general, complex numbers. One way of looking at complex numbers is in amplitude/phase format, that is: $$ X(f) = A(f) e^{j \phi(f)} $$ Looking at it this way, then, $x(t)$ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $A(f)$ and shifted in phase by the function $\phi(f)$. This lines up well with the LTI system properties that we discussed previously; if we can decompose our input signal $x(t)$ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions. Here's where it gets better: exponential functions are the eigenfunctions of linear time-invariant systems. 
The idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an LTI system, you get the same exponential function out, scaled by a (generally complex) value. This has the effect of changing the amplitude and phase of the exponential function that you put in. This is immensely useful when combined with the Fourier-transform-based decomposition discussed above. As we said before, we can write any signal $x(t)$ as a linear combination of many complex exponential functions at varying frequencies. If we pass $x(t)$ into an LTI system, then (because those exponentials are eigenfunctions of the system), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. This effect on the exponentials' amplitudes and phases, as a function of frequency, is the system's frequency response. That is, for an input signal with Fourier transform $X(f)$ passed into system $H$ to yield an output with a Fourier transform $Y(f)$, $$ Y(f) = H(f) X(f) = A(f) e^{j \phi(f)} X(f) $$ In summary: So, if we know a system's frequency response $H(f)$ and the Fourier transform of the signal that we put into it $X(f)$, then it is straightforward to calculate the Fourier transform of the system's output; it is merely the product of the frequency response and the input signal's transform. For each complex exponential frequency that is present in the spectrum $X(f)$, the system has the effect of scaling that exponential in amplitude by $A(f)$ and shifting the exponential in phase by $\phi(f)$ radians. Bringing them together: An LTI system's impulse response and frequency response are intimately related. The frequency response is simply the Fourier transform of the system's impulse response (to see why this relation holds, see the answers to this other question). So, for a continuous-time system: $$ H(f) = \int_{-\infty}^{\infty} h(t) e^{-j 2 \pi ft} dt $$ So, given either a system's impulse response or its frequency response, you can calculate the other. Either one is sufficient to fully characterize the behavior of the system; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
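A brief numerical check of the relationship described above (not part of the original answer; a 5-tap moving average stands in for the LTI system):

```python
import numpy as np

h = np.ones(5) / 5                       # impulse response of a simple FIR system
x = np.random.randn(256)                 # arbitrary input signal

# Time domain: the output is the convolution of input and impulse response.
y = np.convolve(x, h)

# Frequency domain: Y(f) = H(f) X(f), using FFTs long enough to hold all of y.
N = len(x) + len(h) - 1
Y = np.fft.fft(h, N) * np.fft.fft(x, N)

print(np.allclose(y, np.real(np.fft.ifft(Y))))   # -> True
```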
{ "source": [ "https://dsp.stackexchange.com/questions/536", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/381/" ] }
672
I have heard anecdotally that sampling complex signals need not follow Nyquist sampling rates, but can actually get away with half the Nyquist sampling rate. I am wondering if there is any truth to this? From Nyquist, we know that to unambiguously sample a signal, we need to sample at more than double the bandwidth of that signal. (I am defining bandwidth here as they do in the wiki link, aka, the occupancy of the positive frequency.) In other words, if my signal exists from -B to B, I need to sample at least > 2*B to satisfy Nyquist. If I mixed this signal up to fc and wished to do bandpass sampling, I would need to sample at least > 4*B. This is all great for real signals. My question is, is there any truth to the statement that a complex baseband signal (aka, one that only exists on one side of the frequency spectrum) need not be sampled at a rate of at least > 2*B, but can in fact be adequately sampled at a rate of at least > B? (I tend to think that if this is the case this is simply semantics, because you still have to take two samples (one real and one imaginary) per sample time in order to completely represent the rotating phasor, thereby strictly still following Nyquist...) What are your thoughts?
Your understanding is correct. If you sample at rate $f_s$, then with real samples only, you can unambiguously represent frequency content in the region $[0, \frac{f_s}{2})$ (although the caveat that allows bandpass sampling still applies). No additional information can be held in the other half of the spectrum when the samples are real, because real signals exhibit conjugate symmetry in the frequency domain; if your signal is real and you know its spectrum from $0$ to $\frac{f_s}{2}$, then you can trivially conclude what the other half of its spectrum is. There is no such restriction for complex signals, so a complex signal sampled at rate $f_s$ can unambiguously contain content from $-\frac{f_s}{2}$ to $\frac{f_s}{2}$ (for a total bandwidth of $f_s$). As you noted, however, there's not an inherent efficiency improvement to be made here, as each complex sample contains two components (real and imaginary), so while you require half as many samples, each requires twice the amount of data storage, which cancels out any immediate benefit. Complex signals are often used in signal processing, however, where you have problems that map well to that structure (such as in quadrature communications systems).
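Not from the original answer, but a small NumPy illustration of the conjugate-symmetry point: with complex samples, tones at +f and -f are distinct signals, while their real parts are identical, which is exactly the redundancy that halves the usable bandwidth of real sampling:

```python
import numpy as np

fs = 100.0                        # sample rate, Hz
t = np.arange(200) / fs
f = 30.0                          # tone at +30 Hz and at -30 Hz

z_pos = np.exp( 2j * np.pi * f * t)   # complex tone at +f
z_neg = np.exp(-2j * np.pi * f * t)   # complex tone at -f

# With complex samples the two are clearly different signals...
print(np.allclose(z_pos, z_neg))              # -> False

# ...but keeping only real samples makes them indistinguishable.
print(np.allclose(z_pos.real, z_neg.real))    # -> True
```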
{ "source": [ "https://dsp.stackexchange.com/questions/672", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/293/" ] }
736
I have to do cross-correlation of two audio files to prove they are similar. I have taken the FFT of the two audio files and have their power spectrum values in separate arrays. How should I proceed further to cross-correlate them and prove that they're similar? Is there a better way to do it? Any basic ideas will be helpful for me to learn and apply them.
Cross-correlation and convolution are closely related. In short, to do convolution with FFTs, you:

1. zero-pad the input signals a and b (add zeros to the end of each; the zero padding should fill the vectors until they reach a size of at least N = size(a) + size(b) - 1)
2. take the FFT of both signals
3. multiply the results together (element-wise multiplication)
4. do the inverse FFT

conv(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros))

You need to do the zero-padding because the FFT method is actually circular cross-correlation, meaning the signal wraps around at the ends. So you add enough zeros to get rid of the overlap, to simulate a signal that is zero out to infinity. To get cross-correlation instead of convolution, you either need to time-reverse one of the signals before doing the FFT, or take the complex conjugate of one of the signals after the FFT:

corr(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros[reversed]))
corr(a, b) = ifft(fft(a_and_zeros) * conj(fft(b_and_zeros)))

whichever is easier with your hardware/software. For autocorrelation (cross-correlation of a signal with itself), it's better to do the complex conjugate, because then you only need to calculate the FFT once. If the signals are real, you can use real FFTs (RFFT/IRFFT) and save half your computation time by only calculating half of the spectrum. Also you can save computation time by padding to a larger size that the FFT is optimized for (such as a 5-smooth number for FFTPACK, a ~13-smooth number for FFTW, or a power of 2 for a simple hardware implementation). Here's an example in Python of FFT correlation compared with brute-force correlation: https://stackoverflow.com/a/1768140/125507 This will give you the cross-correlation function, which is a measure of similarity vs offset. To get the offset at which the waves are "lined up" with each other, look for the peak in the correlation function; the x value of the peak is the offset, which could be negative or positive. I've only seen this used to find the offset between two waves. You can get a more precise estimate of the offset (better than the resolution of your samples) by using parabolic/quadratic interpolation on the peak. To get a similarity value between -1 and 1 (a negative value indicating one of the signals decreases as the other increases) you'd need to scale the amplitude according to the length of the inputs, length of the FFT, your particular FFT implementation's scaling, etc. The autocorrelation of a wave with itself will give you the value of the maximum possible match. Note that this will only work on waves that have the same shape. If they've been sampled on different hardware or have some noise added, but otherwise still have the same shape, this comparison will work, but if the wave shape has been changed by filtering or phase shifts, they may sound the same, but won't correlate as well.
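As a concrete illustration of the recipe above (not part of the original answer; it assumes NumPy and real-valued inputs), here is FFT-based cross-correlation with zero padding, checked against NumPy's brute-force np.correlate and then used to recover an offset:

```python
import numpy as np

def fft_xcorr(a, b):
    """Cross-correlation via FFT, zero-padded to avoid circular wrap-around."""
    N = len(a) + len(b) - 1                        # linear-correlation length
    X = np.fft.rfft(a, N) * np.conj(np.fft.rfft(b, N))
    c = np.fft.irfft(X, N)
    return np.roll(c, len(b) - 1)                  # reorder lags: -(len(b)-1) .. len(a)-1

a = np.random.randn(1000)
b = np.roll(a, 137)                                # b is a delayed copy of a (circular, for simplicity)

c = fft_xcorr(a, b)
lag = np.argmax(c) - (len(b) - 1)                  # peak location -> offset
print(lag)                                         # -> -137 (negative: b lags behind a)
print(np.allclose(c, np.correlate(a, b, mode="full")))   # matches the brute-force result
```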
{ "source": [ "https://dsp.stackexchange.com/questions/736", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/-1/" ] }
741
In an answer to a previous question, it was stated that one should zero-pad the input signals (add zeros to the end so that at least half of the wave is "blank"). What's the reason for this?
Zero padding allows one to use a longer FFT, which will produce a longer FFT result vector. A longer FFT result has more frequency bins that are more closely spaced in frequency. But they will be essentially providing the same result as a high-quality Sinc interpolation of a shorter non-zero-padded FFT of the original data. This might result in a smoother-looking spectrum when plotted without further interpolation. Although this interpolation won't help with resolving adjacent or nearby frequencies, it might make it easier to visually resolve the peak of a single isolated frequency that does not have any significant adjacent signals or noise in the spectrum. Statistically, the higher density of FFT result bins will probably make it more likely that the peak magnitude bin is closer to the frequency of a random isolated input sinusoid, even without further interpolation (parabolic, et al.). But, essentially, zero padding before a DFT/FFT is a computationally efficient method of interpolating a large number of points. Zero-padding for cross-correlation, auto-correlation, or convolution filtering is used to not mix convolution results (due to circular convolution). The full result of a linear convolution is longer than either of the two input vectors. If you don't provide a place to put the end of this longer convolution result, FFT fast convolution will just mix it in with, and cruft up, your desired result. Zero-padding provides a bunch of zeros into which to mix the longer result. And it's far, far easier to un-mix something that has only been mixed/summed with a vector of zeros.
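A small NumPy illustration of the interpolation point above (not from the original answer; the 10.3 Hz tone and the 8x padding factor are arbitrary choices):

```python
import numpy as np

fs = 100
n = np.arange(64)
x = np.cos(2 * np.pi * 10.3 * n / fs)        # tone that falls between FFT bins

X1 = np.abs(np.fft.rfft(x))                  # no padding: bins every fs/64 = 1.5625 Hz
X2 = np.abs(np.fft.rfft(x, 8 * len(x)))      # zero-padded: 8x more, closer-spaced bins

f1 = np.fft.rfftfreq(len(x), 1/fs)
f2 = np.fft.rfftfreq(8 * len(x), 1/fs)

print(f1[np.argmax(X1)])   # coarse peak estimate (about 10.9 Hz)
print(f2[np.argmax(X2)])   # denser grid puts the peak bin much closer to 10.3 Hz
```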
{ "source": [ "https://dsp.stackexchange.com/questions/741", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/59/" ] }
803
Pretty simple question - I am trying to figure out what exactly is different between 'de-noising' a signal, and simply filtering it (as we commonly know) to remove noise. Is this a case of lexical overlap or is there something fundamentally different? Why is it called 'de-noising'? Edit: Perhaps crucially, when we talk of filtering a signal to maximize its SNR, we usually mean AWGN in the colloquial context. So is the 'noise' being referred to in de-noising also AWGN, and if so, is de-noising simply a different way of removing it, or is it a different type of noise (non-gaussian, colored, etc) to begin with?
{ "source": [ "https://dsp.stackexchange.com/questions/803", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/293/" ] }
811
What would be the ideal way to find the mean and standard deviation of a signal for a real-time application? I'd like to be able to trigger a controller when a signal is more than 3 standard deviations off of the mean for a certain amount of time. I'm assuming a dedicated DSP would do this pretty readily, but is there any "shortcut" that may not require something so complicated?
There's a flaw in Jason R's answer, which is discussed in Knuth's "Art of Computer Programming" vol. 2. The problem comes if you have a standard deviation which is a small fraction of the mean: the calculation of E(x^2) - (E(x))^2 suffers from severe sensitivity to floating point rounding errors. You can even try this yourself in a Python script:

ofs = 1e9
A = [ofs+x for x in [1, -1, 2, 3, 0, 4.02, 5]]
A2 = [x*x for x in A]
(sum(A2)/len(A)) - (sum(A)/len(A))**2

I get -128.0 as an answer, which clearly isn't computationally valid, since the math predicts that the result should be nonnegative. Knuth cites an approach (I don't remember the name of the inventor) for calculating running mean and standard deviation which goes something like this:

initialize:
    m = 0
    S = 0
    n = 0

for each incoming sample x:
    prev_mean = m
    n = n + 1
    m = m + (x - m)/n
    S = S + (x - m)*(x - prev_mean)

and then after each step, the value of m is the mean, and the standard deviation can be calculated as sqrt(S/n) or sqrt(S/(n-1)), depending on which is your favorite definition of standard deviation. The equation I write above is slightly different than the one in Knuth, but it's computationally equivalent. When I have a few more minutes, I'll code up the above formula in Python and show that you'll get a nonnegative answer (that hopefully is close to the correct value). update: here it is. test1.py:

import math

def stats(x):
    n = 0
    S = 0.0
    m = 0.0
    for x_i in x:
        n = n + 1
        m_prev = m
        m = m + (x_i - m) / n
        S = S + (x_i - m) * (x_i - m_prev)
    return {'mean': m, 'variance': S/n}

def naive_stats(x):
    S1 = sum(x)
    n = len(x)
    S2 = sum([x_i**2 for x_i in x])
    return {'mean': S1/n, 'variance': (S2/n - (S1/n)**2)}

x1 = [1, -1, 2, 3, 0, 4.02, 5]
x2 = [x + 1e9 for x in x1]

print "naive_stats:"
print naive_stats(x1)
print naive_stats(x2)

print "stats:"
print stats(x1)
print stats(x2)

result:

naive_stats:
{'variance': 4.0114775510204073, 'mean': 2.0028571428571427}
{'variance': -128.0, 'mean': 1000000002.0028572}
stats:
{'variance': 4.0114775510204073, 'mean': 2.0028571428571431}
{'variance': 4.0114775868357446, 'mean': 1000000002.0028571}

You'll note that there's still some rounding error, but it's not bad, whereas naive_stats just pukes. edit: Just noticed Belisarius's comment citing Wikipedia, which does mention the Knuth algorithm.
{ "source": [ "https://dsp.stackexchange.com/questions/811", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/76/" ] }
912
I understand OpenCV is the de facto library for programming image processing in C/C++; I'm wondering if there is a C or C++ library like that for audio processing. I basically want to filter raw waves from a microphone, and analyze them with some machine learning algorithms. But I may eventually also need:

Multiplatform audio capture and audio playback
DSP - audio filters
Tone detection
Tonal property analysis
Tone synthesis
Recognition given some recognition corpus and model
Speech / music synthesis

Any advice would be appreciated.
Consider the following:

clam-project.org: CLAM (C++ Library for Audio and Music) is a full-fledged software framework for research and application development in the audio and music domain. It offers a conceptual model as well as tools for the analysis, synthesis and processing of audio signals.

MARF: MARF is an open-source research platform and a collection of voice/sound/speech/text and natural language processing (NLP) algorithms written in Java and arranged into a modular and extensible framework facilitating addition of new algorithms. MARF can run distributedly over the network and may act as a library in applications or be used as a source for learning and extension.

aubio: aubio is a tool designed for the extraction of annotations from audio signals. Its features include segmenting a sound file before each of its attacks, performing pitch detection, tapping the beat and producing midi streams from live audio.
{ "source": [ "https://dsp.stackexchange.com/questions/912", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1011/" ] }
971
I am able to write a basic sine wave generator for audio, but I want it to be able to smoothly transition from one frequency to another. If I just stop generating one frequency and immediately switch to another there will be a discontinuity in the signal and a "click" will be heard. My question is, what is a good algorithm to generate a wave that starts at, say 250Hz, and then transitions to 300Hz, without introducing any clicks. If the algorithm includes an optional glide/portamento time, then so much the better. I can think of a few possible approaches such as oversampling followed by a low pass filter, or maybe using a wavetable, but I am sure this is a common enough problem that there is a standard way of tackling it.
One approach that I have used in the past is to maintain a phase accumulator which is used as an index into a waveform lookup table. A phase delta value is added to the accumulator at each sample interval:

phase_index += phase_delta

To change frequency you change the phase delta that is added to the phase accumulator at each sample, e.g.

phase_delta = N * freq / Fs

where:
phase_delta is the number of LUT samples to increment
freq is the desired output frequency
Fs is the sample rate
N is the size of the LUT

This guarantees that the output waveform is continuous even if you change phase_delta dynamically, e.g. for frequency changes, FM, etc. For smoother changes in frequency (portamento) you can ramp the phase_delta value between its old value and new value over a suitable number of sample intervals rather than just changing it instantaneously. Note that phase_index and phase_delta both have an integer and a fractional component, i.e. they need to be floating point or fixed point. The integer part of phase_index (modulo table size) is used as an index into the waveform LUT, and the fractional part may optionally be used for interpolation between adjacent LUT values for higher quality output and/or smaller LUT size.
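Not part of the original answer: a rough Python sketch of the phase-accumulator idea with a linear portamento ramp (table size, glide time and sample rate are arbitrary choices, and the ramp shape could just as well be exponential):

```python
import numpy as np

def glide_tone(f_start, f_end, glide_time, duration, fs=44100, table_size=4096):
    """Wavetable oscillator whose frequency ramps linearly from f_start to f_end."""
    lut = np.sin(2 * np.pi * np.arange(table_size) / table_size)   # one sine cycle
    n = int(duration * fs)
    out = np.empty(n)
    phase = 0.0
    for i in range(n):
        t = i / fs
        f = f_end if t >= glide_time else f_start + (f_end - f_start) * t / glide_time
        phase_delta = table_size * f / fs
        idx = int(phase) % table_size           # integer part indexes the LUT
        frac = phase - int(phase)               # fractional part -> linear interpolation
        nxt = (idx + 1) % table_size
        out[i] = lut[idx] + frac * (lut[nxt] - lut[idx])
        phase = (phase + phase_delta) % table_size
    return out

y = glide_tone(250.0, 300.0, glide_time=0.05, duration=0.5)   # click-free 250 -> 300 Hz glide
```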
{ "source": [ "https://dsp.stackexchange.com/questions/971", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/688/" ] }
1,167
I have trouble distinguishing between these two concepts. This is my understanding so far. A stationary process is a stochastic process whose statistical properties do not change with time. For a strict-sense stationary process, this means that its joint probability distribution is constant; for a wide-sense stationary process, this means that its 1st and 2nd moments are constant. An ergodic process is one where its statistical properties, like variance, can be deduced from a sufficiently long sample. E.g., the sample mean converges to the true mean of the signal, if you average long enough. Now, it seems to me that a signal would have to be stationary, in order to be ergodic. And what kinds of signals could be stationary, but not ergodic? If a signal has the same variance for all time, for example, how could the time-averaged variance not converge to the true value? So, what is the real distinction between these two concepts? Can you give me an example of a process that is stationary without being ergodic, or ergodic without being stationary?
A random process is a collection of random variables, one for each time instant under consideration. Typically this may be continuous time ($-\infty < t < \infty$) or discrete time (all integers $n$, or all time instants $nT$ where $T$ is the sample interval). Stationarity refers to the distributions of the random variables. Specifically, in a stationary process, all the random variables have the same distribution function, and more generally, for every positive integer $n$ and $n$ time instants $t_1, t_2, \ldots, t_n$, the joint distribution of the $n$ random variables $X(t_1), X(t_2), \cdots, X(t_n)$ is the same as the joint distribution of $X(t_1+\tau), X(t_2+\tau), \cdots, X(t_n+\tau)$. That is, if we shift all time instants by $\tau$, the statistical description of the process does not change at all: the process is stationary. Ergodicity, on the other hand, doesn't look at statistical properties of the random variables but at the sample paths, i.e. what you observe physically. Referring back to the random variables, recall that random variables are mappings from a sample space to the real numbers; each outcome is mapped onto a real number, and different random variables will typically map any given outcome to different numbers. So, imagine that some higher being has performed the experiment, which has resulted in an outcome $\omega$ in the sample space, and this outcome has been mapped onto (typically different) real numbers by all the random variables in the process: specifically, the random variable $X(t)$ has mapped $\omega$ to a real number we shall denote as $x(t)$. The numbers $x(t)$, regarded as a waveform, are the sample path corresponding to $\omega$, and different outcomes will give us different sample paths. Ergodicity then deals with properties of the sample paths and how these properties relate to the properties of the random variables comprising the random process. Now, for a sample path $x(t)$ from a stationary process, we can compute the time average $$\bar{x} = \frac{1}{2T} \int_{-T}^T x(t) \,\mathrm dt$$ but, what does $\bar{x}$ have to do with $\mu = E[X(t)]$, the mean of the random process? (Note that it doesn't matter which value of $t$ we use; all the random variables have the same distribution and so have the same mean, if the mean exists.) As the OP says, the average value or DC component of a sample path converges to the mean value of the process if the sample path is observed long enough, provided the process is ergodic and stationary, etc. That is, ergodicity is what enables us to connect the results of the two calculations and to assert that $$\lim_{T\to \infty}\bar{x} = \lim_{T\to \infty}\frac{1}{2T} \int_{-T}^T x(t) \,\mathrm dt ~~~ \textbf{equals} ~~~\mu = E[X(t)] = \int_{-\infty}^\infty uf_X(u) \,\mathrm du.$$ A process for which such equality holds is said to be mean-ergodic, and a process is mean-ergodic if its autocovariance function $C_X(\tau)$ has the property: $$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T C_X(\tau) \,\mathrm d\tau = 0.$$ Thus, not all stationary processes need be mean-ergodic. But there are other forms of ergodicity too. For example, for an autocovariance-ergodic process, the autocovariance function of a finite segment (say for $t\in (-T, T)$) of the sample path $x(t)$ converges to the autocovariance function $C_X(\tau)$ of the process as $T\to \infty$.
A blanket statement that a process is ergodic might mean any of the various forms, or it might mean a specific form; one just can't tell. As an example of the difference between the two concepts, suppose that $X(t) = Y$ for all $t$ under consideration. Here $Y$ is a random variable. This is a stationary process: each $X(t)$ has the same distribution (namely, the distribution of $Y$), same mean $E[X(t)] = E[Y]$, same variance, etc.; each $X(t_1)$ and $X(t_2)$ have the same joint distribution (though it is degenerate) and so on. But the process is not ergodic, because each sample path is a constant. Specifically, if a trial of the experiment (as performed by you, or by a superior being) results in $Y$ having value $\alpha$, then the sample path of the random process that corresponds to this experimental outcome has value $\alpha$ for all $t$, and the DC value of the sample path is $\alpha$, not $E[X(t)] = E[Y]$, no matter how long you observe the (rather boring) sample path. In a parallel universe, the trial would result in $Y = \beta$ and the sample path in that universe would have value $\beta$ for all $t$. It is not easy to write mathematical specifications to exclude such trivialities from the class of stationary processes, and so this is a very minimal example of a stationary random process that is not ergodic. Can there be a random process that is not stationary but is ergodic? Well, NO, not if by ergodic we mean ergodic in every possible way one can think of: for example, if we measure the fraction of time during which a long segment of the sample path $x(t)$ has value at most $\alpha$, this is a good estimate of $P(X(t) \leq \alpha) = F_X(\alpha)$, the value of the (common) CDF $F_X$ of the $X(t)$'s at $\alpha$, if the process is assumed to be ergodic with respect to the distribution functions. But we can have random processes that are not stationary but are nonetheless mean-ergodic and autocovariance-ergodic. For example, consider the process $\{X(t)\colon X(t)= \cos (t + \Theta), -\infty < t < \infty\}$ where $\Theta$ takes on four equally likely values $0, \pi/2, \pi$ and $3\pi/2$. Note that each $X(t)$ is a discrete random variable that, in general, takes on four equally likely values $\cos(t), \cos(t+\pi/2)=-\sin(t), \cos(t+\pi) = -\cos(t)$ and $\cos(t+3\pi/2)=\sin(t)$. It is easy to see that in general $X(t)$ and $X(s)$ have different distributions, and so the process is not even first-order stationary. On the other hand, $$E[X(t)] = \frac 14\cos(t)+ \frac 14(-\sin(t)) + \frac 14(-\cos(t))+\frac 14 \sin(t) = 0$$ for every $t$ while \begin{align} E[X(t)X(s)]&= \frac 14\left[\cos(t)\cos(s) + (-\cos(t))(-\cos(s)) + \sin(t)\sin(s) + (-\sin(t))(-\sin(s))\right]\\ &= \frac 12\left[\cos(t)\cos(s) + \sin(t)\sin(s)\right]\\ &= \frac 12 \cos(t-s). \end{align} In short, the process has zero mean and its autocorrelation (and autocovariance) function depends only on the time difference $t-s$, and so the process is wide-sense stationary. But it is not first-order stationary and so cannot be stationary to higher orders either. Now, when the experiment is performed and the value of $\Theta$ is known, we get the sample function, which clearly must be one of $\pm \cos(t)$ and $\pm \sin(t)$. These have DC value $0$ (which equals $E[X(t)] = 0$) and autocorrelation function $\frac 12 \cos(\tau)$, the same as $R_X(\tau)$, and so this process is mean-ergodic and autocorrelation-ergodic even though it is not stationary at all.
In closing, I remark that the process is not ergodic with respect to the distribution function; that is, it cannot be said to be ergodic in all respects.
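A quick numerical illustration of both examples above (not from the original answer; a finite time horizon stands in for the $T \to \infty$ limit):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1000, 200001)

# Example 1: X(t) = Y (constant sample paths). Stationary but not mean-ergodic:
# the time average of one path equals that path's Y, not the ensemble mean E[Y] = 0.
Y = rng.normal(loc=0.0, scale=1.0)
x_path = np.full_like(t, Y)
print(x_path.mean(), "vs ensemble mean 0.0")   # time average = Y, generally not 0

# Example 2: X(t) = cos(t + Theta), Theta uniform on {0, pi/2, pi, 3pi/2}.
# Not stationary, yet the time average of any sample path -> E[X(t)] = 0.
theta = rng.choice([0, np.pi/2, np.pi, 3*np.pi/2])
x_path2 = np.cos(t + theta)
print(x_path2.mean())                           # close to 0 for a long observation window
```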
{ "source": [ "https://dsp.stackexchange.com/questions/1167", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/759/" ] }
1,198
I am doing some research on Gabor filters for detecting dents in cars. I know Gabor filters have had widespread use for pattern recognition, fingerprint recognition, etc. I have an image. Using some code from the MathWorks File Exchange site, I got the following output, which somehow isn't the output that one would expect. This isn't a good result. My script is as follows:

I = imread('dent.jpg');
I = rgb2gray(I);
[G, gabout] = gaborfilter1(I, 2, 4, 16, pi/2);
figure
imshow(uint8(gabout));

EDIT: Applying a different code to the following image: Output image after different orientations of the Gabor filter: How do I isolate this dent, which is not being detected properly?
This is an extremely difficult problem. I was part of a team that worked on it for several years, and having developed and supported other such applications for a long time I can say that dent detection is a particularly tricky problem, and much harder than it looks at first. Having an algorithm work under lab conditions or on known images is one thing; developing a system that is accurate and robust for "natural" images such as cars seen on a parking lot would likely require a team working for several years. In addition to the core problem of creating the algorithm, there are numerous other engineering difficulties. The sample code you tested isn't a bad start. If you could find the edges around the dark right side of the dent you could compare the edge map of the car with the ding to an edge map of a known good car imaged from the same angle with the same lighting. Controlling the lighting will help quite a bit. Problems to consider include the following: Lighting (much more difficult than it would first seem) Expected 3D surface of assembled outer panel (e.g. from CAD data) Criteria characterizing a dent: area, depth, profile, etc. Criteria for false negatives and false positives Means to save dent data and/or map dents onto a model of the car (or butterfly layout) Methodology and device to measure "true" dent characteristics: depth, area, etc. Extensive database of dents from a random sampling of vehicles Dealing with different paint colors and finishes 1. Lighting As Martin B noted correctly above, correct lighting is critical for this problem. Even with good structured lighting, you're going to have great difficulty detecting small dents near feature lines, gaps between panels, handles, and so on. The Wikipedia entry for structured lighting is a bit thin, but it's a start for understanding the principle: http://en.wikipedia.org/wiki/Structured_light Light stripes can be used to detect in-dings (dents) and out-dings (pimples). To see a ding, you'll need relative motion between the light source and the car. Either the light + camera move together relative to the car, or the car moves past the light + camera. Although in-dings and out-dings have characteristic appearances when seen at the edge of a light stripe, the detectability of a given dent depends on the size and depth of the dent relative to the width of the light stripe. A car's curvature is complex, so it's quite difficult to present a consistent light stripe to a camera. As the light stripe moves across the car body, the curvature and even the intensity of the light stripe will vary. One partial solution is to ensure that the camera and light stripe always at a consistent angle relative to the normal (the 3D perpendicular) of the portion of the surface being inspected. In practice a robot would be required to move the camera accurately relative to the body surface. Moving the robot accurately requires knowledge of the pose (position and 3D angles) of the car body, which is a nasty problem by itself. For any inspection for automotive applications, you need to completely control the lighting. That means not only placing lights of your choice at known locations, but also blocking all other light. This will mean a fairly large enclosure. Since the car's panels are curved outward (almost like a spherical surface), they'll reflect light from sources all around them. To greatly simplify this problem, you could use a high frequency flourescent bar inside an enclosure shrouded with black velvet. 
Quite often it's necessary to go to extremes like that for inspection applications. 2. 3D surface A car's outer surface is composed of complex curves. In order to know whether a suspicious spot is a ding, you have to compare that spot to known features of the car. That means you would need to match the 2D image from a camera to a 3D model viewed at a certain angle. This is not a problem solved quickly, and it's difficult enough to do well that some companies specialize in it. 3. Defect characterization For academic research or lab testing it may be sufficient to develop an algorithm that shows promise or improves on an existing method. To properly solve this problem for real commercial or industrial use, you need to have a highly detailed specification for the size dents you'd like to detect. When we tackled this problem, there were no reasonable industry or national standards for dents (3D deformations). That is, there was no agreed-upon technique to characterize a dent by its area, depth, and shape. We just had samples that industry experts agreed were bad, not too bad, and marginal in terms of severity. Defining the "depth" of a ding is tricky, too, since a ding is a 3D indentation in (typically) a 3D surface curving outward. Bigger dings are easier to detect, but they're also less common. An experienced auto worker can scan a car body quickly--much more quickly than an untrained observer--and find shallow dings the size of your pinky finger quickly. To justify the cost of an automated system, you would likely have to match an experienced observer's ability. 4. Criteria for detection errors Early on you should set criteria for acceptable false negatives and false positives. Even if you're just studying this problem as an R & D project and don't intend to develop a product, try to define your detection criteria. false negative: dent present, but not detected false positive: unblemished area identified as a dent There's typically a tradeoff: increase sensitivity and you'll find more dings (decrease false negatives), but you'll also find more dings that aren't there (increase false positives). It is quite easy to convince oneself that an algorithm performs better than it actually does: our natural bias is to notice defects detected by the algorithm and explain away those it hasn't detected. Conduct blind, automated tests. If possible, have someone else measure the dings and assign severity so that you don't know what the true measurements are. 5. Save data and/or map it A dent is characterized by its severity and its location on the car body. To know its location, you must solve the 2D-to-3D correspondence problem mentioned above. 6. Determining "true" shape of dents Dents are hard to measure. A sharp dent and a rounded dent of the same surface area and depth will appear different. Measuring dents by mechanical means leads to subjective judgments, and it's also quite tedious to use depth gauges, rulers, etc., when you'll likely have to measures dozens if not more. This is one of the harder engineering problems to solve for any defect detection project for manufacturing: how does one measure a defect and characterize it? if there is a standard for doing so, does the standard correlate well to something the inspection system measures? if the inspection system doesn't find a ding it "should have" found, who's to blame? 
That said, if an inspection system works well enough for a sample of known defects, then users may eventually come to trust it, and the system itself becomes the standard for defining defect severity. 7. Extensive database of dents: Ideally you would have hundreds if not thousands of sample images of dents of different severities at different locations on vehicles from different manufacturers. If you're interested in finding dents caused by accidents during the assembly process, then it could take a long time to collect that kind of data. Dents caused during the assembly process are not common. If you're only interested in finding dents caused by accidents or environmental damage, then that's a different matter. The types of dents will be different from those caused by accidental bumps inside an auto assembly plant. 8. Dealing with different paint colors: It's true that edge detectors can be reasonably robust at detecting edges in images of varying levels of contrast, but it can be quite disheartening to see what "varying levels of contrast" really means for different automotive paints and finishes. A light stripe that looks great on a shiny black car could be hardly detectable on a white car with old paint. Most cameras have relatively limited dynamic range, so achieving good contrast for both black shiny surfaces and white dull surfaces is tricky. It's quite likely you'll have to automatically control lighting intensity. That's hard, too.
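Not part of the original answer, and very much a toy: a sketch of the "compare an edge map against a known-good reference" idea from the opening paragraph, assuming two already-registered, identically lit grayscale images (reference.png and test.png are hypothetical file names):

```python
import cv2
import numpy as np

# Hypothetical file names; assumes the two images are already registered
# (same viewpoint, same lighting), which is the hard part in practice.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

ref_edges = cv2.Canny(cv2.GaussianBlur(ref, (5, 5), 0), 50, 150)
test_edges = cv2.Canny(cv2.GaussianBlur(test, (5, 5), 0), 50, 150)

# Edges present in the test image but absent from the reference are candidate
# anomalies (dents, dings); dilating the reference edges tolerates small misalignment.
ref_dilated = cv2.dilate(ref_edges, np.ones((7, 7), np.uint8))
candidates = cv2.bitwise_and(test_edges, cv2.bitwise_not(ref_dilated))

cv2.imwrite("candidate_dents.png", candidates)
```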
{ "source": [ "https://dsp.stackexchange.com/questions/1198", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/654/" ] }
1,288
As far as I understand, both SURF and SIFT are patent protected. Are there any alternative methods that can be used in a commercial application freely? For more info on the patent check out: http://opencv-users.1802565.n2.nabble.com/SURF-protected-by-patent-td3458734.html
Both SIFT and SURF authors require license fees for usage of their original algorithms. I have done some research about the situation and here are the possible alternatives:

Keypoint detectors:
Harris corner detector
Harris-Laplace - scale-invariant version of the Harris detector (an affine-invariant version also exists, presented by Mikolajczyk and Schmid, and I believe it is also patent free)
Multi-Scale Oriented Patches (MOPs) - although it is patented, the detector is basically the multi-scale Harris, so there would be no problems with that (the descriptor is a 2D wavelet-transformed image patch)
LoG filter - since the patented SIFT uses a DoG (Difference of Gaussians) approximation of LoG (Laplacian of Gaussian) to localize interest points in scale, LoG alone can be used in a modified, patent-free algorithm, though the implementation could run a little slower
FAST
BRISK (includes a descriptor)
ORB (includes a descriptor)
KAZE - free to use, M-SURF descriptor (modified for KAZE's nonlinear scale space), outperforms both SIFT and SURF
A-KAZE - accelerated version of KAZE, free to use, M-LDB descriptor (modified fast binary descriptor)

Keypoint descriptors:
Normalized gradient - simple, working solution
PCA-transformed image patch
Wavelet-transformed image patch - details are given in the MOPs paper, but it can be implemented differently to avoid the patent issue (e.g. using a different wavelet basis or a different indexing scheme)
Histogram of oriented gradients
GLOH
LESH
BRISK
ORB
FREAK
LDB

Note that if you assign orientation to the interest point and rotate the image patch accordingly, you get rotational invariance for free. Even Harris corners are rotationally invariant and the descriptor may be made so as well. A more complete solution is done in Hugin, because they also struggled to have a patent-free interest point detector.
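A minimal example of one patent-free combination from the lists above (ORB detector + ORB binary descriptor) using OpenCV's Python bindings; the image file names are hypothetical:

```python
import cv2

img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the right metric for binary descriptors like ORB/BRISK/FREAK.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.png", vis)
```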
{ "source": [ "https://dsp.stackexchange.com/questions/1288", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/698/" ] }
1,317
I was looking in the Android app store for a guitar tuner. I found a tuner app that claimed it was faster than other apps. It claimed it could find the frequency without using the DFT (I wish I still had the URL to this specification). I have never heard of this. Can you acquire an audio signal and compute the frequency without using the DFT or FFT algorithm?
FFT is actually not a great way of making a tuner. The FFT inherently has a finite frequency resolution, and it's not easy to detect very small frequency changes without making the time window extremely long, which makes it unwieldy and sluggish. Better solutions can be based on phase-locked loops, delay-locked loops, autocorrelation, zero crossing detection and tracking, max or min detection and tracking, and certainly an intelligent combination of these methods. Pre-processing always helps.
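For example, a crude autocorrelation-based pitch estimator (one of the non-FFT options mentioned above) fits in a few lines of numpy; the frame length, sample rate and search range below are illustrative choices, not recommendations:

import numpy as np

def estimate_pitch(frame, fs, fmin=70.0, fmax=1000.0):
    frame = frame - np.mean(frame)                   # remove DC offset
    ac = np.correlate(frame, frame, mode="full")     # autocorrelation
    ac = ac[ac.size // 2:]                           # keep non-negative lags only
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])   # lag of strongest periodicity
    return fs / lag

fs = 44100
t = np.arange(2048) / fs
print(estimate_pitch(np.sin(2 * np.pi * 196.0 * t), fs))  # G string, ~196 Hz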
{ "source": [ "https://dsp.stackexchange.com/questions/1317", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/730/" ] }
1,433
I am trying to find a list of possible image features like color, oriented edges and so on for measuring their usability in case of finding same/similar objects in images. Does anyone know such a list or at least some features?
The field itself is too vast, so I doubt you can have a fully exhaustive list here. However, MPEG-7 is one of the primary efforts in standardizing this area, so what is included here is not universal - but at least the most primary. Here are some key feature sets which are identified in MPEG-7 (I can really only talk about the Visual Descriptors, not the others; see this for the full scope). There are 4 categories of Visual Descriptors: 1. Color Descriptors, which include: Dominant Color, Color Layout (essentially the primary color on a block-by-block basis), Scalable Color (essentially a color histogram), Color Structure (essentially a local color histogram), and Color Spaces to make things interoperable. 2. Texture Descriptors (see also this ), which include: Texture Browsing Descriptor - which defines granularity/coarseness, regularity, and direction; Homogeneous Texture Descriptor - which is based on a Gabor filter bank; and Edge Histogram. 3. Shape Descriptors, which include: Region-based descriptors, which are scalar attributes of the shape under consideration - such as area, eccentricity etc.; Contour-based descriptors, which capture actual characteristic shape features; and 3D descriptors. 4. Motion Descriptors for Video: Camera Motion (3-D camera motion parameters); Motion Trajectory (of objects in the scene) [e.g. extracted by tracking algorithms]; Parametric Motion (e.g. motion vectors, which allow a description of the motion of the scene, but it can also be more complex models on various objects); and Activity, which is more of a semantic descriptor. MPEG-7 doesn't define "how these are extracted" - it only defines what they mean and how to represent/store them. So research does exist on how to extract and use them. Here is another good paper that gives insight into this subject. But yes, many of these features are rather basic, and maybe more research will create more sophisticated (and complex) feature sets.
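As an illustration of the simpler color descriptors in this list, a Scalable-Color-style HSV histogram can be sketched with OpenCV as below; the bin counts, file names and the correlation metric are arbitrary choices for the example, not the exact MPEG-7 quantization:

import cv2

def hsv_histogram(path, bins=(16, 4, 4)):
    img = cv2.imread(path)                               # BGR image from disk
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # 3-D histogram over H (0..180 in OpenCV), S and V channels
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()           # compact feature vector

# Similarity between two images via histogram correlation (placeholder files)
d = cv2.compareHist(hsv_histogram("a.jpg"), hsv_histogram("b.jpg"), cv2.HISTCMP_CORREL)
print(d)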
{ "source": [ "https://dsp.stackexchange.com/questions/1433", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/969/" ] }
1,463
I am trying to implement a content based image retrieval system but before I do so I would like to get an overview of some programming languages suitable for this task (having good libs and such). Does anyone know some good languages and libs for that kind of task? What about Python or Java? Best
Maybe you can be more specific about the scope and scale of your work (academic project? Desktop or Mobile commercial product? Web-based commercial project?). Some recommendations and comments: Matlab is common in the academic world, and quite good for sketching/validating ideas. You will have access to a large body of code from other researchers (in CV and machine learning); prototyping and debugging will be very fast and easy, but whatever you will have developed in this environment will be hard to put in production. Depending on what your code is doing, you might have memory/performance problems (there are situations where you can't describe what you want to do in terms of Matlab's primitives and have to start looping on pixels and Matlab's being an interpreted language is not helping in this context). Interaction with databases, web servers etc is not easy, sometimes impossible (you won't get a Matlab program to become a Thrift server called by a web front-end). Costs $$$. C++ is what is used for many production-grade CV systems (think of something at the scale of Google's image search or Streetview, or many commercial robotics applications). Good libraries like OpenCV, excellent performance, easy to put into a production environment. If you need to do machine learning, there are many libraries out there (LibSVM / SVMlight, Torch). If you have to resort to "loop on all pixels" code it will perform well. Easy to use for coding the systems/storage layers needed in a large scale retrieval system (eg: a very large on-disk hash map for storing an inverted index mapping feature hashes to images). Things like Thrift / Message Pack can turn your retrieval program into a RPC server which can be called by a web front-end. However: not very agile for prototyping, quite terrible for trying out new ideas, slower development time; and put in the hands of inexperienced coders might have hard to track performances and/or instability problems. Python is somehow a middle ground between both. You can use it for Matlab style numerical computing (with numpy and scipy) + have bindings to libraries like OpenCV. You can do systems / data structure stuff with it and get acceptable performances. There are quite a few machine learning packages out there though less than in Matlab or C++. Unless you have to resort to "loop on all pixels" code, you will be able to code pretty much everything you could have done with C++ with a 1:1.5 to 1:3 ratio of performance and 2:1 to 10:1 ratio of source code size (debatable). But depending on the success of your project there will be a point where performance will be an issue and when rewriting to C++ won't be an option.
{ "source": [ "https://dsp.stackexchange.com/questions/1463", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/969/" ] }
1,499
I'm now processing an MP3 file and have encountered this problem. My MP3 is stereo encoded. What I want to do is extract the vocal part for further processing (either mono or stereo output is OK). As far as I know, audio is encoded into different disjoint sub frequency bands in MP3. I think I can limit the signals to the vocal range through a high-pass/low-pass filter with a properly set cut-off frequency. However, the result would still contain parts of the pure music signal in this case. Alternatively, after googling, I think I may calculate the background signal first (by inverting one channel and adding it to the signal from the other channel, assuming the vocal part is centered in the stereo audio; this is called phase cancellation). After this transformation, the signal is mono. Then I should merge the original stereo into mono, from which I would then remove the background signal. Given the effectiveness, which one is preferred (or is there any other solution)? If the second one: given two channels A and B, should (B-A) or (A-B) be used when computing the background? As for merging the two channels, is the arithmetic mean accurate enough? Or could I downsample each channel by a factor of two and interleave the downsampled signals as the mono result? Thanks and best regards.
First of all, how the data is encoded in an mp3 file is irrelevant to the question unless you aim at doing compressed-domain processing (which would be quite foolish). So you can assume your algorithm will work with decompressed time-domain data. The sum / difference is a very, very basic trick for vocal suppression (not extraction). It is based on the assumption that the vocals are mixed at the center of the stereo field, while other instruments are panned laterally. This is rarely true. L-R and R-L will sound the same (the human ear is insensitive to a global phase shift) and will give you a mono mix without the instruments mixed at the center. The problem is, once you have recovered the background, what will you do with it? Try to suppress it from the center (average) signal? This won't work, you will be doing (L + R) / 2 - (L - R), this is not very interesting... You can try any linear combinations of those (averaged and "center removed"), nothing will come out of it! Regarding filtering approaches: the f0 of the voice rarely exceeds 1000 Hz but its harmonics can go over that. Removing the highest frequencies will make consonants (especially sss, chhh) unpleasant. Some male voices go below 100 Hz. You can safely cut whatever is below 50 or 60 Hz (bass, kick), though. Some recent developments in voice separation worth exploring: Jean Louis Durrieu's background NMF + harmonic comb > filter model. Python code here . Rafii's background extraction approach . Straightforward to code and works well on computer-produced music with very repetitive patterns like Electro, Hip-hop... Hsu's approach, based on f0 detection, tracking and masking. "A Tandem Algorithm for Singing Pitch Extraction and Voice Separation from Music Accompaniment" (can't find an accessible PDF).
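To make the first point concrete, the basic L-R "center cancellation" trick (suppression, not extraction) plus the low cut suggested above looks roughly like this in Python; the file names, the 60 Hz corner and the 4th-order filter are placeholder assumptions, and a 16-bit stereo WAV input is assumed:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

fs, stereo = wavfile.read("song.wav")                 # stereo, shape (N, 2)
left = stereo[:, 0].astype(np.float64)
right = stereo[:, 1].astype(np.float64)

side = left - right                                   # cancels center-panned content
b, a = butter(4, 60.0 / (fs / 2), btype="highpass")   # drop rumble below ~60 Hz
side = lfilter(b, a, side)

side = side / np.max(np.abs(side)) * 0.9              # avoid clipping on export
wavfile.write("no_center.wav", fs, (side * 32767).astype(np.int16))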
{ "source": [ "https://dsp.stackexchange.com/questions/1499", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/942/" ] }
1,506
I am attempting to implement the Gabor Descriptor based on the following paper: Image Copy Detection Using a Robust Gabor Texture Descriptor . It is the best paper that I have found on the subject. It is not perfect, as it has some ordering issues, with some equations and paragraphs that only make sense after later sections. No biggie, it is only a matter of time consumed to decipher the intent. What is stopping me right now is that one of the parameters is never defined in the paper. It first appears in $(1)$ of Section 2. There we have the following factor: $e^{2\pi jWx}$. $W$ is later assumed to be 1 and $x$ is the parameter of the function, but $j$ is never defined (though it is used again in several subsequent equations, especially in section 3.1; of most interest is $(11)$). So I am seeking a definition of $j$. A nice bonus would be a better paper (or book) defining the Gabor Descriptor.
{ "source": [ "https://dsp.stackexchange.com/questions/1506", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/787/" ] }
1,522
Below is a signal which represents a recording of someone talking. I would like to create a series of smaller audio signals based on this. The idea being to detect when 'important' sound starts and ends and use those for markers to make new snippet of audio. In other words, I would like to use the silence as indicators as to when an audio 'chunk' has started or stopped and make new audio buffers based on this. So for example, if a person records himself saying Hi [some silence] My name is Bob [some silence] How are you? then I would like to make three audio clips from this. One that says Hi , one that says My name is Bob and one that says How are you? . My initial idea is to run through the audio buffer constantly checking where there are areas of low amplitude. Maybe I could do this by taking the first ten samples, average the values and if the result is low then label it as silent. I would proceed down the buffer by checking the next ten samples. Incrementing along in this way I could detect where envelopes start and stop. If anyone has any advice on a good, but simple way to do this that would be great. For my purposes the solution can be quite rudimentary. I'm not a pro at DSP, but understand some basic concepts. Also, I would be doing this programmatically so it would be best to talk about algorithms and digital samples. Thanks for all the help! EDIT 1 Great responses so far! Just wanted to clarify that this is not on live audio and I will be writing the algorithms myself in C or Objective-C so any solutions that use libraries aren't really an option.
This is the classic problem of speech detection. The first thing to do would be to Google the concept. It is widely used in digital communication, there's been a lot of research conducted on the subject, and there are good papers out there. Generally, the more background noise you have to deal with, the more elaborate your method of speech detection must be. If you're using recordings taken in a quiet room, you can do it very easily (more later). If you have all sorts of noise while someone is talking (trucks passing by, dogs barking, plates smashing, aliens attacking), you'll have to use something much more clever. Looking at the waveform you attached, your noise is minimal, so I suggest the following: Extract the signal envelope. Pick a good threshold. Detect places where the envelope magnitude exceeds the threshold. What does this all mean? An envelope of a signal is a curve that describes its magnitude over time, independently of how its frequency content makes it oscillate (see image below). Envelope extraction can be done by creating a new signal that contains the absolute values of your original signal, e.g. $\{ 1, 45, -6, 2, -43, 2 \ldots \}$ becomes $\{ 1, 45, 6, 2, 43, 2 \ldots \}$, and then low-pass filtering the result. The simplest low-pass filter can be implemented by replacing each sample value by an average of its N neighbors on both sides. The best value of N can be found experimentally and can depend on several things such as your sampling rate. You can see from the image that if you don't have much noise present, your signal envelope will always be above a certain threshold (loudness level), and you can consider those regions as speech-detected regions.
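A bare-bones version of that envelope-plus-threshold recipe might look like the sketch below (written with numpy for brevity, but the rectify / moving-average / compare loop translates directly to C or Objective-C); the smoothing length and the relative threshold are placeholders you would tune by ear, and it assumes the recording starts and ends in silence:

import numpy as np

def speech_segments(signal, fs, smooth_ms=50, rel_threshold=0.05):
    env = np.abs(signal.astype(np.float64))                # rectify
    n = max(1, int(fs * smooth_ms / 1000))                 # moving-average low-pass
    env = np.convolve(env, np.ones(n) / n, mode="same")
    active = env > rel_threshold * env.max()               # True where "loud enough"
    changes = np.flatnonzero(np.diff(active.astype(np.int8)))
    # Pair up rising/falling edges into (start, stop) sample indices
    return np.reshape(changes[:2 * (changes.size // 2)], (-1, 2))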
{ "source": [ "https://dsp.stackexchange.com/questions/1522", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/893/" ] }
1,637
I was just learning about the frequency domain in images. I can understand the frequency spectrum in the case of waves. It denotes what frequencies are present in a wave. If we draw the frequency spectrum of $\cos(2\pi f t)$, we get an impulse signal at $-f$ and $+f$. And we can use corresponding filters to extract particular information. But what does the frequency spectrum mean in the case of images? When we take the FFT of an image in OpenCV, we get a weird picture. What does this image denote? And what is its application? I read some books, but they give a lot of mathematical equations rather than the physical interpretation. So can anyone provide a simple explanation of the frequency domain in images, with a simple application of it in image processing?
But what does frequency spectrum means in case of images? The "mathematical equations" are important, so don't skip them entirely. But the 2d FFT has an intuitive interpretation, too. For illustration, I've calculated the inverse FFT of a few sample images: As you can see, only one pixel is set in the frequency domain. The result in the image domain (I've only displayed the real part) is a "rotated cosine pattern" (the imaginary part would be the corresponding sine). If I set a different pixel in the frequency domain (at the left border): I get a different 2d frequency pattern. If I set more than one pixel in the frequency domain: you get the sum of two cosines. So like a 1d wave, that can be represented as a sum of sines and cosines, any 2d image can be represented (loosely speaking) as a sum of "rotated sines and cosines", as shown above. when we take fft of a image in opencv, we get weird picture. What does this image denote? It denotes the amplitudes and frequencies of the sines/cosines that, when added up, will give you the original image. And what is its application? There are really too many to name them all. Correlation and convolution can be calculated very efficiently using an FFT, but that's more of an optimization, you don't "look" at the FFT result for that. It's used for image compression, because the high frequency components are usually just noise.
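The same illustration can be reproduced numerically with numpy if you want to play with it (the image size and the chosen frequency-domain pixels below are arbitrary):

import numpy as np

F = np.zeros((128, 128), dtype=complex)
F[3, 5] = 1.0                          # one "pixel" set in the frequency domain
one_wave = np.real(np.fft.ifft2(F))    # a single tilted 2-D cosine pattern

F[10, 0] = 1.0                         # a second frequency-domain pixel
two_waves = np.real(np.fft.ifft2(F))   # sum of two such patterns

spectrum = np.fft.fftshift(np.abs(np.fft.fft2(two_waves)))  # going back: two spikes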
{ "source": [ "https://dsp.stackexchange.com/questions/1637", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/818/" ] }
1,714
I have done a lot of research and found methods like adaptive thresholding, watershed, etc. that can be used for detecting veins in leaves. However, thresholding isn't good as it introduces a lot of noise. All my images are grayscale. Could anyone please suggest which approaches to adopt for this problem? I am in urgent need of help. EDIT: My original image After thresholding As suggested by the answer I have tried the following edge detection: Canny (too much noise and unwanted disturbances), Sobel, Roberts. EDIT: I tried one more operation and I get the following result; it's better than what I tried with Canny and adaptive thresholding. What do you think?
You're not looking for edges (=borders between extended areas of high and low gray value), you're looking for ridges (thin lines darker or brighter than their neighborhood), so edge filters might not be ideal: An edge filter will give you two flanks (one on each side of the line) and a low response in the middle of the line: ADD : I've been asked to explain the difference between an edge detector and a ridge detector more clearly. I apologize in advance if this answer is getting very long. An edge detector is (usually) a first derivative operator: If you imagine the input image as a 3D landscape, an edge detector measures the steepness of the slope at each point of that landscape: If you want to detect the border of an extended bright or dark region, this is just fine. But for the veins in the OP's image it will give you just the same: the outlines left and right of each vein: That also explains the "double line pattern" in the Canny edge detector results: So, how do you detect these thin lines (i.e. ridges), then? The idea is that the pixel values can be (locally) approximated by a 2nd order polynomial, i.e. if the image function is $g$, then for small values of $x$ and $y$: $g(x,y)\approx \frac{1}{2} x^2 \frac{\partial ^2g}{\partial x^2}+x y \frac{\partial ^2g}{\partial x\, \partial y}+\frac{1}{2} y^2 \frac{\partial ^2g}{\partial y^2}+x \frac{\partial g}{\partial x}+y \frac{\partial g}{\partial y}+g(0,0)$ or, in matrix form: $g(x,y)\approx \frac{1}{2} \left( \begin{array}{cc} x & y \end{array} \right).\left( \begin{array}{cc} \frac{\partial ^2g}{\partial x^2} & \frac{\partial ^2g}{\partial x\, \partial y} \\ \frac{\partial ^2g}{\partial x\, \partial y} & \frac{\partial ^2g}{\partial y^2} \end{array} \right).\left( \begin{array}{c} x \\ y \end{array} \right)+\left( \begin{array}{cc} \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{array} \right).\left( \begin{array}{c} x \\ y \end{array} \right)+g(0,0)$ The second order derivative matrix $\left( \begin{array}{cc} \frac{\partial ^2g}{\partial x^2} & \frac{\partial ^2g}{\partial x\, \partial y} \\ \frac{\partial ^2g}{\partial x\, \partial y} & \frac{\partial ^2g}{\partial y^2} \end{array} \right)$ is called the "Hessian matrix". It describes the 2nd order structure we're interested in. The 2nd order part of this function can be transformed into the sum of two parabolas $\lambda _1 x^2 + \lambda _2 y^2$ rotated by some angle, by decomposing the Hessian matrix above into a rotation times a diagonal matrix of its eigenvalues ( Matrix decomposition ). We don't care about the rotation (we want to detect ridges in any orientation), so we're only interested in $\lambda _1$ and $\lambda _2$. What kind of shapes can this function approximation have? Actually, not that many: To detect ridges, we want to find areas in the image that look like the last of the plots above, so we're looking for areas where the major eigenvalue of the Hessian is large (compared to the minor eigenvalue). The simplest way to detect that is just to calculate the major eigenvalue at each pixel - and that's what the ridge filter below does. A ridge filter will probably give better results. I've tried Mathematica's built-in RidgeFilter (which calculates the major eigenvalue of the Hessian matrix at each pixel) on your image: As you can see, there's only a single peak for every thin dark line.
Binarizing and skeletonizing yields: After pruning the skeleton and removing small components (noise) from the image, I get this final skeleton: Full Mathematica code: ridges = RidgeFilter[ColorNegate@src]; skeleton = SkeletonTransform[Binarize[ridges, 0.007]]; DeleteSmallComponents[Pruning[skeleton, 50], 50] ADD: I'm not a Matlab expert, I don't know if it has a built-in ridge filter, but I can show you how to implement it "by hand" (again, using Mathematica). As I said, the ridge filter is the major eigenvalue of the Hessian matrix. I can calculate that eigenvalue symbolically in Mathematica: $\text{eigenvalue}=\text{Last}\left[\text{Eigenvalues}\left[\left( \begin{array}{cc} H_{\text{xx}} & H_{\text{xy}} \\ H_{\text{xy}} & H_{\text{yy}} \end{array} \right)\right]\right]$ => $\frac{1}{2} \left(H_{\text{xx}}+H_{\text{yy}}+\sqrt{H_{\text{xx}}^2+4 H_{\text{xy}}^2-2 H_{\text{xx}} H_{\text{yy}}+H_{\text{yy}}^2}\right)$ So what you have to do is calculate the second derivatives $H_{\text{xx}}$, $H_{\text{xy}}$, $H_{\text{yy}}$ (using a Sobel or derivative-of-Gaussian filter) and insert them into the expression above, and you've got your ridge filter.
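For a non-Mathematica version, the same major-eigenvalue expression can be evaluated per pixel with numpy/scipy along the following lines; the smoothing scale is a placeholder, and for dark veins on a bright background you would negate the image first (as ColorNegate does above):

import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_filter(img, sigma=2.0):
    g = gaussian_filter(img.astype(np.float64), sigma)   # smooth before differentiating
    Hxx = np.gradient(np.gradient(g, axis=1), axis=1)    # second derivatives
    Hyy = np.gradient(np.gradient(g, axis=0), axis=0)
    Hxy = np.gradient(np.gradient(g, axis=1), axis=0)
    # Major eigenvalue of [[Hxx, Hxy], [Hxy, Hyy]] at every pixel
    return 0.5 * (Hxx + Hyy + np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2))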
{ "source": [ "https://dsp.stackexchange.com/questions/1714", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/654/" ] }
1,728
I'm struggling to figure out how the time points of an STFT are calculated, and I can't find a definitive answer. Let's say I have a 4Hz stationary signal and I'm going to use a 64 second window with 3 second overlap. So that's a 256 point window and a 12 point overlap. Assuming I start at time=0 , take the first 64 seconds, and perform the FFT/Power Spectrum Density/etc... Can I then say that is the value at t=32 ? Is the next window, after the 3 second slide localized at t=35 , and so on? If so, and I really wanted to start at t=0 , would I then effectively start at t=-32 , fill the first 128 points with zeros and take the first 128 points from my signal, thus centering on t=0 ?
{ "source": [ "https://dsp.stackexchange.com/questions/1728", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1105/" ] }
1,932
I'm trying to implement various binarization algorithms to the image shown: Here's the code: clc; clear; x=imread('n2.jpg'); %load original image % Now we resize the images so that computational work becomes easier later onwards for us. size(x); x=imresize(x,[500 800]); figure; imshow(x); title('original image'); z=rgb2hsv(x); %extract the value part of hsv plane v=z(:,:,3); v=imadjust(v); %now we find the mean and standard deviation required for niblack and %sauvola algorithms m = mean(v(:)) s=std(v(:)) k=-.4; value=m+ k*s; temp=v; % implementing niblack thresholding algorithm: for p=1:1:500 for q=1:1:800 pixel=temp(p,q); if(pixel>value) temp(p,q)=1; else temp(p,q)=0; end end end figure; imshow(temp); title('result by niblack'); k=kittlerMet(g); figure; imshow(k); title('result by kittlerMet'); % implementing sauvola thresholding algorithm: val2=m*(1+.1*((s/128)-1)); t2=v; for p=1:1:500 for q=1:1:800 pixel=t2(p,q); if(pixel>value) t2(p,q)=1; else t2(p,q)=0; end end end figure; imshow(t2); title('result by sauvola'); The results I obtained are as shown: As you can see the resultant images are degraded at the darker spots.Could someone please suggest how to optimize my result??
Your image doesn't have uniform brightness, so you shouldn't work with a uniform threshold. You need an adaptive threshold. This can be implemented by preprocessing the image to make the brightness more uniform across the image (code written in Mathematica, you'll have to implement the Matlab version for yourself): A simple way to make the brightness uniform is to remove the actual text from the image using a closing filter: white = Closing[src, DiskMatrix[5]] The filter size should be chosen larger than the font stroke width and smaller than the size of the stains you're trying to remove. EDIT: I was asked in the comments to explain what a closing operation does. It's a morphological dilation followed by a morphological erosion. The dilation essentially moves the structuring element to every position in the image, and picks the brightest pixel under the mask, thus: removing dark structures smaller than the structuring element, shrinking larger dark structures by the size of the structuring element, enlarging bright structures. The erosion operation does the opposite (it picks the darkest pixel inside the structuring element), so if you apply it on the dilated image: the dark structures that were removed because they're smaller than the structuring element are still gone, the darker structures that were shrunk are enlarged again to their original size (though their shape will be smoother), the bright structures are reduced to their original size. So the closing operation removes small dark objects with only minor changes to larger dark objects and bright objects. Here's an example with different structuring element sizes: As the size of the structuring element increases, more and more of the characters are removed. At radius=5, all of the characters are removed. If the radius is increased further, the smaller stains are removed, too: Now you just divide the original image by this "white image" to get an image of (nearly) uniform brightness: whiteAdjusted = Image[ImageData[src]/ImageData[white]*0.85] This image can now be binarized with a constant threshold: Binarize[whiteAdjusted, 0.6]
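The same closing / divide / fixed-threshold pipeline can be sketched with OpenCV instead of Mathematica; the kernel radius and the two constants below mirror the values above but are only starting points and would need tuning for other scans:

import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)           # placeholder file
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))   # roughly DiskMatrix[5]
white = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)            # illumination estimate

flat = 0.85 * img.astype(np.float64) / (white.astype(np.float64) + 1e-6)
binary = np.where(flat > 0.6, 255, 0).astype(np.uint8)            # constant threshold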
{ "source": [ "https://dsp.stackexchange.com/questions/1932", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1173/" ] }
1,933
I have made a python code to smoothen a given signal using the Weierstrass transform, which is basically the convolution of a normalised gaussian with a signal. The code is as follows: #Importing relevant libraries from __future__ import division from scipy.signal import fftconvolve import numpy as np def smooth_func(sig, x, t= 0.002): ''' x is an 1-D array, sig is the input signal and a function of x. ''' N = len(x) x1 = x[-1] x0 = x[0] # defining a new array y which is symmetric around zero, to make the gaussian symmetric. y = np.linspace(-(x1-x0)/2, (x1-x0)/2, N) #gaussian centered around zero. gaus = np.exp(-y**(2)/t) #using fftconvolve to speed up the convolution; gaus.sum() is the normalization constant. return fftconvolve(sig, gaus/gaus.sum(), mode='same') If I run this code for say a step function, it smoothens the corner, but at the boundary it interprets another corner and smoothens that too, as a result giving unnecessary behaviour at the boundary. I explain this with a figure shown in the link below. Boundary effects This problem does not arise if we directly integrate to find convolution. Hence the problem is not in Weierstrass transform, and hence the problem is in the fftconvolve function of scipy. To understand why this problem arises we first need to understand the working of fftconvolve in scipy. The fftconvolve function basically uses the convolution theorem to speed up the computation. In short it says: convolution(int1,int2)=ifft(fft(int1)*fft(int2)) If we directly apply this theorem we dont get the desired result. To get the desired result we need to take the fft on a array double the size of max(int1,int2). But this leads to the undesired boundary effects. This is because in the fft code, if size(int) is greater than the size(over which to take fft) it zero pads the input and then takes the fft. This zero padding is exactly what is responsible for the undesired boundary effects. Can you suggest a way to remove this boundary effects? I have tried to remove it by a simple trick. After smoothening the function I am compairing the value of the smoothened signal with the original signal near the boundaries and if they dont match I replace the value of the smoothened func with the input signal at that point. It is as follows: i = 0 eps=1e-3 while abs(smooth[i]-sig[i])> eps: #compairing the signals on the left boundary smooth[i] = sig[i] i = i + 1 j = -1 while abs(smooth[j]-sig[j])> eps: # compairing on the right boundary. smooth[j] = sig[j] j = j - 1 There is a problem with this method, because of using an epsilon there are small jumps in the smoothened function, as shown below: jumps in the smooth func Can there be any changes made in the above method to solve this boundary problem? Also I tried removing the zero padding in the fft source code and replaced it with a constant value, but it gave undesired results. Can you suggest a way of removing this zero padding in the scipy fft source code?
{ "source": [ "https://dsp.stackexchange.com/questions/1933", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1179/" ] }
2,096
Welch's method has been my go-to algorithm for computing power spectral density (PSD) of evenly-sampled timeseries. I noticed that there are many other methods for computing PSD. For example, in Matlab I see: PSD using Burg method PSD using covariance method PSD using periodogram PSD using modified covariance method PSD using multitaper method (MTM) PSD using Welch's method PSD using Yule-Walker AR method Spectrogram using short-time Fourier transform Spectral estimation What are the advantages of these various methods? As a practical question, when would I want to use something other than Welch's method?
I have no familiarity with the Multitaper method. That said, you've asked quite a question. In pursuit of my MSEE degree, I took an entire course that covered PSD estimation. The course covered all of what you listed (with exception to the Multitaper method), and also subspace methods. Even this only covers some of the main ideas, and there are many methods stemming from these concepts. For starters, there are two main methods of power spectral density estimation: non-parametric and parametric. Non-parametric methods are used when little is known about the signal ahead of time. They typically have less computational complexity than parametric models. Methods in this group are further divided into two categories: periodograms and correlograms. Periodograms are also sometimes referred to as direct methods, as they result in a direct transformation of the data. These include the sample spectrum, Bartlett's method, Welch's method, and the Daniell Periodogram. Correlograms are sometimes referred to as indirect methods, as they exploit the Wiener-Khinchin theorem. Therefore these methods are based on taking the Fourier transform of some sort of estimate of the autocorrelation sequence. Because of the high amount of variance associated with higher order lags (due to a small amount of data samples used in the correlations), windowing is used. The Blackman-Tukey method generalizes the correlogram methods. Parametric methods typically assume some sort of signal model prior to calculation of the PSD estimate. Therefore, it is assumed that some knowledge of the signal is known ahead of time. There are two main parametric method categories: autoregressive methods and subspace methods. Autoregressive methods assume that the signal can be modeled as the output of an autoregressive filter (such as an IIR filter) driven by a white noise sequence. Therefore all of these methods attempt to solve for the IIR coefficients, whereby the resulting power spectral density is easily calculated. The model order (or number of taps), however, must be determined. If the model order is too small, the spectrum will be highly smoothed, and lack resolution. If the model order is too high, false peaks from an abundant amount of poles begin to appear. If the signal may be modeled by an AR process of model 'p', then the output of the filter of order >= p driven by the signal will produce white noise. There are hundreds of metrics for model order selection. Note that these methods are excellent for high-to-moderate SNR, narrowband signals. The former is because the model breaks down in significant noise, and is better modeled as an ARMA process. The latter is due to the impulsive nature of the resulting spectrum from the poles in the Fourier transform of the resulting model. AR methods are based on linear prediction, which is what's used to extrapolate the signal outside of its known values. As a result, they do not suffer from sidelobes and require no windowing. Subspace methods decompose the signal into a signal subspace and noise subspace. Exploiting orthogonality between the two subspaces allows a pseudospectrum to be formed where large peaks at narrowband components can appear. These methods work very well in low SNR environments, but are computationally very expensive. They can be grouped into two categories: noise subspace methods and signal subspace methods. Both categories can be utilized in one of two ways: eigenvalue decomposition of the autocorrelation matrix or singular value decomposition of the data matrix. 
Noise subspace methods attempt to solve for 1 or more of the noise subspace eigenvectors. Then, the orthogonality between the noise subspace and the signal subspace produces zeros in the denominator of the resulting spectrum estimates, resulting in large values or spikes at true signal components. The number of discrete sinusoids, or the rank of the signal subspace, must be determined/estimated, or known ahead of time. Signal subspace methods attempt to discard the noise subspace prior to spectral estimation, improving the SNR. A reduced rank autocorrelation matrix is formed with only the eigenvectors determined to belong to the signal subspace (again, a model order problem), and the reduced rank matrix is used in any one of the other methods. Now, I'll try to quickly cover your list: PSD using Burg method : The Burg method leverages the Levinson recursion slightly differently than the Yule-Walker method, in that it estimates the reflection coefficients by minimizing the average of the forward and backward linear prediction error. This results in a harmonic mean of the partial correlation coefficients of the forward and backward linear prediction error. It produces very high resolution estimates, like all autoregressive methods, because it uses linear prediction to extrapolate the signal outside of its known data record. This effectively removes all sidelobe phenomena. It is superior to the YW method for short data records, and also removes the tradeoff between utilizing the biased and unbiased autocorrelation estimates, as the weighting factors divide out. One disadvantage is that it can exhibit spectral line splitting. In addition, it suffers from the same problems all AR methods have. That is, low to moderate SNR severely degrades the performance, as it is no longer properly modeled by an AR process, but rather an ARMA process. ARMA methods are rarely used as they generally result in a nonlinear set of equations with respect to the moving average parameters. PSD using covariance method : The covariance method is a special case of the least-squares method, whereby the windowed portion of the linear prediction errors is discarded. This has superior performance to the Burg method, but unlike the YW method, the matrix inverse to be solved for is not Hermitian Toeplitz in general, but rather the product of two Toeplitz matrices. Therefore, the Levinson recursion cannot be used to solve for the coefficients. In addition, the filter generated by this method is not guaranteed to be stable. However, for spectral estimation this is a good thing, resulting in very large peaks for sinusoidal content. PSD using periodogram : This is one of the worst estimators, and is a special case of Welch's method with a single segment, rectangular or triangular windowing (depending on which autocorrelation estimate is used, biased or unbiased), and no overlap. However, it's one of the "cheapest" computationally speaking. The resulting variance can be quite high. PSD using modified covariance method : This improves on both the covariance method and the Burg method. It can be compared to the Burg method, whereby the Burg method only minimizes the average forward/backward linear prediction error with respect to the reflection coefficient, the MC method minimizes it with respect to ALL of the AR coefficients. In addition, it does not suffer from spectral line splitting, and provides much less distortion than the previously listed methods. 
In addition, while it does not guarantee a stable IIR filter, its lattice filter realization is stable. It is more computationally demanding than the other two methods as well. PSD using Welch's method : Welch's method improves upon the periodogram by addressing the lack of the ensemble averaging which is present in the true PSD formula. It generalizes Bartlett's method by using overlap and windowing to provide more PSD "samples" for the pseudo-ensemble average. It can be a cheap, effective method depending on the application. However, if you have a situation with closely spaced sinusoids, AR methods may be better suited. On the other hand, it does not require estimating the model order like AR methods do, so if little is known about your spectrum a priori, it can be an excellent starting point. PSD using Yule-Walker AR method : This is a special case of the least squares method where the complete error residuals are utilized. This results in diminished performance compared to the covariance methods, but it may be efficiently solved using the Levinson recursion. It's also known as the autocorrelation method. Spectrogram using short-time Fourier transform : Now you're crossing into a different domain. This is used for time-varying spectra, that is, signals whose spectrum changes with time. This opens up a whole other can of worms, and there are just as many methods as you have listed for time-frequency analysis. This is certainly the cheapest, which is why it's so frequently used. Spectral estimation : This is not a method, but a blanket term for the rest of your post. Sometimes the Periodogram is referred to as the "sample spectrum" or the "Schuster Periodogram", the former of which may be what you're referring to. If you are interested, you may also look into subspace methods such as MUSIC and Pisarenko Harmonic Decomposition. These decompose the signal into signal and noise subspaces, and exploit the orthogonality between the noise subspace and the signal subspace eigenvectors to produce a pseudospectrum. Much like the AR methods, you may not get a "true" PSD estimate, in that power most likely is not conserved, and the amplitudes between spectral components are relative. However, it all depends on your application.
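As a practical starting point, the periodogram and Welch estimates are one-liners in scipy, which makes it easy to see the variance/resolution trade-off discussed above on a test signal (the tone frequencies, segment length and overlap below are arbitrary):

import numpy as np
from scipy.signal import periodogram, welch

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 123 * t) + np.sin(2 * np.pi * 127 * t)
     + 0.5 * np.random.randn(t.size))                   # two closely spaced tones in noise

f_p, P_p = periodogram(x, fs)                           # raw periodogram: high variance
f_w, P_w = welch(x, fs, nperseg=256, noverlap=128)      # Welch: smoother, coarser resolution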
{ "source": [ "https://dsp.stackexchange.com/questions/2096", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/64/" ] }
2,159
In image processing, what does it mean when a filter is called non-linear? Does it mean the equation of the filter contains derivatives and if it didn't, it would have been called linear?
A filter F is called "linear", iff for any scalars $c_1$, $c_2$ and any images $I_1$ and $I_2$: $F\left(c_1\cdot I_1+c_2\cdot I_2\right)=c_1\cdot F\left(I_1\right)+c_2\cdot F\left(I_2\right)$ This includes: Derivatives Integrals Fourier transform Z-Transform Geometric transformations (rotate, translate, scale, warp) Convolution and Correlation the composition of any tuple of linear filters (i.e. applying some linear filter to the output of another linear filter $F(G(I))$) the sum of the result of any two linear filters (i.e. the output of one filter, added pixel by pixel to the output of another filter $F(I) + G(I)$) and many others. Examples of non-linear filters are: the square, absolute, square root, exp or logarithm of the result of any linear filter the product of the result of any two linear filters (i.e. the output of one filter, multiplied pixel by pixel with the output of another filter $F(I)\cdot G(I)$) morphological filters median filter
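The definition is easy to check numerically: for example, convolution passes the test while the median filter does not (random test images and an arbitrary kernel below):

import numpy as np
from scipy.ndimage import convolve, median_filter

I1, I2 = np.random.rand(64, 64), np.random.rand(64, 64)
c1, c2 = 2.0, -3.0
k = np.random.rand(3, 3)

lhs = convolve(c1 * I1 + c2 * I2, k)
rhs = c1 * convolve(I1, k) + c2 * convolve(I2, k)
print(np.allclose(lhs, rhs))        # True: convolution is linear

lhs = median_filter(c1 * I1 + c2 * I2, size=3)
rhs = c1 * median_filter(I1, size=3) + c2 * median_filter(I2, size=3)
print(np.allclose(lhs, rhs))        # almost surely False: the median filter is not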
{ "source": [ "https://dsp.stackexchange.com/questions/2159", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1289/" ] }
2,241
What is the true meaning of a minimum phase system? Reading the Wikipedia article and Oppenheim is some help, in that, we understand that for an LTI system, minimum phase means the inverse is causal and stable. (So that means zeros and poles are inside the unit circle), but what does "phase" and "minimum" have to do with it? Can we tell a system is minimum phase by looking at the phase response of the DFT somehow?
The relation of "minimum" to "phase" in a minimum phase system or filter can be seen if you plot the unwrapped phase against frequency. You can use a pole zero diagram of the system response to help do a incremental graphical plot of the frequency response and phase angle. This method helps in doing a phase plot without phase wrapping discontinuities. Put all the zeros inside the unit circle (or in left half plane in the continuous-time case), where all the poles have to be as well for system stability. Add up the angles from all the poles, and the negative of the angles from all the zeros, to calculate total phase to a point on the unit circle, as that frequency response reference point moves around the unit circle. Plot phase vs. frequency. Now compare this plot with a similar plot for a pole-zero diagram with any of the zeros swapped outside the unit circle (non-minimum phase). The overall average slope of the line with all the zeros inside will be lower than the average slope of any other line representing the same LTI system response (e.g. with a zero reflected outside the unit circle). This is because the "wind ups" in phase angle are all mostly cancelled by the "wind downs" in phase angle only when both the poles and zeros are on the same side of the unit circle line. Otherwise, for each zero outside, there will be an extra "wind up" of increasing phase angle that will remain mostly uncancelled as the plot reference point "winds" around the unit circle from 0 to PI. (...or up the vertical axis in the continuous-time case.) This arrangement, all the zeros inside the unit circle, thus corresponds to the minimum total increase in phase, which corresponds to minimum average total phase delay, which corresponds to maximum compactness in time, for any given (stable) set of poles and zeros with the exact same frequency magnitude response. Thus the relationship between "minimum" and "phase" for this particular arrangement of poles and zeros. Also see my old word picture with strange crank handles in the ancient usenet comp.dsp archives: https://groups.google.com/d/msg/comp.dsp/ulAX0_Tn65c/Fgqph7gqd3kJ
{ "source": [ "https://dsp.stackexchange.com/questions/2241", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1329/" ] }
2,348
What is the relationship, if any, between Kalman filtering and (repeated, if necessary) least squares polynomial regression?
1. There is a difference in terms of optimality criteria The Kalman filter is a linear estimator. It is a linear optimal estimator - i.e. it infers model parameters of interest from indirect, inaccurate and uncertain observations. But optimal in what sense? If all noise is Gaussian, the Kalman filter minimizes the mean square error of the estimated parameters. This means that when the underlying noise is NOT Gaussian the promise no longer holds. In the case of nonlinear dynamics, it is well-known that the problem of state estimation becomes difficult. In this context, no filtering scheme clearly outperforms all other strategies. In such cases, non-linear estimators may be better if they can better model the system with additional information. [See Ref 1-2] Polynomial regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modeled as an nth order polynomial. $$ Y = a_0 + a_1x + a_2x^2 + \epsilon $$ Note that, while polynomial regression fits a nonlinear model to the data, these models are all linear from the point of view of estimation, since the regression function is linear in terms of the unknown parameters $a_0, a_1, a_2$. If we treat $x, x^2$ as different variables, polynomial regression can also be treated as multiple linear regression. Polynomial regression models are usually fit using the method of least squares. In the least squares method, too, we minimize the mean squared error. The least-squares method minimizes the variance of the unbiased estimators of the coefficients, under the conditions of the Gauss–Markov theorem. This theorem states that ordinary least squares (OLS) or linear least squares is the Best Linear Unbiased Estimator (BLUE) under the following conditions: a. the errors have expectation zero, i.e. $E(e_i) = 0$ b. they have equal variances, i.e. $Variance(e_i) = \sigma^2 < \infty$ c. and the errors are uncorrelated, i.e. $cov(e_i,e_j) = 0$ NOTE that here the errors don't have to be Gaussian, nor do they need to be IID. They only need to be uncorrelated. 2. The Kalman filter is an evolution of estimators from least squares In 1970, H. W. Sorenson published an IEEE Spectrum article titled "Least-squares estimation: from Gauss to Kalman." [See Ref 3.] This is a seminal paper that provides great insight about how Gauss' original idea of least squares evolved into today's modern estimators like the Kalman filter. Gauss' work not only introduced the least squares framework, it was actually one of the earliest works that used a probabilistic view. While least squares evolved in the form of various regression methods, there was another critical work that brought filter theory to be used as an estimator. The theory of filtering for stationary time series estimation was constructed by Norbert Wiener during the 1940s (during WW-II) and published in 1949; it is now known as the Wiener filter. (The work was done much earlier, but was classified until well after World War II.) The discrete-time equivalent of Wiener's work was derived independently by Kolmogorov and published in 1941. Hence the theory is often called the Wiener-Kolmogorov filtering theory. Traditionally filters are designed for a desired frequency response. In the case of the Wiener filter, however, it reduces the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal. The Wiener filter is actually an estimator.
In an important paper, however, Levinson (1947) [See Ref 6] showed that in discrete time, the entire theory could be reduced to least squares and so was mathematically very simple. See Ref 4. Thus, we can see that Wiener's work gave a new approach to the estimation problem; an evolution from using least squares to another well-established filter theory. However, the critical limitation is that the Wiener filter assumes the inputs are stationary. We can say that the Kalman filter is the next step in the evolution, which drops the stationarity requirement. In the Kalman filter, the state space model can be adapted dynamically to deal with the non-stationary nature of the signal or system. The Kalman filter is based on a linear dynamic system in the discrete time domain. Hence it is capable of dealing with potentially time-varying signals, as opposed to the Wiener filter. Sorenson's paper draws a parallel between Gauss' least squares and the Kalman filter as follows: ...therefore, one sees that the basic assumption of Gauss and Kalman are identical except that later allows the state to change from one time to next. The difference introduces a non-trivial modification to Gauss' problem but one that can be treated within the least squares framework. 3. They are the same as far as the causality direction of prediction is concerned; the difference is implementation efficiency Sometimes it is perceived that the Kalman filter is used for prediction of future events based on past data, whereas regression or least squares does smoothing between the end points. This is not really true. Readers should note that both estimators (and almost all estimators you can think of) can do either job. You can apply the Kalman filter to perform Kalman smoothing. Similarly, regression-based models can also be used for prediction. Given training data $X_t$ and $Y_t$, from which you discovered the model parameters $a_0 \ldots a_K$, for another sample $X_k$ we can extrapolate $Y_k$ based on the model. Hence, both methods can be used in the form of smoothing or fitting (the non-causal case) as well as for future predictions (the causal case). However, there is a critical difference in implementation, and it is significant. In the case of polynomial regression the entire process needs to be repeated each time, and hence, while it may be possible to implement causal estimation, it might be computationally expensive. [Although I am sure there must be some research by now on making things iterative.] On the other hand, the Kalman filter is inherently recursive. Hence, using it for prediction of the future using only past data will be very efficient. Here is another good presentation that compares several methods: Ref 5 References 1. Dan Simon, "Kalman Filtering", Embedded Systems Programming, June 2001, p. 72 (the best introduction to the Kalman filter). 2. Lindsay Kleeman, "Understanding and Applying Kalman Filtering" (presentation). 3. H. W. Sorenson, "Least-squares estimation: from Gauss to Kalman", IEEE Spectrum, July 1970, pp. 63-68. 4. Lecture notes, MIT OpenCourseWare, "Inference from Data and Models" (12.864) - Wiener and Kalman Filters. 5. Simo Särkkä, "From Linear Regression to Kalman Filter and Beyond", Helsinki University of Technology (presentation). 6. Levinson, N. (1947). "The Wiener RMS error criterion in filter design and prediction." J. Math. Phys., v. 25, pp. 261–278.
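As a footnote to point 3: a scalar Kalman filter fits in a dozen lines, which makes the "recursive estimator whose state is allowed to change" idea tangible; the process and measurement noise values below are illustrative only, not tuned for any real problem.

import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1):
    x, p = 0.0, 1.0                      # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                        # predict: the state is allowed to drift
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

noisy = 5.0 + np.sqrt(0.1) * np.random.randn(200)   # constant level in noise
print(kalman_1d(noisy)[-1])                         # converges toward 5.0

Note that with q = 0 the recursion collapses to the recursive least-squares estimate of a constant, which is exactly Sorenson's point about Kalman being Gauss' problem with a state that may change between steps.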
{ "source": [ "https://dsp.stackexchange.com/questions/2348", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/154/" ] }
2,411
Adaptive thresholding has been discussed in a few questions earlier: Adaptive Thresholding for liver segmentation using Matlab What are the best algorithms for document image thresholding in this example? Of course, there are many algorithms for Adaptive thresholding. I want to know which ones you have found most effective and useful. Which Adaptive algorithms you have used the most and for which application; how do you come to choose this algorithm?
I do not think mine will be a complete answer, but I'll offer what I know and since this is a community edited site, I hope somebody will give a complementary answer soon :)

Adaptive thresholding methods are those that do not use the same threshold throughout the whole image. But, for some simpler usages, it is sometimes enough to just pick a threshold with a method smarter than the most simple iterative method. Otsu's method is a popular thresholding method that assumes the image contains two classes of pixels - foreground and background - and has a bi-modal histogram. It then attempts to minimize their combined spread (intra-class variance).

The simplest algorithms that can be considered truly adaptive thresholding methods would be the ones that split the image into a grid of cells and then apply a simple thresholding method (e.g. iterative or Otsu's method) on each cell, treating it as a separate image (and presuming a bi-modal histogram). If a sub-image cannot be thresholded well, the threshold from one of the neighboring cells can be used.

An alternative approach to finding the local threshold is to statistically examine the intensity values of the local neighborhood of each pixel. The threshold is different for each pixel and calculated from its local neighborhood (a median, average, and other choices are possible). There is an implementation of this kind of method included in the OpenCV library in the cv::adaptiveThreshold function.

I found another similar method called Bradley Local Thresholding. It also examines the neighborhood of each pixel, setting the brightness to black if the pixel's brightness is t percent lower than the average brightness of the surrounding pixels. The corresponding paper can be found here.

This stackoverflow answer mentions a local (adaptive) thresholding method called Niblack, but I have not heard of it before.

Lastly, there is a method I have used in one of my previous smaller projects, called Image Thresholding by Variational Minimax Optimization. It is an iterative method, based on optimizing an energy function that is a nonlinear combination of two components. One component aims to calculate the threshold based on the position of the strongest intensity changes in the image. The other component aims to smooth the threshold at the (object) border areas. It has proven fairly good on images of analog instruments (various shading and reflection from glass/plastic present), but required a careful choice of the number of iterations.

Late edit: Inspired by the comment to this answer. There is one more way I know of to work around uneven lighting conditions. I will write here about bright objects on a dark background, but the same reasoning can be applied if the situation is reversed. Threshold the white top-hat transform of the image with a constant threshold, instead of the original image. A white top-hat of an image is nothing but the difference between the image $f$ and its opening $\gamma(f)$. As further explanation let me offer a quote from P. Soille: Morphological Image Analysis:

An opening of the original image with a large square SE removes all relevant image structures but preserves the illumination function. The white top-hat of the original image or subtraction of the illumination function from the original image outputs an image with a homogeneous illumination.
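As a concrete illustration of the "local mean" family of methods, here is a minimal Python/NumPy sketch of a Bradley-style threshold; the window size and the t fraction below are arbitrary choices, not values from the cited paper:

import numpy as np
from scipy.ndimage import uniform_filter

def bradley_threshold(img, window=31, t=0.15):
    # img: 2-D array of intensities; returns True where the pixel is no more than
    # t (as a fraction) darker than the mean of its (window x window) neighborhood.
    local_mean = uniform_filter(img.astype(float), size=window, mode="reflect")
    return img > local_mean * (1.0 - t)

For production use, OpenCV's cv::adaptiveThreshold (mentioned above) provides essentially this behavior with mean- or Gaussian-weighted neighborhoods.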
{ "source": [ "https://dsp.stackexchange.com/questions/2411", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/440/" ] }
2,426
In terms of proper or accepted naming conventions for DSP graphics or instrumentation output, what is the difference between the words spectrum, spectrogram, spectrograph, and similar terms, and what type of chart, graph, CRT display, etc. does each best describe? ADDED: Also, I found the term sonogram used in a couple of books for spectrum-vs-time graphics. When might this be appropriate in preference to one of the above terms, or vice versa?
It depends on context. In signal processing, a spectrum (plural is spectra ) shows the frequency content of an entire signal. It's a 1-dimensional function of amplitude (vertical axis) vs frequency (horizontal axis): Spectra are often shown with a logarithmic amplitude axis (such as dB ), but this isn't necessary. A machine that produces a spectrum is usually called a spectrum analyzer . In other fields, the machine is called a spectrograph or spectrometer . A spectrogram shows how the frequency content of a signal changes over time. It's a 2-dimensional function of amplitude (brightness or color) vs frequency (vertical axis) vs time (horizontal axis): Sometimes this is called a sonogram . The time and frequency axes are sometimes swapped. If amplitude is shown as a 3D surface rather than using color, it's called a waterfall plot . Confusingly, a machine that produces a spectrogram is also called a spectrograph , or spectrograph is used as a synonym for spectrogram . Also the line is kind of blurred, because if you view the spectrum of a live signal on a spectrum analyzer, it's displaying the spectrum of small chunks of the signal and you're seeing how it changes over time, which is essentially the same thing as a spectrogram. I think the important distinction is just the way they're displayed: A spectrum is a 1D plot and a spectrogram is a 2D plot.
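A short NumPy/SciPy sketch of the distinction (the chirp signal and all parameters are just illustrative): the spectrum of the whole record is a 1-D array of amplitude vs frequency, while the spectrogram is a 2-D array of spectra over time:

import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (200 + 300 * t) * t)        # a chirp: frequency rises with time

# Spectrum: one 1-D curve, amplitude vs frequency, for the entire signal.
X = np.abs(np.fft.rfft(x)) / len(x)
f = np.fft.rfftfreq(len(x), 1 / fs)

# Spectrogram: a 2-D array, frequency (rows) vs time (columns).
f_s, t_s, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

print(X.shape)        # 1-D: (len(x)//2 + 1,)
print(Sxx.shape)      # 2-D: (129, number_of_time_segments)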
{ "source": [ "https://dsp.stackexchange.com/questions/2426", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/154/" ] }
2,625
This question is on dsp.SE as I'm mostly interested in the signal processing part. There is an Indian movie Mughal-e-Azam which was released in 1960 in black & white which has been reproduced in color in 2004. How did they color each pixel perfectly? What technique have they used to identify the color placement on each pixel? Look at one of the screenshots from the movie: I've got an Einstein black-and-white photo which I want to colorize. How is it possible to do so without knowing what he was wearing back then and what was the actual color of his clothes, background etc.
There is no way of recovering the original color information from a black and white photo, so whether Einstein (resp. Waheeda Rehman) was wearing a pink or green sweater (resp. Dupatta) is up to your imagination. Historically, this has been done by hand, by painting over the film. The first digital techniques to automate the process consisted of "painting" a few dots of color on each frame, at the center of each uniformly colored region, and using something like a Voronoi partition + some blurring to get a color map for each frame (see for example US patent 4606625). Today, this can be done relatively easily (though manually) with video editing software, by using vector masks to indicate regions of uniform color on a few keyframes, and interpolating between them. Then a color transform is applied to each mask. See it in action here. Standard image segmentation and region tracking techniques can be used to automate the task of segmentation and the marking of regions on each keyframe - for example by propagating manual annotations to similar/adjacent pixels in space/time, or by detecting uniformly textured regions. Texture and gray level similarity can be used to propagate color cues from a color image to a greyscale image depicting a similar subject - in this case the manual process only consists of finding a template color image - and this latter task can itself be automated using content-based image retrieval techniques.
{ "source": [ "https://dsp.stackexchange.com/questions/2625", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1324/" ] }
2,636
What differences or other criteria can be used to help decide between using overlap-add and overlap-save for filtering? Both overlap-add and overlap-save are described as algorithms for doing FFT based fast convolution of data streams with FIR filter kernels. What are the latency, computational efficiency or caching locality (etc.) differences, if any? Or are they the same?
Essentially, OS is slightly more efficient since it does not require the addition of the overlapping transients. However, you may want to use OA if you need to reuse the FFTs with zero-padding rather than repeated samples. Here is a quick overview from an article I wrote a while ago:

Fast convolution refers to the blockwise use of circular convolution to accomplish linear convolution. Fast convolution can be accomplished by OA or OS methods. OS is also known as “overlap-scrap”. In OA filtering, each signal data block contains only as many samples as allow the circular convolution to be equivalent to linear convolution. The signal data block is zero-padded prior to the FFT to prevent the filter impulse response from “wrapping around” the end of the sequence. OA filtering adds the input-on transient from one block to the input-off transient from the previous block. In OS filtering, shown in Figure 1, no zero-padding is performed on the input data, thus the circular convolution is not equivalent to linear convolution. The portions that “wrap around” are useless and discarded. To compensate for this, the last part of the previous input block is used as the beginning of the next block. OS requires no addition of transients, making it faster than OA.
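For readers who want to see the "scrap" step in code, here is a compact, illustrative overlap-save implementation in NumPy (block and FFT sizes are arbitrary); the first M-1 samples of each block's circular convolution are discarded, and no transients are added:

import numpy as np

def overlap_save(x, h, nfft=256):
    M = len(h)
    L = nfft - M + 1                              # new input samples consumed per block
    H = np.fft.rfft(h, nfft)
    x_pad = np.concatenate([np.zeros(M - 1), x])  # prepend M-1 "history" zeros
    n_out = len(x) + M - 1
    y = []
    for start in range(0, n_out, L):
        block = x_pad[start:start + nfft]
        block = np.pad(block, (0, nfft - len(block)))    # zero-pad only the final block(s)
        circ = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        y.append(circ[M - 1:])                    # scrap the wrapped-around samples
    return np.concatenate(y)[:n_out]

x = np.random.randn(1000)
h = np.ones(32) / 32
print(np.allclose(overlap_save(x, h), np.convolve(x, h)))   # True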
{ "source": [ "https://dsp.stackexchange.com/questions/2636", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/154/" ] }
2,654
I've found on multiple sites that convolution and cross-correlation are similar (including the tag wiki for convolution), but I didn't find anywhere how they differ. What is the difference between the two? Can you say that autocorrelation is also a kind of a convolution?
The only difference between cross-correlation and convolution is a time reversal on one of the inputs. Discrete convolution and cross-correlation are defined as follows (for real signals; I neglected the conjugates needed when the signals are complex): $$ x[n] * h[n] = \sum_{k=0}^{\infty}h[k] x[n-k] $$ $$ corr(x[n],h[n]) = \sum_{k=0}^{\infty}h[k] x[n+k] $$ This implies that you can use fast convolution algorithms like overlap-save to implement cross-correlation efficiently; just time reverse one of the input signals first. Autocorrelation is identical to the above, except $h[n] = x[n]$, so you can view it as related to convolution in the same way. Edit: Since someone else just asked a duplicate question, I've been inspired to add one more piece of information: if you implement correlation in the frequency domain using a fast convolution algorithm like overlap-save, you can avoid the hassle of time-reversing one of the signals first by instead conjugating one of the signals in the frequency domain. It can be shown that conjugation in the frequency domain is equivalent to reversal in the time domain.
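A quick numerical check of this relationship in NumPy (illustrative signals only): correlating is the same as convolving with one input time-reversed (and conjugated in the complex case):

import numpy as np

x = np.random.randn(64)
h = np.random.randn(16)

corr_direct = np.correlate(x, h, mode="full")
corr_via_conv = np.convolve(x, h[::-1].conj(), mode="full")   # time-reverse, then convolve
print(np.allclose(corr_direct, corr_via_conv))                # True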
{ "source": [ "https://dsp.stackexchange.com/questions/2654", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1524/" ] }
2,687
I see the HSV colour space used all over the place: for tracking, human detection, etc... I'm wondering, why? What is it about this colour space that makes it better than using RGB?
The RGB color information is usually much noisier than the HSV information. Let me give you an example: Me and some friends were involved in a project dealing with the recognition of traffic signs in real scene videos (noise, shadows and sometimes occlusion present). It was a part of a bigger project, so that allowed us time to try out different approaches to this particular problem (and re-use older approaches). I did not try the color-based approach myself, but I remember an interesting piece of information: the dominant RGB component in a STOP sign was often not red! (mostly due to shadows)

You can frequently get better information from the HSV colorspace. Let me try and give a personal experience example again: Try imagining you have an image of a single-color plane with a shadow on it. In RGB colorspace, the shadow part will most likely have very different characteristics than the part without shadows. In HSV colorspace, the hue component of both patches is more likely to be similar: the shadow will primarily influence the value, or maybe the saturation component, while the hue, indicating the primary "color" (without its brightness and diluted-ness by white/black), should not change so much.

If these explanations do not sound intuitive to you, I suggest:

try and better understand the components used to represent a color in HSV colorspace, and renew your knowledge of RGB

try and see the reasons why these kinds of color representations were developed: it is always, in some way, based on some view of human interpretation of color, e.g. children do not actually like highly colored (high value) objects, they prefer highly saturated objects, objects in which the color is intense and non-diluted by white/black

after you get this and develop some intuition, you should play with images: try decomposing various images into their R-G-B and H-S-V components. Your goal would be to see and understand the difference in these decompositions for images that contain shadows, strong illumination, light reflection.

if you have a particular type of images you like to play with, try decomposing them: who knows, maybe RGB is really more suited for your needs than HSV :)
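A tiny numeric illustration of the shadow argument, using Python's standard colorsys module (the color and the 0.4 "shadow" factor are made up): darkening a color changes its RGB values a lot, but the hue survives untouched:

import colorsys

sky_blue = (0.2, 0.6, 0.9)
shadowed = tuple(0.4 * c for c in sky_blue)      # same surface, less light

h1, s1, v1 = colorsys.rgb_to_hsv(*sky_blue)
h2, s2, v2 = colorsys.rgb_to_hsv(*shadowed)

print("RGB change:", [round(a - b, 2) for a, b in zip(sky_blue, shadowed)])
print("hue:  ", round(h1, 3), "->", round(h2, 3))   # unchanged
print("value:", round(v1, 3), "->", round(v2, 3))   # the value carries the shadow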
{ "source": [ "https://dsp.stackexchange.com/questions/2687", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1551/" ] }
3,002
In 1d signal processing, many types of low pass filters are used. Gaussian filters are almost never used, though. Why are they so popular in image processing applications? Are these filters a result of optimizing any criterion or are just ad hoc solution since image 'bandwidth' is usually not well defined.
Image processing applications are different from, say, audio processing applications, because many of them are tuned for the eye. Gaussian masks nearly perfectly simulate optical blur (see also point spread functions). In any image processing application oriented at artistic production, Gaussian filters are used for blurring by default.

Another important quantitative property of Gaussian filters is that they're everywhere non-negative. This is important because most 1D signals vary about 0 ($x \in \mathbb{R}$) and can have either positive or negative values. Images are different in the sense that all values of an image are non-negative ($x \in \mathbb{R}^+$). Convolution with a Gaussian kernel (filter) guarantees a non-negative result, so such a function maps non-negative values to other non-negative values ($f: \mathbb{R}^+ \rightarrow \mathbb{R}^+$). The result is therefore always another valid image.

In general, frequency rejection in image processing is not as crucial as in 1D signal processing. For example, in modulation schemes your filters need to be very precise to reject other channels transmitted on different carrier frequencies, and so on. I can't think of anything just as constraining for image processing problems.
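A small check of the non-negativity argument (illustrative image size, sigma and kernel length): a Gaussian blur of a valid image is still a valid image, whereas a sinc-like "ideal" low-pass kernel produces negative overshoot (ringing):

import numpy as np
from scipy.ndimage import gaussian_filter

img = np.zeros((64, 64))
img[32, 32] = 1.0                                   # a bright dot on a black image

blurred = gaussian_filter(img, sigma=2.0)
print(blurred.min() >= 0)                           # True: still a valid image

# Compare with a truncated-sinc (brick-wall-style) 1-D kernel on one image row:
n = np.arange(-10, 11)
sinc_kernel = np.sinc(n / 4.0)
ringing = np.convolve(img[32, :], sinc_kernel, mode="same")
print(ringing.min() < 0)                            # True: negative overshoot appears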
{ "source": [ "https://dsp.stackexchange.com/questions/3002", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1082/" ] }
3,581
I'd like to mix two or more PCM audio channels (eg recorded samples) digitally in an acoustically-faithful manner, preferably in near-real-time (meaning little or no peek-ahead). The physically "correct" way to do this is summing the samples. However when you add two arbitrary samples, the resulting value could be up to twice the maximum value. For example, if your samples are 16-bit values, the result will be up to 65536*2. This results in clipping. The naive solution here is to divide by N, where N is the number of channels being mixed. However, this results in each sample being 1/Nth as loud, which is completely unrealistic. In the real world, when two instruments play simultaneously, each instrument does not become half as loud. From reading around, a common method of mixing is: result = A + B - AB, where A and B are the two normalized samples being mixed, and AB is a term to ensure louder sounds are increasingly "soft-clipped". However, this introduces a distortion of the signal. Is this level of distortion acceptable in high-quality audio synthesis? What other methods are there to solve this problem? I'm interested in efficient lesser-quality algorithms as well as less-efficient high-quality algorithms. I'm asking my question in the context of digital music synthesis, for the purpose of mixing multiple instrument tracks together. The tracks could be synthesised audio, pre-recorded samples, or real-time microphone input.
It's very hard to point you to relevant techniques without knowing any context for your problem. The obvious answer would be to tell you to adjust the gain of each sample so that clipping rarely occurs. It is not that unrealistic to assume that musicians would play softer in an ensemble than when asked to play solo. The distortion introduced by A + B - AB is just not acceptable. It creates mirror images of A on each side of B's harmonics - equivalent to ring-modulation - which is pretty awful if A and B have a rich spectrum with harmonics which are not at integer ratios. Try it on two square waves at 220 and 400 Hz for example. A more "natural" clipping function which works on a sample-per-sample basis, is the tanh function - it actually matches the soft-limiting behavior of some analog elements. Beyond that, you can look into classic dynamic compression techniques - if your system can look ahead and see peaks comings in advance this is even better.
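Here is a hedged sketch of the suggestions above in Python (the per-channel gain and drive are arbitrary choices, not recommended values): sum the gain-scaled channels, then pass the sum through tanh as a soft limiter so the mix always stays within range:

import numpy as np

def mix(channels, gain_per_channel=0.7, drive=1.0):
    # channels: list of float arrays in [-1, 1]; returns a soft-limited mix.
    s = gain_per_channel * np.sum(channels, axis=0)
    return np.tanh(drive * s)          # output is always within (-1, 1)

fs = 44100
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 220 * t)
b = 0.8 * np.sin(2 * np.pi * 330 * t)
out = mix([a, b])
print(out.min(), out.max())            # never clips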
{ "source": [ "https://dsp.stackexchange.com/questions/3581", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/2076/" ] }
4,723
If we convolve 2 signals we get a third signal. What does this third signal represent in relation to the input signals?
There's not particularly any "physical" meaning to the convolution operation. The main use of convolution in engineering is in describing the output of a linear, time-invariant (LTI) system. The input-output behavior of an LTI system can be characterized via its impulse response , and the output of an LTI system for any input signal $x(t)$ can be expressed as the convolution of the input signal with the system's impulse response. Namely, if the signal $x(t)$ is applied to an LTI system with impulse response $h(t)$, then the output signal is: $$ y(t) = x(t) * h(t) = \int_{-\infty}^{\infty}x(\tau)h(t - \tau)d\tau $$ Like I said, there's not much of a physical interpretation, but you can think of a convolution qualitatively as "smearing" the energy present in $x(t)$ out in time in some way, dependent upon the shape of the impulse response $h(t)$. At an engineering level (rigorous mathematicians wouldn't approve), you can get some insight by looking more closely at the structure of the integrand itself. You can think of the output $y(t)$ as the sum of an infinite number of copies of the impulse response, each shifted by a slightly different time delay ($\tau$) and scaled according to the value of the input signal at the value of $t$ that corresponds to the delay: $x(\tau)$. This sort of interpretation is similar to taking discrete-time convolution (discussed in Atul Ingle's answer) to a limit of an infinitesimally-short sample period, which again isn't fully mathematically sound, but makes for a decently intuitive way to visualize the action for a continuous-time system.
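The "smearing" description can be checked numerically in discrete time (illustrative values): the convolution output is exactly the superposition of scaled, delayed copies of the impulse response, one launched per input sample:

import numpy as np

x = np.array([1.0, 0.5, 0.0, -0.3])            # input samples
h = np.array([1.0, 0.8, 0.4, 0.1])             # impulse response (a decaying "smear")

y = np.convolve(x, h)

# Build the same output explicitly as a sum of shifted, scaled impulse responses:
y_sum = np.zeros(len(x) + len(h) - 1)
for n, xn in enumerate(x):
    y_sum[n:n + len(h)] += xn * h

print(np.allclose(y, y_sum))                   # True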
{ "source": [ "https://dsp.stackexchange.com/questions/4723", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/3128/" ] }
4,825
If you do an FFT plot of a simple signal, like: t = 0:0.01:1 ; N = max(size(t)); x = 1 + sin( 2*pi*t ) ; y = abs( fft( x ) ) ; stem( N*t, y ) 1Hz sinusoid + DC FFT of above I understand that the number in the first bin is "how much DC" there is in the signal. y(1) %DC > 101.0000 The number in the second bin should be "how much 1-cycle over the whole signal" there is: y(2) %1 cycle in the N samples > 50.6665 But it's not 101! It's about 50.5. There's another entry at the end of the fft signal, equal in magnitude: y(101) > 50.2971 So 50.5 again. My question is, why is the FFT mirrored like this? Why isn't it just a 101 in y(2) (which would of course mean, all 101 bins of your signal have a 1 Hz sinusoid in it?) Would it be accurate to do: mid = round( N/2 ) ; % Prepend y(1), then add y(2:middle) with the mirror FLIPPED vector % from y(middle+1:end) z = [ y(1), y( 2:mid ) + fliplr( y(mid+1:end) ) ]; stem( z ) Flip and add-in the second half of the FFT vector I thought now, the mirrored part on the right hand side is added in correctly, giving me the desired "all 101 bins of the FFT contain a 1Hz sinusoid" >> z(2) ans = 100.5943
Real signals are "mirrored" in the real and negative halves of the Fourier transform because of the nature of the Fourier transform. The Fourier transform is defined as the following- $$H(f) = \int h(t)e^{-j2\pi ft}dt$$ Basically it correlates the signal with a bunch of complex sinusoids, each with its own frequency. So what do those complex sinusoids look like? The picture below illustrates one complex sinusoid. The "corkscrew" is the rotating complex sinusoid in time, while the two sinusoids that follow it are the extracted real and imaginary components of the complex sinusoid. The astute reader will note that the real and imaginary components are the exact same, only they are out of phase with each other by 90 degrees ( $\frac{\pi}{2}$ ). Because they are 90 degrees out of phase they are orthogonal and can "catch" any component of the signal at that frequency. The relationship between the exponential and the cosine/sine is given by Euler's formula- $$e^{jx} = \cos(x) + j\cdot\sin(x)$$ This allows us to modify the Fourier transform as follows- $$ H(f) = \int h(t)e^{-j2\pi ft}dt \\ = \int h(t)(\cos(2\pi ft) - j\cdot\sin(2\pi ft))dt $$ At the negative frequencies the Fourier transform becomes the following- $$ H(-f) = \int h(t)(\cos(2\pi (-f)t) - j\sin(2\pi (-f)t))dt \\ = \int h(t)(\cos(2\pi ft) + j\cdot\sin(2\pi ft))dt $$ Comparing the negative frequency version with the positive frequency version shows that the cosine is the same while the sine is inverted. They are still 90 degrees out of phase with each other, though, allowing them to catch any signal component at that (negative) frequency. Because both the positive and negative frequency sinusoids are 90 degrees out of phase and have the same magnitude, they will both respond to real signals in the same way. Or rather, the magnitude of their response will be the same, but the correlation phase will be different. EDIT: Specifically, the negative frequency correlation is the conjugate of the positive frequency correlation (due to the inverted imaginary sine component) for real signals. In mathematical terms, this is, as Dilip pointed out, the following $H(-f) = [H(f)]^*$ Another way to think about it: Imaginary components are just that..Imaginary! They are a tool, which allows the employ of an extra plane to view things on and makes much of digital (and analog) signal processing possible, if not much easier than using differential equations! But we can't break the logical laws of nature, we can't do anything 'real' with the imaginary content $^\dagger$ and so it must effectively cancel itself out before returning to reality. How does this look in the Fourier Transform of a time based signal(complex frequency domain)? If we add/sum the positive and negative frequency components of the signal the imaginary parts cancel, this is what we mean by saying the positive and negative elements are conjugate to each-other. Notice that when an FT is taken of a time-signal there exists these conjugate signals, with the 'real' part of each sharing the magnitude, half in the positive domain, half in the negative, so in effect adding the conjugates together removes the imaginary content and provides the real content only. $^\dagger$ Meaning we can't create a voltage that is $5i$ volts. Obviously, we can use imaginary numbers to represent real-world signals that are two-vector-valued, such as circularly polarized EM waves.
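A quick numerical check of the conjugate-symmetry property $H(-f) = [H(f)]^*$, using the same 1 Hz + DC signal idea as in the question (NumPy instead of MATLAB):

import numpy as np

t = np.linspace(0, 1, 101)
x = 1 + np.sin(2 * np.pi * t)
X = np.fft.fft(x)
N = len(x)

k = 1                                          # the "1 cycle over the record" bin
print(np.allclose(X[N - k], np.conj(X[k])))    # True: the mirrored bin is the conjugate
print(abs(X[0]), abs(X[k]) + abs(X[N - k]))    # DC bin ~N; the two halves together ~N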
{ "source": [ "https://dsp.stackexchange.com/questions/4825", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1040/" ] }
5,959
How do I add odd or even harmonics to a floating point signal? Do I have to use tanh or sin? What I'm trying to do is achieve some very simple distortion effects, but I'm having a hard time finding exact references. What I'd like is something similar to what the Culture Vulture does by adding odd and even harmonics in its pentode and triode settings. The float value is a single sample in a sample flow.
What your distortion box does is apply a non-linear transfer function to the signal: output = function(input) or y = f(x) . You're just applying the same function to every individual input sample to get the corresponding output sample. When your input signal is a sine wave, a specific type of distortion is produced called harmonic distortion . All of the new tones created by the distortion are perfect harmonics of the input signal: If your transfer function has odd symmetry (can be rotated 180° about the origin), then it will produce only odd harmonics (1f, 3f, 5f, ...). An example of a system with odd symmetry is a symmetrically-clipping amplifier. If your transfer function has even symmetry (can be reflected across the Y axis), then the harmonics produced will only be even-order harmonics (0f, 2f, 4f, 6f, ...) The fundamental 1f is an odd harmonic, and gets removed. An example of a system with even symmetry is a full-wave rectifier. So yes, if you want to add odd harmonics, put your signal through an odd-symmetric transfer function like y = tanh(x) or y = x^3 . If you want to add only even harmonics, put your signal through a transfer function that's even symmetric plus an identity function, to keep the original fundamental. Something like y = x + x^4 or y = x + abs(x) . The x + keeps the fundamental that would otherwise be destroyed, while the x^4 is even-symmetric and produces only even harmonics (including DC, which you probably want to remove afterwards with a high-pass filter). Even symmetry: Transfer function with even symmetry: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing only even harmonics and no fundamental: Odd symmetry: Transfer function with odd symmetry: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing only odd harmonics, including fundamental: Even symmetry + fundamental: Transfer function with even symmetry plus identity function: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing even harmonics plus fundamental: This is what people are talking about when they say that a distortion box "adds odd harmonics", but it's not really accurate. The problem is that harmonic distortion only exists for sine wave input . Most people play instruments, not sine waves, so their input signal has multiple sine wave components. In that case, you get intermodulation distortion , not harmonic distortion, and these rules about odd and even harmonics no longer apply. For instance, applying a full-wave rectifier (even symmetry) to the following signals: sine wave (fundamental odd harmonic only) → full-wave rectified sine (even harmonics only) square wave (odd harmonics only) → DC (even 0th harmonic only) sawtooth wave (odd and even harmonics) → triangle wave (odd harmonics only) triangle wave (odd harmonics only) → 2× triangle wave (odd harmonics only) So the output spectrum depends strongly on the input signal, not the distortion device, and whenever someone says " our amplifier/effect produces more-musical even-order harmonics ", you should take it with a grain of salt . (There is some truth to the claim that sounds with even harmonics are "more musical" than sounds with only odd harmonics , but these spectra aren't actually being produced here, as explained above, and this claim is only valid in the context of Western scales anyway. Odd-harmonic sounds (square waves, clarinets, etc.) 
are more consonant on a Bohlen–Pierce musical scale based around the 3:1 ratio instead of the 2:1 octave.) Another thing to remember is that digital non-linear processes can cause aliasing, which can be badly audible. See Is there such a thing as band-limited non-linear distortion?
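These odd/even claims are easy to verify numerically; below is an illustrative NumPy sketch (sample rate, amplitude and drive are arbitrary) that feeds a pure sine through an odd-symmetric waveshaper (tanh) and an even-plus-identity waveshaper (x + x^4) and prints the first five harmonic magnitudes:

import numpy as np

fs, f0, N = 48000, 1000, 48000                 # 1 s of signal: 1 Hz bin spacing, no leakage
t = np.arange(N) / fs
x = 0.8 * np.sin(2 * np.pi * f0 * t)

def harmonic_mags(y):
    Y = np.abs(np.fft.rfft(y)) / N
    return [round(Y[k * f0], 4) for k in range(1, 6)]   # harmonics 1..5

print("tanh (odd symmetry):      ", harmonic_mags(np.tanh(3 * x)))   # odd harmonics only
print("x + x^4 (even + identity):", harmonic_mags(x + x**4))         # fundamental + even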
{ "source": [ "https://dsp.stackexchange.com/questions/5959", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/3260/" ] }
5,992
During convolution on a signal, why do we need to flip the impulse response during the process?
Adapted from an answer to a different question (as mentioned in a comment) in the hope that this question will not get thrown up repeatedly by Community Wiki as one of the Top Questions.... There is no "flipping" of the impulse response by a linear (time-invariant) system. The output of a linear time-invariant system is the sum of scaled and time-delayed versions of the impulse response, not the "flipped" impulse response. We break down the input signal $x$ into a sum of scaled unit pulse signals. The system response to the unit pulse signal $\cdots, ~0, ~0, ~1, ~0, ~0, \cdots$ is the impulse response or pulse response $$h[0], ~h[1], \cdots, ~h[n], \cdots$$ and so by the scaling property the single input value $x[0]$ , or, if you prefer $$x[0](\cdots, ~0, ~0, ~1, ~0,~ 0, \cdots) = \cdots ~0, ~0, ~x[0], ~0, ~0, \cdots$$ creates a response $$x[0]h[0], ~~x[0]h[1], \cdots, ~~x[0]h[n], \cdots$$ Similarly, the single input value $x[1]$ or $$x[1](\cdots, ~0, ~0, ~0, ~1,~ 0, \cdots) = \cdots ~0, ~0, ~0, ~x[1], ~0, \cdots$$ creates a response $$0, x[1]h[0], ~~x[1]h[1], \cdots, ~~x[1]h[n-1], x[1]h[n] \cdots$$ Notice the delay in the response to $x[1]$ . We can continue further in this vein, but it is best to switch to a more tabular form and show the various outputs aligned properly in time. We have $$\begin{array}{l|l|l|l|l|l|l|l} \text{time} \to & 0 &1 &2 & \cdots & n & n+1 & \cdots \\ \hline x[0] & x[0]h[0] &x[0]h[1] &x[0]h[2] & \cdots &x[0]h[n] & x[0]h[n+1] & \cdots\\ \hline x[1] & 0 & x[1]h[0] &x[1]h[1] & \cdots &x[1]h[n-1] & x[1]h[n] & \cdots\\ \hline x[2] & 0 & 0 &x[2]h[0] & \cdots &x[2]h[n-2] & x[2]h[n-1] & \cdots\\ \hline \vdots & \vdots & \vdots & \vdots & \ddots & \\ \hline x[m] & 0 &0 & 0 & \cdots & x[m]h[n-m] & x[m]h[n-m+1] & \cdots \\ \hline \vdots & \vdots & \vdots & \vdots & \ddots \end{array}$$ The rows in the above array are precisely the scaled and delayed versions of the impulse response that add up to the response $y$ to input signal $x$ . But if you ask a more specific question such as What is the output at time $n$ ? then you can get the answer by summing the $n$ -th column to get $$\begin{align*} y[n] &= x[0]h[n] + x[1]h[n-1] + x[2]h[n-2] + \cdots + x[m]h[n-m] + \cdots\\ &= \sum_{m=0}^{\infty} x[m]h[n-m], \end{align*}$$ the beloved convolution formula that befuddles generations of students because the impulse response seems to be "flipped over" or running backwards in time. But, what people seem to forget is that instead we could have written $$\begin{align*} y[n] &= x[n]h[0] + x[n-1]h[1] + x[n-2]h[2] + \cdots + x[0]h[n] + \cdots\\ &= \sum_{m=0}^{\infty} x[n-m]h[m], \end{align*}$$ so that it is the input that seems "flipped over" or running backwards in time! In other words, it is human beings who flip the impulse response (or the input) over when computing the response at time $n$ using the convolution formula, but the system itself does nothing of the sort.
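A short numeric companion to the table above (made-up values): accumulating scaled, delayed copies of h - with no flipping anywhere - reproduces exactly what the convolution formula gives:

import numpy as np

x = np.array([2.0, -1.0, 0.5])
h = np.array([1.0, 0.9, 0.5, 0.2])

y = np.zeros(len(x) + len(h) - 1)
for m, xm in enumerate(x):          # each input sample launches its own response
    y[m:m + len(h)] += xm * h       # scaled and delayed impulse response

print(y)
print(np.convolve(x, h))            # identical: the "flip" appears only in the formula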
{ "source": [ "https://dsp.stackexchange.com/questions/5992", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/3277/" ] }
6,220
It's very easy to filter a signal by performing an FFT on it, zeroing out some of the bins, and then performing an IFFT. For instance: t = linspace(0, 1, 256, endpoint=False) x = sin(2 * pi * 3 * t) + cos(2 * pi * 100 * t) X = fft(x) X[64:192] = 0 y = ifft(X) The high frequency component is completely removed by this "brickwall" FFT filter. But I've heard this is not a good method to use. Why is it generally a bad idea? Are there circumstances in which it's an ok or good choice? [ as suggested by pichenettes ]
Zeroing bins in the frequency domain is the same as multiplying by a rectangular window in the frequency domain. Multiplying by a window in the frequency domain is the same as circular convolution by the transform of that window in the time domain. The transform of a rectangular window is the Sinc function ( $\sin(\omega t)/\omega t$ ). Note that the Sinc function has lots of large ripples and ripples that extend the full width of time domain aperture. If a time-domain filter that can output all those ripples (ringing) is a "bad idea", then so is zeroing bins. These ripples will be largest for any spectral content that is "between bins" or non-integer-periodic in the FFT aperture width. So if your original FFT input data is a window on any data that is somewhat non-periodic in that window (e.g. most non-synchronously sampled "real world" signals), then those particular artifacts will be produced by zero-ing bins. Another way to look at it is that each FFT result bin represents a certain frequency of sine wave in the time domain. Thus zeroing a bin will produce the same result as subtracting that sine wave, or, equivalently, adding a sine wave of an exact FFT bin center frequency but with the opposite phase. Note that if the frequency of some content in the time domain is not purely integer periodic in the FFT width, then trying to cancel a non-integer periodic signal by adding the inverse of an exactly integer periodic sine wave will produce, not silence, but something that looks more like a "beat" note (AM modulated sine wave of a different frequency). Again, probably not what is wanted. Conversely, if your original time domain signal is just a few pure unmodulated sinusoids that are all exactly integer periodic in the FFT aperture width, then zero-ing FFT bins will remove the designated ones without artifacts.
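The "between bins" artifact is easy to demonstrate (illustrative numbers): zeroing the high bins removes an exactly bin-centered tone cleanly, but leaves a large residue when the tone is not integer-periodic in the window:

import numpy as np

N = 256
n = np.arange(N)

def residue_after_zeroing(cycles):
    x = np.sin(2 * np.pi * cycles * n / N)
    X = np.fft.rfft(x)
    X[64:] = 0                      # "brickwall": remove everything above bin 64
    y = np.fft.irfft(X, N)
    return np.max(np.abs(y))        # what is left of a tone we tried to remove

print(residue_after_zeroing(100.0))   # exactly on a bin: ~0 (removed cleanly)
print(residue_after_zeroing(100.5))   # between bins: a large leftover ripple remains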
{ "source": [ "https://dsp.stackexchange.com/questions/6220", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/29/" ] }
7,833
Is there any existing application to sample someone's voice and use it to modulate any other voice or synthesize a text to resemble the original one? For example, this Text-to-Speech Demo by AT&T lets you choose a voice and a language from presets that I guess are based on some human voice that have been sampled. How do you call this process? Is it voice modulation? Voice synthesis?
A first note: Most modern text-to-speech systems, like the one from AT&T you have linked to, use concatenative speech synthesis . This technique uses a large database of recordings of one person's voice uttering a long collection of sentences - selected so that the largest number of phoneme combinations are present. Synthesizing a sentence can be done just by stringing together segments from this corpus - the challenging bit is making the stringing together seamless and expressive. There are two big hurdles if you want to use this technique to make president Obama say embarrassing words: You need to have access to a large collection of sentences of the target voice, preferably recorded with uniform recording conditions and good quality. AT&T has a budget to record dozens of hours of the same speaker in the same studio, but if you want to fake someone's voice from just 5 mins of recording it will be difficult. There is a considerable amount of manual alignment and preprocessing before the raw material recorded is in the right "format" to be exploited by a concatenative speech synthesis system. Your intuition that this is a possible solution is valid - provided you have the budget to tackle these two problems. Fortunately, there are other techniques which can work with less supervision and less data. The field of speech synthesis interested in "faking" or "mimicking" one voice from a recording is known as voice conversion . You have a recording A1 of target speaker A saying sentence 1, and a recording B2 of source speaker B saying sentence 2, you aim at producing a recording A2 of speaker A saying sentence 2, possibly with access to a recording B1 of speaker B reproducing with his/her voice the same utterance as the target speaker. The outline of a voice conversion system is the following: Audio features are extracted from recording A1, and they are clustered into acoustic classes. At this stage, it is a bit like having bags will all "a" of speaker A, all "o" of speaker A, etc. Note that this is a much simpler and rough operation than true speech recognition - we are not interested in recognizing correctly formed words - and we don't even know which bag contains "o" and which bag contains "a" - we just know that we have multiple instances of the same sound in each bag. The same process is applied on B2. The acoustic classes from A1 and B2 are aligned. To continue with the bags analogy, this is equivalent to pairing the bags from step 1 and 2, so that all the sounds we have in this bag from speaker A should correspond to the sounds we have in that bag from speaker B. This matching is much easier to do if B1 is used at step 2. A mapping function is estimated for each pair of bags. Since we know that this bag contains sounds from speaker A, and that bag the same sounds but said by speaker B - we can find an operation (for example a matrix multiplication on feature vectors) that make them correspond. In other words, we now know how to make speaker 2's "o" sound like speaker 1's "o". At this stage we have all cards in hand to perform the voice conversion. From each slice of the recording of B2, we use the result of step 2. to figure out which acoustic class it corresponds to. We then use the mapping function estimated at step 4 to transform the slice. I insist on the fact that this operates at a much lower level than performing speech recognition on B2, and then doing TTS using A1's voice as a corpus. 
Various statistical techniques are used for steps 1 and 2 - GMM or VQ being the most common ones. Various alignment algorithms are used for step 3 - this is the trickiest part, and it is obviously easier to align A1 vs B1 than A1 vs B2. In the simpler case, methods like Dynamic Time Warping can be used to make the alignment. As for step 4, the most common transforms are linear transforms (matrix multiplication) on feature vectors. More complex transforms make for more realistic imitations, but the regression problem to find the optimal mapping is more complex to solve. Finally, as for step 5, the quality of resynthesis is limited by the features used. LPC features are generally easier to deal with using a simple transformation method (take signal frame -> estimate residual and LPC spectrum -> if necessary pitch-shift residual -> apply modified LPC spectrum to modified residual). Using a representation of speech that can be inverted back to the time domain, and which provides good separation between prosody and phonemes, is the key here!

Finally, provided you have access to aligned recordings of speakers A and B saying the same sentence, there are statistical models which simultaneously tackle steps 1, 2, 3 and 4 in one single model estimation procedure. I might come back with a bibliography later, but a very good place to start to get a feel for the problem and the overall framework used to solve it is Stylianou, Moulines and Cappé's "A system for voice conversion based on probabilistic classification and a harmonic plus noise model".

There is to my knowledge no widely available piece of software performing voice conversion - only software modifying properties of the source voice - like pitch and vocal tract length parameters (for example the IRCAM TRAX transformer) - with which you have to mess in the hope of making your recording sound closer to the target voice.
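For intuition only, here is a toy NumPy sketch of steps 4-5 on synthetic "feature vectors"; nothing here comes from the cited paper, and a real system would use proper acoustic features (MFCC/LPC), GMM/VQ clustering and alignment rather than pre-aligned random data:

import numpy as np

rng = np.random.default_rng(0)
d, n = 12, 500
feats_B = rng.standard_normal((n, d))                 # source speaker frames (pretend features)
true_map = rng.standard_normal((d, d)) * 0.3 + np.eye(d)
feats_A = feats_B @ true_map.T + 0.05 * rng.standard_normal((n, d))   # "aligned" target frames

# Step 4: estimate a linear mapping source -> target by least squares on aligned pairs.
W, *_ = np.linalg.lstsq(feats_B, feats_A, rcond=None)

# Step 5 (conceptually): convert new source frames with the learned map.
new_B = rng.standard_normal((10, d))
converted = new_B @ W

print(np.linalg.norm(W.T - true_map) / np.linalg.norm(true_map))   # small: map recovered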
{ "source": [ "https://dsp.stackexchange.com/questions/7833", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/3784/" ] }
8,629
It could seem an easy question and without any doubts it is but I'm trying to calculate the variance of white Gaussian noise without any result. The power spectral density (PSD) of additive white Gaussian noise (AWGN) is $\frac{N_0}{2}$ while the autocorrelation is $\frac{N_0}{2}\delta(\tau)$, so variance is infinite?
White Gaussian noise in the continuous-time case is not what is called a second-order process (meaning $E[X^2(t)]$ is finite) and so, yes, the variance is infinite. Fortunately, we can never observe a white noise process (whether Gaussian or not) in nature; it is only observable through some kind of device, e.g. a (BIBO-stable) linear filter with transfer function $H(f)$ in which case what you get is a stationary Gaussian process with power spectral density $\frac{N_0}{2}|H(f)|^2$ and finite variance $$\sigma^2 = \int_{-\infty}^\infty \frac{N_0}{2}|H(f)|^2\,\mathrm df.$$ More than what you probably want to know about white Gaussian noise can be found in the Appendix of this lecture note of mine.
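The formula can be sanity-checked numerically with a discrete-time analogue (illustrative filter and noise level); by Parseval's theorem the integral of $|H(f)|^2$ reduces to the sum of the squared filter taps:

import numpy as np
from scipy.signal import firwin, lfilter

N0_over_2 = 1.0                                   # noise power spectral density
h = firwin(101, 0.2)                              # some BIBO-stable filter
w = np.sqrt(N0_over_2) * np.random.randn(200000)  # discrete-time "white" noise
y = lfilter(h, 1.0, w)

print(np.var(y[200:]))                            # empirical output variance
print(N0_over_2 * np.sum(h**2))                   # predicted finite variance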
{ "source": [ "https://dsp.stackexchange.com/questions/8629", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1141/" ] }
8,685
I'm learning DSP slowly and trying to wrap my head around some terminology: Question 1 : Suppose I have the following filter difference equation: $$y[n] = 2 x[n] + 4 x[n-2] + 6 x[n-3] + 8 x[n-4]$$ There are 4 coefficients on the right-hand side. Are the "number of taps" also 4? Is the "filter order" also 4? Question 2 : I am trying to use the MATLAB fir1(n, Wn) function. If I wanted to create a 10-tap filter, would I set $n=10$? Question 3 : Suppose I have the following recursive (presumably IIR) filter difference equation: $$y[n] + 2 y[n-1] = 2 x[n] + 4 x[n-2] + 6 x[n-3] + 8 x[n-4]$$ How would I determine the "number of taps" and the "filter order" since the number of coefficients differ on the left-hand and right-hand sides? Question 4 : Are the following logical if-and-only-if statements true? The filter is recursive $\iff$ The filter is IIR. The filter is nonrecursive $\iff$ The filter is FIR.
OK, I'll try to answer your questions: Q1: the number of taps is not equal the to the filter order. In your example the filter length is 5, i.e. the filter extends over 5 input samples [$x(n), x(n-1), x(n-2), x(n-3), x(n-4)$]. The number of taps is the same as the filter length. In your case you have one tap equal to zero (the coefficient for $x(n-1)$), so you happen to have 4 non-zero taps. Still, the filter length is 5. The order of an FIR filter is filter length minus 1, i.e. the filter order in your example is 4. Q2: the $n$ in the Matlab function fir1() is the filter order, i.e. you get a vector with $n+1$ elements as a result (so $n+1$ is your filter length = number of taps). Q3: the filter order is again 4. You can see it from the maximum delay needed to implement your filter. It is indeed a recursive IIR filter. If by number of taps you mean the number of filter coefficients, then for an $n^{th}$ order IIR filter you generally have $2(n+1)$ coefficients, even though in your example several of them are zero. Q4: this is a slightly tricky one. Let's start with the simple case: a non-recursive filter always has a finite impulse response, i.e. it is a FIR filter. Usually a recursive filter has an infinite impulse response, i.e. it is an IIR filter, but there are degenerate cases where a finite impulse response is implemented using a recursive structure. But the latter case is the exception.
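For comparison, SciPy's firwin uses the opposite convention to MATLAB's fir1: it takes the number of taps (filter length), so an order-10 filter needs 11 taps (the cutoff value below is arbitrary):

from scipy.signal import firwin

order = 10
taps = firwin(order + 1, 0.3)     # 11 coefficients = 11 taps = filter order 10
print(len(taps))                  # 11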
{ "source": [ "https://dsp.stackexchange.com/questions/8685", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/709/" ] }
9,094
I have a question about matched filtering. Does the matched filter maximise the SNR at the moment of decision only? As far as I understand, if you put, say, NRZ through a matched filter, the SNR will be maximised at the decision point only and that is the advantage of the matched filter. Does it maximise the SNR anywhere else in the output function, or just at the point of decision? According to Wikipedia The matched filter is the optimal linear filter for maximizing the signal to noise ratio (SNR) in the presence of additive stochastic noise This to me implies that it maximises it everywhere, but I don't see how that is possible. I've looked at the maths in my communications engineering textbooks, and from what I can tell, it's just at the decision point. Another question I have is, why not make a filter that makes a really tall skinny spike at the point of decision. Wouldn't that make the SNR even better? Thanks. Edit: I guess what I'm also thinking is, say you have a some NRZ data and you use a matched filter, the matched filter could be implemented with an I&D (integrate and dump). The I&D will basically ramp up until it gets to the sampling time and the idea is that one samples at the peak of the I&D because at that point, the SNR is a maximum. What I don't get is, why not create a filter that double integrates it or something like that, that way, you'd have a squared increase (rather than a ramp) and the point at which you sample would be even higher up and from what I can tell, more likely to be interpreted correctly by the decision circuit (and give a lower Pe (probability of error))?
Since this question has multiple sub-questions in edits, comments on answers, etc., and these have not been addressed, here goes. Matched filters Consider a finite-energy signal $s(t)$ that is the input to a (linear time-invariant BIBO-stable) filter with impulse response $h(t)$, transfer function $H(f)$, and produces the output signal $$y(\tau) = \int_{-\infty}^\infty s(\tau-t)h(t)\,\mathrm dt.\tag{1}$$ What choice of $h(t)$ will produce a maximum response at a given time $t_0$? That is, we are looking for a filter such that the global maximum of $y(\tau)$ occurs at $t_0$. This really is a very loosely phrased (and really unanswerable) question because clearly the filter with impulse response $2h(t)$ will have larger response than the filter with impulse response $h(t)$, and so there is no such thing as the filter that maximizes the response. So, rather than compare apples and oranges, let us include the constraint that we seek the filter that maximizes $y(t_0)$ subject to the impulse response having a fixed energy, for example, subject to $$\int_{-\infty}^\infty |h(t)|^2\,\mathrm dt = \mathbb E = \int_{-\infty}^\infty |s(t)|^2 \,\mathrm dt.\tag{2}$$ Here onwards, "filter" shall mean a linear time-invariant filter whose impulse response satisfies (2). The Cauchy-Schwarz inequality provides an answer to this question. We have $$y(t_0) = \int_{-\infty}^\infty s(t_0-t)h(t)\,\mathrm dt \leq \sqrt{\int_{-\infty}^\infty |s(t_0-t)|^2 \,\mathrm dt} \sqrt{\int_{-\infty}^\infty |h(t)|^2\,\mathrm dt} = \mathbb E$$ with equality occurring if $h(t) = \lambda s(t_0-t)$ with $\lambda > 0$ where from (2) we get that $\lambda = 1$, that is, the filter with impulse response $h(t) = s(t_0-t)$ produces the maximal response $y(t_0) = \mathbb E$ at the specified time $t_0$. In the (non-stochastic) sense described above, this filter is said to be the filter matched to $s(t)$ at time $t_0$ or the matched filter for $s(t)$ at time $t_0.$ There are several points worth noting about this result. The output of the matched filter has a unique global maximum value of $\mathbb E$ at $t_0$; for any other $t$, we have $y(t) < y(t_0) = \mathbb E$. The impulse response $s(t_0-t) = s(-(t-t_0))$ of the matched filter for time $t_0$ is just $s(t)$ "reversed in time" and moved to the right by $t_0$. a. If $s(t)$ has finite support, say, $[0,T]$, then the matched filter is noncausal if $t_0 < T$. b. The filter matched to $s(t)$ at time $t_1 > t_0$ is just the filter matched at time $t_0$ with an additional delay of $t_1-t_0$. For this reason, some people call the filter with impulse response $s(-t)$, (that is, the filter matched to $s(t)$ at $t=0$) the matched filter for $s(t)$ with the understanding that the exact time of match can be incorporated into the discussion as and when needed. If $s(t) = 0$ for $t < 0$, then the matched filter is noncausal. With this, we can rephrase 1. as The matched filter for $s(t)$ produces a unique global maximum value $y(0) = \mathbb E$ at time $t=0$. Furthermore, $$y(t) = \int_{-\infty}^\infty s(t-\tau)s(-\tau)\,\mathrm d\tau = \int_{-\infty}^\infty s(\tau-t)s(\tau)\,\mathrm d\tau = R_s(t)$$ is the autocorrelation function of the signal $s(t)$. It is well-known, of course, that $R_s(t)$ is an even function of $t$ with a unique peak at the origin. Note that the output of the filter matched at time $t_0$ is just $R_s(t-t_0)$, the autocorrelation function delayed to peak at time $t_0$. No filter other than the matched filter for time $t_0$ can produce an output as large as $\mathbb E$ at $t_0$. 
However, for any $t_0$, it is possible to find filters that have outputs that exceed $R_s(t_0)$ at $t_0$. Note that $R_s(t_0) < \mathbb E$. The transfer function of the matched filter is $H(f)=S^*(f)$, the complex conjugate of the spectrum of $S(f)$. Thus, $Y(f) = \mathfrak F[y(t)]= |S(f)|^2$. Think of this result as follows. Since $x^2 > x$ for $x > 1$ and $x^2< x$ for $0 < x < 1$, the matched filter has low gain at those frequencies where $S(f)$ is small, and high gain at those frequencies where $S(f)$ is large. Thus, the matched filter is reducing the weak spectral components and enhancing the strong spectral components in $S(f)$. (It is also doing phase compensation to adjust all the "sinusoids" so that they all peak at $t=0$). ------- But what about noise and SNR and stuff like that which is what the OP was asking about? If the signal $s(t)$ plus additive white Gaussian noise with two-sided power spectral density $\frac{N_0}{2}$ is processed through a filter with impulse response $h(t)$, then the output noise process is a zero-mean stationary Gaussian process with autocorrelation function $\frac{N_0}{2}R_s(t)$. Thus, the variance is $$\sigma^2 = \frac{N_0}{2} R_s(0) = \frac{N_0}{2}\int_{-\infty}^{\infty} |h(t)|^2\,\mathrm dt.$$ It is important to note that the variance is the same regardless of when we sample the filter output. So, what choice of $h(t)$ will maximize the SNR $y(t_0)/\sigma$ at time $t_0$? Well, from the Cauchy-Schwarz inequality, we have $$\text{SNR} = \frac{y(t_0)}{\sigma} = \frac{\int_{-\infty}^\infty s(t_0-t)h(t)\,\mathrm dt}{\sqrt{\frac{N_0}{2}\int_{-\infty}^\infty |h(t)|^2\,\mathrm dt}} \leq \frac{\sqrt{\int_{-\infty}^\infty |s(t_0-t)|^2 \,\mathrm dt} \sqrt{\int_{-\infty}^\infty |h(t)|^2\,\mathrm dt}}{\sqrt{\frac{N_0}{2}\int_{-\infty}^\infty |h(t)|^2\,\mathrm dt}} = \sqrt{\frac{2\mathbb E}{N_0}}$$ with equality exactly when $h(t) = s(t_0-t)$, the filter that is matched to $s(t)$ at time $t_0$!! Note that $\sigma^2 = \mathbb EN_0/2$. If we use this matched filter for our desired sample time, then at other times $t_1$, the SNR will be $y(t_1)/\sigma < y(t_0)/\sigma = \sqrt{\frac{2\mathbb E}{N_0}}$. Could another filter give a larger SNR at time $t_1$? Sure, because $\sigma$ is the same for all filters under consideration, and we have noted above that it is possible to have a signal output larger than $y(t_1)$ at time $t_1$ by use of a different non-matched filter. In short, "does the matched filter maximize the SNR only at the sampling instant, or everywhere?" has the answer that the SNR is maximized only at the sampling instant $t_0$. At other times, other filters could give a larger SNR than what the matched filter is providing at time $t_1$, but this still smaller than the SNR $\sqrt{\frac{2\mathbb E}{N_0}}$ that the matched filter is giving you at $t_0$, and if desired, the matched filter could be redesigned to produce its peak at time $t_1$ instead of $t_0$. "why not make a filter that makes a really tall skinny spike at the point of decision. Wouldn't that make the SNR even better?" The matched filter does produce a spike of sorts at the sampling time but it is constrained by the shape of the autocorrelation function. Any other filter that you can devise to produce a tall skinny (time-domain) spike is not a matched filter and so will not give you the largest possible SNR. 
Note that increasing the amplitude of the filter impulse response (or using a time-varying filter that boosts the gain at the time of sampling) does not change the SNR since both the signal and the noise standard deviation increase proportionately. "The I&D will basically ramp up until it gets to the sampling time and the idea is that one samples at the peak of the I&D because at that point, the SNR is a maximum." For NRZ data and rectangular pulses, the matched filter impulse response is also a rectangular pulse. The integrate-and-dump circuit is a correlator whose output equals the matched filter output only at the sampling instants, and not in-between. See the figure below. If you sample the correlator output at other times, you get noise with smaller variance but you can't simply add up the samples of I&D output taken at different times because the noise variables are highly correlated, and the net variance works out to be much larger. Nor should you expect to be able to take multiple samples from the matched filter output and combine them in any way to get a better SNR. It doesn't work. What you have in effect is a different filter, and you cannot do better than the (linear) matched filter in Gaussian noise; no nonlinear processing will give a smaller error probability than the matched filter.
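A small Monte Carlo illustration of the main point (pulse shape, noise level and trial count are arbitrary): sampling the matched filter output at the matching instant gives an SNR of about $\sqrt{2\mathbb E/N_0}$, while sampling a few samples early gives less:

import numpy as np

rng = np.random.default_rng(0)
s = np.concatenate([np.ones(20), np.zeros(30)])        # rectangular pulse in a length-50 frame
E = np.sum(s**2)
N0_over_2 = 0.25
h = s[::-1]                                            # matched to s at t0 = len(s) - 1

trials = 20000
peak, off_peak = np.zeros(trials), np.zeros(trials)
for i in range(trials):
    r = s + np.sqrt(N0_over_2) * rng.standard_normal(len(s))
    y = np.convolve(r, h)
    peak[i] = y[len(s) - 1]                            # sample at the matching instant
    off_peak[i] = y[len(s) - 6]                        # sample 5 samples early

print("SNR at t0       :", peak.mean() / peak.std())
print("SNR off the peak:", off_peak.mean() / off_peak.std())
print("sqrt(2E/N0)     :", np.sqrt(2 * E / (2 * N0_over_2)))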
{ "source": [ "https://dsp.stackexchange.com/questions/9094", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/1146/" ] }
9,408
I know there are 4 types of FIR filters with linear phase, i.e. constant group delay: (M = length of impulse response) Impulse response symmetrical, M = odd Imp. resp. symmetrical, M = even Imp. resp. anti-symmetrical, M = odd Imp. resp. anti-symmetrical, M = even each with its traits. Which of these types is most commonly used in FIR filter with linear phase design and why? :)
When choosing one of these 4 types of linear phase filters there are mainly 3 things to consider: constraints on the zeros of $H(z)$ at $z=1$ and $z=-1$ integer/non-integer group delay phase shift (apart from the linear phase) For type I filters (odd number of taps, even symmetry) there are no constraints on the zeros at $z=1$ and $z=-1$, the phase shift is zero (apart from the linear phase), and the group delay is an integer value. Type II filters (even number of taps, even symmetry) always have a zero at $z=-1$ (i.e., half the sampling frequency), they have a zero phase shift, and they have a non-integer group delay. Type III filters (odd number of taps, odd symmetry) always have zeros at $z=1$ and $z=-1$ (i.e. at $f=0$ and $f=f_s/2$), they have a 90 degrees phase shift, and an integer group delay. Type IV filters (even number of taps, odd symmetry) always have a zero at $z=1$, a phase shift of 90 degrees, and a non-integer group delay. This implies (among other things) the following: Type I filters are pretty universal, but they cannot be used whenever a 90 degrees phase shift is necessary, e.g. for differentiators or Hilbert transformers. Type II filters would normally not be used for high pass or band stop filters, due to the zero at $z=-1$, i.e. at $f=f_s/2$. Neither can they be used for applications where a 90 degrees phase shift is necessary. Type III filters cannot be used for standard frequency selective filters because in these cases the 90 degrees phase shift is usually undesirable. For Hilbert transformers, type III filters have a relatively bad magnitude approximation at very low and very high frequencies due to the zeros at $z=1$ and $z=-1$. On the other hand, a type III Hilbert transformer can be implemented more efficiently than a type IV Hilbert transformer because in this case every other tap is zero. Type IV filters cannot be used for standard frequency selective filters, for the same reasons as type III filters. They are well suited for differentiators and Hilbert transformers, and their magnitude approximation is usually better because, unlike type III filters, they have no zero at $z=-1$. In some applications an integer group delay is desirable. In these cases type I or type III filters are preferred.
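The zero constraints at $z=1$ and $z=-1$ are easy to verify numerically for the four types (the example coefficients below are arbitrary): $H(1)$ is the plain sum of the taps and $H(-1)$ is the alternating sum:

import numpy as np

def H_at(h, z):
    return sum(c * z**(-k) for k, c in enumerate(h))

type1 = np.array([1., 2., 3., 2., 1.])        # odd length, symmetric
type2 = np.array([1., 2., 2., 1.])            # even length, symmetric
type3 = np.array([1., 2., 0., -2., -1.])      # odd length, antisymmetric
type4 = np.array([1., 2., -2., -1.])          # even length, antisymmetric

for name, h in [("I", type1), ("II", type2), ("III", type3), ("IV", type4)]:
    print(name, "H(1) =", round(H_at(h, 1.0), 10), " H(-1) =", round(H_at(h, -1.0), 10))
# Type II  -> H(-1) = 0;  Type III -> H(1) = H(-1) = 0;  Type IV -> H(1) = 0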
{ "source": [ "https://dsp.stackexchange.com/questions/9408", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/4710/" ] }
9,417
I have an audio buffer with 1024 samples per second. The sample rate is 44100. I am trying to identify the frequency tones. Using the Gorzel algorithm, I can get the frequency but it's not that accurate. I was trying some other scope program and I got better results with the same hardware. How can I decrease the id-bandwidth (to be sharper) so the id will be accurate for the target frequency? (I can't use FFT because it's real time and takes some memory.) float goertzel_mag(int16_t* data ,int SAMPLING_RATE ,double TARGET_FREQUENCY,int numSamples ) { int k,i; float floatnumSamples; float omega,sine,cosine,coeff,q0,q1,q2,magnitude,real,imag; float scalingFactor = numSamples / 2; // -2 floatnumSamples = (float) numSamples; k = (int) (0.5 + ((floatnumSamples * TARGET_FREQUENCY) / SAMPLING_RATE)); omega = (2.0 * M_PI * k) / floatnumSamples; sine = sin(omega); cosine = cos(omega); coeff = 2.0 * cosine; q0=0; q1=0; q2=0; for(i=0; i<numSamples; i++) { q0 = coeff * q1 - q2 + data[i]; q2 = q1; q1 = q0; } real = (q1 - q2 * cosine) / scalingFactor; imag = (q2 * sine) / scalingFactor; //double theta = atan2 ( imag, real); //PHASE magnitude = sqrtf(real*real + imag*imag); return magnitude; }
{ "source": [ "https://dsp.stackexchange.com/questions/9417", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/17182/" ] }
9,467
MATLAB's filtfilt does a forward-backward filtering, i.e., filter, reverse the signal, filter again and then reverse again. Apparently this done to reduce phase lags? What are the advantages/disadvantages of using such a filtering (I guess it would result in an effective increase in filter order). Would it be preferable to use filtfilt always instead of filter (i.e., only forward filtering)? Are there any applications where it is necessary to use this and where it shouldn't be used?
You can best look at it in the frequency domain. If $x[n]$ is the input sequence and $h[n]$ is the filter's impulse response, then the result of the first filter pass is $$X(e^{j\omega})H(e^{j\omega})$$ with $X(e^{j\omega})$ and $H(e^{j\omega})$ the Fourier transforms of $x[n]$ and $h[n]$, respectively. Time reversal corresponds to replacing $\omega$ by $-\omega$ in the frequency domain, so after time-reversal we get $$X(e^{-j\omega})H(e^{-j\omega})$$ The second filter pass corresponds to another multiplication with $H(e^{j\omega})$: $$X(e^{-j\omega})H(e^{j\omega})H(e^{-j\omega})$$ which after time-reversal finally gives for the spectrum of the output signal $$Y(e^{j\omega})=X(e^{j\omega})H(e^{j\omega})H(e^{-j\omega})= X(e^{j\omega})|H(e^{j\omega})|^2\tag{1}$$ because for real-valued filter coefficients we have $H(e^{-j\omega})=H^{*}(e^{j\omega})$. Equation (1) shows that the output spectrum is obtained by filtering with a filter with frequency response $|H(e^{j\omega})|^2$, which is purely real-valued, i.e. its phase is zero and consequently there are no phase distortions. This is the theory. In real-time processing there is of course quite a large delay because time-reversal only works if you allow a latency corresponding to the length of the input block. But this does not change the fact that there are no phase distortions, it's just an additional delay of the output data. For FIR filtering, this approach is not especially useful because you might as well define a new filter $\hat{h}[n]=h[n]*h[-n]$ and get the same result with ordinary filtering. It is more interesting to use this method with IIR filters, because they cannot have zero-phase (or linear phase, i.e. a pure delay). In sum: (i) if you have or need an IIR filter and you want zero phase distortion, AND processing delay is no problem, then this method is useful; (ii) if processing delay is an issue, you shouldn't use it; (iii) if you have an FIR filter, you can easily compute a new FIR filter response which is equivalent to using this method. Note that with FIR filters an exactly linear phase can always be realized.
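For illustration, a minimal SciPy sketch of result (1): forward-backward filtering of an example Butterworth filter behaves like a single filter with the purely real response $|H(e^{j\omega})|^2$. The filter order, cutoff and test signal are arbitrary choices.

import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.2)                        # example IIR low-pass
x = np.random.default_rng(0).standard_normal(2000)

y_onepass = signal.lfilter(b, a, x)                 # causal, phase-distorting
y_zerophase = signal.filtfilt(b, a, x)              # forward-backward, zero phase

# Effective frequency response of the forward-backward scheme: |H(w)|^2
w, H = signal.freqz(b, a, worN=1024)
H_eff = np.abs(H)**2                                # purely real, zero phase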
{ "source": [ "https://dsp.stackexchange.com/questions/9467", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/-1/" ] }
9,966
I need to design a moving average filter that has a cut-off frequency of 7.8 Hz. I have used moving average filters before, but as far as I'm aware, the only parameter that can be fed in is the number of points to be averaged... How can this relate to a cut-off frequency? The inverse of 7.8 Hz is ~130 ms, and I'm working with data that are sampled at 1000 Hz. Does this imply that I ought to be using a moving average filter window size of 130 samples, or is there something else that I'm missing here?
The moving average filter (sometimes known colloquially as a boxcar filter ) has a rectangular impulse response: $$ h[n] = \frac{1}{N}\sum_{k=0}^{N-1} \delta[n-k] $$ Or, stated differently: $$ h[n] = \begin{cases} \frac{1}{N}, && 0 \le n < N \\ 0, && \text{otherwise} \end{cases} $$ Remembering that a discrete-time system's frequency response is equal to the discrete-time Fourier transform of its impulse response, we can calculate it as follows: $$ \begin{align} H(\omega) &= \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} \\ &= \frac{1}{N}\sum_{n=0}^{N-1} e^{-j\omega n} \end{align} $$ To simplify this, we can use the known formula for the sum of the first $N$ terms of a geometric series : $$ \sum_{n=0}^{N-1} e^{-j\omega n} = \frac{1-e^{-j \omega N}}{1 - e^{-j\omega}} $$ What we're most interested in for your case is the magnitude response of the filter, $|H(\omega)|$. Using a couple simple manipulations, we can get that in an easier-to-comprehend form: $$ \begin{align} H(\omega) &= \frac{1}{N}\sum_{n=0}^{N-1} e^{-j\omega n} \\ &= \frac{1}{N} \frac{1-e^{-j \omega N}}{1 - e^{-j\omega}} \\ &= \frac{1}{N} \frac{e^{-j \omega N/2}}{e^{-j \omega/2}} \frac{e^{j\omega N/2} - e^{-j\omega N/2}}{e^{j\omega /2} - e^{-j\omega /2}} \end{align} $$ This may not look any easier to understand. However, due to Euler's identity , recall that: $$ \sin(\omega) = \frac{e^{j\omega} - e^{-j\omega}}{j2} $$ Therefore, we can write the above as: $$ \begin{align} H(\omega) &= \frac{1}{N} \frac{e^{-j \omega N/2}}{e^{-j \omega/2}} \frac{j2 \sin\left(\frac{\omega N}{2}\right)}{j2 \sin\left(\frac{\omega}{2}\right)} \\ &= \frac{1}{N} \frac{e^{-j \omega N/2}}{e^{-j \omega/2}} \frac{\sin\left(\frac{\omega N}{2}\right)}{\sin\left(\frac{\omega}{2}\right)} \end{align} $$ As I stated before, what you're really concerned about is the magnitude of the frequency response. So, we can take the magnitude of the above to simplify it further: $$ |H(\omega)| = \frac{1}{N} \left|\frac{\sin\left(\frac{\omega N}{2}\right)}{\sin\left(\frac{\omega}{2}\right)}\right| $$ Note: We are able to drop the exponential terms out because they don't affect the magnitude of the result; $|e^{j\omega}| = 1$ for all values of $\omega$. Since $|xy| = |x||y|$ for any two finite complex numbers $x$ and $y$, we can conclude that the presence of the exponential terms don't affect the overall magnitude response (instead, they affect the system's phase response). The resulting function inside the magnitude brackets is a form of a Dirichlet kernel . It is sometimes called a periodic sinc function, because it resembles the sinc function somewhat in appearance, but is periodic instead. Anyway, since the definition of cutoff frequency is somewhat underspecified (-3 dB point? -6 dB point? first sidelobe null?), you can use the above equation to solve for whatever you need. Specifically, you can do the following: Set $|H(\omega)|$ to the value corresponding to the filter response that you want at the cutoff frequency. Set $\omega$ equal to the cutoff frequency. To map a continuous-time frequency to the discrete-time domain, remember that $\omega = 2\pi \frac{f}{f_s}$, where $f_s$ is your sample rate. Find the value of $N$ that gives you the best agreement between the left and right hand sides of the equation. That should be the length of your moving average.
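As a concrete way to carry out the last step, here is a small NumPy search for the moving-average length $N$ whose magnitude response is closest to a chosen level at the desired cutoff. It uses the question's $f_s=1000$ Hz and $f_c=7.8$ Hz and assumes the $-3$ dB definition of cutoff, which, as noted above, is only one possible choice.

import numpy as np

fs = 1000.0                  # sample rate (Hz), from the question
fc = 7.8                     # desired cutoff frequency (Hz)
target = 1.0 / np.sqrt(2.0)  # -3 dB level (assumed definition of "cutoff")

w = 2.0 * np.pi * fc / fs    # discrete-time angular frequency at the cutoff

def ma_mag(N, w):
    """|H(w)| of an N-point moving average (Dirichlet-kernel form)."""
    return abs(np.sin(w * N / 2.0) / (N * np.sin(w / 2.0)))

# Pick the N whose response at w is closest to the target level
N = min(range(2, 500), key=lambda n: abs(ma_mag(n, w) - target))
print(N, ma_mag(N, w))       # length and the achieved level at 7.8 Hz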
{ "source": [ "https://dsp.stackexchange.com/questions/9966", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/4961/" ] }
10,057
I've implemented a gaussian blur fragment shader in GLSL. I understand the main concepts behind all of it: convolution, separation of x and y using linearity, multiple passes to increase radius... I still have a few questions though: What's the relationship between sigma and radius? I've read that sigma is equivalent to radius, I don't see how sigma is expressed in pixels. Or is "radius" just a name for sigma, not related to pixels? How do I choose sigma? Considering I use multiple passes to increase sigma, how do I choose a good sigma to obtain the sigma I want at any given pass? If the resulting sigma is equal to the square root of the sum of the squares of the sigmas and sigma is equivalent to radius, what's an easy way to get any desired radius? What's the good size for a kernel, and how does it relate to sigma? I've seen most implementations use a 5x5 kernel. This is probably a good choice for a fast implementation with decent quality, but is there another reason to choose another kernel size? How does sigma relate to the kernel size? Should I find the best sigma so that coefficients outside my kernel are negligible and just normalize?
What's the relationship between sigma and radius? I've read that sigma is equivalent to radius, I don't see how sigma is expressed in pixels. Or is "radius" just a name for sigma, not related to pixels? There are three things at play here. The variance, ($\sigma^2$), the radius, and the number of pixels. Since this is a 2-dimensional gaussian function, it makes sense to talk of the covariance matrix $\boldsymbol{\Sigma}$ instead. Be that as it may however, those three concepts are weakly related. First of all, the 2-D gaussian is given by the equation: $$ g({\bf z}) = \frac{1}{\sqrt{(2 \pi)^2 |\boldsymbol{\Sigma}|}} e^{-\frac{1}{2} ({\bf z}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} \ ({\bf z}-\boldsymbol{\mu})} $$ Where ${\bf z}$ is a column vector containing the $x$ and $y$ coordinate in your image. So, ${\bf z} = \begin{bmatrix} x \\ y\end{bmatrix}$, and $\boldsymbol{\mu}$ is a column vector codifying the mean of your gaussian function, in the $x$ and $y$ directions $\boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y\end{bmatrix}$. Example: Now, let us say that we set the covariance matrix $\boldsymbol{\Sigma} = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}$, and $\boldsymbol{\mu} = \begin{bmatrix} 0 \\ 0\end{bmatrix}$. I will also set the number of pixels to be $100$ x $100$. Furthermore, my 'grid', where I evaluate this PDF, is going to be going from $-10$ to $10$, in both $x$ and $y$. This means I have a grid resolution of $\frac{10 - (-10)}{100} = 0.2$. But this is completely arbitrary. With those settings, I will get the probability density function image on the left. Now, if I change the 'variance', (really, the covariance), such that $\boldsymbol{\Sigma} = \begin{bmatrix} 9 & 0 \\ 0 & 9\end{bmatrix}$ and keep everything else the same, I get the image on the right. The number of pixels are still the same for both, $100$ x $100$, but we changed the variance. Suppose instead we do the same experiment, but use $20$ x $20$ pixels instead, but I still ran from $-10$ to $10$. Then, my grid has a resolution of $\frac{10-(-10)}{20} = 1$. If I use the same covariances as before, I get this: These are how you must understand the interplay between those variables. If you would like the code, I can post that here as well. How do I choose sigma? The choice of the variance/covariance-matrix of your gaussian filter is extremely application dependent. There is no 'right' answer. That is like asking what bandwidth should one choose for a filter. Again, it depends on your application. Typically, you want to choose a gaussian filter such that you are nulling out a considerable amount of high frequency components in your image. One thing you can do to get a good measure, is compute the 2D DFT of your image, and overlay its co-efficients with your 2D gaussian image. This will tell you what co-efficients are being heavily penalized. For example, if your gaussian image has a covariance so wide that it is encompassing many high frequency coefficients of your image, then you need to make its covariance elements smaller.
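The experiment described above is easy to reproduce; since the original code is not shown here, the following is a NumPy re-creation sketch using the same arbitrary choices (grid from $-10$ to $10$, $100\times100$ pixels, zero mean, diagonal covariance):

import numpy as np

n_pix = 100                               # number of pixels per axis
ax = np.linspace(-10.0, 10.0, n_pix)      # grid resolution = 20 / n_pix
X, Y = np.meshgrid(ax, ax)

mu = np.array([0.0, 0.0])                 # mean vector
Sigma = np.array([[9.0, 0.0],
                  [0.0, 9.0]])            # covariance matrix

pos = np.dstack((X, Y)) - mu              # (n_pix, n_pix, 2) array of z - mu
Sinv = np.linalg.inv(Sigma)
quad = np.einsum('...i,ij,...j->...', pos, Sinv, pos)   # (z-mu)^T Sigma^-1 (z-mu)
g = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))

# Changing Sigma changes the spread of g; changing n_pix only changes how
# finely the same function is sampled.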
{ "source": [ "https://dsp.stackexchange.com/questions/10057", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/5098/" ] }
10,423
I just studied about SURF and I'm going for its implementation, but I still didn't understand why we use descriptors. I understand what keypoints are and their purpose, but when we extract the keypoints than why do we need to use descriptors ? What is their importance and role in recognition?
One important thing to understand is that after extracting the keypoints, you only obtain information about their position , and sometimes their coverage area (usually approximated by a circle or ellipse) in the image. While the information about keypoint position might sometimes be useful, it does not say much about the keypoints themselves. Depending on the algorithm used to extract keypoints (SIFT, Harris corners, MSER), you will know some general characteristics of the extracted keypoints (e.g. they are centered around blobs, edges, prominent corners...) but you will not know how different or similar one keypoint is to the other. Here are two simple examples where only the position and keypoint area will not help us: (1) If you have an image A (of a bear on a white background), and another image B, an exact copy of A but translated by a few pixels: the extracted keypoints will be the same (on the same part of that bear). Those two images should be recognized as the same, or similar. But, if the only information we have is their position, and that changed because of the translation, you cannot compare the images. (2) If you have an image A (let's say, of a duck this time), and another image B, exactly the same duck as in A except twice the size: the extracted keypoints will be the same (same parts of the duck). Those are also the same (similar) images. But all their sizes (areas) will be different: all the keypoints from the image B will be twice the size of those from image A. So, here come descriptors : they are the way to compare the keypoints. They summarize, in vector format (of constant length), some characteristics of the keypoints. For example, it could be their intensity in the direction of their most pronounced orientation. It's assigning a numerical description to the area of the image the keypoint refers to. Some important properties of descriptors are: (1) They should be independent of keypoint position. If the same keypoint is extracted at different positions (e.g. because of translation), the descriptor should be the same. (2) They should be robust against image transformations. Some examples are changes of contrast (e.g. image of the same place during a sunny and a cloudy day) and changes of perspective (image of a building from center-right and center-left; we would still like to recognize it as the same building). Of course, no descriptor is completely robust against all transformations (nor against any single one if it is strong, e.g. a big change in perspective). Different descriptors are designed to be robust against different transformations, which is sometimes at odds with the speed it takes to calculate them. (3) They should be scale independent. The descriptors should take scale into account. If the "prominent" part of one keypoint is a vertical line of 10px (inside a circular area with radius of 8px), and the prominent part of another is a vertical line of 5px (inside a circular area with radius of 4px) -- these keypoints should be assigned similar descriptors. Now that you have calculated descriptors for all the keypoints, you have a way to compare those keypoints . For a simple example of image matching (when you know the images are of the same object, and would like to identify the parts in different images that depict the same part of the scene, or would like to identify the perspective change between two images), you would compare every keypoint descriptor of one image to every keypoint descriptor of the other image.
As the descriptors are vectors of numbers, you can compare them with something as simple as the Euclidean distance . There are some more complex distances that can be used as a similarity measure, of course. But, in the end, you would say that the keypoints whose descriptors have the smallest distance between them are matches , e.g. the same "places" or "parts of objects" in different images. For more complex use of keypoints/descriptors, you should take a look at this question -- especially the "low-level local approach" in my answer and the "Bag-of-words" approach in @Maurits' answer. Also, the links provided in those answers are useful.
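To make the matching step concrete, here is a small NumPy sketch of brute-force nearest-neighbour matching by Euclidean distance; the random arrays only stand in for real descriptors (e.g. 64-dimensional SURF vectors), so their sizes and values are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
desc_a = rng.standard_normal((5, 64))   # descriptors from image A (5 keypoints)
desc_b = rng.standard_normal((7, 64))   # descriptors from image B (7 keypoints)

# Pairwise Euclidean distances between every descriptor in A and every one in B
dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)

# For each keypoint in A, the index of its closest keypoint in B
matches = dists.argmin(axis=1)
print(matches, dists.min(axis=1))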
{ "source": [ "https://dsp.stackexchange.com/questions/10423", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/5057/" ] }
11,560
Here is a sinusoid of frequency f = 236.4 Hz (it is 10 milliseconds long; it has N=441 points at sampling rate fs=44100Hz ) and its DFT, without zero-padding : The only conclusion we can give by looking at the DFT is: "The frequency is approximatively 200Hz". Here is the signal and its DFT, with a large zero-padding : Now we can give a much more precise conclusion : "By looking carefully at the maximum of the spectrum, I can estimate the frequency 236Hz" (I zoomed and found the maximum is near 236). My question is : why do we say that "zero-padding doesn't increase resolution" ? (I have seen this sentence very often, then they say "it only adds interpolation") => With my example, zero-padding helped me to find the right frequency with a more precise resolution !
Resolution has a very specific definition in this context. It refers to your ability to resolve two separate tones at nearby frequencies. You have increased the sample rate of your spectrum estimate, but you haven't gained any ability to discriminate between two tones that might be at, for instance, 236 Hz and 237 Hz. Instead, they will "melt together" into a single blob, no matter how much zero-padding you apply. The solution to increasing resolution is to observe the signal for a longer time period, then use a larger DFT. This will result in main lobes whose width are inversely proportional to the DFT size, so if you observe for long enough, you can actually resolve the frequencies of multiple tones that are nearby one another. -- To see how this plays out, here's a plot of the zoomed-in FFT of the addition of two signals: your original sinusoid, and one that differs in frequency from it by 0 to 100 Hz. It's only towards the 100Hz difference end of the plot (left-hand side here) that you can distinguish (resolve) the two. Scilab code for generating the plot below. f = 236.4; d = 10; N=441; fs=44100; extra_padding = 10000; t=[0:1/fs:(d/1000-1/fs)] ff = [0:(N+extra_padding-1)]*fs/(N+extra_padding); x = sin(2*%pi*f*t); XX = []; for delta_f = [0:100]; y = sin(2*%pi*(f+delta_f)*t); FFTX = abs(fft([x+y zeros(1,extra_padding)])); XX = [XX; FFTX]; end mtlb_axis([0 1300 0 500]) figure(1); clf [XXX,YYY] = meshgrid(ff,0:100); mesh(XXX(1:100,[50:90]),YYY(1:100,[50:90]),XX(1:100,[50:90]))
{ "source": [ "https://dsp.stackexchange.com/questions/11560", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/5648/" ] }
15,206
I have a friend working in wireless communications research. He told me that we can transmit more than one symbol in a given slot using one frequency (of course we can decode them at the receiver). The technique as he said uses a new modulation scheme. Therefore if one transmitting node transmits to one receiving node over a wireless channel and using one antenna at each node, the technique can transmit two symbols at one slot over one frequency. I am not asking about this technique and I do not know whether it is correct or not but I want to know if one can do this or not? Is this even possible? Can the Shannon limit be broken? Can we prove the impossibility of such technique mathematically? Other thing I want to know, if this technique is correct what are the consequences? For example what would such technique imply for the famous open problem of the interference channel? Any suggestions please? Any reference is appreciated.
Most certainly not. While there has been some claims to break Shannon here and there, it usually turned out that the Shannon theorem was just applied in the wrong way. I've yet to see any such claim to actually prove true. There are some methods known that allow for transmission of multiple data streams at the same time on the same frequency. The MIMO principle employs spatial diversity to achieve that. Comparing a MIMO transmission in a scenario that offers high diversity with the Shannon limit for a SISO transmission in an otherwise similar scenario might actually imply that the MIMO transmission breaks Shannon. Yet, when you write down the Shannon limit correctly for the MIMO transmission, you again see that it still holds. Another technique to transmit on the same frequency at the same time in the same area would be CDMA (Code Division Multiple Access). Here, the individual signals are multiplied with a set of orthogonal codes so that they can be (perfectly in the ideal case) separated again at the receiver. But multiplying the signal with the orthogonal code will also spread its bandwidth. In the end, each signal employs much more bandwidth than it needs and I've never seen an example where the sum of the rates was higher than Shannon for the whole bandwidth. While you can never be sure that breaking Shannon is actually impossible, it is a very fundamental law that stood the test of time for a long time. Anyone claiming to break Shannon has most likely made a mistake. There needs to be overwhelming proof for such a claim to be accepted. On the other hand, transmitting two signals on the same frequency at the same time in the same area is easily possible using the correct method. This is by no means an implication that Shannon is broken.
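As a tiny numerical illustration of the MIMO point, the sketch below compares the SISO Shannon limit with the Shannon limit written for a $2\times2$ MIMO channel at the same SNR and bandwidth; the MIMO rate can exceed the SISO number without violating Shannon, because the correct limit for that channel is the determinant formula being evaluated. The SNR value and the random channel realization are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
snr = 10.0      # linear SNR (10 dB)

# SISO Shannon limit in bit/s/Hz
c_siso = np.log2(1.0 + snr)

# 2x2 MIMO Shannon limit for one random Rayleigh channel realization H,
# equal power per transmit antenna: log2 det(I + (SNR/2) H H^H)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
det = np.linalg.det(np.eye(2) + (snr / 2.0) * H @ H.conj().T).real
c_mimo = np.log2(det)

# The MIMO rate can exceed the SISO limit without breaking Shannon,
# since the Shannon limit for this channel is c_mimo itself.
print(c_siso, c_mimo)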
{ "source": [ "https://dsp.stackexchange.com/questions/15206", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/8342/" ] }
16,586
I have read many articles about DTFT and DFT but am not able to discern the difference between the two except for a few visible things like DTFT goes till infinity while DFT is only till N-1. Can anyone please explain the difference and when to use what? Wiki says The DFT differs from the discrete-time Fourier transform (DTFT) in that its input and output sequences are both finite; it is therefore said to be the Fourier analysis of finite-domain (or periodic) discrete-time functions. Is it the only difference? Edit: This article nicely explains the difference
The discrete-time Fourier transform (DTFT) is the (conventional) Fourier transform of a discrete-time signal. Its output is continuous in frequency and periodic. Example: to find the spectrum of the sampled version $x(kT)$ of a continuous-time signal $x(t)$ the DTFT can be used. The discrete Fourier transform (DFT) can be seen as the sampled version (in the frequency domain) of the DTFT output. It's used to calculate the frequency spectrum of a discrete-time signal with a computer, because computers can only handle a finite number of values. I would argue against the DFT output being finite. It is periodic as well and can therefore be continued infinitely. To sum it up: the DTFT takes a discrete, infinite input and its output is continuous and periodic; the DFT takes a discrete, finite input *) and its output is discrete and finite *). *) A mathematical property of the DFT is that both its input and output are periodic with the DFT length $N$. That is, although the input vector to the DFT is finite in practice, it's only correct to say that the DFT is the sampled spectrum if the DFT input is thought to be periodic.
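A short NumPy check of the "DFT samples the DTFT" statement: the DTFT of a short example sequence is evaluated from its defining sum on a dense frequency grid, and the DFT values coincide with that curve at $\omega_k = 2\pi k/N$. The sequence itself is arbitrary.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])        # a short discrete-time sequence
N = len(x)
n = np.arange(N)

# DTFT evaluated on a dense grid (stand-in for the continuous frequency axis)
w_dense = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
dtft = np.array([np.sum(x * np.exp(-1j * w * n)) for w in w_dense])

# DFT = DTFT sampled at w_k = 2*pi*k/N
dft = np.fft.fft(x)
dtft_at_wk = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
print(np.allclose(dft, dtft_at_wk))       # True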
{ "source": [ "https://dsp.stackexchange.com/questions/16586", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/6940/" ] }
16,885
I have code like below that applies a bandpass filter onto a signal. I am quite a noob at DSP and I want to understand what is going on behind the scenes before I proceed. To do this, I want to know how to plot the frequency response of the filter without using freqz . [b, a] = butter(order, [flo fhi]); filtered_signal = filter(b, a, unfiltered_signal) Given the outputs [b, a] how would I do this? This seems like it would be a simple task, but I'm having a hard time finding what I need in the documentation or online. I'd also like to understand how to do this as quickly as possible, e.g. using an fft or other fast algorithm.
We know that in general transfer function of a filter is given by: $$H(z)=\dfrac{\sum_{k=0}^{M}b_kz^{-k}}{\sum_{k=0}^{N}a_kz^{-k}} $$ Now substitute $z=e^{j\omega}$ to evaluate the transfer function on the unit circle: $$H(e^{j\omega})=\dfrac{\sum_{k=0}^{M}b_ke^{-j\omega k}}{\sum_{k=0}^{N}a_ke^{-j\omega k}} $$ Thus this becomes only a problem of polynomial evaluation at a given $\omega$ . Here are the steps: Create a vector of angular frequencies $\omega = [0, \ldots,\pi]$ for the first half of spectrum (no need to go up to $2\pi$ ) and save it in w . Pre-compute exponent $e^{-j\omega}$ at all of them and store it in variable ze . Use the polyval function to calculate the values of numerator and denominator by calling: polyval(b, ze) , divide them and store in H . Because we are interested in amplitude, then take the absolute value of the result. Convert to dB scale by using: $H_{dB}=20\log_{10} H $ - in this case $1$ is the reference value. Putting all of that in code: %% Filter definition a = [1 -0.5 -0.25]; % Some filter with lot's of static gain b = [1 3 2]; %% My freqz calculation N = 1024; % Number of points to evaluate at upp = pi; % Evaluate only up to fs/2 % Create the vector of angular frequencies at one more point. % After that remove the last element (Nyquist frequency) w = linspace(0, pi, N+1); w(end) = []; ze = exp(-1j*w); % Pre-compute exponent H = polyval(b, ze)./polyval(a, ze); % Evaluate transfer function and take the amplitude Ha = abs(H); Hdb = 20*log10(Ha); % Convert to dB scale wn = w/pi; % Plot and set axis limits xlim = ([0 1]); plot(wn, Hdb) grid on %% MATLAB freqz figure freqz(b,a) Original output of freqz : And the output of my script: And quick comparison in linear scale - looks great! [h_f, w_f] = freqz(b,a); figure xlim = ([0 1]); plot(w, Ha) % mine grid on hold on plot(w_f, abs(h_f), '--r') % MATLAB legend({'my freqz','MATLAB freqz'}) Now you can rewrite it into some function and add few conditions to make it more useful. Another way (previously proposed is more reliable) would be to use the fundamental property, that frequency response of a filter is a Fourier Transform of its impulse response: $$H(\omega)=\mathcal{F}\{h(t)\} $$ Therefore you must feed into your system $\delta(t)$ signal, calculate the response of your filter and take the FFT of it: d = [zeros(1,length(w_f)) 1 zeros(1,length(w_f)-1)]; h = filter(b, a, d); HH = abs(fft(h)); HH = HH(1:length(w_f)); In comparison this will produce the following:
{ "source": [ "https://dsp.stackexchange.com/questions/16885", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/10228/" ] }
17,685
Peoples' ears can hear sound whose frequencies range from 20 Hz to 20 kHz. Based on the Nyquist theorem, the recording rate should be at least 40 kHz. Is it the reason for choosing 44.1 kHz?
It is true that, like any convention, the choice of 44.1 kHz is sort of a historical accident. There are a few other historical reasons. Of course, the sampling rate must exceed 40 kHz if you want high quality audio with a bandwidth of 20 kHz. There was discussion of making it 48.0 kHz (it was nicely congruent with 24 frame/second films and the ostensible 30 frames/second in North American TV), but given the physical size of 120 mm, there was a limit to how much data the CD could hold, and given that an error detection and correction scheme was needed and that requires some redundancy in data, the amount of logical data the CD could store (about 700 MB) is about half of the amount of physical data. Given all of that, at the rate of 48 kHz, we were told that it could not hold all of Beethoven's 9th, but that it could hold the entire 9th on one disc at a slightly slower rate. So 48 kHz is out. Still, why 44.1 and not 44.0 or 45.0 kHz or some nice round number? Then at the time, there existed a product in the late 1970s called the Sony F1 that was designed to record digital audio onto readily-available video tape (Betamax, not VHS). That was at 44.1 kHz (or more precisely 44.056 kHz). So this would make it easy to transfer recordings, without resampling and interpolation, from the F1 to CD or in the other direction. My understanding of how it gets there is that the horizontal scan rate of NTSC TV was 15.750 kHz and 44.1 kHz is exactly 2.8 times that. I'm not entirely sure, but I believe what that means is that you can have three stereo sample pairs per horizontal line, and for every 5 lines, where you would normally have 15 samples, there are 14 samples plus one additional sample for some parity check or redundancy in the F1. 14 samples for 5 lines is the same as 2.8 samples per horizontal line and with 15,750 lines per second, that comes out to be 44,100 samples per second. Now, since color TV was introduced, they had to bump down slightly the horizontal line rate to 15734 lines per second. That adjustment leads to the 44,056 samples per second in the Sony F1.
{ "source": [ "https://dsp.stackexchange.com/questions/17685", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/10767/" ] }
19,084
I saw in a SO thread a suggestion to use filtfilt which performs backwards/forwards filtering instead of lfilter . What is the motivation for using one against the other technique?
filtfilt is zero-phase filtering, which doesn't shift the signal as it filters. Since the phase is zero at all frequencies, it is also linear-phase. Filtering backwards in time requires you to predict the future, so it can't be used in "online" real-life applications, only for offline processing of recordings of signals. lfilter is causal forward-in-time filtering only, similar to a real-life electronic filter. It can't be zero-phase. It can be linear-phase (symmetrical FIR), but usually isn't. Usually it adds different amounts of delay at different frequencies. An example and image should make it obvious. Although the magnitude of the frequency response of the filters is identical (top left and top right), the zero-phase lowpass lines up with the original signal, just without high frequency content, while the minimum phase filtering delays the signal in a causal way: from __future__ import division, print_function import numpy as np from numpy.random import randn from numpy.fft import rfft from scipy import signal import matplotlib.pyplot as plt b, a = signal.butter(4, 0.03, analog=False) # Show that frequency response is the same impulse = np.zeros(1000) impulse[500] = 1 # Applies filter forward and backward in time imp_ff = signal.filtfilt(b, a, impulse) # Applies filter forward in time twice (for same frequency response) imp_lf = signal.lfilter(b, a, signal.lfilter(b, a, impulse)) plt.subplot(2, 2, 1) plt.semilogx(20*np.log10(np.abs(rfft(imp_lf)))) plt.ylim(-100, 20) plt.grid(True, which='both') plt.title('lfilter') plt.subplot(2, 2, 2) plt.semilogx(20*np.log10(np.abs(rfft(imp_ff)))) plt.ylim(-100, 20) plt.grid(True, which='both') plt.title('filtfilt') sig = np.cumsum(randn(800)) # Brownian noise sig_ff = signal.filtfilt(b, a, sig) sig_lf = signal.lfilter(b, a, signal.lfilter(b, a, sig)) plt.subplot(2, 1, 2) plt.plot(sig, color='silver', label='Original') plt.plot(sig_ff, color='#3465a4', label='filtfilt') plt.plot(sig_lf, color='#cc0000', label='lfilter') plt.grid(True, which='both') plt.legend(loc="best")
{ "source": [ "https://dsp.stackexchange.com/questions/19084", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/11401/" ] }
24,780
What's the difference between these? Both are measurements of some form of signal power, but surely there's some difference between the power they are measuring?
The fast Fourier transform ($\textrm{FFT}$) algorithms are fast algorithms for computing the discrete Fourier transform ($\textrm{DFT}$). This is achieved by successive decomposition of the $N$-point $\textrm{DFT}$ into smaller-block $\textrm{DFT}$, and taking advantage of periodicity and symmetry. Now, the $N$-point $\textrm{DFT}$ of a sequence $\{x[0], x[1],\cdots, x[N-1]\}$ is: \begin{equation} X(f_k) = \displaystyle \sum_{n = 0}^{N - 1} x[n]\exp\left(-j2\pi f_kn\right) \tag{1} \end{equation} For $f_k = k/N$ and $k = 0, 1, \cdots, N - 1$. And the $\textrm{FFT}$ magnitude at bin $k$ is the $\textrm{DFT}$magnitude at bin $k$. For a given $N$ that is: \begin{equation} \left|X(f_k)\right| = \left|X\left(\frac{k}{N}\right)\right| = \left|X(k)\right| = \displaystyle \left|\sum_{n = 0}^{N - 1} x[n]\exp\left(-j2\pi nk/N\right)\right| \tag{2} \end{equation} You talk about power spectral estimation when the signals being analyzed are characterized as random processes. With random fluctuations in such signals, statistical characteristics and average characteristics are normally adopted. For a wide sense stationary $(\textrm{WSS})$ discrete random process, the PSD is defined as: \begin{equation} P(f) = \displaystyle \sum_{m = -\infty}^{\infty} r_{xx}[m]\exp\left(-j2\pi fm\right) \tag{3} \end{equation} For reasons you can find in this answer , you see that the squared magnitude of the signal's $\textrm{DFT}$ is taken as the estimate of the PSD in most practical situations. One form, among other variations/methods, is: \begin{equation} P(f_k) = \frac{1}{N} \displaystyle \left| \sum_{n = 0}^{N - 1} x[n]\exp\left(-j2\pi f_kn\right) \right|^2 \tag{4} \end{equation} What's the difference between these? Comparing $(2)$ and $(4)$, you have: \begin{equation} P(f_k) = \frac{1}{N} \left| X(f_k)\right|^2 \end{equation} From the bin number $k$ to frequency in $\textrm{Hz}$, $F = \frac{F_s}{N}k$ For more reading on PSD estimation check this question , that question , and this question . EDIT : The power spectral density, $\textrm{PSD}$, describes how the power of your signal is distributed over frequency whilst the $\textrm{DFT}$ shows the spectral content of your signal, the amplitude and phase of harmonics in your signal. You pick one or the other depending on what you want to observe/analyze. And no they're not the same as you can see from the equations above and links given. Their spectra are generally not the same. One is estimated as the squared magnitude of the other.
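For illustration, a minimal NumPy sketch of the relation $P(f_k)=|X(f_k)|^2/N$ derived above; the test tone, noise level and lengths are arbitrary choices.

import numpy as np

fs = 1000.0
N = 1024
n = np.arange(N)
rng = np.random.default_rng(0)
x = np.sin(2.0 * np.pi * 50.0 * n / fs) + 0.5 * rng.standard_normal(N)

X = np.fft.fft(x)
fft_mag = np.abs(X)            # DFT/FFT magnitude, as in (2)
psd = fft_mag**2 / N           # periodogram PSD estimate, as in (4)

freqs = n * fs / N             # bin k  ->  F = (fs/N) * k, in Hz
print(freqs[np.argmax(psd[:N // 2])])   # peak near the 50 Hz tone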
{ "source": [ "https://dsp.stackexchange.com/questions/24780", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/16003/" ] }
24,880
I was glancing through "The Fourier Transform & Its Applications" by Ronald N. Bracewell, which is a good intro book on Fourier Transforms. In it, he says that if you take the Fourier transform of a function 4 times, you get back the original function, i.e. $$\mathcal F\Bigg\{ \mathcal F\bigg\{ \mathcal F\big\{ \mathcal F\left\{ g(x) \right\} \big\} \bigg\} \Bigg\} = g(x)\,. $$ Could someone kindly show me how this is possible? I'm assuming the above statement is for complex $x$ , and this has something to do with $i^0=1$ , $i^1=i$ , $i^2=-1$ , $i^3 = -i$ , $i^4=1$ ? Thank you for your enlightenment.
I'll use the non-unitary Fourier transform (but this is not important, it's just a preference): $$X(\omega)=\int_{-\infty}^{\infty}x(t)e^{-i\omega t}dt\tag{1}$$ $$x(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}X(\omega)e^{i\omega t}d\omega\tag{2}$$ where (1) is the Fourier transform, and (2) is the inverse Fourier transform. Now if you formally take the Fourier transform of $X(\omega)$ you get $$\mathcal{F}\{X(\omega)\}=\mathcal{F}^2\{x(t)\}=\int_{-\infty}^{\infty}X(\omega)e^{-i\omega t}d\omega\tag{3}$$ Comparing (3) with (2) we have $$\mathcal{F}^2\{x(t)\}=2\pi x(-t)\tag{4}$$ So the Fourier transform equals an inverse Fourier transform with a sign change of the independent variable (apart from a scale factor due to the use of the non-unitary Fourier transform). Since the Fourier transform of $x(-t)$ equals $X(-\omega)$ , the Fourier transform of (4) is $$\mathcal{F}^3\{x(t)\}=2\pi X(-\omega)\tag{5}$$ And, by an argument similar to the one used in (3) and (4), the Fourier transform of $X(-\omega)$ equals $2\pi x(t)$ . So we obtain for the Fourier transform of (5) $$\mathcal{F}^4\{x(t)\}=2\pi\mathcal{F}\{X(-\omega)\}=(2\pi)^2x(t)\tag{6}$$ which is the desired result. Note that the factor $(2\pi)^2$ in (6) is a consequence of using the non-unitary Fourier transform. If you use the unitary Fourier transform (where both the transform and its inverse get a factor $1/\sqrt{2\pi}$ ) this factor would disappear. In sum, apart from irrelevant constant factors, you get $$\bbox[#f8f1ea, 0.6em, border: 0.15em solid #fd8105]{x(t)\overset{\mathcal{F}}{\Longrightarrow} X(\omega)\overset{\mathcal{F}}{\Longrightarrow} x(-t)\overset{\mathcal{F}}{\Longrightarrow} X(-\omega)\overset{\mathcal{F}}{\Longrightarrow} x(t)}$$
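The same four-transform property can be checked numerically for the DFT (the discrete analogue of the result above, not part of the original continuous-time derivation): two DFTs give $N\,x[(-n)\bmod N]$, so four DFTs give $N^2\,x[n]$.

import numpy as np

x = np.random.default_rng(0).standard_normal(256)
N = len(x)

X1 = np.fft.fft(x)
X2 = np.fft.fft(X1)              # = N * x[(-n) mod N]  (circular time reversal, scaled)
X4 = np.fft.fft(np.fft.fft(X2))  # four transforms in total

print(np.allclose(X2, N * np.roll(x[::-1], 1)))   # circular time reversal
print(np.allclose(X4, N**2 * x))                  # back to x, scaled by N^2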
{ "source": [ "https://dsp.stackexchange.com/questions/24880", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/13407/" ] }
25,845
I understand the Fourier Transform which is a mathematical operation that lets you see the frequency content of a given signal. But now, in my comm. course, the professor introduced the Hilbert Transform. I understand that it is somewhat linked to the frequency content given the fact that the Hilbert Transform is multiplying a FFT by $-j\operatorname{sign}(W(f))$ or convolving the time function with $1/\pi t$ . What is the meaning of the Hilbert transform? What information do we get by applying that transform to a given signal?
One application of the Hilbert Transform is to obtain a so-called Analytic Signal. For a signal $s(t)$ with Hilbert transform $\hat{s}(t)$, the analytic signal is defined as the composition: $$s_A(t)=s(t)+j\hat{s}(t) $$ The Analytic Signal that we obtain is complex valued, therefore we can express it in exponential notation: $$s_A(t)=A(t)e^{j\psi(t)}$$ where $A(t)$ is the instantaneous amplitude (envelope) and $\psi(t)$ is the instantaneous phase. So how are these helpful? The instantaneous amplitude can be useful in many cases (it is widely used for finding the envelope of simple harmonic signals). Here is an example for an impulse response: Secondly, based on the phase, we can calculate the instantaneous frequency: $$f(t)=\dfrac{1}{2\pi}\dfrac{d\psi}{dt}(t)$$ which is again helpful in many applications, such as frequency detection of a sweeping tone, rotating engines, etc. Other examples of usage include: Sampling of narrowband signals in telecommunications (mostly using Hilbert filters). Medical imaging. Array processing for Direction of Arrival. System response analysis.
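For illustration, a short SciPy sketch of these quantities: scipy.signal.hilbert returns the analytic signal directly, from which the envelope and the instantaneous frequency of an amplitude-modulated chirp follow. The signal parameters are arbitrary.

import numpy as np
from scipy.signal import chirp, hilbert

fs = 400.0
t = np.arange(0.0, 1.0, 1.0 / fs)
s = chirp(t, f0=20.0, t1=1.0, f1=100.0) * (1.0 + 0.5 * np.sin(2.0 * np.pi * 3.0 * t))

s_a = hilbert(s)                         # analytic signal s(t) + j*Hilbert{s}(t)
envelope = np.abs(s_a)                   # instantaneous amplitude A(t)
phase = np.unwrap(np.angle(s_a))         # instantaneous phase psi(t)
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency in Hz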
{ "source": [ "https://dsp.stackexchange.com/questions/25845", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/17386/" ] }
26,697
I'm looking forward to enroll in an MSc in Signal and Image processing, or maybe Computer Vision (I have not decided yet), and this question emerged. My concern is, since deep learning doesn't need feature extraction and almost no input pre-processing, is it killing image processing (or signal processing in general)? I'm not an expert in deep learning, but it seems to work very well in recognition and classification tasks taking images directly instead of a feature vector like other techniques. Is there any case in which a traditional feature extraction + classification approach would be better, making use of image processing techniques, or is this dying because of deep learning?
On the top of this answer, you can see a section of updated links, where artificial intelligence, machine intelligence, deep learning and/or database machine learning progressively step on the grounds of traditional signal processing/image analysis/computer vision. Below, variations on the original answer. For a short version: successes of convolutional neural networks and deep learning have been looked at as a sort of Galilean revolution. From a practical point of view, classical signal processing or computer vision were dead... provided that you have enough or good-enough labeled data, that you care little about evident classification failures (aka deep flaws or deep fakes ), that you have infinite energy to run tests without thinking about the carbon footprint , and don't bother with causal or rational explanations. For the others, this made us rethink all that we did before: preprocessing, standard analysis, feature extraction, optimization (cf. my colleague J.-C. Pesquet's work on Deep Neural Network Structures Solving Variational Inequalities ), invariance, quantification, etc. And really interesting research is emerging from that, hopefully catching up with firmly grounded principles and similar performance. Updated links: 2021/04/10: Hierarchical Image Peeling: A Flexible Scale-space Filtering Framework 2019/07/19: The Verge: If you can identify what's in these images, you're smarter than AI , or do you see a ship wreck, or insects on a dead leaf? 2019/07/16: Preprint: Natural Adversarial Examples We introduce natural adversarial examples -- real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set. 2019/05/03: Deep learning: the final frontier for signal processing and time series analysis? "In this article, I want to show several areas where signals or time series are vital" 2018/04/23: I just came back from the yearly international conference on acoustics, speech and signal processing, ICASSP 2018 . I was amazed by the quantity of papers somewhat relying on deep learning, deep networks, etc. Two plenaries out of four (by Alex Acero and Yann LeCun) were devoted to this topic. At the same time, most of the researchers I have met were kind of joking about that ("Sorry, my poster is on filter banks, not on Deep Learning", "I am not into that, I have small datasets"), or were wondering about gaining 0.5% on grand challenges while losing interest in modeling the physics or statistical priors. 2018/01/14: Can A Deep Net See A Cat? , from "abstract cat", to "best cat" inverted, drawn, etc.
and somewhat surprising results on sketches 2017/11/02: added references to scattering transforms/networks 2017/10/21: A Review of Convolutional Neural Networks for Inverse Problems in Imaging Deep Learning and Its Applications to Signal and Information Processing , IEEE Signal Processing Magazine, January 2011 Deep learning references "stepping" on standard signal/image processing can be found at the bottom. Michael Elad just wrote Deep, Deep Trouble: Deep Learning's Impact on Image Processing, Mathematics, and Humanity (SIAM News, 2017/05), excerpt: Then neural networks suddenly came back, and with a vengeance. This opinion piece is of interest, as it shows a shift from traditional "image processing", trying to model/understand the data, to a realm of correctness, without so much insight. This domain is evolving quite fast. This does not mean it evolves in some intentional or constant direction. Neither right nor wrong. But this morning, I heard the following saying (or is it a joke?): a bad algorithm with a huge set of data can do better than a smart algorithm with scarce data. Here was my very short try: deep learning may provide state-of-the-art results, but one does not always understand why, and part of our job as scientists remains to explain why things work, what the content of a piece of data is, etc. Deep learning does require (huge) well-tagged databases. Any time you do craftwork on single or singular images (i.e. without a huge database behind), especially in places unlikely to yield "free user-based tagged images" (in the complementary set of the set " funny cats playing games and faces "), you can stick to traditional image processing for a while, and for profit. A recent tweet summarizes that: (lots of) labeled data (with no missing vars) requirement is a deal breaker (& unnecessary) for many domains If they are being killed (which I doubt, at least in the short term), they are not dead yet. So any skill you acquire in signal processing, image analysis, computer vision will help you in the future. This is for instance discussed in the blog post: Have We Forgotten about Geometry in Computer Vision? by Alex Kendall: Deep learning has revolutionised computer vision. Today, there are not many problems where the best performing solution is not based on an end-to-end deep learning model. In particular, convolutional neural networks are popular as they tend to work fairly well out of the box. However, these models are largely big black-boxes. There are a lot of things we don't understand about them. A concrete example can be the following: a couple of very dark (e.g. surveillance) images from the same location, needing to evaluate if one of them contains a specific change that should be detected, is potentially a matter of traditional image processing, more than Deep Learning (as of today). On the other side, as successful as Deep Learning is on a large scale, it can lead to misclassification of small sets of data, which might be harmless "on average" for some applications. Two images that differ only slightly to the human eye could be classified differently via DL. Or random images could be assigned to a specific class. See for instance Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (Nguyen A, Yosinski J, Clune J. Proc. Computer Vision and Pattern Recognition 2015), or Does Deep Learning Have Deep Flaws? , on adversarial negatives: The network may misclassify an image after the researchers applied a certain imperceptible perturbation.
The perturbations are found by adjusting the pixel values to maximize the prediction error. With all due respect to "Deep Learning", think about "mass production responding to a registered, known, mass-validable or expected behaviour" versus "singular piece of craft". Neither is better (yet) on any single-index scale. Both may have to coexist for a while. However, deep learning pervades many novel areas, as described in the references below. Many non-linear, complex features might be revealed by deep learning that had not been seen before by traditional processing. Deep learning for image compression Real-Time Adaptive Image Compression , ICML 2017 Full Resolution Image Compression with Recurrent Neural Networks End-to-end optimized image compression , ICLR 2017 Deep learning for video compression Can deep learning be applied to video compression? Deep learning for denoising, restoration, artifact removal CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression Super-Resolution with Deep Convolutional Sufficient Statistics Luckily, some folks are trying to find a mathematical rationale behind deep learning, examples of which are the scattering networks or transforms proposed by Stéphane Mallat and co-authors; see the ENS site for scattering . Harmonic analysis and non-linear operators, Lipschitz functions, translation/rotation invariance: this is closer ground for the average signal processing person. See for instance Understanding Deep Convolutional Networks .
{ "source": [ "https://dsp.stackexchange.com/questions/26697", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/17997/" ] }
27,451
I am trying to understand the difference between convolution to cross-correlation. I have read an understood This answer. I also understand the picture below. But, in terms of signal processing, (a field which I know little about..), Given two signals (or maybe a signal and a filter?), When will we use convolution and when will we prefer to use cross correlation, I mean, When in real life analysing will we prefer convolution, and when, cross-correlation. It seems like these two terms has a lot of use, so, what is that use? *The cross-correlation here should read g*f instead of f*g
In signal processing, two problems are common: What is the output of this filter when its input is $x(t)$? The answer is given by $x(t)\ast h(t)$, where $h(t)$ is a signal called the "impulse response" of the filter, and $\ast$ is the convolution operation. Given a noisy signal $y(t)$, is the signal $x(t)$ somehow present in $y(t)$? In other words, is $y(t)$ of the form $x(t)+n(t)$, where $n(t)$ is noise? The answer can be found by the correlation of $y(t)$ and $x(t)$. If the correlation is large for a given time delay $\tau$, then we may be confident in saying that the answer is yes. Note that when the signals involved are symmetric, convolution and cross-correlation become the same operation; this case is also very common in some areas of DSP.
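A small NumPy sketch of the two problems side by side (with an arbitrary template, delay and noise level): convolution answers the filtering question, while cross-correlation answers the detection question by peaking at the delay where the known signal hides in the noisy recording.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)            # a known signal x(t)
h = np.ones(5) / 5                     # a filter impulse response h(t)

# Problem 1: filter output = convolution of input and impulse response
filter_out = np.convolve(x, h)

# Problem 2: is x(t) buried in a noisy recording y(t), and at what delay?
delay = 120
y = rng.standard_normal(300)
y[delay:delay + len(x)] += x           # y(t) = x(t - delay) + noise

corr = np.correlate(y, x, mode='valid')
print(corr.argmax())                   # peaks at (approximately) the true delay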
{ "source": [ "https://dsp.stackexchange.com/questions/27451", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/18562/" ] }
28,814
I am in need of computing atan2(x,y) on an FPGA with a continuous input/output stream of data. I managed to implement it using unrolled, pipelined CORDIC kernels, but to get the accuracy I need, I had to perform 32 iterations. This led to a pretty large amount of LUTs being devoted to this one task. I tried changing the flow to use partially unrolled CORDIC kernels, but then I needed a multiplied clock frequency to execute repeated loops while still maintaining a continuous input/output flow. With this, I could not meet timing. So now I am reaching out for alternative ways of computing atan2(x,y) . I thought about using block-RAM lookup tables with interpolation, but since there are 2 variables I would need 2 dimensions of lookup tables, and this is very resource intensive in terms of block-RAM usage. I then thought about using the fact that atan2(x,y) is related to atan(x/y) with quadrant adjustment. The problem with this is that x/y needs a true divide since y is not a constant, and divisions on FPGAs are very resource intensive. Are there any more novel ways to implement atan2(x,y) on an FPGA that would result in lower LUT usage, but still provide good accuracy ?
You can use logarithms to get rid of the division. For $(x, y)$ in the first quadrant: $$z = \log_2(y)-\log_2(x)\\ \text{atan2}(y, x) = \text{atan}(y/x) = \text{atan}(2^z)$$ Figure 1. Plot of $\text{atan}(2^z)$ You would need to approximate $\text{atan}(2^z)$ in range $-30 < z < 30$ to get your required accuracy of 1E-9. You can take advantage of the symmetry $\text{atan}(2^{-z}) = \frac{\pi}{2}-\text{atan}(2^z)$ or alternatively ensure that $(x, y)$ is in a known octant. To approximate $\log_2(a)$ : $$b = \text{floor}(\log_2(a))\\ c = \frac{a}{2^b}\\ \log_2(a) = b + \log_2(c)$$ $b$ can be calculated by finding the location of the most significant non-zero bit. $c$ can be calculated by a bit shift. You would need to approximate $\log_2(c)$ in range $1 \le c < 2$ . Figure 2. Plot of $\log_2(c)$ For your accuracy requirements, linear interpolation and uniform sampling, $2^{14} + 1 = 16385$ samples of $\log_2(c)$ and $30\times 2^{12} + 1 = 122881$ samples of $\text{atan}(2^z)$ for $0 < z < 30$ should suffice. The latter table is pretty large. With it, the error due to interpolation depends greatly on $z$ : Figure 3. $\text{atan}(2^z)$ approximation largest absolute error for different ranges of $z$ (horizontal axis) for different numbers of samples (32 to 8192) per unit interval of $z$ . The largest absolute error for $0 \le z < 1$ (omitted) is slightly less than for $\text{floor}(\log_2(z)) = 0$ . The $\text{atan}(2^z)$ table can be split into multiple subtables that correspond to $0 \le z < 1$ and different $\text{floor}(\log_2(z))$ with $z \ge 1$ , which is easy to calculate. The table lengths can be chosen as guided by Fig. 3. The within-subtable index can be calculated by a simple bit string manipulation. For your accuracy requirements the $\text{atan}(2^z)$ subtables will have a total of 29217 samples if you extend the range of $z$ to $0 \le z < 32$ for simplicity. 
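Below is a floating-point model of this table-lookup idea, meant only as a sketch rather than the FPGA fixed-point implementation: it forms $z=\log_2(y)-\log_2(x)$ and linearly interpolates a precomputed $\text{atan}(2^z)$ table. It handles only the octant $0<y\le x$ (the other octants follow from the usual symmetry and quadrant corrections), and the table length here is an arbitrary choice, not the one sized above for the 1E-9 accuracy target.

import numpy as np

# Uniform table of atan(2**z) for z in [-30, 0]  (covers 0 < y <= x)
z0, z1, n_samp = -30.0, 0.0, 4097
z_grid = np.linspace(z0, z1, n_samp)
table = np.arctan(2.0 ** z_grid)
step = z_grid[1] - z_grid[0]

def atan2_approx(y, x):
    """Assumes 2**-30 <= y/x <= 1 (first octant)."""
    z = np.log2(y) - np.log2(x)          # replaces the division y/x
    idx = (z - z0) / step
    i = min(max(int(np.floor(idx)), 0), n_samp - 2)
    frac = idx - i
    return (1.0 - frac) * table[i] + frac * table[i + 1]

print(atan2_approx(1.0, 3.0), np.arctan2(1.0, 3.0))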
For later reference, here is the clunky Python script that I used to calculate the approximation errors: from numpy import * from math import * N = 10 M = 20 x = array(range(N + 1))/double(N) + 1 y = empty(N + 1, double) for i in range(N + 1): y[i] = log(x[i], 2) maxErr = 0 for i in range(N): for j in range(M): a = y[i] + (y[i + 1] - y[i])*j/M if N*M < 1000: print str((i*M + j)/double(N*M) + 1) + ' ' + str(a) b = log((i*M + j)/double(N*M) + 1, 2) err = abs(a - b) if err > maxErr: maxErr = err print maxErr y2 = empty(N + 1, double) for i in range(1, N): y2[i] = -1.0/16.0*y[i-1] + 9.0/8.0*y[i] - 1.0/16.0*y[i+1] y2[0] = -1.0/16.0*log(-1.0/N + 1, 2) + 9.0/8.0*y[0] - 1.0/16.0*y[1] y2[N] = -1.0/16.0*y[N-1] + 9.0/8.0*y[N] - 1.0/16.0*log((N+1.0)/N + 1, 2) maxErr = 0 for i in range(N): for j in range(M): a = y2[i] + (y2[i + 1] - y2[i])*j/M b = log((i*M + j)/double(N*M) + 1, 2) if N*M < 1000: print a err = abs(a - b) if err > maxErr: maxErr = err print maxErr y2[0] = 15.0/16.0*y[0] + 1.0/8.0*y[1] - 1.0/16.0*y[2] y2[N] = -1.0/16.0*y[N - 2] + 1.0/8.0*y[N - 1] + 15.0/16.0*y[N] maxErr = 0 for i in range(N): for j in range(M): a = y2[i] + (y2[i + 1] - y2[i])*j/M b = log((i*M + j)/double(N*M) + 1, 2) if N*M < 1000: print str(a) + ' ' + str(b) err = abs(a - b) if err > maxErr: maxErr = err print maxErr P = 32 NN = 13 M = 8 for k in range(NN): N = 2**k x = array(range(N*P + 1))/double(N) y = empty((N*P + 1, NN), double) maxErr = zeros(P) for i in range(N*P + 1): y[i] = atan(2**x[i]) for i in range(N*P): for j in range(M): a = y[i] + (y[i + 1] - y[i])*j/M b = atan(2**((i*M + j)/double(N*M))) err = abs(a - b) if (i*M + j > 0 and err > maxErr[int(i/N)]): maxErr[int(i/N)] = err print N for i in range(P): print str(i) + " " + str(maxErr[i]) The local maximum error from approximating a function $f(x)$ by linearly interpolating $\hat{f}(x)$ from samples of $f(x)$ , taken by uniform sampling with sampling interval $\Delta x$ , can be approximated analytically by: $$\widehat{f}(x) - f(x) \approx (\Delta x)^2\lim_{\Delta x\rightarrow 0}\frac{\frac{f(x) + f(x + \Delta x)}{2} - f(x + \frac{\Delta x}{2})}{(\Delta x)^2} = \frac{(\Delta x)^2 f''(x)}{8},$$ where $f''(x)$ is the second derivative of $f(x)$ and $x$ is at a local maximum of the absolute error. With the above we get the approximations: $$\widehat{\text{atan}}(2^z) - \text{atan}(2^z) \approx \frac{(\Delta z)^2 2^z(1 - 4^z)\ln(2)^2}{8(4^z + 1)^2},\\ \widehat{\log_2}(a) - \log_2(a) \approx \frac{-(\Delta a)^2}{8 a^2\ln(2)}.$$ Because the functions are concave and the samples match the function, the error is always to one direction. The local maximum absolute error could be halved if the sign of the error was made to alternate back and forth once every sampling interval. With linear interpolation, close to optimal results can be achieved by prefiltering each table by: $$y[k] = \cases{\begin{array}{rrrrrl}&&b_0x[k]&\negthickspace\negthickspace\negthickspace+ b_1x[k+1]&\negthickspace\negthickspace\negthickspace+ b_2x[k+2]&\text{if } k = 0,\\ &c_1x[k-1]&\negthickspace\negthickspace\negthickspace+ c_0x[k]&\negthickspace\negthickspace\negthickspace+ c_1x[k+1]&&\text{if }0 < k < N,\\ b_2x[k-2]&\negthickspace\negthickspace\negthickspace+ b_1x[k-1]&\negthickspace\negthickspace\negthickspace+ b_0x[k]&&&\text{if } k = N, \end{array}}$$ where $x$ and $y$ are the original and the filtered table both spanning $0 \le k \le N$ and the weights are $c_0 = \frac{9}{8}, c_1 = -\frac{1}{16}, b_0 = \frac{15}{16}, b_1 = \frac{1}{8}, b_2 = -\frac{1}{16}$ . 
The end conditioning (first and last row in the above equation) reduces error at the ends of the table compared to using samples of the function outside of the table, because the first and the last sample need not be adjusted to reduce the error from interpolation between it and a sample just outside the table. Subtables with different sampling intervals should be prefiltered separately. The values of the weights $c_0, c_1$ were found by minimizing sequentially for increasing exponent $N$ the maximum absolute value of the approximate error: $$(\Delta x)^N\lim_{\Delta x\rightarrow 0}\frac{\left(c_1f(x - \Delta x) + c_0f(x) + c_1f(x + \Delta x)\right)(1-a) + \left(c_1f(x) + c_0f(x + \Delta x) + c_1f(x + 2 \Delta x)\right)a - f(x + a\Delta x)}{(\Delta x)^N} =\left\{\begin{array}{ll}(c_0 + 2c_1 - 1)f(x) &\text{if } N = 0, \bigg| c_1 = \frac{1 - c_0}{2}\\ 0&\text{if }N = 1,\\ \frac{1+a-a^2-c_0}{2}(\Delta x)^2 f''(x)&\text{if }N=2, \bigg|c_0 = \frac{9}{8}\end{array}\right.$$ for inter-sample interpolation positions $0 \le a < 1$ , with a concave or convex function $f(x)$ (for example $f(x) = e^x$ ). With those weights solved, the values of the end conditioning weights $b_0, b_1, b_2$ were found by minimizing similarly the maximum absolute value of: $$(\Delta x)^N\lim_{\Delta x\rightarrow 0}\frac{\left(b_0f(x) + b_1f(x + \Delta x) + b_2f(x + 2 \Delta x)\right)(1-a) + \left(c_1f(x) + c_0f(x + \Delta x) + c_1f(x + 2 \Delta x)\right)a - f(x + a\Delta x)}{(\Delta x)^N} =\left\{\begin{array}{ll}\left(b_0 + b_1 + b_2 - 1 + a(1 - b_0 - b_1 - b_2)\right)f(x) &\text{if } N = 0, \bigg| b_2 = 1 - b_0 - b_1\\ (a-1)(2b_0+b_1-2)\Delta x f'(x)&\text{if }N = 1,\bigg|b_1=2-2b_0\\ \left(-\frac{1}{2}a^2 + \left(\frac{23}{16} - b_0\right)a + b_0 - 1\right)(\Delta x)^2f''(x)&\text{if }N=2, \bigg|b_0 = \frac{15}{16}\end{array}\right.$$ for $0 \le a < 1$ . Use of the prefilter about halves the approximation error and is easier to do than full optimization of the tables. Figure 4. Approximation error of $\log_2(a)$ from 11 samples, with and without prefilter and with and without end conditioning. Without end conditioning the prefilter has access to values of the function just outside of the table. This article probably presents a very similar algorithm: R. Gutierrez, V. Torres, and J. Valls, “ FPGA-implementation of atan(Y/X) based on logarithmic transformation and LUT-based techniques, ” Journal of Systems Architecture , vol. 56, 2010. The abstract says their implementation beats previous CORDIC-based algorithms in speed and LUT-based algorithms in footprint size.
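If it helps, the prefilter described above can be written as a few lines of NumPy (my own sketch; the function name is invented here). It applies the interior weights $c_0 = \frac{9}{8}, c_1 = -\frac{1}{16}$ and the end-conditioning weights $b_0 = \frac{15}{16}, b_1 = \frac{1}{8}, b_2 = -\frac{1}{16}$ to a sampled table:

import numpy as np

def prefilter(table):
    x = np.asarray(table, dtype=float)
    y = np.empty_like(x)
    y[1:-1] = -1/16*x[:-2] + 9/8*x[1:-1] - 1/16*x[2:]   # interior samples
    y[0]    =  15/16*x[0] + 1/8*x[1] - 1/16*x[2]        # end conditioning, k = 0
    y[-1]   = -1/16*x[-3] + 1/8*x[-2] + 15/16*x[-1]     # end conditioning, k = N
    return y

# for example, an 11-sample log2 table over 1 <= c <= 2:
c = np.linspace(1.0, 2.0, 11)
table = prefilter(np.log2(c))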
{ "source": [ "https://dsp.stackexchange.com/questions/28814", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/19517/" ] }
30,060
There are nice technical definitions in textbooks and wikipedia, but I'm having a hard time understanding what differentiates stationary and non-stationary signals in practice? Which of the following discrete signals are stationary? why?: white noise - YES (according to every possible information found) colored noise - YES (according to Colored noises: Stationary or non-stationary? ) chirp (sinus with changing frequency) - ? sinus - ? sum of multiple sinuses with different periods and amplitudes - ? ECG, EEG, PPT and similar - ? Chaotic system output (mackey-glass, logistic map) - ? Record of outdoors temperature - ? Record of forex market currency pair development - ? Thank you.
There is no stationary signal. Stationary and non-stationary are characterisations of the process that generated the signal. A signal is an observation. A recording of something that has happened. A recording of a series of events as a result of some process. If the properties of the process that generates the events DO NOT change in time, then the process is stationary. We know what a signal $x(n)$ is: it is a collection of events (measurements) at different time instances ($n$). But how can we describe the process that generated it? One way of capturing the properties of a process is to obtain the probability distribution of the events it describes. Practically, this could look like a histogram but that's not entirely useful here because it only provides information on each event as if it was unrelated to its neighbour events. Another type of "histogram" is one where we could fix an event and ask what is the probability that the other events happen GIVEN another event has already taken place. So, if we were to capture this "monster histogram" that describes the probability of transition from any possible event to any other possible event, we would be able to describe any process. Furthermore, if we were to obtain this at two different time instances and the event-to-event probabilities did not seem to change, then that process would be called a stationary process. (Absolute knowledge of the characteristics of a process in nature is rarely assumed, of course). Having said this, let's look at the examples: White Noise: White noise is stationary because any signal value (event) is equally probable to happen given any other signal value (another event) at any two time instances no matter how far apart they are. Coloured Noise: What is coloured noise? It is essentially white noise with some additional constraints. The constraints mean that the event-to-event probabilities are now not equal, BUT this doesn't mean that they are allowed to change with time. So, pink noise is filtered white noise whose frequency spectrum decreases following a specific relationship. This means that pink noise has more low frequencies, which in turn means that any two neighbouring events would have higher probabilities of occurring, but that would not hold for any two events (as it was in the case of white noise). Fine, but if we were to obtain these event-to-event probabilities at two different time instances and they did not seem to change, then the process that generated the signals would be stationary. Chirp: Non-stationary, because the event-to-event probabilities change with time. Here is a relatively easy way to visualise this: Consider a sampled version of the lowest frequency sinusoid at some sampling frequency. This has some event-to-event probabilities. For example, you can't really go from -1 to 1; if you are at -1, then the next probable value is much more likely to be closer to -0.9, depending of course on the sampling frequency. But, actually, to generate the higher frequencies you can resample this low frequency sinusoid. All you have to do for the low frequency to change pitch is to "play it faster". AHA! THEREFORE, YES! You can actually move from -1 to 1 in one sample, provided that the sinusoid is resampled really really fast. THEREFORE!!! The event-to-event probabilities CHANGE WITH TIME! We have bypassed so many different values and gone from -1 to 1 in this extreme case... So, this is a non-stationary process.
Sinus(oid): Stationary... Self-explanatory, given #3. Sum of multiple sinuses with different periods and amplitudes: Self-explanatory, given #1, #2, #3 and #4. If the periods and amplitudes of the components do not change in time, then the constraints between the samples do not change in time, and therefore the process will end up stationary. ECG, EEG, PPT and similar: I am not really sure what PPT is, but ECG and EEG are prime examples of non-stationary signals. Why? The ECG represents the electrical activity of the heart. The heart has its own oscillator which is modulated by signals from the brain AT EVERY HEARTBEAT! Therefore, since the process changes with time (i.e. the way that the heart beats changes at each heartbeat), it is considered non-stationary. The same applies for the EEG. The EEG represents a sum of localised electrical activity of neurons in the brain. The brain cannot be considered stationary in time since a human being performs different activities. Conversely, if we were to fix the observation window we could claim some form of stationarity. For example, in neuroscience, you can say that 30 subjects were instructed to stay at rest with their eyes closed while EEG recordings were obtained for 30 seconds and then say that FOR THOSE SPECIFIC 30 SEC AND CONDITION (rest, eyes closed) THE BRAIN (as a process) IS ASSUMED TO BE STATIONARY. Chaotic system output: Similar to #6, chaotic systems could be considered stationary over brief periods of time, but that's not general. Temperature recordings: Similar to #6 and #7. Weather is a prime example of a chaotic process; it cannot be considered stationary for too long. Financial indicators: Similar to #6, #7, #8 and #9. In general they cannot be considered stationary. A useful concept to keep in mind when talking about practical situations is ergodicity. Also, there is something that eventually creeps up here and that is the scale of observation. Look too close and it's not stationary; look from very far away and everything is stationary. The scale of observation is context dependent. For more information and a large number of illustrating examples as far as the chaotic systems are concerned, I would recommend this book and specifically chapters 1, 6, 7, 10, 12 and 13 which are really central on stationarity and periodicity. Hope this helps.
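As a toy illustration of the "compare the process at two different times" idea (my own sketch; the window count and the lag-1 correlation are arbitrary choices, not a formal stationarity test), you can watch how the relationship between neighbouring samples drifts for a chirp but not for white noise or a fixed sinusoid:

import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 8000
t = np.arange(n) / fs

white = rng.standard_normal(n)            # stationary process
sine  = np.sin(2*np.pi*10*t)              # stationary process
chirp = np.sin(2*np.pi*(1 + 20*t)*t)      # frequency sweeps upwards: non-stationary

def lag1_corr_per_window(x, nwin=8):
    # crude "event-to-event" statistic: correlation between neighbouring samples,
    # estimated separately in each time window
    return [np.corrcoef(w[:-1], w[1:])[0, 1] for w in np.array_split(x, nwin)]

for name, x in [("white", white), ("sine", sine), ("chirp", chirp)]:
    print(name, np.round(lag1_corr_per_window(x), 2))
# white noise: ~0 in every window; sine: the same value in every window;
# chirp: the value drifts from window to window, i.e. the statistics change with time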
{ "source": [ "https://dsp.stackexchange.com/questions/30060", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/19697/" ] }
31,066
I am looking to design a set of FIR filters to implement a low pass filter. I am also trying to reduce the latency of the signal through the filter so I am wondering what the minimum number of taps I can use might be. I know that more taps can lead to a sharper cutoff of the frequency and better stop band rejection etc. However what I'm interested in is more fundamental - if I want to implement a low pass filter with cutoff at $\frac{f_s}{100}$ say does that mean that I need at least 100 taps in order to attenuate the lower frequency signals? Or can I get away with less taps and if so is there some theoretical lower limit?
Citing Bellanger's classic Digital Processing of Signals – Theory and Practice , the point is not where your cut-off frequency is, but how much attenuation you need, how much ripple in the signal you want to preserve you can tolerate and, most importantly, how narrow your transition from pass- to stopband (transition width) needs to be. I assume you want a linear phase filter (though you specify minimum latency, I don't think a minimum phase filter is a good idea, in general, unless you know damn well what you're going to be doing with your signal afterwards). In that case, the filter order (which is the number of taps) is $$N\approx \frac 23 \log_{10} \left[\frac1{10 \delta_1\delta_2}\right]\,\frac{f_s}{\Delta f}$$ with $$\begin{align} f_s &\text{ the sampling rate}\\ \Delta f& \text{ the transition width,}\\ & \text{ ie. the difference between end of pass band and start of stop band}\\ \delta_1 &\text{ the ripple in passband,}\\ &\text{ ie. "how much of the original amplitude can you afford to vary"}\\ \delta_2 &\text{ the suppresion in the stop band}. \end{align}$$ Let's plug in some numbers! You specified a cut-off frequency of $\frac{f_s}{100}$ , so I'll just go ahead and claim your transition width will not be more than half of that, so $\Delta f=\frac{f_s}{200}$ . Coming from SDR / RF technology, 60 dB of suppression is typically fully sufficient – hardware, without crazy costs, won't be better at keeping unwanted signals out of your input, so meh, let's not waste CPU on having a fantastic filter that's better than what your hardware can do. Hence, $\delta_2 = -60\text{ dB} = 10^{-3}$ . Let's say you can live with a amplitude variation of 0.1% in the passband (if you can live with more, also consider making the suppression requirement less strict). That's $\delta_1 = 10^{-4}$ . So, plugging this in: $$\begin{align} N_\text{Tommy's filter} &\approx \frac 23 \log_{10} \left[\frac1{10 \delta_1\delta_2}\right]\,\frac{f_s}{\Delta f}\\ &= \frac 23 \log_{10} \left[\frac1{10 \cdot 10^{-4}\cdot10^{-3}}\right]\,\frac{f_s}{\frac{f_s}{200}}\\ &= \frac 23 \log_{10} \left[\frac1{10 \cdot 10^{-7}}\right]\,200\\ &= \frac 23 \log_{10} \left[\frac1{10^{-6}}\right]\,200\\ &= \frac 23 \left(\log_{10} 10^6\right) \,200\\ &= \frac 23 \cdot 6 \cdot 200\\ &= 800\text{ .} \end{align}$$ So with your 200 taps, you're far off, iff you use an extremely narrow pass band in your filter like I assumed you would. Note that this doesn't have to be a problem – first of all, a 800-taps filter is scary, but frankly, only at first sight: As I tested in this answer over at StackOverflow : CPUs nowadays are fast , if you use someone's CPU-optimized FIR implementation. For example, I used GNU Radio's FFT-FIR implementation with exactly the filter specification outline above. I got a performance of 141 million samples per second – that might or might not be enough for you. So here's our question-specific test case (which took me seconds to produce): Decimation: If you are only going to keep a fraction of the input bandwidth, the output of your filter will be drastically oversampled. Introducing a decimation of $M$ means that your filter doesn't give you every output sample, but every $M$ th one only – which normally would lead to lots and lots of aliasing, but since you're eradicating all signal that could alias, you can savely do so. Clever filter implementations (polyphase decimators) can reduce the computational effort by M, this way. 
In your case, you could easily decimate by $M=50$ , and then, your computer would only have to calculate $\frac{1200}{50}= 24$ multiplications/accumulations per input sample – much, much easier. The filters in GNU Radio generally do have that capability. And this way, even out of the FFT FIR (which doesn't lend itself very well to a polyphase decimator implementation), I can squeeze another factor of 2 in performance. Can't do much more. That's pretty close to RAM bandwidth, in my experience, on my system. For Latency: Don't care about it. Really, don't, unless you need to. You're doing this with typical audio sampling rates? Remember, $96\,\frac{\text{kS}}{\text{s}}\overset{\text{ridiculously}}{\ll}141\,\frac{\text{MS}}{\text{s}}$ mentioned above. So the time spent computing the filter output will only be relevant for MS/s live signal streaming. For DSP with offline data: well, add a delay to whatever signal you have in parallel to your filter to compensate. (If your filter is linear phase, its delay will be half the filter length.) This might be relevant in a hardware implementation of the FIR filter. Hardware implementation: So maybe your PC's or embedded device's CPU and OS really don't allow you to fulfill your latency constraints, and so you're looking into FPGA-implemented FIRs. The first thing you'll notice is that for hardware, there's a different design paradigm – an "I suppress everything but $\frac1{100}$ of my input rate" filter needs a large bit width for the fixed-point numbers you'd handle in hardware (as opposed to the floating-point numbers on a CPU). So that's the first reason why you'd typically split that filter into multiple, cascaded, smaller, decimating FIR filters. Another reason is that you can, with every cascade "step", let your multipliers (typically, "DSP slices") run at a lower rate, and hence, multiplex them (the number of DSP slices is usually very limited), using one multiplier for multiple taps. Yet another reason is that especially half-band filters, i.e. lowpasses that suppress half the input band and deliver half the input rate, are very efficiently implementable in hardware (as they have half the taps being zero, something that is hard to exploit in a CPU/SIMD implementation).
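For a quick back-of-the-envelope check, the tap-count estimate used above fits in a few lines of Python (my own sketch; the function name is made up):

from math import log10

def bellanger_taps(fs, transition_width, passband_ripple, stopband_atten):
    # N ~ 2/3 * log10(1 / (10 * d1 * d2)) * fs / delta_f
    return round(2.0/3.0 * log10(1.0/(10.0*passband_ripple*stopband_atten))
                 * fs / transition_width)

# the numbers from above: delta_f = fs/200, d1 = 1e-4, d2 = 1e-3  ->  800 taps
print(bellanger_taps(fs=1.0, transition_width=1.0/200,
                     passband_ripple=1e-4, stopband_atten=1e-3))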
{ "source": [ "https://dsp.stackexchange.com/questions/31066", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/8164/" ] }
31,078
I've got a vector of $100 000$ numbers. All numbers are equal ($7000$ for example). If I perform an FFT over this vector, what will I get? From my understanding, I should receive a fixed DC line. Is this true? Is there any website (I've searched) which can simulate that? Because I did try to run an FFT with Python's np.fft.rfft and np.fft.rfftfreq and didn't get it... What am I missing here? (My question is derived from a broader perspective, so bear with me...)
{ "source": [ "https://dsp.stackexchange.com/questions/31078", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/21196/" ] }
31,084
The structure tensor is a matrix of the form: $S=\begin{pmatrix} W \ast I_x^2 & W \ast (I_xI_y)\\ W \ast (I_xI_y) & W \ast I_y^2 \end{pmatrix}$ where $W$ is a smoothing kernel (e.g. a Gaussian kernel) and $I_x$ is the gradient in the direction of $x$ , and so on. Therefore, the size of the structure tensor is $2N \times 2M$ (where $N$ is the image height and $M$ is its width). However, it is supposed to be a $2\times2$ matrix whose eigenvalue decomposition gives $\lambda_1$ and $\lambda_2$ , as is mentioned in many papers. So, how do I calculate the $S$ matrix?
{ "source": [ "https://dsp.stackexchange.com/questions/31084", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/15752/" ] }
31,726
If symmetry conditions are met, FIR filters have a linear phase. This is not true for IIR filters. However, for what applications is it bad to apply filters that do not have this property and what would be the negative effect?
A linear phase filter will preserve the waveshape of the signal or component of the input signal (to the extent that's possible, given that some frequencies will be changed in amplitude by the action of the filter). This could be important in several domains: coherent signal processing and demodulation , where the waveshape is important because a thresholding decision must be made on the waveshape (possibly in quadrature space, and with many thresholds, e.g. 128 QAM modulation), in order to decide whether a received signal represented a "1" or "0". Therefore, preserving or recovering the originally transmitted waveshape is of utmost importance, else wrong thresholding decisions will be made, which would represent a bit error in the communications system. radar signal processing , where the waveshape of a returned radar signal might contain important information about the target's properties audio processing , where some believe (although many dispute the importance) that "time aligning" the different components of a complex waveshape is important for reproducing or maintaining subtle qualities of the listening experience (like the "stereo image", and the like)
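A small SciPy sketch (my own, with arbitrary parameter choices) makes the waveshape point concrete: a two-tone signal passed through a linear-phase FIR comes out as a pure delay of itself, while a Butterworth IIR with the same cutoff delays the two tones by different amounts and so changes the shape of their sum:

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*15*t)     # two in-band tones

ntaps = 101
h = signal.firwin(ntaps, 50, fs=fs)                    # linear-phase FIR lowpass
y_fir = signal.lfilter(h, 1.0, x)
d = (ntaps - 1)//2                                     # constant group delay in samples
print(np.max(np.abs(y_fir[ntaps:] - x[ntaps - d:-d]))) # small: waveshape preserved

b, a = signal.butter(4, 50, fs=fs)                     # IIR lowpass, non-linear phase
y_iir = signal.lfilter(b, a, x)                        # both tones pass, but the
                                                       # summed waveshape is altered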
{ "source": [ "https://dsp.stackexchange.com/questions/31726", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/22649/" ] }
31,733
Please preface your answer with spoiler notation by typing the following two characters first ">!" Below is an implementation of a "Gold Code" Generator formed by adding (in $GF(2)$) the outputs from two linear feedback shift registers (LFSRs). This generates a pseudo-random sequence and is similar to the C/A Code Generator used in GPS. An LFSR operates by shifting the contents of its shift register right on each cycle and computing the new input on the left side through the $GF(2)$ addition of certain outputs based on its "generator polynomial". Each LFSR generator, given that it uses a generator polynomial that supports a maximum length sequence (meaning the polynomial is "primitive"), produces a pseudo-random sequence which does not repeat for $2^{10}-1 = 1023$ samples, or "chips". The combined output therefore also does not repeat for 1023 chips. The feedback taps in each LFSR are set by the specific generator polynomial used (as shown). Assume a black box with a similar Gold Code generator, using two 10th order LFSRs but with unknown and primitive generators, and we only have access to the output code in high SNR conditions. What is the minimum number of chips that we would need to observe in order to determine what the generator polynomials are (in other words, to determine what the feedback taps are)?
{ "source": [ "https://dsp.stackexchange.com/questions/31733", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/21048/" ] }
31,751
I am a complete novice to ICA, so excuse my question if it is a bad one, but I have the signal: $$\sin(2\pi x) + \sin(4\pi x) + \textrm{Additive White Gaussian Noise}$$ I want to try to separate the two signals and apply a Fourier Transform on each one individually, to see if I can get a better spectral estimate than if they were together. I was thinking of using scikit's FastICA algorithm. So my plan was to take the signal of length $3200$ samples, split it into $16$ windows each with length $200$ samples, apply a Hanning function on each window, and do an ICA on 4 subsequent windows at a time. Would this theoretically work, or does the ICA have to act on two different observations of a signal? Is the ICA dependent on each observation of the signal having a different mixing matrix, or can it be the same?
{ "source": [ "https://dsp.stackexchange.com/questions/31751", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/22492/" ] }
35,238
Firstly, I am new to DSP and have no real education in it, but I am developing an audio visualization program and I am representing an FFT array as vertical bars as in a typical frequency spectrum visualization. The problem I had was that the audio signal values changed too rapidly to produce a pleasing visual output if I just mapped the FFT values directly: So I apply a simple function to the values in order to "smooth out" the result:

// pseudo-code
delta = fftValue - smoothedFftValue;
smoothedFftValue += delta * 0.2; // 0.2 is arbitrary - the lower the number, the more "smoothing"

In other words, I am taking the current value and comparing it to the last, and then adding a fraction of that delta to the last value. The result looks like this: So my question is: Is this a well-established pattern or function for which a term already exists? If so, what is the term? I use "smoothing" above but I am aware that this means something very specific in DSP and may not be correct. Other than that it seemed maybe related to a volume envelope, but also not quite the same thing. Are there better approaches or further study on solutions to this which I should look at? Thanks for your time and apologies if this is a stupid question (reading other discussions here, I am aware that my knowledge is much lower than the average, it seems).
What you've implemented is a single-pole lowpass filter, sometimes called a leaky integrator . Your signal has the difference equation: $$ y[n] = 0.8 y[n-1] + 0.2 x[n] $$ where $x[n]$ is the input (the unsmoothed bin value) and $y[n]$ is the smoothed bin value. This is a common way of implementing a simple, low-complexity lowpass filter. I've written about them several times before in previous answers; see [1] [2] [3] .
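In code, that leaky integrator can be written either as the per-frame update from the question or as a one-pole filter call (a quick sketch of my own; alpha = 0.2 is the constant from the pseudo-code):

import numpy as np
from scipy.signal import lfilter

alpha = 0.2
x = np.abs(np.random.randn(100))          # stand-in for a stream of FFT bin magnitudes

# y[n] = (1 - alpha)*y[n-1] + alpha*x[n]  as a one-pole IIR filter
y = lfilter([alpha], [1.0, -(1.0 - alpha)], x)

# the same thing written as the per-frame update from the question
y_loop = np.zeros_like(x)
prev = 0.0
for n in range(len(x)):
    prev += alpha * (x[n] - prev)
    y_loop[n] = prev
# np.allclose(y, y_loop) -> True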
{ "source": [ "https://dsp.stackexchange.com/questions/35238", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/24551/" ] }
38,131
I read in some places that music is mostly sampled at 44.1 kHz whereas we can only hear up to 20 kHz. Why is it?
The sampling rate of a real signal needs to be greater than twice the signal bandwidth. Audio practically starts at 0 Hz, so the highest frequency present in audio recorded at 44.1 kHz is 22.05 kHz (22.05 kHz bandwidth). Perfect brickwall filters are mathematically impossible, so we can't just perfectly cut off frequencies above 20 kHz. The extra 2 kHz is for the roll-off of the filters; it's "wiggle room" in which the audio can alias due to imperfect filters, but we can't hear it. The specific value of 44.1 kHz was compatible with both PAL and NTSC video frame rates used at the time. Note that the rationale is published in many places: Wikipedia: Why 44.1 kHz?
{ "source": [ "https://dsp.stackexchange.com/questions/38131", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/26981/" ] }
38,135
We know that IIR digital filters have a nonlinear phase response in general. How can I linearize the phase response of an IIR filter without altering its magnitude response?
{ "source": [ "https://dsp.stackexchange.com/questions/38135", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/26985/" ] }
40,259
Being a science student without a signal processing background, I have a limited understanding of the concepts. I have a continuous periodic bearing-fault signal (with time amplitudes) which is sampled at $12\textrm{ kHz}$ and $48\textrm{ kHz}$ . I have utilized some machine learning techniques (a Convolutional Neural Network) to classify faulty signals against non-faulty signals. When I use the $12\textrm{ kHz}$ recordings I am able to achieve a classification accuracy of $97 \pm 1.2 \%$ . Similarly, I am able to achieve an accuracy of $95\%$ when I apply the same technique to the same signal sampled at $48\textrm{ kHz}$ , despite the recordings being made at the same RPM, load, and recording angle with the sensor. What could be the reason for this increased rate of misclassification? Are there any techniques to spot differences in the signal? Are higher resolution signals prone to higher noise? Details of the signal can be seen here , in chapter 3.
Sampling at a higher frequency will give you more effective number of bits (ENOB), up to the limits of the spurious free dynamic range of the Analog to Digital Converter (ADC) you are using (as well as other factors such as the analog input bandwidth of the ADC). However there are some important aspects to understand when doing this that I will detail further. This is due to the general nature of quantization noise, which under conditions of sampling a signal that is uncorrelated to the sampling clock is well approximated as a white (in frequency) uniform (in magnitude) noise distribution. Further, the Signal to Noise Ratio (SNR) of a full scale real sine-wave will be well approximated as: $$SNR = 6.02 \text{ dB/bit} + 1.76 \text{dB}$$ For example, a perfect 12 bit ADC samping a full scale sine wave will have an SNR of $6.02\times 12+1.76 = 74$ dB. By using a full scale sine wave, we establish a consistent reference line from which we can determine the total noise power due to quantization. Within reason, that noise power remains the same even as the sine wave amplitude is reduced, or when we use signals that are composites of multiple sine waves (meaning via the Fourier Series Expansion, any general signal). This classic formula is derived from the uniform distribution of the quantization noise, as for any uniform distribution the variance is $\frac{A^2}{12}$ , where A is the width of the distribution. This relationship and how we arrive at the formula above is detailed in the figure below, comparing the histogram and variance for a full-scale sine wave ( $\sigma_s^2$ ), to the histogram and variance for the quantization noise ( $\sigma_N^2$ ), where $\Delta$ is a quantization level and b is the number of bits. Therefore the sinewave has a peak to peak amplitude of $2^b\Delta$ . You will see that taking the square root of the equation shown below for the variance of the sine wave $\frac{(2^b\Delta)^2}{8}$ is the familiar $\frac{V_p}{\sqrt{2}}$ as the standard deviation of a sine wave at peak amplitude $V_p$ . Thus we have the variance of the signal divided by the variance of the noise as the SNR. Further as mentioned earlier, this noise level due to quantization is well approximated as a white noise process when the sampling rate is uncorrelated to the input (which occurs with incommensurate sampling with a sufficient number of bits and the input signal is fast enough that it is spanning multiple quantization levels from sample to sample, and incommensurate sampling means sampling with a clock that is not an integer multiple relationship in frequency with the input). As a white noise process in our digital sampled spectrum, the quantization noise power will be spread evenly from a frequency of 0 (DC) to half the sampling rate ( $f_s/2$ ) for a real signal, or $-f_s/2$ to $+f_s/2$ for a complex signal. In a perfect ADC, the total variance due to quantization remains the same independent of the sampling rate (it is proportional to the magnitude of the quantization level, which is independent of sampling rate). To see this clearly, consider the standard deviation of a sine wave which we reminded ourselves earlier is $\frac{V_p}{\sqrt{2}}$ ; no matter how fast we sample it as long as we sample it sufficiently to meet Nyquist's criteria, the same standard deviation will result. Notice that it has nothing to do with the sampling rate itself. 
Similarly, the standard deviation and variance of the quantization noise are independent of frequency, but as long as each sample of quantization noise is independent and uncorrelated from each previous sample, the noise is a white noise process, meaning that it is spread evenly across our digital frequency range. If we raise the sampling rate, the noise density goes down. If we subsequently filter, since our bandwidth of interest is lower, the total noise will go down. Specifically, if you filter away half the spectrum, the noise will go down by 2 (3 dB). Keep only 1/4 of the spectrum and the noise goes down by 6 dB, which is equivalent to gaining 1 more bit of precision! Thus the formula for SNR that accounts for oversampling is given as: $$SNR = 6.02 \text{ dB/bit}\cdot b + 1.76 \text{ dB} + 10\log_{10}\left(\frac{f_s}{2\,BW}\right)$$ where $b$ is the number of bits and $BW$ is the signal bandwidth of interest. Actual ADCs in practice will have limitations including non-linearities, analog input bandwidth, aperture uncertainty, etc., that will limit how much we can oversample, and how many effective bits can be achieved. The analog input bandwidth will limit the maximum input frequency we can effectively sample. The non-linearities will lead to "spurs", which are correlated frequency tones that will not be spread out and therefore will not benefit from the same noise processing gain we saw earlier with the white quantization noise model. These spurs are quantified on ADC datasheets as the spurious-free dynamic range (SFDR). In practice I refer to the SFDR and usually take advantage of oversampling until the predicted quantization noise is at the level of the SFDR, at which point, if the strongest spur happens to be in band, there will be no further increase in SNR. To go further I would need to refer to the specific design in more detail. All noise contributions are captured nicely in the effective number of bits (ENOB) specification also given on ADC data sheets. Basically, the actual total ADC noise expected is quantified by reversing the SNR equation that I first gave to come up with the equivalent number of bits a perfect ADC would provide. It will always be less than the actual number of bits due to these degradation sources. Importantly, it will also go down as the sampling rate goes up, so there will be a diminishing point of return from oversampling. For example, consider an actual ADC which has a specified ENOB of 11.3 bits and an SFDR of 83 dB at a 100 MSPS sampling rate. 11.3 ENOB is an SNR of 69.8 dB (70 dB) for a full scale sine wave. The actual signal sampled will likely be at a lower input level so as not to clip, but by knowing the absolute power level of a full scale sine wave, we now know the absolute power level of the total ADC noise. If for example the full scale sine wave that results in the maximum SFDR and ENOB is +9 dBm (also note that this level with best performance is typically 1-3 dB lower than the actual full scale where a sine wave would start to clip!), then the total ADC noise power will be +9 dBm - 70 dB = -61 dBm. Since the SFDR is 83 dB, we can easily expect to gain up to that limit by oversampling (but not more if the spur is in our final band of interest). In order to achieve this 22 dB gain, the oversampling ratio N would need to be at least $N= 10^{\frac{83-61}{10}} = 158.5$ . Therefore, if our actual real signal bandwidth of interest was 50 MHz/158.5 = 315.5 kHz, we could sample at 100 MHz and gain 22 dB, or 3.7 additional bits, from the oversampling, for a total ENOB of 11.3 + 3.7 = 15 bits.
As a final note, know that Sigma Delta ADC architectures use feedback and noise shaping to achieve a much better increase in the number of bits from oversampling than what I described here for traditional ADCs. We saw an increase of 3 dB/octave (every time we doubled the frequency we gained 3 dB in SNR). A simple first-order Sigma Delta ADC has a gain of 9 dB/octave, while a 3rd-order Sigma Delta has a gain of 21 dB/octave! (Fifth-order Sigma Deltas are not uncommon!). Also see related responses at How do you simultaneously undersample and oversample? Oversampling while maintaining noise PSD How to choose FFT depth for ADC performance analysis (SINAD, ENOB) How increasing the Signal to Quantization noise increases the resolution of ADC
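The relationships used above fit in a few lines of Python (my own helper names), reproducing the worked example: 11.3 ENOB is about 70 dB SNR, and an oversampling ratio of roughly 158.5 buys the 22 dB (about 3.7 bits) up to the 83 dB SFDR limit:

from math import log10

def snr_full_scale_db(bits):
    return 6.02*bits + 1.76                 # full-scale sine wave

def oversampling_gain_db(osr):
    return 10*log10(osr)                    # white quantization noise, 3 dB per octave

osr = 10**((83 - 61)/10)                    # ~158.5
print(round(snr_full_scale_db(11.3), 1))                   # ~69.8 dB
print(round(osr, 1), round(oversampling_gain_db(osr), 1))  # 158.5, 22.0 dB
print(round(11.3 + oversampling_gain_db(osr)/6.02, 1))     # ~15 effective bits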
{ "source": [ "https://dsp.stackexchange.com/questions/40259", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/21205/" ] }
46,401
Most modern-day Cathode Ray Tube (CRT) televisions manufactured after the 1960s (after the introduction of the NTSC and PAL standards) supported the circuit-based decoding of colored signals. It is well known that the new color standards were created to permit the new TV sets to be backwards compatible with old black and white broadcasts of the day (among also being religiously backwards compatible with numerous other legacy features). The new color standards added the color information on a higher carrier frequency (but within the same line duration as the luminance). The color information is synchronized after the beginning of each horizontal line and is known as the colorburst . It would seem that when you feed noise into a television, the TV should create not only black and white noise but also color noise, as there would be color information at each new horizontal line where each frame should be. But this is not the case, as all color TVs still make black and white noise! Why is this the case? Here is an example signal of a single horizontal scan. And here is the resulting picture if all horizontal scans are the same (you get bars!).
The color burst is also an indicator that there is a color signal. This is for compatibility with black and white signals. No color burst means a B&W signal, so only decode the luminance signal (no chroma). No signal, no color burst, so the decoder falls back to B&W mode. The same idea goes for FM stereo/mono. If there is no 19 kHz subcarrier present, then the FM demodulator falls back to mono.
{ "source": [ "https://dsp.stackexchange.com/questions/46401", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/29160/" ] }
59,921
I have read that the Fourier transform cannot distinguish components with the same frequency but different phase. For example, in Mathoverflow , or xrayphysics , where I got the title of my question from: "The Fourier transform cannot measure two phases at the same frequency." Why is this true mathematically?
It's because the simultaneous presence of two sinusoidal signals with the same frequency and different phases is actually equivalent to a single sinusoid at the same frequency, but with a new phase and amplitude, as follows: Let the two sinusoidal components be summed like this: $$ x(t) = a \cos(\omega_0 t + \phi) + b \cos(\omega_0 t + \theta) $$ Then, by trigonometric manipulations it can be shown that: $$ x(t) = A \cos(\omega_0 t + \Phi) $$ where $$A = \sqrt{ a^2 + b^2 + 2 a b \cos(\theta-\phi) } $$ and $$ \Phi = \tan^{-1}\left(\frac{ a \sin(\phi) + b\sin(\theta) } { a \cos(\phi) + b\cos(\theta) } \right) $$ hence you actually have a single sinusoid (with a new phase and amplitude), and therefore nothing to distinguish indeed...
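A quick numerical check of these identities (my own snippet; I use arctan2 rather than a plain arctan so the phase lands in the right quadrant):

import numpy as np

a, phi   = 1.0, 0.3
b, theta = 0.7, 2.0
w0 = 2*np.pi*5
t = np.linspace(0, 1, 1000)

x = a*np.cos(w0*t + phi) + b*np.cos(w0*t + theta)

A   = np.sqrt(a**2 + b**2 + 2*a*b*np.cos(theta - phi))
Phi = np.arctan2(a*np.sin(phi) + b*np.sin(theta),
                 a*np.cos(phi) + b*np.cos(theta))

print(np.allclose(x, A*np.cos(w0*t + Phi)))   # True: the sum really is one sinusoid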
{ "source": [ "https://dsp.stackexchange.com/questions/59921", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/27561/" ] }
70,700
I had heard that tape is still the best medium for storing large amounts of data. So I figured I can store a relatively large amount of data on a cassette tape. I was thinking of a little project to read/write digital data on a cassette tape from my computer sound card just for the retro feeling. (And perhaps read/write that tape with an Arduino too). But after reading up about it for a bit it turns out that they can store very small amounts of data. With baud rates varying between 300 to 2400 something between ~200KB to ~1.5MB can be stored on a 90 minute (2x45min) standard cassette tape. Now I have a lot of problems with understanding why that is. 1- These guys can store 90 minutes of audio. Even if we assume the analog audio quality on them was equivalent of 32Kbps that's about 21MB of data. I have a hard time believing what I listened to was 300bps quality audio. 2- I read about the Kansas City standard and I can't understand why the maximum frequency they're using is 4800Hz yielding a 2400 baud. Tape (according to my internet search) can go up to 15KHz. Why not use 10KHz frequency and achieve higher bauds? 3- Why do all FSK modulations assign a frequency spacing equal to baud rate? In the Kansas example they are using 4800Hz and 2400Hz signals for '1' and '0' bits. In MFSK-16 spacing is equal to baud rate as well. Why don't they use a MFSK system with a 256-element alphabet? With 20Hz space between each frequency the required bandwidth would be ~5KHZ. We have 10KHz in cassette tape so that should be plenty. Now even if all our symbols were the slowest one (5KHz) we would have 5*8 = 40000 baud. That's 27MB of data. Not too far from the 21MB estimation above. 4- If tape is so bad then how do they store Terabaytes on it?
I had heard that tape is still the best medium for storing large amounts of data. well, "best" is always a reduction to a single set of optimization parameters (e.g. cost per bit, durability, ...) and isn't ever "universally true". I can see, for example, that "large" is already a relative term, and for a small office, the optimum solution for backing up "large" amounts of data is a simple hard drive, or a hard drive array. For a company, backup tapes might be better, depending on how often they need their data back. (Tapes are inherently pretty slow and can't be accessed at "random" points) So I figured I can store a relatively large amount of data on a cassette tape. Uh, you might be thinking of a Music Casette, right? Although that's magnetic tape, too, it's definitely not the same tape your first sentence referred to: It's meant to store an analog audio signal with low audible distortion for playback in a least-cost cassette player, not for digital data with low probability of bit error in a computer system. Also, Music Cassettes are a technology from 1963 (small updates afterwards). Trying to use them for the amounts of data modern computers (even arduinos) deal with sounds like you're complaining your ox cart doesn't do 100 km/h on the autobahn. But after reading up about it for a bit it turns out that they can store very small amounts of data. With baud rates varying between 300 to 2400 something between ~200KB to ~1.5MB can be stored on a 90 minute (2x45min) standard cassette tape. Well, so that's a lot of data for when music-cassette-style things were last used with computers (the 1980s). Also, where do these data rates drop from? That sounds like you're basing your analysis on 1980's technology. These guys can store 90 minutes of audio. Even if we assume the analog audio quality on them was equivalent of 32Kbps that's about 21MB of data. 32 kb/s of what, exactly? If I play an Opus Voice , Opus Music or MPEG 4 AAC-HE file with a target bitrate of 32 kb/s next to the average audio cassette, I'm not sure the cassette will stand much of a chance, unless you want the "warm audio distortion" that casettes bring – but that's not anything you want to transport digital data. You must be very careful here, because audio cassette formulations are optimized for specific audio properties . That means your "perceptive" quality has little to do with the "digital data capacity". I have a hard time believing what I listened to was 300bps quality audio. again, you're comparing apples to oranges. Just because someone 40 to 50 years ago wrote a 300 bits per second modem that could reconstruct binary data from audio cassette-stored analog signals, doesn't mean 300 bps is the capacity of the music cassette channel. That's like saying "my Yorkshire Terrier can run 12 km/h on this racetrack, therefore I can't believe you can't have Formula 1 cars doing 350 km/h on it". I read about the Kansas City standard and I can't understand why the maximum frequency they're using is 4800Hz yielding a 2400 baud. Tape (according to my internet search) can go up to 15KHz. Why not use 10KHz frequency and achieve higher bauds? Complexity, and low quality of implementation and tapes. I mean, you're literally trying to argue that what was possible in 1975 is representative for what is possible today. That's 45 years in the past, they didn't come anywhere near theoretical limits. Why do all FSK modulations assign a frequency spacing equal to baud rate? They don't. Some do. 
Most modern FSK modulations don't (they're minimum shift keying standards, instead, where you choose the spacing to be half the symbol rate). In the Kansas example they are using 4800Hz and 2400Hz signals for '1' and '0' bits. In MFSK-16 spacing is equal to baud rate as well. Again, 1975 != all things possible today. Why don't they use a MFSK system with a 256-element alphabet? With 20Hz space between each frequency the required bandwidth would be ~5KHZ. We have 10KHz in cassette tape so that should be plenty. Now even if all our symbols were the slowest one (5KHz) we would have 5*8 = 40000 baud. That's 27MB of data. Not too far from the 21MB estimation above. Well, it's not that simple, because your system isn't free from noise and distortion, but as before: Low cost. They simply didn't. If tape is so bad then how do they store Terabaytes on it? You're comparing completely different types of tapes, and tape drives: This 100€ LTO-8 data backup tape vs this cassette tape type, of which child me remembers buying 5-packs at the supermarket for 9.99 DM, which, given retail overhead, probably means the individual cassette was in the < 1 DM range for business customers: and this 2500€ tape drive stuffed with bleeding edge technology and a metric farkton of error-correction code and other fancy digital technology vs this 9€ casette thing that is a 1990's least-cost design using components available since the 1970s, which is actually currently being cleared from Conrad's stock because it's so obsolete: At the end of the 1980s, digital audio became the "obvious next thing", and that was the time the DAT cassette was born, optimized for digital audio storage: These things, with pretty "old-schooley" technology (by 2020 standards) do 1.3 Gb/s when used as data cassettes (that technology was called DDS but soon parted from the audio recording standards). Anyway, that already totally breaks with the operating principles of the analog audio cassette as you're working with: in the audio cassette, the read head is fixed, and thus, the bandwidth of the signal is naturally limited by the product of spatial resolution of the magnetic material and the head and the tape speed. There's electronic limits to the first factor, and very mechanical ones to the second (can't shoot a delicate tape at supersonic speeds through a machine standing in your living room that's still affordable, can you). in DAT, the reading head is mounted on a rotating drum, mounted at a slant to the tape – that way, the speed of the head relative to the tape can be greatly increased, and thus, you get more data onto the same length of tape, at very moderate tape speeds (audio cassete: ~47 mm/s, DAT: ~9 mm/s) DAT is a digital format by design . This means zero focus was put into making the amplitude response "sound nice despite all imperfections"; instead, extensive error correction was applied (if one is to believe this source , concatenated Reed-Solomon codes of an overall rate of 0.71) and 8b-10b line coding (incorporating further overhead, that should put us at an effective rate of 0.5). Note how they do line coding on the medium : This is bits-to-tape, directly. Clearly, this leaves room for capacity increases, if one was to use the tape as the analog medium it actually is, and combined that ability with the density-enabling diagonal recording, to use the tape more like an analog noisy channel (and a slightly nonlinear at that) than a perfect 0/1 storage. Then, you'd not need the 8b-10b line coding. 
Also, while re-designing the storage, you'd drop the concatenated RS channel code (that's an interesting choice, sadly I couldn't find anything on why they chose to concatenate two RS codes) and directly go for much larger codes – since a tape isn't random access, an LDPC code (a code typically 10000s of bits long) would probably be the modern choice. You'd incorporate neighbor-interference cancellation and pilots to track system changes during playback. In essence, you'd build something that is closer to a modern hard drive on a different substrate than it would be to an audio cassette; and lo and behold, suddenly you have a very complex device that doesn't resemble your old-timey audio cassette player at all, but the modern backup tape drive I've linked to above.
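For a rough feel of how far those 1975 figures sit from what the channel itself could carry, here is a back-of-the-envelope sketch (purely illustrative: the bandwidth and SNR values below are assumptions, not cassette specifications), using the Shannon capacity C = B*log2(1 + SNR) of an idealized cassette-like channel:

```python
# Rough illustrative sketch, not a measurement: Shannon capacity of an
# idealized "cassette-like" channel. Bandwidth and SNR values are assumptions.
import math

tape_seconds = 90 * 60          # 2 x 45 min
bandwidth_hz = 10e3             # assumed usable audio bandwidth
for snr_db in (10, 20, 30, 40): # assumed signal-to-noise ratios
    snr = 10 ** (snr_db / 10)
    capacity_bps = bandwidth_hz * math.log2(1 + snr)
    total_mb = capacity_bps * tape_seconds / 8 / 1e6
    print(f"SNR {snr_db:2d} dB -> ~{capacity_bps/1e3:6.1f} kbit/s, ~{total_mb:5.1f} MB per tape")
```

Even with fairly conservative assumptions this lands orders of magnitude above 300 to 2400 bps, which is exactly the point: the low-cost 1970s implementations never came close to the theoretical limits of the medium.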
{ "source": [ "https://dsp.stackexchange.com/questions/70700", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/52529/" ] }
71,398
How does Synchrosqueezing Wavelet Transform work, intuitively? What does the "synchrosqueezed" part do, and how is it different from simply the (continuous) Wavelet Transform?
Synchrosqueezing is a powerful reassignment method. To grasp its mechanisms, we dissect the (continuous) Wavelet Transform, and how its pitfalls can be remedied. Physical and statistical interpretations are provided. If unfamiliar with CWT, I recommend this tutorial. SSWT is implemented in MATLAB as wsst , and in Python, ssqueezepy . (-- All answer code ) Begin with CWT of a pure tone: A straight line in the time-frequency (rather, time-scale) plane, for our fixed-frequency sinusoid over all time - fair. ... except is it a straight line? No, it's a band of lines, seemingly centered about some maximum, likely the "true scale". Zooming, makes this more pronounced. Let's plot rows within this zoomed band, one by one: and all superimposed, each for samples 0 to 127 (horizontal zoom): Notice anything interesting? They all have the same frequency . It isn't particular to this sinusoid, but is how CWT works in correlating wavelets with signals. It appears much of information "repeats"; there is redundancy . Can we take advantage of this? Well, if we just assume that all these adjacent bands actually stem from one and the same band, then we can merge them into one - and this, in a nutshell, is what synchrosqueezing does. Naturally it's more nuanced, but the underlying idea is that we sum components of the same instantaneous frequency to obtain a sharper, focused time-frequency representation. Here's that same CWT, synchrosqueezed: Now that is a straight line. How's it work, exactly? We have an idea, but how exactly is this mathematically formulated? Motivated by speaker identification and Empirical Mode Decomposition, SSWT builds upon the modulation model : $$ f(t) = \sum_{k=1}^{K} A_k(t) \cos(\phi_k (t)), \tag{1} $$ where $A_k(t)$ is the instantaneous amplitude and $$ \omega_k(t) = \frac{d}{dt}(\phi_k(t)) \tag{2} $$ the instantaneous frequency of component $k$ , where we seek to find $K$ such "components" that sum to the original signal. More on this below, "MM vs FT". At this stage, we only have the CWT, $W_f(a, b)$ (a=scale, b=timeshift); how do we extract $\omega$ from it? Revisit the zoomed pure tone plots; again, the $b$ -dependence preserves the original harmonic oscillations at the correct frequency, regardless of $a$ . This suggests we compute, for any $(a, b)$ , the instantaneous frequency via $$ \omega(a, b) = -j[W_f(a, b)]^{-1} \frac{\partial}{\partial b}W_f(a, b), \tag{3} $$ where we've taken the log-derivative , $f' / f$ . To see why, we can show that CWT of $f(t)=A_0 \cos (\omega_0 t)$ is: $$ W_f(a, b) = \frac{A_0}{4 \pi} \sqrt{a} \overline{\hat{\psi}(a \omega_0)} e^{j b \omega_0} \tag{4} $$ and thus partial-diffing w.r.t. $b$ , we extract $\omega_0$ , and the rest in (3) gets divided out. ("But what if $f$ is less nice?" - see caveats). Finally, equipped with $\omega (a, b)$ , we transfer the information from the $(a, b)$ -plane to a $(\omega, b)$ plane: $$ \boxed{ S_f (\omega_l, b) = \sum_{a_k\text{ such that } |\omega(a_k, b) - w_l| \leq \Delta \omega / 2} W_f (a_k, b) a_k^{-3/2}} \tag{5} $$ with $w_l$ spaced apart by $\Delta w$ , and $a^{-3/2}$ for normalization (see "Notes"). And that's about it. Essentially, take our CWT, and reassign it, intelligently. So where are the "components"? -- Extracted from high-valued (ridge) curves in the SSWT plane; in the pure tone case, it's one line, and $K=1$ . More examples ; we select a part of the plane and invert over it as many times as needed. Modulation Model vs Fourier Transform : What's $(1)$ all about, and why not just use FT? 
Consider a pendulum oscillating with fixed period and constant damping, and its FT: $$ s(t) = e^{-t} \cos (25t) u(t)\ \Leftrightarrow\ S(\omega) = \frac{1 + j\omega}{(1 + j\omega)^2 + 625} $$ What does the Fourier Transform tell us? Infinitely many frequencies , but at least peaking at the pendulum's actual frequency. Is this a sensible physical description? Hardly (only in certain indirect senses); the problem is, FT uses fixed-amplitude complex sinusoids as its building blocks (basis functions, or "bases"), whereas here we have a variable amplitude that cannot be easily represented by constant frequencies, so FT is forced to "compensate" with all these additional "frequencies". This isn't limited to amplitude modulation; the less sinusoidal or periodic the function, the less meaningful its FT spectrum (though not always). Simple example: 1Hz triangle wave, multiple FT frequencies. Frequency-modulation suffers likewise; more intuition here . These are the pitfalls the Modulation Model aims to address - by decoupling amplitude and frequency over time from the global signal, rather than assuming the same (and constant!) amplitude and frequency for all time. Meanwhile, SSWT - perfection: Is synchrosqueezing magic? We seem to gain a lot by ssqueezing - an apparently perfect frequency resolution, violating Heisenberg's uncertainty, and partial noise cancellation ("Notes"). How can this be? A prior . We assume $f(t)$ is well-captured by the $A_k(t) \cos(\phi_k (t))$ components, e.g. based on our knowledge of the underlying physical process. In fact we assume much more than that, shown a bit later, but the idea is, this works well on a subset of all possible signals: Indeed, there are many ways synchrosqueezing can go awry, and the more the input obeys SSWT's assumptions (which aren't too restrictive, and many signals naturally comply), the better the results. What are SSWT's assumptions? (when will it fail?) This is a topic of its own (which I may post on later), but briefly, the formulation's as follows. Firstly note that we must somehow restrict what $A(t)$ and $\psi(t)$ can be, else, for example, $A(t)$ can simply cancel out the cosine and become any other function. More precisely, the components are to be such that: More info in ref 2. How would it be implemented? There's now Python code , clean & commented. Regardless, worth noting: For very small CWT coefficients, phase is unstable (just like for DFT), which we work around by zeroing all such coefficients below a given threshold. For any frequency row/bin $w_l$ in the SSWT plane, we reassign from $W_f(a, b)$ based on what's closest to $w_l$ according to $\omega (a, b)$ , and for log-scaled CWT we use log-distance . Summary : SSWT is a time-frequency analysis tool. CWT extracts the time-frequency information, and synchrosqueezing intelligently reassigns it - providing a sparser, sharper, noise-robust, and partly denoised representation. The success of synchrosqueezing is based on and explained by its prior; the more the input obeys the assumptions, the better the results. Notes & caveats : What if $f$ isn't nice in the $\omega(a, b)$ example? Valid question ; in practice, the more the function satisfies the aforementioned assumptions, the less of a problem this is, as the authors demonstrate through various lemmas. In the SSWT of the damped pendulum, I cheated a little by extending the signal's time to $(-2, 6)$ ; this is only to prevent boundary effects, which is a CWT phenomenon that can be remedied; here's directly 0 to 6 . Partial noise cancellation? 
Indeed; see pg 536 of ref 1. What's the $a^{-3/2}$ in $(5)$ ? Synchrosqueezing effectively inverts $W_f$ onto the reassigned plane, using one-integral iCWT . "Fourier bad?" My earlier comparison is prone to criticism. To be clear, FT is the most solid and general-purpose basis that we have for a signals framework. But it's not an all-purpose best; depending on context, other constructions are more meaningful and more useful. Where to learn more? The referenced papers are a good source, as are MATLAB's wsst and cwt docs and ssqueezepy 's source code. I may also write further Q&A's, which you can be notified of by subbing this thread . References : A Nonlinear Squeezing of the CWT Based on Auditory Nerve Models - I. Daubechies, S. Maes. Excellent origin paper with succinct intuitions. Synchrosqueezed Wavelet Transforms: a tool for Empirical Mode Decomposition - I. Daubechies, J. Lu, H.T. Wu. Good followup paper with examples. The Synchrosqueezing algorithm for time-varying spectral analysis: robustness properties and new paleoclimate applications - G. Thakur, E. Brevdo, et al. Further exploration of robustness properties and implementation details (including threshold-setting).
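Finally, for readers who want to see the mechanics of eqs. (3) and (5) end to end, here is a minimal numpy sketch. It is only an illustration, not the papers' reference code and not ssqueezepy: the Morlet wavelet, the scale grid and the threshold are ad-hoc choices that a real implementation handles far more carefully.

```python
import numpy as np

def morlet_hat(w, mu=6.0):
    # Fourier transform of an (approximately analytic) Morlet wavelet
    return np.pi**-0.25 * np.exp(-0.5 * (w - mu)**2) * (w > 0)

def cwt_and_derivative(x, scales):
    N = len(x)
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(N)            # angular frequency, rad/sample
    W  = np.empty((len(scales), N), dtype=complex)
    dW = np.empty_like(W)                        # d/db of the CWT, done in the freq. domain
    for i, a in enumerate(scales):
        psi = np.conj(morlet_hat(a * w)) * np.sqrt(a)
        W[i]  = np.fft.ifft(X * psi)
        dW[i] = np.fft.ifft(X * psi * 1j * w)
    return W, dW

def synchrosqueeze(W, dW, scales, freqs, fs):
    # eq. (3): phase transform; eq. (5): reassign along the scale axis
    thresh = 1e-4 * np.abs(W).max()              # zero-out phase-unstable small coefficients
    omega = np.abs((dW / (W + 1e-30)).imag) * fs / (2 * np.pi)   # instantaneous frequency, Hz
    Tx = np.zeros((len(freqs), W.shape[1]), dtype=complex)
    for i, a in enumerate(scales):
        for b in range(W.shape[1]):
            if np.abs(W[i, b]) < thresh:
                continue
            k = np.argmin(np.abs(freqs - omega[i, b]))   # nearest frequency bin
            Tx[k, b] += W[i, b] * a**-1.5
    return Tx

fs = 400
t = np.arange(0, 4, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)                   # pure tone at 50 Hz
scales = 2 ** np.linspace(1, 7, 96)              # assumed log-spaced scale grid
freqs = np.linspace(1, fs / 2, 128)
W, dW = cwt_and_derivative(x, scales)
Tx = synchrosqueeze(W, dW, scales, freqs, fs)
ridge = np.abs(Tx).sum(axis=1).argmax()
print(f"energy concentrates near {freqs[ridge]:.1f} Hz")   # expect a bin near 50 Hz
```

Running it on the pure tone reproduces the opening picture in miniature: the CWT band spread over many scales collapses onto essentially one frequency row. For anything serious, use ssqueezepy or MATLAB's wsst, both cited above.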
{ "source": [ "https://dsp.stackexchange.com/questions/71398", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/50076/" ] }
71,410
I read the source code of librosa.stft and scipy.signal.stft, and noticed that the calculation results of the STFT (short-time Fourier transform) in these two libraries are quite different: In scipy.signal.stft, the stft result is scaled by 1.0/win.sum() , while in librosa.stft no scaling or normalization procedure is done. So why does scipy.signal.stft do the additional scaling? And is there any other difference in the calculation of the STFT in these two libraries?
{ "source": [ "https://dsp.stackexchange.com/questions/71410", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/54096/" ] }
71,980
We already have the DTFT, so why do we need the DFT? Please don't just write that the DTFT applies to a discrete, aperiodic signal and transforms into a periodic, continuous spectrum in the frequency domain, while the DFT maps a discrete, periodic signal to a discrete, periodic spectrum; these things I already know.
The answer is the same as for the question: "Why do we need computers to process data when we have paper and pencil?" The DTFT, as well as the continuous-time Fourier Transform, is a theoretical tool for infinitely long hypothetical signals. The DFT is there to observe the spectrum of actual data that is finite in size.
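As a tiny sketch of the point (the signal below is arbitrary, chosen only because its DTFT is known in closed form): on paper you evaluate the DTFT analytically; on a computer you only ever hold a finite record, so you compute its DFT.

```python
import numpy as np

# The DTFT of x[n] = a^n u[n] is 1 / (1 - a e^{-jw}) in closed form.
# On a computer we hold only N samples, so we use the DFT of that finite record.
a, N = 0.9, 64
n = np.arange(N)
x = a ** n                                   # finite chunk of the infinite sequence
X_dft = np.fft.fft(x)                        # DFT: N samples of the spectrum
w = 2 * np.pi * n / N
X_dtft = 1 / (1 - a * np.exp(-1j * w))       # closed-form DTFT at the same frequencies
print(np.max(np.abs(X_dft - X_dtft)))        # small truncation error, shrinking with N
```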
{ "source": [ "https://dsp.stackexchange.com/questions/71980", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/54617/" ] }
71,988
We have an FPGA system which takes input from an ADC and calculates an FFT. The system identifies the location of our frequencies of interest and sends the coefficients from those bins to my software. In addition to the coefficients I am sent the time series data. While the hardware guys were working on the hardware I developed additional signal processing code. To enable testing and development I wrote some code to synthesize the signals I expected to receive from the hardware, and that works fine. Not unexpectedly, now that I have live data nothing works. In the process of debugging the problem I am taking the real world time series data and using numpy.fft.rfft to look at the spectrum. When I plot the spectrum I see our frequencies of interest in the correct FFT bins. I normalize the PSD, and Python's FFT and the hardware FFT match well. The problem I see is that the phases of the coefficients do not match. The differences in phase don't look ordered (just looking at the phase difference in a plot). When I compute an FFT on the time series provided by the FPGA (this is the real world data) using numpy.fft.rfft I expect the coefficients at my frequencies of interest to have the same phases as the coefficients calculated by the FPGA FFT, which is operating on the same time series. Does anyone have an idea of what could cause FFTs on the same real world time series data to have different coefficient phases? Thanks Justin
{ "source": [ "https://dsp.stackexchange.com/questions/71988", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/54625/" ] }
74,343
The bandwidth of human hearing by empirical data is $20 \; Hz$ to $20 \; kHz$ . A cochlear implant stimulates the auditory or acoustic or Cochlear nerve directly so that hearing can be improved in the case where the stimulation mechanism upstream of the Cochlear nerve has degraded. Let us assume that the ear mechanism has not degraded (such as in a young and healthy adult). The Cochlear implant can likely improve hearing, even in this case, by increasing the bandwidth by amplifying the effect of the ear drum vibration (sensor actuation). However the neurons connecting the Cochlear nerve to the hearing region of the brain have an upper limit on the sampling rate on the order of $1 \; kHz$ . Does the Nyquist sampling theorem limit the superhuman hearing and sound localization capability made possible by a Cochlear implant?
Does the Nyquist frequency of the Cochlear nerve impose the fundamental limit on human hearing? No. A quick run-through of the human auditory system: The outer ear (pinnae, ear canal) spatially "encodes" the sound direction of incidence and funnels the sound pressure towards the ear drum, which converts sound into physical motions, i.e. mechanical energy The middle ear (ossicles) is a mechanical transformer (with some protective limiting built-in) that impedance matches the air-loaded ear drum to the liquid-loaded oval window of the Cochlea (inner ear). The vibration excites a bending wave on the basilar membrane. The membrane is highly resonant and transcodes frequency into location: for any given frequency the location of the resonance peak is in a different spot. High frequencies wiggle very close to the oval window, low frequencies towards the end of it. This motion is picked up by the Cochlea neurons, which transmit the intensity of the excitation at their location to the brain. About 20% of the neurons are efferent (come out of the brain) and are used to actively tune the resonance with a feedback loop (which causes tinnitus if misadjusted) So in essence the Basilar Membrane performs sort of a mechanical Fourier transform. The frequency selectivity of the Neurons is NOT determined by the firing pattern but simply by their location. A neuron at the beginning of the basilar membrane is sensitive to high frequencies and a neuron at the end detects low frequencies. But they are more or less the same type of Neurons. The Nyquist criterion doesn't come into play at all since no neuron is trying to pick up the original time domain waveform. They couldn't anyway: human neurons have a maximum firing rate of less than 1000 Hz and average firing rates are way below that. The firing rate of a cochlea neuron represents "Intensity at a certain frequency" where that frequency is determined by the location of that specific neuron. So you can think of it as a short term Fourier Transform. Instead of a single time domain signal you get a parallel stream of frequency domain signals where each individual signal has a much lower bandwidth. A cochlear implant basically does the short term Fourier transform internally and then connects the output for each frequency range to the "matching" neurons in the cochlea nerve. Theoretically you can create ">20 kHz" hearing with an implant that can actually receive and process higher frequencies and simply route them to existing neurons, i.e. you could feed 40 kHz activity to the 10 kHz Neuron. The human would have a sensation when exposed to 40 kHz but it's unclear what they could do with that: they would have to "relearn" how to hear. Aside from the highly questionable practical and ethical issues, it probably wouldn't be useful. In order to get to 40 kHz you'd have to give up some other frequencies, and chances are that evolution has chosen the current "normal" range for humans pretty carefully.
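As a loose illustration of that "parallel stream of frequency channels" idea, here is a sketch of a bandpass filterbank with slow per-channel envelopes. It is only a cartoon of the principle, not how any actual implant processor works; the channel edges, filter order and test signal are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)  # toy "speech"

# A handful of bandpass channels, roughly log-spaced like cochlear place coding
edges = [100, 300, 700, 1500, 3000, 6000]
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    envelope = np.abs(hilbert(band))     # slow envelope, loosely analogous to a firing rate
    print(f"{lo:5d}-{hi:5d} Hz channel: mean envelope {envelope.mean():.3f}")
```

Each channel's slowly varying envelope is all that needs to reach the corresponding neurons; no channel ever has to track the raw waveform, which is why the ~1 kHz firing-rate ceiling is not the limiting factor.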
{ "source": [ "https://dsp.stackexchange.com/questions/74343", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/54802/" ] }
79,138
I have the following homework question that confuses me: We have an audio emitter that can emit two signals: It either emits a sine wave at 23 kHz or it emits a sine wave at 25 kHz. The receiver has the following sampling frequencies available: 16 kHz, 32 kHz and 48 kHz. The question asks which is the best sampling frequency for the receiver to know when 23 kHz is being transmitted and when 25 kHz is. The Nyquist criterion states that the sampling frequency should be at minimum twice the signal frequency. In this case it should be 50 kHz. Should I select 48 kHz, or is there a trick?
HINT: When you sample below the Nyquist rate, aliasing happens. That means frequencies higher than half the sampling rate get folded back down to below half the sampling rate. Have a read about bandpass sampling . PS: Tell your teacher, that's a really nice question. :-)
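If you want to check your reasoning numerically, here is a small sketch (ideal sampling of a real tone assumed); run it and interpret the output yourself.

```python
# Where does a real tone at f appear after sampling at fs?
# It folds to the nearest alias inside [0, fs/2].
def apparent_freq(f, fs):
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

for fs in (16e3, 32e3, 48e3):
    seen = [apparent_freq(f, fs) / 1e3 for f in (23e3, 25e3)]
    print(f"fs = {fs/1e3:.0f} kHz -> tones appear at {seen} kHz")
```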
{ "source": [ "https://dsp.stackexchange.com/questions/79138", "https://dsp.stackexchange.com", "https://dsp.stackexchange.com/users/60082/" ] }
2
The title of the site is Earth Science , which implies questions should be limited to, well, Earth. But the term Geosciences may be considered a synonym, and the European Geosciences Union (EGU) does also cover planetary and space science. Should questions on topics such as extraplanetary atmospheres, vulcanism on other planets, or extrasolar planets be considered on-topic on Earth Sciences?
Yes. All the physical processes on other planets are relevant to understanding our Earth. In fact, people who study the early Earth often interact significantly with people who study these processes on other planets. Also, planetary scientists, in general, interact more with geoscientists than they do with astrophysicists. A good dividing line for planetary science questions is this: which community is the topic studied in more, the astronomy community or the geoscience community? Exoplanet observation should be categorized as astronomy. Exoplanetary atmospheres are more borderline, but many people in planetary atmospheres are more geoscience than astronomy. We also have to take into consideration the fact that the previous Astronomy SE died out, and the current Astronomy SE is having many of the same issues that the previous one had.
{ "source": [ "https://earthscience.meta.stackexchange.com/questions/2", "https://earthscience.meta.stackexchange.com", "https://earthscience.meta.stackexchange.com/users/6/" ] }
124
Spurred on by the discussion on my earlier question about whether or not 'identify this rock' questions should be on topic and the appearance of our first such question ; I propose that we make a guide so that we can refer people to it when appropriate. I'll start a community wiki answer to this question where I'll put in a few of the basic things that I know about. I'm not a geologist though, so I'll need you guys to fill in most of the details. We should try to rely on tests that the average person can do reasonably quickly in their own home.
As of 2019-07-30, Rock identification questions are off-topic on Earth Science Stack Exchange . Please see this meta post . Please find another forum for your question. When you do find another forum, please do refer to the underlying guide, as it is universally useful regardless of where you ask. How to ask an "Identify this rock" question This guide refers to rocks. If you think you have a fossil and not a rock, follow this guide. Example of a good question for a rock . 1) Describe where you found it Be as specific as you can (just country or state is not enough!). What part of the world? Was it on a beach? Did you find it lying on top of the ground or did you have to dig for it? Were there lots of them around or was this the only one? Note that if you got the rock as a gift or you bought it, you won't be able to provide a location, so your question will most likely be closed as "off-topic". If you can, post a picture of the place where you found it. Leave us a link to Google Maps or another online mapping service, with a pin on the exact place (single click on the place to pin, and copy the URL). Some places are covered with Google Street View and this may allow us to take a look at the place. 2) Post a well-lit, sharp photo with a scale Take a sharp photo in bright white lighting next to a scale or ruler of some sort. Try to use daylight (but not direct sunlight) or bright white fluorescent lights. No flash. Make sure that the rock is well lit but don't saturate the image. If possible, use a plain background, such as a sheet of white paper. Also remember to either get the units of your scale in the picture or post them in the question. If the rock has a visible crystal structure, make sure that it is clearly visible in the photo. If the piece is a rock and not valuable, break it and picture a fresh surface, as suggested here . Note : while a good photograph is important, it is not a substitute for a written description. Images of unknown things cannot be searched. For search purposes, questions should provide an image, and also describe the image in as much relevant detail as possible. 3) Describe its properties If you have broken your piece, describe the properties at the fresh surface. What color is it? What kind of lustre does it have? Is it made up of layers? Can you see grains? How easy is it to break pieces off? How homogeneous is it? Does it seem unusually light or heavy for its size? Does it leave any streak on paper? Name its main property (color, structure) in the title ( as suggested here ). 3.1) Test its hardness Test the mineral's hardness on the Mohs hardness scale . This is pretty easy to do by comparing it to some common household items. The list below gives the hardness of some common objects ; if these objects can scratch your rock, then the rock is softer, otherwise it is harder. Fingernail: 2.5 Penny or other US coin: 3 Knife blade: 5.5 Glass: 5.5 Steel file: 6.5 Quartz: 7 Diamond: 10 3.2) Measure its density Weigh it and measure its volume as shown here , so we will know its density. 4) Be prepared to answer follow up questions More than likely some more information will be needed to identify your rock. Users will post clarification questions in the comment section of your question. If a question only needs a one-line answer, then leave your reply as a comment. If it requires a longer answer, then edit your post to include the additional information. 
5) Tag your question with identification-request This tag will help identify your question as an identification question and make it easier to find for people who can answer your question. 6) Name your question in a relevant way. "Help me identify this rock" or "Rock identification needed" are very unspecific, and won't help you get good answers. A title like "Rock ID: soft, white, from Dover, UK" will help your question to stand out, and will also make it more interesting to experts in the region, who might have a better idea about the geology of their area. Why was my question closed as "off-topic"? If your question doesn't address the points above, it makes the question vague, with many possible answers and low confidence. If you haven't responded to comments and have not provided enough information, your question will likely be closed as off-topic. If this happens, please edit your question to include the things above, which will automatically nominate it to be reopened.
{ "source": [ "https://earthscience.meta.stackexchange.com/questions/124", "https://earthscience.meta.stackexchange.com", "https://earthscience.meta.stackexchange.com/users/53/" ] }
19
We use different weather models all the time, such as the ECMWF and the GFS . These models are simply amazing to me. How do these models work? I know they have to take in various data points - what are these, and how does the model use it? Also, how do they come up with a forecast or a map of what will happen in the future?
All numerical atmospheric models are built around calculations derived from primitive equations that describe atmospheric flow. Vilhelm Bjerknes discovered the relationships and thereby became the father of numerical weather prediction. Conceptually, the equations can be thought of as describing how a parcel of air would move in relationship to its surroundings. For instance, we learn at a young age that hot air rises. The hydrostatic vertical momentum equation explains why and quantifies under what conditions hot air would stop rising. (As the air rises it expands and cools until it reaches hydrostatic equilibrium.) The other equations consider other types of motion and heat transfer. Unfortunately, the equations are nonlinear, which means that you can't simply plug in a few numbers and get useful results. Instead, weather models are simulations which divide the atmosphere into three-dimensional grids and calculate how matter and energy will flow from one cube of space into another during discrete time increments. Actual atmospheric flow is continuous, not discrete, so by necessity the models are approximations. Different models make different approximations appropriate to their specific purpose. Numerical models have been improving over time for several reasons: More and better input data, Tighter grids, and Better approximations. Increasing computational power has allowed models to use smaller grid boxes. However, the number of computations increases exponentially with the number of boxes and the process suffers diminishing returns. On the input end of things, more and better sensors improve the accuracy of the initial conditions of the model. Synoptic scale and mesoscale models take input from General Circulation Models , which help set reasonable initial conditions. On the output end, Model Output Statistics do a remarkable job of estimating local weather by comparing the current model state with historical data of times when the model showed similar results. Finally, ensemble models take the output of several models as input and produce a range of possible outcomes.
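To make "grid boxes plus discrete time increments" concrete, here is a toy one-dimensional sketch (an illustration only, vastly simpler than any real primitive-equation model): a blob of some quantity carried along by a constant wind, updated box by box with a first-order upwind scheme.

```python
import numpy as np

nx, dx, dt, u = 100, 1.0, 0.5, 1.0                 # grid boxes, spacing, time step, "wind"
q = np.exp(-0.05 * (np.arange(nx) - 20.0) ** 2)    # initial blob of, say, moisture

for _ in range(100):                               # march forward in discrete time increments
    # upwind finite difference: each box is updated from its upwind neighbour
    q[1:] = q[1:] - u * dt / dx * (q[1:] - q[:-1])

print("blob centre has moved to box", int(np.argmax(q)))   # roughly 20 + u*t/dx = 70
```

Real models do this in three dimensions for wind, temperature, pressure and moisture simultaneously, which is where the enormous computational cost comes from.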
{ "source": [ "https://earthscience.stackexchange.com/questions/19", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/20/" ] }
70
What do weather forecasters mean when they say "50% chance of rain"? Even more confusing: weather report often says something like "30% chance of rain. >10mm", then the next day "70% chance of rain <1mm". How should I deal with this information, for example, if I'm planning a picnic?
In the US, meteorologists forecast the probability of ANY amount of precipitation falling. The minimum amount that we deem acceptable to meet this criterion is 0.01". So, we are forecasting the probability that at least one hundredth of an inch of precipitation will fall at a location. We look at observational data from ground stations, satellites, and computer models and combine it with our knowledge of meteorology/climate and our past experiences with similar weather systems to create a probability of precipitation. A 10% chance of rain means that 10 times out of 100 with this weather pattern, we can expect at least 0.01" at a given location. Likewise, a 90% chance of rain would mean that 90 times out of 100 with this weather pattern, we can expect at least 0.01" at a given location. Similarly, the Storm Prediction Center issues severe threat probabilities. You can see things like 5% chance of tornado or 15% chance of severe hail. This simply means 5 times out of 100 in this scenario we can expect a tornado. Or 15 times out of 100 you can expect severe hail with thunderstorms. By definition, there is no difference in the amount of rain forecast by a 10% chance or a 90% chance. Instead, that information is defined elsewhere, typically by a Quantitative Precipitation Forecast (QPF). Meteorologists are still struggling with the best ways to inform the public about the differences between high-chance, low-QPF events versus low-chance, high-QPF events, and everything else in between. Meteorologists are still human and have their own wet or dry biases that can hedge the chance of precipitation you see. Recently, forecasts have been relying more on bias-correction techniques and statistical models to remove the human bias from precipitation forecasts.
{ "source": [ "https://earthscience.stackexchange.com/questions/70", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/56/" ] }
72
Approximately what proportion of the global warming seen over the the last century is attributed to anthropogenic sources?
Firstly it is worth demonstrating that 97% of climate scientists agree that climate-warming trends over the past century are very likely due to human activities. W. R. L. Anderegg, "Expert Credibility in Climate Change," Proceedings of the National Academy of Sciences Vol. 107 No. 27, 12107-12109 (21 June 2010); DOI: 10.1073/pnas.1003187107. P. T. Doran & M. K. Zimmerman, "Examining the Scientific Consensus on Climate Change," Eos Transactions American Geophysical Union Vol. 90 Issue 3 (2009), 22; DOI: 10.1029/2009EO030002. N. Oreskes, "Beyond the Ivory Tower: The Scientific Consensus on Climate Change," Science Vol. 306 no. 5702, p. 1686 (3 December 2004); DOI: 10.1126/science.1103618. But what percentage of the increase is attributed to humans? Probably almost all of it. The scientist Gavin Schmidt from NASA was asked this question on realclimate.org. His response was as follows: Over the last 40 or so years, natural drivers would have caused cooling, and so the warming there has been (and some) is caused by a combination of human drivers and some degree of internal variability. I would judge the maximum amplitude of the internal variability to be roughly 0.1 deg C over that time period, and so given the warming of ~0.5 deg C, I'd say somewhere between 80 to 120% of the warming. Slightly larger range if you want a large range for the internal stuff. [emphasis added] The rapid increase in the human-driven component of the forcing is increasingly dwarfing the small, slow natural forcings, rendering them increasingly irrelevant.
{ "source": [ "https://earthscience.stackexchange.com/questions/72", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/56/" ] }
73
Are earthquakes more common in mining regions than they would otherwise be? e.g. is the frequency of earthquakes in those regions different when mining is occurring than when it is not? I am interested in earthquakes generally, but also particularly interested in earthquakes strong enough to have damaging impacts (to life and infrastructure).
Yep, mining can trigger earthquakes. According to a Scientific American article : We've been monitoring [The Geysers] since 1975. All the earthquakes we see there are [human] induced. When they move production into a new area, earthquakes start there, and when they stop production, the earthquakes stop. This is talking about geothermal power. They create small little fractures, which cause tiny earthquakes. They then harness this power for electricity. Earthquakes can also be caused by coal mining and other mining, according to this study : Klose has identified more than 200 human-caused temblors, mostly in the past 60 years. "They were rare before World War II," he said. Most were caused by mining, he said, but nearly a third came from reservoir construction. Oil and gas production can also trigger earthquakes, he added. Three of the biggest human-caused earthquakes of all time, he pointed out, occurred in Uzbekistan's Gazli natural gas field between 1976 and 1984 (map of Uzbekistan). Another study also concluded this.
{ "source": [ "https://earthscience.stackexchange.com/questions/73", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/56/" ] }
88
According to textbook knowledge, the mass of the earth is about $6 × 10^{24}\,\mathrm{kg}$ . How is this number determined when one cannot just weigh the earth using regular scales?
According to Newton's Law of Gravity based on attractive force (gravitational force) that two masses exert on each other: $$F=\frac{GmM}{r^2}$$ Where: $F$ is the gravitational force $G = 6.67 \times 10^{-11}\ \mathrm{m}^3\ \mathrm{kg}^{-1}\ \mathrm{s}^{-2}$ is a constant of proportionality $M$ and $m$ are the two masses exerting the forces $r$ is the distance between the two centers of mass. From Newton's second law of motion : $$F=ma$$ Where: $F$ is the force applied to an object $m$ is the mass of the object $a$ is its acceleration due to the force. Equating both the equations : $$F = \frac{GmM}{r^2} = ma$$ $$\frac{GM}{r^2}= a$$ (The $m$ 's canceled out.) Now solve for $M$ , the mass of the Earth. $$M = \frac{ar^2}{G}$$ Where $a = 9.8\ \mathrm{m}\ \mathrm{s}^{-2}$ , $r = 6.4 \times 10^6\ \mathrm{m}$ , and $G = 6.67 \times 10^{-11}\ \mathrm{m}^3\ \mathrm{kg}^{-1}\ \mathrm{s}^{-2}$ . $$M = 9.8 \times (6.4 \times 10^6)^2/(6.67 \times 10^{-11})\ \mathrm{kg}$$ Hence, $M = 6.0 \times 10^{24}\ \mathrm{kg}$
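A quick numerical check of that last step (same values as above):

```python
G = 6.67e-11      # m^3 kg^-1 s^-2
g = 9.8           # m s^-2, acceleration at the surface
r = 6.4e6         # m, Earth's radius
M = g * r**2 / G
print(f"M = {M:.2e} kg")   # ~6.0e24 kg
```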
{ "source": [ "https://earthscience.stackexchange.com/questions/88", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/51/" ] }
96
Earth is the only planet in our solar system that has copious amounts of water on it. Where did this water come from and why is there so much water on Earth compared to every other planet in the solar system?
The water was already present when the Earth assembled itself out of the accretionary disk. Continued outgassing of volcanoes transferred the water into the atmosphere, which became saturated with water, and rain transferred the water onto the surface. Compared to other planets and smaller solar system objects, Earth has a big advantage. It is large enough to prevent water molecules from leaving its gravitational field, and it has a magnetic field which prevents atmospheric erosion (Wikipedia). This is due to the Earth's outer core being liquid (moving charged liquid = magnetic field). Mars probably had oceans until its outer core solidified so much that the convection stopped. After the magnetic field disappeared, a few million years of solar radiation removed most of the atmosphere and the oceans.
{ "source": [ "https://earthscience.stackexchange.com/questions/96", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/51/" ] }
99
A compass can tell me the directions of the Earth's North and South poles? What is it about the Earth that produces this "polarity" such that a compass can pick it up? The first thing that jumped into my head was the Earth's rotation, but if that is the explanation, why have I heard from people that the Earth's polarity can switch every million years or so?
Well, firstly it's important to recognise that the poles are merely the extremities of the shape of a magnetic field - the earth's magnetic field. All magnetic fields have polarities as such. However, you're asking why the field itself even exists, I gather. In this case it's generated by electric currents in the conductive molten iron (and other metals) in the core of the earth, as a result of convection currents generated by escaping heat. This is known as a geodynamo, and is one way of generating a magnetic field. However, exactly why this occurs is still under investigation. More is available in this Wiki piece - specifically on the physical origins of the earth's magnetic field .
{ "source": [ "https://earthscience.stackexchange.com/questions/99", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/51/" ] }
103
I have heard from many people that sinks do not empty in a particular pattern depending on what hemisphere you are in, but I have also heard from people who are adamant that a sink of water would empty clockwise in one hemisphere and anti-clockwise in another. While I acknowledge the above idea is probably a myth, is there any simple experiment that one can do to determine what hemisphere they are in, utilizing the Coriolis effect, or otherwise?
You can use the Foucault pendulum to determine the hemisphere: Its plane of movement rotates: anti-clockwise in the southern hemisphere; clockwise in the northern hemisphere. The rotation of the plane can be explained by the Coriolis force.
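If you also want a number to go with the direction, a small sketch using the standard textbook precession rate, Earth's rotation rate times the sine of the latitude (the latitude values below are just examples): the sign of the sine, and hence the apparent sense of rotation, flips between hemispheres.

```python
import math

omega_earth = 2 * math.pi / 86164          # rad/s (one sidereal day)
for lat_deg in (48.8, -33.9):              # e.g. roughly Paris vs. Sydney (examples)
    rate = omega_earth * math.sin(math.radians(lat_deg))
    hours_per_rev = abs(2 * math.pi / rate) / 3600
    sense = "clockwise" if rate > 0 else "anti-clockwise"
    print(f"latitude {lat_deg:6.1f} deg: {sense}, one full turn every {hours_per_rev:.1f} h")
```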
{ "source": [ "https://earthscience.stackexchange.com/questions/103", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/51/" ] }
108
We've all learned at school that the Earth was a sphere. Actually, it is more nearly a slightly flattened sphere - an oblate ellipsoid of revolution, also called an oblate spheroid. This is an ellipse rotated about its shorter axis. What are the physical reasons for that phenomenon?
Normally in the absence of rotation, the natural tendency of gravity is to pull the Earth together in the shape of a sphere. However the Earth in fact bulges at the equator, and the diameter across the equatorial plane is 42.72 km more than the diameter from pole to pole. This is due to the rotation of the Earth. As we can see in the image above, the spinning disk appears to bulge at the points on the disk furthest from the axis of rotation. This is because in order for the particles of the disk to remain in orbit, there must be an inward force, known as the centripetal force, given by: $$F = \frac{mv^2}{r},$$ where $F$ is the force, $m$ is the mass of the rotating body, $v$ is the velocity and $r$ is the radius of the particle from the axis of rotation. If the disk is rotating at a given angular velocity, say $\omega$, then the tangential velocity $v$ is given by $v = \omega r$. Therefore, $$F = m\omega^2r$$ Therefore the greater the radius of the particle, the more force is required to maintain such an orbit. Therefore particles on the Earth near the equator, which are farthest from the axis of rotation, will bulge outwards because they require a greater inward force to maintain their orbit. Additional details for the more mathematically literate now that mathjax is enabled: The net force on an object rotating around the equator with a radius $r$ around a planet with a gravitational force of $\frac{Gm_1m_2}{r^2}$ is the centripetal force given by, $$F_{net} = \frac{Gm_1m_2}{r^2} - N = m\omega^2r,$$ where $N$ is the normal force. Re-arranging the above equation gives: $$N = \frac{Gm_1m_2}{r^2} - m\omega^2r$$ The normal force here is the perceived downward force that a rotating body observes. The equation shows that the perceived downward force is lessened due to the centripetal motion. The typical example to illustrate this is the appearance of zero gravity in a satellite orbiting the Earth, because in this situation the centripetal force is exactly balanced by the gravitational force. On Earth however, the centripetal force is much less than the gravitational force, so we perceive almost the whole contribution of $mg$. Now we will examine how the perceived gravitational force differs at different angles of latitude. Let $\theta$ represent the angle of latitude. Let $F_G$ be the force of gravity. In vector notation we will take the $j$-direction to be parallel with the axis of rotation and the $i$-direction to be perpendicular to the axis of rotation. In the absence of the Earth's rotation, $$F_G = N = (-\frac{Gm_1m_2}{r^2}\cos\theta)\tilde{i} + (-\frac{Gm_1m_2}{r^2}\sin\theta)\tilde{j}$$ It is easily seen that the above equation represents the perceived force of gravity in the absence of rotation. Now the centripetal force acts only in the i-direction, since it acts perpendicular to the axis of rotation. If we let $R_{rot}$ be the radius of rotation, then the centripetal force is $m_1\omega^2R_{rot}$, which for an angle of latitude of $\theta$ corresponds to $m_1\omega^2r\cos{\theta}$ $$N = (-\frac{Gm_1m_2}{r^2} + m_1\omega^2r)\cos{\theta}\tilde{i} + (-\frac{Gm_1m_2}{r^2})\sin{\theta}\tilde{j}$$ By comparing this equation to the case shown earlier in the absence of rotation, it is apparent that as $\theta$ is increased (angle of latitude), the effect of rotation on perceived gravity becomes negligible, since the only difference lies in the $x$-component and $\cos\theta$ approaches 0 as $\theta$ approaches 90 degrees latitude. 
However it can also be seen that as theta approaches 0, near the equator, the $x$-component of gravity is reduced as a result of the Earth's rotation. Therefore, we can see that the magnitude of $N$ is slightly less at the equator than at the poles. The reduced apparent gravitational pull here is what gives rise to the slight bulging of the Earth at the equator, given that the Earth was not originally as rigid as it is today (see other answer).
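To put a rough number on how much weaker the perceived pull is at the equator, a small sketch (round figures for a spherical Earth; the real measured pole-to-equator difference is larger because the bulge itself also contributes, which this ignores):

```python
import math

G, M = 6.674e-11, 5.972e24
R = 6.371e6                                  # mean radius in metres (spherical approximation)
omega = 2 * math.pi / 86164                  # rad/s, one sidereal day

g = G * M / R**2                             # gravitational acceleration
print(f"pole    : {g:.4f} m/s^2 (no centripetal term)")
print(f"equator : {g - omega**2 * R:.4f} m/s^2 (centripetal term ~{omega**2 * R:.4f} m/s^2)")
```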
{ "source": [ "https://earthscience.stackexchange.com/questions/108", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/91/" ] }
144
Mount Hood in Oregon is a dormant volcano, and in Washington Mount St. Helens and Mt. Ranier are both active volcanoes. What causes this line of volcanoes running parallel to the coastline along the northwest coast of North America?
The Cascades (the volcanic range that Mt. St. Helens and Mt. Ranier are a part of) are "arc volcanoes" (a.k.a. "a volcanic arc", etc). Volcanic arcs form at a regular distance (and fairly regular spacing) behind subduction zones. Subduction zones are areas where dense oceanic crust dives beneath more buoyant crust (either younger oceanic crust or continental crust). The down-going oceanic plate begins to generate fluids due to metamorphic reactions that occur at a particular pressure and temperature. These fluids cause partial melting of the mantle above the down-going slab. (Contrary to what's often taught, the oceanic crust itself doesn't melt at this point - it's the mantle above it that does.) This causes arc volcanoes to form at the surface at approximately where the down-going oceanic slab reaches ~100km depth (I may be mis-remembering the exact number). However, there's an interesting story with the Cascades, the San Andreas Fault, and the Sierra Nevada. Basically, Sierras are the old volcanic arc before an oceanic spreading ridge tried to subduct off the coast of California. (Search for the Farallon plate.) The ridge was too buoyant to subduct, so subduction stopped, shutting off the supply of magma to the volcanic arc. Because the motion of the Pacific Plate (on the other side of the spreading ridge that tried to subduct) was roughly parallel to the margin of North America, a strike slip boundary formed instead: The San Andreas Fault. Northward, there's still a remnant of the Farallon plate that's subducting beneath Northern CA, OR, WA, and BC. Therefore, the Cascades are still active arc volcanoes, while the Sierras are just the "roots" of the old arc.
{ "source": [ "https://earthscience.stackexchange.com/questions/144", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/53/" ] }
239
No known hurricane has ever crossed the equator. Hurricanes require the Coriolis force to develop and generally form at least 5° away from the equator since the Coriolis force is zero there. Are the physics of the earth and tropical systems such that it is impossible for a hurricane to cross the equator after forming, or are the forces working against this occurring so strong that an equator crossing hurricane is an exceedingly rare event we may not witness in 1000+ years?
Improbable. It is well known that the Coriolis force is needed to form a hurricane, and the figure of 5°N/S as the minimum for formation is widely publicized. You can also find record of tropical storm formation near India as far south as 1.4°N. The problem of crossing the Equator isn't one of hurricane formation though, it is one of hurricane motion. Due to Coriolis, a hurricane initially moving parallel to the Equator will start gaining a poleward component to its motion, thus moving it away from the Equator. But, because this is due to Coriolis, if you could get a storm close enough to the Equator, this effect would not be as strong. This would be an improbable track, but I'm not willing to call it impossible. We haven't had satellites all that long, and all we can really say is that it hasn't happened since we've been watching. If a storm did cross the equator though, what would it do? Nothing at first, but as it moved further into the opposite hemisphere, Coriolis would be working against the storm and it would spin down, become disorganized and cease to be a hurricane, probably becoming a remnant low. A tropical disturbance has crossed the equator. One such disturbance occurred on June 27, 2008 in the Atlantic basin (crossing south to north) and retained its clockwise motion for some time:
{ "source": [ "https://earthscience.stackexchange.com/questions/239", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/38/" ] }
245
I have heard that extreme storm events can be caused simply by a butterfly flapping its wings somewhere in a distant location. Is it true that such a small disturbance in the air in one location can result in such a large catastrophic event in another separate location? If so how can we know this is possible, and how is this even possible?
The butterfly is a colourful illustration of Chaos Theory , and the word butterfly came from the diagram of the state space (see below). A system that is chaotic is extremely sensitive to its initial value. In principle, if you know exactly what the state of the universe is now, you could calculate how it develops (but due to other reasons, it is theoretically impossible to know the state exactly — but that's not the main point here). The issue with a chaotic system is that a very small change in the initial state can cause a completely different outcome in the system (given enough time). So, suppose that we take the entire atmosphere and calculate the weather happening for the next 20 days; suppose for the moment that we actually do know every bit. Now, we repeat the calculation, but with one tiny tiny bit that is different; such as a butterfly flapping its wings. As the nature of a chaotic system is such that a very small change in the initial value can cause a very large change in the final state, the difference between these two initial systems may be that one gets a tornado, and the other doesn't. Is this to say that the butterfly flapping its wings results in a tornado? No, not really. It's just a manner of speaking, and not really accurate. Many systems are chaotic: Try to drop a leaf from a tree; it will never fall the same way twice. Hang a pendulum below another pendulum and track its motion: (Figure from Wikipedia) Or try to help your boyfriend in what must be one of the loveliest illustrations of Chaos Theory ever. Suppose you are running to catch the bus. You catch sight of a butterfly, which delays you by a split second. This split second causes you to miss the bus, which later crashes into a ravine, killing everyone on board. Later in life, you go on to be a major political dictator starting World War III ( Note: this is not the plot of the linked movie, but my own morbid reinterpretation ). Tell me, did this butterfly cause World War III? Not really. (Figure from Wikipedia)
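A tiny numerical illustration of this sensitivity (the logistic map is not a weather model, just the classic minimal chaotic system): two runs that start one part in a trillion apart agree for a while, then become completely unrelated.

```python
# Two runs of the chaotic logistic map x -> 4x(1-x), differing by 1e-12 initially
x, y = 0.4, 0.4 + 1e-12
for n in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 9:
        print(f"step {n+1:2d}: difference = {abs(x - y):.3e}")
```

The gap roughly doubles every step, so after a few dozen steps the two trajectories are as different as two unrelated runs; the atmosphere behaves analogously, just with vastly more variables.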
{ "source": [ "https://earthscience.stackexchange.com/questions/245", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/51/" ] }
246
Inspired by the movie, " The Core ". Can we really travel through earth's core? I will provide 2 sub questions: Is there any substance that can resist the heat of earth's core? Between the crust and mantle, and the mantle and outer core is there any "wall" between them? And how hard is the wall (can we go through it)?
As Chris Mueller said, in short: it isn't possible, or is at least highly infeasible. Projects to drill into the mantle, such as the Kola Superdeep Borehole , have all failed because drilling equipment can't withstand the heat at only ~15 km deep. Even if we were to come up with some sort of cooling system that's able to cool to 12,800 km or 6,400 km deep (depending on whether you would drill from one side only or from both sides at the same time), pressure is the second barrier that keeps us from traveling through the earth's core. According to Lide (2006) the pressure in the inner core is 330 to 360 GPa, at which iron becomes a solid even at the high temperatures in the core. If you could drill as far as the core you would have to build a device that's able to withstand that pressure, because if you can't, the material surrounding your well would immediately become liquid and fill the hole, if not shoot up your well towards the surface. There are no physical walls between the layers of the Earth, only transition zones where temperature and pressure combinations lead to different behaviour of the materials. An example is the Mohorovičić discontinuity, or Moho, which is the boundary between crust and mantle, below which temperatures are high enough and at the same time the pressure is low enough so that rock becomes either liquid or at least a "flowing" solid. Similarly, at the boundary between the inner and outer core the pressure is so high that even at those temperatures the iron becomes a solid. Lide, D.R., ed. (2006-2007). CRC Handbook of Chemistry and Physics (87th ed.). pp. j14–13.
{ "source": [ "https://earthscience.stackexchange.com/questions/246", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/132/" ] }