Algorithm of Wavelet Transform (with Qt class)


There are many kinds of wavelet transforms, and I think the names are quite confusing.

For instance, there are continuous and discrete wavelet transforms, in which "continuous" and "discrete" refer to the wavelet parameters, not to the data itself. Therefore, for discrete data there are both "continuous" and "discrete" wavelet transforms, and for functions there are also both "continuous" and "discrete" wavelet transforms.

Here, we will focus first on the discrete wavelet transform of a function. This discrete wavelet transform is also called the wavelet series, which expresses a compactly supported function as a series of wavelets.

For simplicity, we also focus on orthonormal wavelets.

As the wavelets span the entire space, any compactly supported function can be expressed as

\displaystyle f(t) = \sum_{j,k} \left<f(t)|\psi_{j,k}(t)\right> \psi_{j,k}(t)

\psi_{j,k}(t) = 2^{j/2} \psi(2^j t - k)

where j, k are integers.


Now we move to the discrete wavelet transform of discrete data. Since the data are discrete, we can imagine that only the points t_n = t_0 + n \Delta are known, with finite n .

\displaystyle f_n = f(t_n) = \sum_{j,k} \left<f_n|\psi_{j,k}(t_n) \right> \psi_{j,k}(t_n)

and the integration in the inner product becomes a finite sum.

Without loss of generality, we can set t_0 = 0, \Delta = 1, and the time axis becomes the integer axis. We find that j \leq 0 , as the wavelet can only be expanded, not shrunk. Because there is a finite number of data points, i.e. n < \infty, we have -\log_2(n) < j \leq 0 .
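For example, with n = 1024 data points, -\log_2(1024) = -10 , so -10 < j \leq 0 and there are at most 10 usable scales.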

However, this double summation for each f_n is very time consuming. There is a fast discrete wavelet transform. Before we continue, we must study the wavelet.


From the last post, we know that a scaling function that generates an MRA must satisfy:

\displaystyle \phi(t) = \sum_{k} g_0(k) \phi(2t-k)

\left<\phi(t-k) | \phi(t-k') \right> = \delta_{kk'}

where k is an integer. The set of shifted scaling functions spans a space V_0 . For the wavelet,

\displaystyle \psi(t) = \sum_{k} g_1(k) \phi(2t-k)

\left<\psi(t-k) | \psi(t-k') \right> = \delta_{kk'}

The set of shifted wavelets spans a space W_0 with W_0 \perp V_0 , so that

\left<\phi(t-k)|\psi(t-k') \right> = 0

Since the wavelet is generated from the scaling function, we expect the coefficients g_0(k) and g_1(k) to be related. In fact, for an orthonormal scaling function and wavelet, the relationship is

g_1(k) = (-1)^k g_0(1-k)
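As a quick check, take the Haar case, where g_0(0) = g_0(1) = 1 and all other taps vanish. The relation gives

g_1(0) = g_0(1) = 1 , \quad g_1(1) = -g_0(0) = -1

so that \psi(t) = \phi(2t) - \phi(2t-1) , the familiar up-down step shape of the Haar wavelet.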


Discrete data x_i can be decomposed into the MRA spaces. We start from the largest space V_0 , where the scaling function is most shrunken:

\displaystyle x_i = \sum_{k} v_{0,k} \phi(i-k)

To decompose into the V_{-1} and W_{-1} spaces, we can use the nested property of the MRA spaces: \phi(2t) can be decomposed into \phi(t-k) and \psi(t-k) ,

\displaystyle \phi(2t-l) = \sum_{k} h_0(2k-l) \phi(t-k) + h_1(2k-l) \psi(t-k)

where (given that \phi(t) and \psi(t) are orthonormal),

h_0(2k-l) = \left< \phi(2t-l) | \phi(t-k) \right>

h_1(2k-l) = \left< \phi(2t-l) | \psi(t-k) \right>

Therefore, using the coefficients h_0 and h_1 , the coefficients v_{s,l} can be decomposed level by level:

\displaystyle v_{s-1,k} = \sum_{l} h_0(2k-l) v_{s,l}  

\displaystyle w_{s-1,k} = \sum_{l} h_1(2k-l) v_{s,l}

In graphical representation:

[Figure: filter-bank diagram of one decomposition step, v_s \to (v_{s-1}, w_{s-1}) ]

This is the fast discrete wavelet transform.
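To make the recursion concrete, here is a minimal sketch of one decomposition step in C++ (the language of the Qt class attached at the end of this post). The periodic boundary wrapping and the convention of passing the filters time-reversed are my own choices for this sketch, not necessarily those of the attached class.

```cpp
#include <vector>

// One level of the fast DWT:
//   v_{s-1,k} = sum_l h0(2k-l) v_{s,l} ,  w_{s-1,k} = sum_l h1(2k-l) v_{s,l}
// The filters are passed time-reversed, h0r[m] = h0(-m), so that the sum
// becomes sum_m h0r[m] * v[2k+m]. Periodic wrapping handles the boundary.
void dwtStep(const std::vector<double>& v,   // v_{s,l}, length N (even)
             const std::vector<double>& h0r, // reversed low-pass taps
             const std::vector<double>& h1r, // reversed high-pass taps
             std::vector<double>& vNext,     // v_{s-1,k}, length N/2
             std::vector<double>& wNext)     // w_{s-1,k}, length N/2
{
    const int N = static_cast<int>(v.size());
    vNext.assign(N / 2, 0.0);
    wNext.assign(N / 2, 0.0);
    for (int k = 0; k < N / 2; ++k)
        for (int m = 0; m < static_cast<int>(h0r.size()); ++m) {
            const int l = (2 * k + m) % N;   // l = 2k + m, wrapped
            vNext[k] += h0r[m] * v[l];
            wNext[k] += h1r[m] * v[l];
        }
}
```

For the Haar case g_0 = \{1, 1\} , the reversed analysis filters are h0r = {0.5, 0.5} and h1r = {0.5, -0.5}, and each step produces the pairwise averages and half-differences of the data.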


Due to the nesting of the MRA spaces, we also expect the coefficients h_0 and h_1 to be related to g_0 . For orthonormal wavelets,

\displaystyle h_0(k) = \frac{1}{2} g_0(-k)

\displaystyle h_1(k) = \frac{1}{2} (-1)^{k} g_0 (k+1)

Since g_0 has only finitely many nonzero taps, g_1, h_0, h_1 are all finite as well. That greatly reduces the computational cost of the discrete wavelet transform.


To reconstruct the discrete data x_i , we do not need to evaluate

\displaystyle v_{s+1,l} = \sum_{k} v_{s,k} \phi(l - k) + w_{s,k} \psi(l-k)

Instead, using the nesting of the MRA spaces, \phi(t) = \sum_{k} g_0(k) \phi(2t-k) and \psi(t) = \sum_{k} g_1(k) \phi(2t-k) , we get

\displaystyle v_{s+1,l} = \sum_{k} g_0(l-2k) v_{s,k} + g_1(l-2k) w_{s,k}

In graphical representation:

[Figure: filter-bank diagram of one reconstruction step, (v_s, w_s) \to v_{s+1} ]
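And a matching sketch of one reconstruction step, with the same periodic-wrapping assumption; g0 and g1 are assumed to have the same number of taps:

```cpp
#include <vector>

// One level of the inverse fast DWT:
//   v_{s+1,l} = sum_k g0(l-2k) v_{s,k} + g1(l-2k) w_{s,k}
// Periodic wrapping handles the boundary, as in the decomposition sketch.
void idwtStep(const std::vector<double>& v,  // v_{s,k}, length N/2
              const std::vector<double>& w,  // w_{s,k}, length N/2
              const std::vector<double>& g0, // low-pass synthesis taps
              const std::vector<double>& g1, // high-pass synthesis taps
              std::vector<double>& out)      // v_{s+1,l}, length N
{
    const int half = static_cast<int>(v.size());
    const int N = 2 * half;
    out.assign(N, 0.0);
    for (int k = 0; k < half; ++k)
        for (int m = 0; m < static_cast<int>(g0.size()); ++m) {
            const int l = (2 * k + m) % N;   // l - 2k = m, wrapped
            out[l] += g0[m] * v[k] + g1[m] * w[k];
        }
}
```

With the Haar filters (g0 = {1, 1}, g1 = {1, -1}) and the reversed analysis filters given above, dwtStep followed by idwtStep reproduces the input exactly.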


I attached the wavelet transform class for Qt; feel free to modify it.

https://drive.google.com/file/d/0BycN9tiDv9kmMVRfcVFMbDFWS0k/view?usp=sharing

https://drive.google.com/file/d/0BycN9tiDv9kmUmlmNk1kaVJCbEU/view?usp=sharing

In the code, the data are not transformed into the MRA space; the code treats the data as if they were already the coefficients in the MRA space. Some people say this is a "crime". But for the sake of speed, there is no need to map the original discrete data into the MRA space. I do agree, though, that for a continuous function we must map it into the MRA space.


Wavelet Analysis or MRA


Although the Fourier transform is a very powerful tool for data analysis, it has limits due to its lack of time information. From a physics point of view, any time-series data should live in time-frequency space. Since the Fourier transform has very narrow frequency resolution, by the uncertainty principle the time resolution is very broad; therefore, no time information can be extracted from the Fourier transform.

Usually, such a limitation is not a problem. However, when analyzing music, the long-term performance of a device, or a seismic survey, time information is crucial.

To overcome this difficulty, the short-time Fourier transform (STFT) was developed. The idea is to apply a time window (a piecewise-constant function, or a Gaussian) to the data first, then Fourier transform. By applying the time window at different times in the data (i.e. shifting the window), we get the time information. However, since the frequency range of the time window always covers the low frequencies, high-frequency signals are hard to extract.

To improve on the STFT, the time window can be scaled (usually by 2). When the time window is shrunk by a factor of 2, its frequency range is expanded by a factor of 2. If we subtract the frequency ranges of the time window and the shrunk time window, the high-frequency range is isolated.

To be more concrete, let the time-window function be

\phi_{[0,1)}(t) = 1 , 0 \leq t < 1

its FT is

\hat{\phi}(\omega) = sinc(\pi \omega)

Let's also define a dilation operator

Df(t) = \sqrt{2} f(2t)

the factor \sqrt{2} is for normalization.
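Concretely, the FT scaling property gives

\displaystyle \widehat{Df}(\omega) = \sqrt{2}\int f(2t) e^{-2\pi i t \omega} dt = \frac{1}{\sqrt{2}} \hat{f}\left(\frac{\omega}{2}\right)

so dilating (shrinking) the window in time stretches its spectrum to twice the width.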

The FT of D\phi(t) therefore covers a frequency range twice as wide, as in the following graph.

[Figure: FT of the window \phi(t) and of the dilated window D\phi(t) ]

We can subtract the orange and blue curves to get the green curve, and then inverse-FT the green curve to get the high-frequency time window.

We can repeat the dilation, or the anti-dilation, infinitely many times. Because of this, we can drop the FT basis \exp(-2\pi i t \omega) and use only the low-pass time window to see the low-frequency behaviour of the data and the high-pass time window to see the high-frequency behaviour. Now we have stepped into multi-resolution analysis (MRA).

In MRA, the low-pass time window is called the scaling function \phi(t) , and the high-pass time window is called the wavelet \psi(t) .

Since the scaling function is created by dilation, it has the property

\phi(t) = \sum_{k} g_{0}(k) \phi(2t-k)

where k is an integer. This means the vector space spanned by \{\phi(t-k)\}_{k} = V_0 is a subspace of the dilated space DV_0 = V_1 . The dilation can go on forever, so the whole frequency domain is covered by V_{\infty} .

Also, the space spanned by the wavelets, \{\psi(t-k)\}_{k} = W_0 , is a subspace of V_1 . Thus, we can picture the structure of the MRA as:

[Figure: the nested MRA spaces V_0 \subset V_1 \subset \cdots , with each W_j sitting inside V_{j+1} ]

Therefore, any function f(t) can also be expressed in the wavelet spaces, i.e.

f(t) = \sum_{j,k} w_{j,k} 2^{j/2}\psi(2^j t - k)

where j, k are integers.

I know this introduction is very rough, but it gives a relatively smooth transition from the FT to the WT (wavelet transform), compared with the material available on the web.


Levenberg-Marquardt Algorithm


In a previous post, we showed the Gauss-Newton method for fitting a non-linear function. The disadvantage of that method is that the matrix inverse can be ill-defined, which makes the method unstable.

Back to basics: we want to minimize the sum of squared residuals (SSR). The SSR is

SSR(\beta) = (Y - f(\beta))^T\cdot (Y-f(\beta))

Its derivative with respect to \beta is

\frac{d}{d\beta} SSR(\beta) = -2 (Y-f(\beta))^T \cdot \nabla f(\beta)

Much of the literature denotes \nabla f = J , the Jacobian. The second derivative of f gives the Hessian matrix, H = \nabla^2 f \sim J^T\cdot J .

The Gradient Descent method is

h = \beta - \beta_0 = \alpha J^T \cdot (Y - f(\beta_0))

where \alpha is a step size. Gradient descent reduces the SSR along the steepest path. The step size \alpha has to be adjusted; the simplest way is to test \delta = SSR(\beta_0 + h) - SSR(\beta_0) : if \delta < 0 , increase \alpha , else decrease it. This method is slow but stable. It is slow because of the search for \alpha ; it is stable because the update is always computable.


Thus, we have two methods: Gradient Descent is stable but slow, while the Gauss-Newton method is fast but unstable. The Levenberg-Marquardt algorithm combines the two, so that it is both stable and fast, by solving

(J^T \cdot J + \lambda I) h = J^T \cdot (Y - f)

where \lambda is an adjustable parameter. When \lambda \gg 1 , the J^T\cdot J term is negligible and the method becomes Gradient Descent with a small \alpha . When \lambda \ll 1 , the method becomes the Gauss-Newton method.

Usually, the initial \lambda_0 is small, since the Gauss-Newton method is very good near the minimum of the SSR, while Gradient Descent is better far away from it.

When \delta < 0 , set \lambda_{i+1} = \lambda_i / 10 , else \lambda_{i+1} = 10 \lambda_i . I don't know the exact reason for this setting; in fact, if you set it the other way round, the method still works in most cases.

Because the method adds \lambda I to J^T\cdot J , the inverse is always well-defined. Therefore, this method is stable.
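A minimal C++ sketch of the Levenberg-Marquardt loop, for an assumed two-parameter model f(x; a, b) = a e^{bx} ; the model, the starting \lambda = 10^{-3} , and the fixed iteration count are illustrative choices, not prescriptions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical model f(x; a, b) = a * exp(b * x). The damped normal
// equations (J^T J + lambda I) h = J^T r are solved by an explicit 2x2
// inverse; a general implementation would use a proper linear solver.
struct Params { double a, b; };

static double ssr(const std::vector<double>& x, const std::vector<double>& y,
                  Params p)
{
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double r = y[i] - p.a * std::exp(p.b * x[i]);
        s += r * r;
    }
    return s;
}

Params fitLM(const std::vector<double>& x, const std::vector<double>& y,
             Params p, int iterations = 100)
{
    double lambda = 1e-3;                        // small initial lambda
    for (int it = 0; it < iterations; ++it) {
        // Build J^T J and J^T r at the current parameters.
        double JtJ00 = 0, JtJ01 = 0, JtJ11 = 0, Jtr0 = 0, Jtr1 = 0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double e  = std::exp(p.b * x[i]);
            const double J0 = e;                 // df/da
            const double J1 = p.a * x[i] * e;    // df/db
            const double r  = y[i] - p.a * e;    // residual
            JtJ00 += J0 * J0; JtJ01 += J0 * J1; JtJ11 += J1 * J1;
            Jtr0  += J0 * r;  Jtr1  += J1 * r;
        }
        // (J^T J + lambda I) h = J^T r, solved by Cramer's rule.
        const double A00 = JtJ00 + lambda, A11 = JtJ11 + lambda, A01 = JtJ01;
        const double det = A00 * A11 - A01 * A01;
        const double h0  = ( A11 * Jtr0 - A01 * Jtr1) / det;
        const double h1  = (-A01 * Jtr0 + A00 * Jtr1) / det;
        Params trial{p.a + h0, p.b + h1};
        if (ssr(x, y, trial) < ssr(x, y, p)) {   // delta < 0: accept step
            p = trial;
            lambda /= 10.0;
        } else {                                 // reject: increase damping
            lambda *= 10.0;
        }
    }
    return p;
}
```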

Non-linear Regression


The fit equation is

Y = f(A) + \epsilon

We assume that near Y , the curved subspace traced out by f(A) can be approximated by a plane. Thus, using a Taylor series,

Y = f(A_0) + F(A_0) \cdot (A - A_0)  + \cdots,

where F(A_0) is the Jacobian (gradient) of f(A) at A_0 .

Using the same technique as in linear regression,

A - A_0 = (F(A_0)^T \cdot F(A_0))^{-1} \cdot F(A_0)^T \cdot ( Y-f(A_0))

With an initial guess, the iteration should approach the best estimated parameters \hat{A} .

The covariance is

Var(A) = \sigma^2 (F(A)^T \cdot F(A))^{-1}

The above method is also called the Gauss-Newton method.
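As a sketch, for a hypothetical one-parameter model f(x; a) = e^{ax} , the Gauss-Newton iteration collapses to a ratio of two sums (no damping, so it can diverge from a poor initial guess):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One-parameter Gauss-Newton for the hypothetical model f(x; a) = exp(a*x).
// With a single parameter, (F^T F)^{-1} F^T (Y - f) is a ratio of two sums.
double fitGaussNewton(const std::vector<double>& x,
                      const std::vector<double>& y,
                      double a, int iterations = 20)
{
    for (int it = 0; it < iterations; ++it) {
        double FtF = 0.0, Ftr = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double f = std::exp(a * x[i]);
            const double F = x[i] * f;   // df/da, one column of the Jacobian
            FtF += F * F;                // F^T F (a scalar here)
            Ftr += F * (y[i] - f);       // F^T (Y - f)
        }
        a += Ftr / FtF;                  // A = A_0 + (F^T F)^{-1} F^T (Y - f)
    }
    return a;
}
```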

Multi-dimension Linear Regression


In science, collecting data and fitting it with a model is essential. The most common type of fitting is one-dimensional fitting, with only one independent variable. By fitting, we usually mean the least-squares method.

Suppose we want to find the n parameters in a linear function

f(x_1, x_2,\cdots, x_n) = \sum_{i=1}^{n} a_i x_i

with m observed experimental data points,

Y_j = f(x_{1j}, x_{2j}, \cdots, x_{nj}) + \epsilon_j = \sum_{i=1}^{n} a_i x_{ij} + \epsilon_j

Thus, we have a matrix equation

Y=X \cdot A + \epsilon

where Y is an m-dimensional data column vector, A is an n-dimensional parameter column vector, and X is an m \times n non-square matrix.

In order to get the n parameters, the number of data points must satisfy m \geq n . When m = n , it is not really a fitting, because the degree of freedom is DF = m - n = 0 , so that the fitting error is ill-defined.

The least-squares method in matrix algebra is like the following calculation. Multiply both sides by the transpose of X :

X^T \cdot Y = (X^T \cdot X) \cdot A + X^T \cdot \epsilon

(X^T\cdot X)^{-1} \cdot X^T \cdot Y = A + (X^T \cdot X)^{-1} \cdot X^T \cdot \epsilon

Since the expectation of \epsilon is zero, the expected parameter vector is

A = (X^T \cdot X)^{-1} \cdot X^T \cdot Y

The unbiased variance is

\sigma^2 = (Y - X\cdot A)^T \cdot (Y - X\cdot A) / DF

where DF is the degree of freedom, the number of values that are free to vary. Many people are confused by the "-1" issue; in fact, if you only want to calculate the sum of squared residuals (SSR), the degree of freedom is always m - n .

The covariance of the estimated parameters is

Var(A) = \sigma^2 (X^T\cdot X)^{-1}
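A minimal C++ sketch for n = 2 parameters, where (X^T\cdot X)^{-1} reduces to an explicit 2 \times 2 inverse; a general n needs a proper linear-algebra routine:

```cpp
#include <cstddef>
#include <vector>

// A = (X^T X)^{-1} X^T Y for the model y = a1*x1 + a2*x2 (n = 2).
struct Fit2 { double a1, a2; };

Fit2 leastSquares2(const std::vector<double>& x1,
                   const std::vector<double>& x2,
                   const std::vector<double>& y)
{
    double S11 = 0, S12 = 0, S22 = 0, T1 = 0, T2 = 0;
    for (std::size_t j = 0; j < y.size(); ++j) {
        S11 += x1[j] * x1[j];   // (X^T X)_11
        S12 += x1[j] * x2[j];   // (X^T X)_12 = (X^T X)_21
        S22 += x2[j] * x2[j];   // (X^T X)_22
        T1  += x1[j] * y[j];    // (X^T Y)_1
        T2  += x2[j] * y[j];    // (X^T Y)_2
    }
    const double det = S11 * S22 - S12 * S12;   // 2x2 Cramer's rule
    return { ( S22 * T1 - S12 * T2) / det,
             (-S12 * T1 + S11 * T2) / det };
}
```

The same accumulated sums also give \sigma^2 = SSR / (m - 2) and Var(A) = \sigma^2 (X^T\cdot X)^{-1} .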

This is only a fast-food note on linear regression. It has a geometrical meaning: the columns of X form the basis of a subspace, and Y lies a bit outside this subspace. Linear regression is a method to find the shortest distance from Y to the subspace spanned by X .

The form of the variance can be understood using a Taylor series, together with the variance in matrix notation, Var(A) = E\left[ (A - E(A))(A - E(A))^T \right] .


Solving radial SE numerically


The time-independent Schrödinger equation is

(-\frac{\hbar^2}{2m}\nabla^2 + V ) \Psi = E \Psi

Using the Laplacian in spherical coordinates and setting \Psi = R Y ,

\nabla^2 R Y - \frac{2m}{\hbar^2}(V-E) R Y = 0

\nabla^2 = \frac{1}{r^2}\frac{d}{dr}(r^2 \frac{d}{dr}) - \frac{1}{r^2} L^2

The angular part,

L^2 Y = l(l+1) Y

The radial part,

\frac{d}{dr}(r^2\frac{dR}{dr}) - l(l+1)R - \frac{2mr^2}{\hbar^2}(V-E) R = 0

To simplify the first term,

R = \frac{u}{r}

\frac{d}{dr}(r^2 \frac{dR}{dr})= r \frac{d^2u}{dr^2}

A simpler form of the radial equation is

\frac{d^2u}{dr^2} - \frac{l(l+1)}{r^2} u - \frac{2m}{\hbar^2} (V-E) u = 0

The effective potential U

U = V + \frac{\hbar^2}{2m} \frac{l(l+1)}{r^2}

\frac{d^2u}{dr^2} + \frac{2m}{\hbar^2} (E - U) u = 0

We can use the Runge-Kutta method to solve the equation numerically.

[Figure: fourth-order Runge-Kutta (RK4) update formulas]

The initial condition has to be u(0) = 0 . (homework)
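Here is a C++ sketch of the outward RK4 integration for the scattering example below (Woods-Saxon depth 50 MeV, radius 3.5 fm, diffuseness 0.8 fm, E = 30 MeV). The step size, the integration range, the slope u'(0) = 1 (the overall scale is arbitrary for a linear equation), and the nucleon value \hbar^2/2m \approx 20.7 MeV fm^2 are my assumptions:

```cpp
#include <cmath>
#include <cstdio>

// Outward RK4 integration of u'' = (2m/hbar^2)(U - E) u with u(0) = 0.
const double V0 = -50.0, R0 = 3.5, a0 = 0.8;   // Woods-Saxon parameters
const double h2m = 20.7;                        // hbar^2/(2m), MeV fm^2 (nucleon)

double U(double r, int l) {
    const double ws = V0 / (1.0 + std::exp((r - R0) / a0));
    const double centrifugal = (r > 0) ? h2m * l * (l + 1) / (r * r) : 0.0;
    return ws + centrifugal;                    // effective potential
}

int main() {
    const int    l = 0;
    const double E = 30.0;                      // scattering energy in MeV
    double r = 1e-6, u = 0.0, v = 1.0;          // v = du/dr; u'(0) = 1 arbitrary
    const double dr = 0.01;
    auto f = [&](double rr, double uu) { return (U(rr, l) - E) / h2m * uu; };
    for (int i = 0; i < 2000; ++i) {            // integrate out to r ~ 20 fm
        const double k1u = v,                k1v = f(r,          u);
        const double k2u = v + 0.5*dr*k1v,   k2v = f(r + 0.5*dr, u + 0.5*dr*k1u);
        const double k3u = v + 0.5*dr*k2v,   k3v = f(r + 0.5*dr, u + 0.5*dr*k2u);
        const double k4u = v + dr*k3v,       k4v = f(r + dr,     u + dr*k3u);
        u += dr * (k1u + 2*k2u + 2*k3u + k4u) / 6.0;
        v += dr * (k1v + 2*k2v + 2*k3v + k4v) / 6.0;
        r += dr;
        if (i % 100 == 0) std::printf("%8.3f %12.6f\n", r, u);
    }
    return 0;
}
```

For a bound state, one would instead scan E < 0 until u stays flat at large distance, as in the second example below.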

I used Excel to calculate a scattering state with L = 0 at an energy of 30 MeV. The potential is a Woods-Saxon with depth 50 MeV, radius 3.5 fm, and diffuseness 0.8 fm.

[Figure: the L = 0 scattering-state radial wavefunction at 30 MeV]

Another example is a bound state with L = 0 . I had to search for the energy so that the wavefunction is flat at large distance. The outermost eigenenergy is -7.27 MeV. From the radial function, we know it is a 2s orbit.

[Figure: the L = 0 bound-state (2s) radial wavefunction at -7.27 MeV]

Testing Hypothesis


Hypothesis testing may be the most used and most misunderstood statistical tool. Even when we do a simple fitting and want to evaluate the result, we have to use hypothesis testing. One common quantity used is the reduced chi-squared.

A hypothesis test asks: given an observation and a hypothesis, is the hypothesis NOT true? Right, a hypothesis test never tells us the trueness of the hypothesis, only its wrongness. The core of the test is: "Can we reject the null hypothesis?"

There are one-tailed and two-tailed tests; as a result, the p-value has different meanings.

https://en.wikipedia.org/wiki/One-_and_two-tailed_tests

https://en.wikipedia.org/wiki/P-value

The p-value is the probability, assuming the null hypothesis is true, of obtaining an observation at least as extreme as the one actually seen. When the p-value is too small, smaller than the significance level, the null hypothesis is rejected. But if the p-value is very large, in a one-tailed test we cannot say the null hypothesis is true; we can only say that the null hypothesis CANNOT be rejected.

In a two-tailed test, there are two p-values, one for each tail.

https://en.wikipedia.org/wiki/Confidence_interval

https://en.wikipedia.org/wiki/Type_I_and_type_II_errors

