an adventure in Nuclear Physics!

July 28, 2017

When a function can be expressed as a linear combination of an orthogonal basis, i.e.

\[ f(x) = \sum_n a_n \phi_n(x), \qquad g(x) = \sum_n b_n \phi_n(x), \qquad \int \phi_n(x) \phi_m(x)\, dx = N_n \delta_{nm}, \]

then the integration of the product is

\[ \int f(x) g(x)\, dx = \sum_{n,m} a_n b_m \int \phi_n(x) \phi_m(x)\, dx. \]

That is,

\[ \int f(x) g(x)\, dx = \sum_n N_n\, a_n b_n. \]

Using this theorem, many complicated integrations can be calculated as a sum.
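As a numerical sketch of the theorem, take the sine basis on \([0, \pi]\), where \(\int \sin(nx)\sin(mx)\,dx = (\pi/2)\delta_{nm}\); the coefficients below are arbitrary illustrative choices:

```python
import math

# Orthogonal basis: phi_n(x) = sin(n x) on [0, pi], with
# ∫ phi_n phi_m dx = (pi/2) δ_nm. Coefficients are made up.
a = {1: 2.0, 2: 3.0}     # f(x) = 2 sin(x) + 3 sin(2x)
b = {1: 1.0, 2: -1.0}    # g(x) = sin(x) - sin(2x)

def f(x):
    return sum(c * math.sin(n * x) for n, c in a.items())

def g(x):
    return sum(c * math.sin(n * x) for n, c in b.items())

# direct numerical integration (midpoint rule)
N = 100000
h = math.pi / N
integral = h * sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(N))

# the theorem: the integral collapses to a weighted sum of coefficients
by_sum = (math.pi / 2) * sum(a[n] * b[n] for n in a)

print(integral, by_sum)  # both are -pi/2, up to the integration error
```

The sum takes two multiplications, while the direct integration takes a hundred thousand function evaluations.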

July 25, 2017

Given a matrix \(H\), we can find its eigenvalues on a given basis set \(\{|i\rangle\}\). Suppose the eigenvector is

\[ |v\rangle = \sum_i c_i |i\rangle. \]

Put it in the eigen equation

\[ H |v\rangle = E |v\rangle \;\Rightarrow\; \sum_i c_i H |i\rangle = E \sum_i c_i |i\rangle. \]

We can act \(\langle j|\) on the left, but in general, the basis set is not orthogonal,

\[ \sum_i \langle j|H|i\rangle\, c_i = E \sum_i \langle j|i\rangle\, c_i \;\Rightarrow\; H \vec{c} = E\, S\, \vec{c}, \qquad S_{ji} = \langle j|i\rangle. \]

This is the General Eigenvalue Problem.

One way to solve the problem is to “reconfigure” the basis so that it is orthogonal. However, in computation, a non-orthogonal basis can give a great advantage. So, the other way is to split the problem. First, solve the overlap matrix \(S\). The eigen system of \(S\) is

\[ S = U \Lambda U^\dagger. \]

Here, \(\Lambda\) is a diagonal matrix of eigenvalues. Now, we define a new non-unitary matrix

\[ X = U \Lambda^{-1/2}. \]

Notice that

\[ X^\dagger S X = \Lambda^{-1/2}\, U^\dagger S U\, \Lambda^{-1/2} = \Lambda^{-1/2}\, \Lambda\, \Lambda^{-1/2}. \]

Thus,

\[ X^\dagger S X = I. \]

We know that the form \(X^\dagger A X\) is a transformation from one basis to another basis, i.e.

\[ |\hat{j}\rangle = \sum_i X_{ij} |i\rangle, \]

and for any operator,

\[ \hat{A} = X^\dagger A X. \]

We put this back into the general problem. Writing \(\vec{c} = X \vec{w}\),

\[ H \vec{c} = E\, S\, \vec{c} \;\Rightarrow\; (X^\dagger H X)\, \vec{w} = E\, (X^\dagger S X)\, \vec{w} = E\, \vec{w}. \]

Thus, we can solve the ordinary eigen problem of \(\hat{H} = X^\dagger H X\), get the eigen system, and then use \(\vec{c} = X \vec{w}\) to recover the eigenvectors in the original basis.

For example,

The eigen system is

The matrix \(X\) is

We can verify that

Let the Hamiltonian be

The Hamiltonian in this basis is

The eigenvalues in the \(S\) frame are

which are not the correct eigenvalues.

Now, we transform the Hamiltonian into the orthogonal basis

The eigen values are

which are the correct pair. The un-normalized eigenvectors are

We can verify that
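The whole procedure can be sketched numerically. The matrices \(S\) and \(H\) below are made-up 2×2 examples (not the ones in this post), solved with hand-rolled symmetric 2×2 eigen routines so the sketch is self-contained:

```python
import math

def eig_sym_2x2(m):
    """Eigen system of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    (a, b), (_, c) = m
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    vals = [(a + c + disc) / 2, (a + c - disc) / 2]
    vecs = []
    for lam in vals:
        v = (b, lam - a) if abs(b) > 1e-12 else \
            ((1.0, 0.0) if abs(lam - a) <= abs(lam - c) else (0.0, 1.0))
        n = math.hypot(v[0], v[1])
        vecs.append((v[0] / n, v[1] / n))
    return vals, vecs

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

S = [[1.0, 0.5], [0.5, 1.0]]   # overlap matrix of a non-orthogonal basis
H = [[1.0, 0.0], [0.0, 2.0]]   # Hamiltonian in that basis

# Step 1: eigen system of S; columns of U are the eigenvectors
svals, svecs = eig_sym_2x2(S)
U = transpose([list(v) for v in svecs])

# Step 2: the non-unitary matrix X = U Lambda^(-1/2), so X^T S X = I
X = [[U[i][j] / math.sqrt(svals[j]) for j in range(2)] for i in range(2)]

# Step 3: transform H into the orthogonal basis, solve the ordinary problem
Hp = matmul(transpose(X), matmul(H, X))
evals, wvecs = eig_sym_2x2(Hp)

# Step 4: recover the eigenvectors in the original basis, c = X w
cvecs = [matvec(X, list(w)) for w in wvecs]

# verify the generalized eigen equation H c = E S c
for E, c in zip(evals, cvecs):
    lhs, rhs = matvec(H, c), [E * x for x in matvec(S, c)]
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
print(evals)  # the generalized eigenvalues of H c = E S c
```

For these particular \(S\) and \(H\), the generalized eigenvalues solve \(0.75E^2 - 3E + 2 = 0\), i.e. \(E = (3 \pm \sqrt{3})/1.5\).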

July 25, 2017

Suppose we have a prior probability \(P(\theta)\), and we have an observation \(D\) whose likelihood is \(P(D|\theta)\); thus, according to Bayesian probability theory, the updated, posterior probability is

\[ P(\theta|D) = \frac{P(D|\theta)\, P(\theta)}{P(D)}. \]

Here, \(P(\theta)\) and \(P(\theta|D)\) are called conjugate distributions. \(P(\theta)\) is called the conjugate prior to the likelihood \(P(D|\theta)\).

Suppose we have no knowledge of the prior probability; it is a fair assumption that \(P(\theta)\) is uniform. Then, the posterior probability is proportional to the likelihood function, i.e.

\[ P(\theta|D) \propto P(D|\theta). \]

Now, suppose we know the prior probability. After updating with new information, if the prior probability is “stable”, the new (posterior) probability should have a similar functional form as the old (prior) probability!

When the likelihood function is the binomial distribution, it is found that the beta distribution is the “eigen” distribution that is unchanged in form after the update,

\[ \mathrm{Beta}(p; a, b) = \frac{p^{a-1} (1-p)^{b-1}}{B(a,b)}, \]

where \(B(a,b)\) is the beta function, which serves as the normalization factor.

After \(k\) successful trials and \(n-k\) failed trials, the posterior is

\[ \mathrm{Beta}(p;\ a+k,\ b+n-k). \]

When \(a = b = 1\), the prior is uniform, and the posterior probability reduces to the binomial distribution and is proportional to the likelihood function.
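The conjugacy can be checked numerically: prior × likelihood must be proportional to \(\mathrm{Beta}(p; a+k, b+n-k)\) at every \(p\). The values of \(a\), \(b\), \(n\), \(k\) below are arbitrary illustrative choices:

```python
import math

def beta_pdf(p, a, b):
    """Beta(p; a, b) density, normalized by the beta function B(a, b)."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return p ** (a - 1) * (1 - p) ** (b - 1) / B

def binom_likelihood(k, n, p):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

a, b = 2.0, 3.0    # prior: Beta(2, 3)
n, k = 10, 7       # data: 7 successes in 10 trials

# the ratio (prior * likelihood) / Beta(a+k, b+n-k) must be constant in p
ratios = [beta_pdf(p, a, b) * binom_likelihood(k, n, p)
          / beta_pdf(p, a + k, b + n - k)
          for p in (0.2, 0.5, 0.8)]
print(ratios)  # three equal numbers: the posterior is Beta(9, 6)
```

The constant ratio is exactly the evidence \(P(D)\), which is why the beta family is closed under the binomial update.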

It is interesting to write the Bayesian equation in a “complete” form,

\[ \mathrm{Beta}(p;\ a+k,\ b+n-k) \propto \mathrm{Binom}(k; n, p)\ \mathrm{Beta}(p; a, b). \]

Unfortunately, the beta distribution is undefined for \(a = 0\) or \(b = 0\); therefore, when no “prior” trials were taken, there is no “eigen” probability.

Remark: this topic is strongly related to the Laplace rule of succession.

Update: the reason for the same functional form of the prior and the posterior is that the inference of the mean and variance is more “consistent”.

July 21, 2017

Suppose we want to measure the probability of a tail of a coin, and we found that after 100 tosses, no tail appeared. What is the upper limit of the probability of a tail?

Suppose we have a coin. We toss it once, and it shows a head; obviously, we cannot say whether the coin is biased or not, because a single result does not carry much information. So, we toss it 2 times, and it shows a head again; and if we toss it 10 times, it shows 10 heads. Now we are quite confident that the coin is biased toward heads, but how biased is it?

Or, we ask: what is the likelihood of the probability of a head?

The probability of having \(k\) heads in \(n\) tosses, given that the probability of a head is \(p\), is

\[ P(k|n, p) = \binom{n}{k} p^k (1-p)^{n-k}. \]

Now, we want to find out the “inverse”: given the observed tosses, how plausible is each value of \(p\)?

This will generate a likelihood function,

\[ L(p) = \binom{n}{k} p^k (1-p)^{n-k}, \]

viewed as a function of \(p\) with the data fixed.

The peak position of the likelihood function for the binomial distribution is

\[ \hat{p} = \frac{k}{n}. \]

Since the distribution has a width, we can estimate the width at a given confidence level.

From the graph, the more tosses, the smaller the width, and the narrower the uncertainty of the probability.

The binomial distribution approaches the normal distribution when \(n \to \infty\),

\[ P(k|n, p) \approx \frac{1}{\sqrt{2\pi \sigma^2}} \exp\!\left( -\frac{(k-\mu)^2}{2\sigma^2} \right), \]

where \(\mu = np\) and \(\sigma^2 = np(1-p)\).

When \(k = 0\), i.e. no head was shown in \(n\) tosses, the likelihood function is

\[ L(p) = (1-p)^n. \]

The full-width half-maximum is at

\[ (1-p)^n = \frac{1}{2} \;\Rightarrow\; p = 1 - 2^{-1/n} \approx \frac{\ln 2}{n}. \]

We can assign this as a “rough” upper limit for the probability of a head.
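Putting numbers to the estimate above: if an outcome never appears in \(n\) tosses, the likelihood of its probability \(p\) is \(L(p) = (1-p)^n\), maximal at \(p = 0\), and the half-maximum point gives the rough upper limit:

```python
import math

n = 100
p_half = 1 - 0.5 ** (1 / n)      # solves (1 - p)^n = 1/2
print(p_half)                     # about 0.0069, close to ln(2)/n

# sanity check: the likelihood really drops to one half at this p
assert abs((1 - p_half) ** n - 0.5) < 1e-12
```

So 100 tosses with no tail bound the tail probability to roughly 0.7% at the half-maximum level.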

July 21, 2017

These concepts are very confusing. I made a diagram to help.

\(e_i\) and \(\hat{e}_i\) are two different bases, or coordinates, of a vector. The hat is just an indicator for the difference. Many people use \(e'_i\), but I feel it is confusing when the prime enters the transformation elements.

The Direct transform of a basis is

\[ \hat{e}_i = \sum_j \Lambda_{ij}\, e_j. \]

The Inverse transform is

\[ e_i = \sum_j (\Lambda^{-1})_{ij}\, \hat{e}_j. \]

Notice that the transform of the reciprocal basis can be different from \(\Lambda\).

But in some cases they are equal; for example, if the basis is rotated, the reciprocal basis is also rotated by the same rotation.

A vector

\[ v = \sum_i v^i\, e_i = \sum_i \hat{v}^i\, \hat{e}_i. \]

In the change of basis, the coordinates transform as

\[ \hat{v}^i = \sum_j v^j\, (\Lambda^{-1})_{ji}, \]

so we can see the coordinates always change opposite to the change of basis. When the basis is under the Direct transform, the coordinates are under the Inverse transform; therefore, they are contravariant.
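The contravariance can be illustrated numerically with a plane rotation: rotating the basis by an angle (the Direct transform) forces the coordinates of a fixed vector through the opposite rotation (the Inverse transform). The angle and vector below are arbitrary choices:

```python
import math

t = 0.3
v = (2.0, 1.0)                    # coordinates in the old basis e1, e2

# new basis: the old basis rotated by t (Direct transform)
E1 = (math.cos(t), math.sin(t))   # hat-e1
E2 = (-math.sin(t), math.cos(t))  # hat-e2

# new coordinates: the old coordinates rotated by -t (Inverse transform)
V1 = math.cos(t) * v[0] + math.sin(t) * v[1]
V2 = -math.sin(t) * v[0] + math.cos(t) * v[1]

# the same geometric vector, reassembled from the new basis and coordinates
w = (V1 * E1[0] + V2 * E2[0], V1 * E1[1] + V2 * E2[1])
print(w)  # equals v up to rounding: the vector itself is unchanged
```

Only the combination of opposite transforms leaves the vector invariant, which is the whole point of the covariant/contravariant distinction.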

July 20, 2017

A general quantum propagator takes the exponential form,

\[ U(x) = \exp\!\left( -\frac{i}{\hbar}\, x\, \hat{p} \right), \]

where \(x \hat{p}\) has the unit of \(\hbar\); \(x\) is a general coordinate, and \(\hat{p}\) is the Hermitian operator conjugate to the coordinate \(x\), such that

\[ p = \frac{\partial L}{\partial \dot{x}}, \]

where \(L\) is the system Lagrangian.

I discussed this topic in a previous post. I found that there is not a strong reason that a unitary operator has to be exponential. I mean, it can, but that does not mean it must.

I ask further,

Why does a general quantum propagator take the exponential form?

Consider that \(\hat{p}\) is a GENERATOR of movement, so that

\[ \frac{\partial}{\partial x}\, |\psi\rangle = -\frac{i}{\hbar}\, \hat{p}\, |\psi\rangle. \]

The above relation means that the amount, or magnitude, of the first order change of \(|\psi\rangle\) is \(-\frac{i}{\hbar}\, \hat{p}\, dx\).

Thus, the first order of the propagator is

\[ U(dx) = 1 - \frac{i}{\hbar}\, \hat{p}\, dx. \]

For the higher order changes, using the Taylor series of \(\psi(x)\) around \(x\), we can expect that the complete form of the propagator takes the exponential form

\[ U(x) = \exp\!\left( -\frac{i}{\hbar}\, x\, \hat{p} \right). \]
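The argument can be tested numerically: composing many first-order steps \((1 + \epsilon D)\), where \(D\) is the derivative operator (the generator of translation), converges to the finite translation \(f(x) \to f(x+a)\), which is exactly \(\exp(aD)f\). The sketch below acts on polynomial coefficient lists; all names are illustrative:

```python
def apply_D(c):
    """Derivative operator on polynomial coefficients c[k] of x^k."""
    return [(k + 1) * c[k + 1] for k in range(len(c) - 1)] + [0.0]

def first_order_step(c, eps):
    """(1 + eps * D): the first-order piece of the translation."""
    return [ck + eps * dk for ck, dk in zip(c, apply_D(c))]

a = 1.0
f = [0.0, 0.0, 1.0]            # f(x) = x^2
exact = [a * a, 2 * a, 1.0]    # f(x + a) = x^2 + 2 a x + a^2

# compose N first-order steps of size a/N
N = 100000
c = list(f)
for _ in range(N):
    c = first_order_step(c, a / N)

print(c)  # approaches `exact` = [1.0, 2.0, 1.0] as N grows
```

The residual error shrinks like \(1/N\), which is the numerical face of \(\lim_{N\to\infty}(1 + aD/N)^N = \exp(aD)\).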

This approach to understanding it is kind of reversed, compared to the previous post.

The answer to that question is:

*Since the canonical momentum is the magnitude of the first order change along the canonical position, according to the Taylor series, the canonical translation operator must take an exponential form.*

July 20, 2017

Every time I come across an approximation, I feel kind of reluctant and uncomfortable, especially in theory. For example, the rotation operator is usually introduced using

\[ R(d\theta) = 1 - \frac{i}{\hbar}\, d\theta\, \hat{J}, \]

and any higher order terms, like \(d\theta^2\), will be neglected, claiming that those terms are too small to consider.

What if the real world has infinite digits to store information, such that those higher order terms are not NEGLIGIBLE at all?

We know that there are Planck scales, or Planck units, for almost all physical quantities, like the Planck length, the Planck time, the Planck constant for energy, and so on. Everything smaller than those units is “undefined” or “not known”. Therefore, the higher order terms can safely be neglected, as \(d\theta\) is already “infinitesimal”.

This situation is similar to a computer, where we have finite digits to store a number; everything smaller than the smallest digit will be neglected, and that causes floating point error. Therefore, in high precision calculations, the memory has to be very large.
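The floating point analogy can be made concrete: a double keeps about 16 significant digits, so a "higher order" term below the last stored digit is literally dropped:

```python
import sys

# machine epsilon: the gap between 1.0 and the next float, ~2.2e-16
eps = sys.float_info.epsilon

print(1.0 + eps > 1.0)        # True: the term is big enough to be stored
print(1.0 + eps / 4 == 1.0)   # True: below the last digit, it is "neglected"
```

Anything smaller than half the machine epsilon vanishes on addition to 1.0, much like a term below the Planck scale in the analogy above.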

These connections make me wonder,

Does the existence of the Planck constant indicate that we are living in a computer simulation?