I had misunderstood this for a while.

First of all, the variance of a distribution is not, in general, equal to the square of the sigma from a Gaussian fit; the two agree only for a Normal distribution.

In an observation, there is usually an intrinsic spread, for example, from a hole size or a physical window, and there is also a resolution from the detection. As a result, we observe the combined effect of the intrinsic spread and the detector resolution. In a data analysis, one of the goals is to find out the resolution.

Let the random variable for the intrinsic spread be

X \sim D(\mu, \sigma^2)

and let the resolution of the detection be another random variable with a Normal distribution,

Y \sim N(0, s^2)

Then, what we observe is

Z = X + Y, \qquad Var(Z) = \sigma^2 + s^2 ,

according to the algebra of distributions: the distribution of Z is the convolution of the two distributions, and the variances of independent random variables add.
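As a quick numerical check (a minimal sketch in numpy; the uniform range and the Gaussian sigma are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.uniform(0.0, 100.0, n)   # intrinsic spread, Var(X) = 100^2 / 12 ~ 833
y = rng.normal(0.0, 20.0, n)     # detector resolution, Var(Y) = 20^2 = 400

print(np.var(x + y), np.var(x) + np.var(y))   # both ~ 1233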


If \sigma >> s and the intrinsic distribution is NOT a Gaussian, say, a uniform distribution, then the observed distribution is NOT a Gaussian. One way to get the resolution is to do a de-convolution. But since we are not interested in the intrinsic distribution, only in the resolution, we can simply use the variance of the intrinsic distribution and the variance of the observed distribution to extract the resolution.

When \sigma <= s, the observed distribution is mostly Gaussian-like, and we can approximate the observed variance by the squared sigma of a Gaussian fit.
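A sketch of this approximation (assuming scipy is available; the bin count, sample size, and distribution parameters are arbitrary choices): fit a Gaussian to the histogram of a narrow uniform convolved with a wide Gaussian, and compare the fitted sigma squared with the sample variance.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
n = 100_000

# narrow uniform (sigma ~ 2.9) convolved with a wide Gaussian (s = 20)
x = rng.uniform(0.0, 10.0, n) + rng.normal(0.0, 20.0, n)

counts, edges = np.histogram(x, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(t, a, mu, sig):
    return a * np.exp(-0.5 * ((t - mu) / sig) ** 2)

popt, _ = curve_fit(gauss, centers, counts, p0=(counts.max(), x.mean(), x.std()))
print(popt[2] ** 2, np.var(x))   # fitted sigma^2 ~ observed variance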


For example, consider deducing the time resolution using the time-difference method with the help of tracking, where a narrow width of the position is gated.

The narrow width of the position is equivalent to a uniform distribution of the time-difference. Thus, the time resolution is deduced from the observed variance and the variance of this uniform distribution. For a position width \Delta X, the width of the time difference is

\Delta t = \Delta X / (\beta c),

Thus,

Var(resol.) = Var(obser.) - Var(\Delta t)

The variance of a uniform distribution is 1/12 of the square of the width:

Var(\Delta t) = (\Delta t)^2/ 12 \neq (\Delta t)^2
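For instance, taking illustrative values \Delta X = 6 cm and \beta \approx 1, \Delta t = 0.06 / (3 \times 10^8) s = 200 ps, so Var(\Delta t) = (200 ps)^2 / 12 \approx (58 ps)^2.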

The effect of the factor 1/12 is very serious when the resolution is comparable to the width of the time difference, but it can be neglected when Var(resol.) >> Var(\Delta t).

If this 1/12 factor is missed, too much is subtracted, and the deduced resolution will be smaller than the actual resolution.
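As an illustration with arbitrary numbers: suppose the true resolution is 1 ns and the gated time-difference width is \Delta t = 1 ns, so Var(obser.) = 1 + 1/12 \approx 1.083 ns^2. With the 1/12 factor, the deduced resolution is \sqrt{1.083 - 1/12} = 1 ns, as it should be; without it, \sqrt{1.083 - 1} \approx 0.29 ns, far smaller than the actual resolution.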

Here is an example. I generated 10,000 data points; the intrinsic distribution is a uniform distribution from 0 to 100, and the resolution is a Gaussian with a sigma of 20. The resulting distribution is the convolution of the two.

[Figure: histogram of the simulated distribution, a uniform(0, 100) convolved with a Gaussian of sigma 20.]
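A minimal sketch of this toy study (in numpy; the variable names and the random seed are my own choices), extracting the resolution from the observed variance with and without the 1/12 factor:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

width = 100.0        # intrinsic uniform width, 0 to 100
sigma_true = 20.0    # true detector resolution

observed = rng.uniform(0.0, width, n) + rng.normal(0.0, sigma_true, n)
var_obs = np.var(observed)

# correct: subtract the variance of the uniform distribution, width^2 / 12
sigma_correct = np.sqrt(var_obs - width**2 / 12.0)

# wrong: subtract width^2 directly, missing the 1/12 factor
wrong_var = var_obs - width**2
sigma_wrong = np.sqrt(wrong_var) if wrong_var > 0 else float('nan')

print(sigma_correct)   # ~ 20, the true resolution
print(sigma_wrong)     # negative variance here, so the extraction is nonsensical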


When finding the resolution using a projection of a 2-D distribution, we should be careful about the projected 1-D distribution and its variance. For example, when a uniform 2-D disk is projected onto 1-D, the 1-D distribution is not uniform but a half circle,

f(x) \propto \sqrt{r^2 - x^2},

and the variance is (0.5 r)^2 = r^2 / 4.
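A quick Monte Carlo check of this (a sketch; the disk radius and sample size are arbitrary choices): sample points uniformly inside a disk, project onto x, and compare the sample variance with r^2/4.

import numpy as np

rng = np.random.default_rng(2)
r = 5.0
n = 200_000

# sample uniformly inside a disk of radius r by rejection from a square
pts = rng.uniform(-r, r, size=(2 * n, 2))
inside = pts[np.sum(pts**2, axis=1) <= r**2]

x = inside[:, 0]                       # projection onto the x axis
print(np.var(x), (0.5 * r)**2)         # both ~ 6.25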


The formula to calculate the variance is

Var(X) = \int (x - \mu)^2 f(x) dx / \int f(x) dx ,

where f(x) is the pdf (not necessarily normalized) and \mu is the mean; when the distribution is centred at zero, this reduces to \int x^2 f(x) dx / \int f(x) dx.
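For the half-circle projection above, which is centred at zero, this gives the variance quoted earlier:

Var(X) = \int_{-r}^{r} x^2 \sqrt{r^2 - x^2}\, dx \Big/ \int_{-r}^{r} \sqrt{r^2 - x^2}\, dx = \frac{\pi r^4 / 8}{\pi r^2 / 2} = \frac{r^2}{4} = (0.5 r)^2 .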


To do: estimate the uncertainty of the deduced resolution as a function of the number of data points. For a small number of data points, the uncertainty should be large, but how large?
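One way to get a feeling for this (a sketch only, re-running the toy simulation above many times; the trial counts and sample sizes are arbitrary choices, not a derivation) is to look at the spread of the deduced resolution over repeated simulations:

import numpy as np

rng = np.random.default_rng(3)
width, sigma_true = 100.0, 20.0

def deduced_resolution(n):
    # deduce the resolution from n simulated events, including the 1/12 factor
    obs = rng.uniform(0.0, width, n) + rng.normal(0.0, sigma_true, n)
    return np.sqrt(np.var(obs) - width**2 / 12.0)

for n in (1_000, 10_000, 100_000):
    trials = np.array([deduced_resolution(n) for _ in range(500)])
    print(n, round(trials.mean(), 2), round(trials.std(), 2))  # spread shrinks roughly as 1/sqrt(n)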

 
