We assume each data point is drawn from a distribution with mean $\mu$ and variance $\sigma^2$:

$Y\sim D(\mu, \sigma^2)$

where the mean can be a function of $X$.

For example, suppose we have data $Y_i$ that depend on an independent variable $X_i$. To learn the relationship between $Y_i$ and $X_i$, we fit a function $y = f(x)$.

After the fit (by least squares), each data point has a residual

$e_i = Y_i - f(X_i)$

This residual should follow the distribution

$e \sim D(0, \sigma_e^2)$

The goodness of fit measures whether the distribution of the residuals agrees with the experimental error of each point, i.e. with $\sigma$.

Thus, we divide each residual by its error and define the chi-squared statistic

$\chi^2 = \sum_i \frac{e_i^2}{\sigma_{e_i}^2}$.
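As a minimal sketch of this computation (the data, errors, and the choice of a straight-line model are all made up for illustration), we can fit a line by weighted least squares and sum the squared, error-normalized residuals:

```python
import numpy as np

# Hypothetical data: y_i measured at x_i, each with a known error sigma_i.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_obs = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
sigma = np.array([0.2, 0.2, 0.3, 0.3, 0.2])

# Weighted least-squares fit of y = a*x + b.
# np.polyfit expects weights w = 1/sigma for Gaussian uncertainties.
coeffs = np.polyfit(x, y_obs, deg=1, w=1.0 / sigma)
y_fit = np.polyval(coeffs, x)

# Residuals e_i and chi-squared = sum of (e_i / sigma_i)^2.
residuals = y_obs - y_fit
chi2 = np.sum((residuals / sigma) ** 2)
print(chi2)
```

If the model and the quoted errors are both right, this value should come out comparable to the number of degrees of freedom of the fit.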

We can see that each normalized residual is distributed as

$e/\sigma_e \sim D(0, 1)$

and, assuming $D$ is the normal distribution, the sum of the squares of these terms follows the chi-squared distribution. Its mean equals the number of degrees of freedom $DF$. Note that the mean and the peak (mode) of the chi-squared distribution are not the same: the peak is at $DF - 2$ (for $DF \geq 2$).
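A quick simulation can check both claims (the degrees of freedom, sample count, and seed below are arbitrary choices): summing `df` squared standard-normal draws many times yields chi-squared samples whose mean sits at `df` while the histogram peaks near `df - 2`.

```python
import numpy as np

rng = np.random.default_rng(0)
df = 5            # degrees of freedom: number of squared normals summed
n_trials = 200_000

# Each row is one trial: df independent standard-normal draws.
z = rng.standard_normal((n_trials, df))
chi2_samples = np.sum(z**2, axis=1)   # one chi-squared sample per trial

print(chi2_samples.mean())            # close to df = 5

# Locate the histogram peak (the mode), which sits near df - 2 = 3.
hist, edges = np.histogram(chi2_samples, bins=40, range=(0.0, 20.0))
i = np.argmax(hist)
peak = 0.5 * (edges[i] + edges[i + 1])
print(peak)
```

The gap between mean and mode reflects the right skew of the chi-squared distribution, which shrinks as $DF$ grows.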

If we don’t know the errors, then the sample variance of the residuals is our best estimator of the true variance. The unbiased sample variance is

$\sigma_s^2 = \frac{1}{DF}\sum_i e_i^2$,

where $DF$ is the number of degrees of freedom. In the case of $f(x) = a x + b$ fitted to $n$ points, $DF = n - 2$, because the fit consumes one degree of freedom for each fitted parameter ($a$ and $b$).
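To illustrate this estimator (the true slope, intercept, noise level, and sample size below are invented for the demonstration), we can generate noisy data with a known $\sigma$, fit a line, and recover $\sigma$ from the residuals using $DF = n - 2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + 1 plus Gaussian noise with (pretend-unknown) sigma.
true_sigma = 0.5
n = 1000
x = np.linspace(0.0, 10.0, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, true_sigma, size=n)

# Ordinary least-squares fit of f(x) = a*x + b (two fitted parameters).
a, b = np.polyfit(x, y, deg=1)
residuals = y - (a * x + b)

# Unbiased variance estimate: sum of squared residuals over DF = n - 2,
# since the fit uses up two degrees of freedom (a and b).
dof = n - 2
sigma_s2 = np.sum(residuals**2) / dof
print(np.sqrt(sigma_s2))   # close to true_sigma = 0.5
```

Dividing by $n$ instead of $n - 2$ would systematically underestimate the variance, because the fitted line has already absorbed part of the scatter.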