## Sunday, September 20, 2009

### Why does the weighted mean work?

Just an illustration. Consider a population, a fraction of which favors something. We take two samples of sizes (a.k.a. weights) m1 and m2 with support fractions p1 and p2 respectively. Each fraction is a sample statistic obtained by dividing the number of supporters fn by the sample size mn: pn = fn/mn. As the sample size grows, the statistic converges to the true fraction in the population. Since the two polls sampled different numbers of people (m1 != m2), they are combined by weighting rather than by simple averaging. In the combined poll, the level of support is p = f/m = (f1+f2) / (m1+m2) = (p1m1 + p2m2) / (m1+m2). Each product pnmn is just the number of supporters in the corresponding poll.
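A quick numeric sketch of the identity above, with made-up poll numbers: the weighted mean of the two fractions coincides with the support fraction of the pooled sample.

```python
# Hypothetical numbers: two polls of the same population, combined by weighting.
m1, m2 = 400, 100          # sample sizes (the weights)
f1, f2 = 240, 70           # supporters counted in each poll
p1, p2 = f1 / m1, f2 / m2  # per-poll support fractions: 0.6 and 0.7

# Weighted mean of the fractions...
p_weighted = (p1 * m1 + p2 * m2) / (m1 + m2)
# ...equals the support fraction of the pooled sample:
p_pooled = (f1 + f2) / (m1 + m2)

print(p_weighted, p_pooled)  # both 0.62
```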

As the last equality shows, weighting arises naturally when combining results: the weighted sum is nothing more than the number of supporters in the aggregate sample. In fact, any group result aggregates individual ones and thus already involves binary weighting: a group consists of m sampled individuals, each with weight 1 and a vote of either 0 (no support) or 1 (full support). Summing the ones (the supporters) is itself a weighted summation (a vector dot product).
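The binary-weighting view can be sketched directly (hypothetical vote vector): counting supporters is the dot product of the 0/1 vote vector with a vector of unit weights.

```python
# A poll viewed as individual 0/1 votes: summing the supporters is a
# dot product of the vote vector with a vector of unit weights.
votes = [1, 0, 1, 1, 0, 1, 0, 1]  # 1 = supports, 0 = does not
weights = [1] * len(votes)        # each sampled individual has weight 1

supporters = sum(v * w for v, w in zip(votes, weights))  # dot product
p = supporters / sum(weights)     # group support fraction

print(supporters, p)  # 5 supporters, p = 0.625
```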

The same agreement shows up when aggregating two resistances in series: the same current i flows through both, so i = u1/r1 = u2/r2 = (u1+u2)/(r1+r2). The last equality, u1/r1 = (u1+u2)/(r1+r2), follows from the first, u1/r1 = u2/r2. Of course, the fraction (i) can be > 1 in this case :)
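The resistor analogy checks out numerically (arbitrary example values): with a common current through the series pair, the per-resistor ratios and the aggregate ratio are all equal.

```python
# Numeric check of the series-resistance analogy (hypothetical values).
# The same current i flows through both resistors, so u1/r1 == u2/r2,
# and the aggregate ratio (u1+u2)/(r1+r2) matches as well.
r1, r2 = 10.0, 40.0
i = 0.5                  # common current through the series pair
u1, u2 = i * r1, i * r2  # voltage drops across each resistor

print(u1 / r1, u2 / r2, (u1 + u2) / (r1 + r2))  # all equal to i = 0.5
```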