Since not everybody seems to understand the idea, I elaborate on the thesis here and try to prove it. A floating-point (FP) number is encoded by two parts:
v = m * 2^e
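As a minimal sketch of this decomposition (Python is used here purely for illustration), math.frexp splits any finite double into exactly such a pair; note that it returns a fractional mantissa in [0.5, 1) rather than the integer mantissa used below, which merely shifts e by a constant.

```python
import math

# Minimal sketch: every finite nonzero double is m * 2**e.
# math.frexp returns m in [0.5, 1) and an integer e; the integer-mantissa
# convention used in the text is the same idea with e shifted by a constant.
for v in (0.1, 1.0, 3.14159, 1e30):
    m, e = math.frexp(v)
    print(f"{v:>12g} = {m:.17g} * 2**{e}")
    assert v == math.ldexp(m, e)  # exact reconstruction
```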
Any FP value is therefore a two-argument function. Plot the surface it defines in 3D space. Taking only positive m in the (m, e) domain, the surface looks like a set of straight lines: for every e1 in the range Emin to Emax, the slice is a line connecting the point (m=0, v=0) to (m=Mmax, v=Mmax * 2^e1).
What I am going to show is that "slicing" this plot with the planes e = const amounts to choosing a scale. A scale is a number of values equally distributed between some min and max. For every e1 we will have a scale:
0 * 2^e1 ≤ v(e1) ≤ Mmax * 2^e1, i.e.
0 ≤ v(e1) ≤ Mmax * 2^e1
Both m and e are discrete: they have a min, a max and some discretization step. We cannot choose an arbitrary real mantissa for a given scale e and value v: m = v / 2^e. The mantissa discretization thus limits the precision. Since the mantissa is rounded to an integer, the error depends exponentially on the scale e:
dv = 0.5 * 2^e = 2^(e-1).
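The same growth can be observed in an actual double-precision system: the gap between a value and the next representable one (one mantissa step) doubles with each increment of the binary exponent. A small sketch, relying on math.ulp from Python 3.9+:

```python
import math

# Sketch: the gap between a double and the next representable double
# (one mantissa step) doubles each time the binary exponent grows by one,
# so the worst-case rounding error 0.5 * 2**e grows with the magnitude.
for v in (1.0, 2.0, 1024.0, 1e6, 1e12):
    print(f"value {v:>10g}: gap to the next double = {math.ulp(v):.3e}")
```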
Let's take a billion-step discretization for the mantissa (Mmax = 10^9) and e ranging between -100 and 100. Choosing the finest scale, -100 (of, say, meters), our value varies VERY smoothly within 0 ≤ v(-100) ≤ 10^9 * 2^-100 ≈ 2^-70 m. So, although the precision is far beyond what is needed to resolve the Planck length (~10^-35 m), it is useless here: nothing of interest fits into this scale. Moving from this sub-sub-quantum level to the mechanical (1 meter) scale, the precision drops about 2^70 times (from 1 = 10^9 * 2^e it follows that 2^0 ≈ 2^30 * 2^e, so e ≈ -30). A billion points in one meter correspond to a 1-nanometer error, or about 25 Mdpi -- more than satisfactory for describing objects on a 1-meter sheet.

Choosing the largest scale, 100, we will be able to place our points anywhere in the superuniverse, but the precision will be very poor: one step is 2^100 m ≈ 1.3*10^27 km. Compare this with the Universe size of 15*10^24 km and you will see that it is far too coarse to hit a universe of choice -- one step of our mantissa leaps over some 85 Universes. Single-precision FP values vary the exponent from -126 to 127, doubles from -1022 to 1023. What will you do at those scales?
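The figures above are easy to reproduce. The following lines use only the toy parameters of this example (Mmax = 10^9 and the quoted 15*10^24 km Universe size), so treat them as illustrative arithmetic rather than physical constants:

```python
M_MAX = 10**9        # a billion mantissa steps (toy figure from the text)
UNIVERSE_KM = 15e24  # Universe size quoted in the text, in kilometers

print("range at scale e=-100:", M_MAX * 2.0**-100, "m")   # ~8e-22 m
print("step  at scale e=-30: ", 2.0**-30, "m")            # ~1 nanometer

step_km = 2.0**100 / 1000.0                                # one step at e=+100
print("step  at scale e=+100:", step_km, "km =",
      round(step_km / UNIVERSE_KM), "Universes")
```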
So, what defines the precision? -- The distance from the origin (0). To place a point far from the origin, we have to push the mantissa to its maximum and increase the scale e. The closer a point is to the origin, the more precisely it can be placed when using FP coordinates. This exponential system of measurement is good for quantities that have such an absolute point of reference. You measure frequencies of order 1 Hz on a 10 Hz scale and choose another scale to measure a MHz signal. Basic electronic components, like resistors, are supplied in a fairly fixed set of values -- 24 values per 10x scale, with 10 scales in total. As few as 24*10 = 240 different values satisfactorily cover the huge range from fractions of an ohm to giga-ohms. There is no need to specify a 1 Gohm resistor to within 1 ohm, while an error of 5% of a Gohm is obviously not allowed when choosing a 1 ohm resistor. This works because such magnitudes have their natural origin (0). Space is quite different.
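The resistor scale can be sketched as a plain geometric series: 24 steps per decade give a constant relative step of roughly 10%, and 240 values span ten decades. (The real E24 series rounds these to preferred numbers; the sketch below only shows the underlying idea.)

```python
# Sketch of a logarithmic "24 values per decade" scale.  The relative step
# is constant, so a couple of hundred values span fractions of an ohm up to
# giga-ohms.  (The actual E24 series uses rounded preferred values.)
STEPS_PER_DECADE = 24
DECADES = 10
ratio = 10 ** (1 / STEPS_PER_DECADE)     # ~1.10, i.e. ~10% per step

values = [0.1 * ratio**k for k in range(STEPS_PER_DECADE * DECADES)]
print(f"{len(values)} values from {values[0]:.2g} to {values[-1]:.3g} ohm")
print(f"constant relative step: {ratio - 1:.1%}")
```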
So far, scientists have found no distinguished location, no point of reference that is preferred over the others. Concluding:
- It is doubtful that there is a point in your document or on the screen that requires absolute precision at the cost of relaxed precision requirements for the distant parts.
- Furthermore, the fact that, as opposed to fixed precision, the error (deviation) of point placement grows with the distance from the origin means that the opposite edges of your rectangle will not be exactly parallel -- floating-point coordinates preclude us from drawing a perfectly regular square!
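To put a number on that last claim, here is a small illustration of my own (the helper names are arbitrary): it builds a rotated unit square from double-precision coordinates at increasing distances from the origin and measures how unequal its edges become.

```python
import math

def square_corners(cx, cy, side=1.0, angle=math.radians(30)):
    """Corners of a square of the given side, rotated by `angle` and
    centered at (cx, cy), each coordinate rounded to the nearest double."""
    h = side / 2
    local = [(-h, -h), (h, -h), (h, h), (-h, h)]
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in local]

def edge_lengths(corners):
    # Distances between consecutive corners, wrapping around the square.
    return [math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]

for cx in (0.0, 1e9, 1e15):
    lengths = edge_lengths(square_corners(cx, cx))
    print(f"center at {cx:10.0e}: edge length spread = "
          f"{max(lengths) - min(lengths):.3e}")
```

Near the origin the four edges agree to machine precision; far away the spread grows with the coordinate magnitude, in line with dv = 2^(e-1) above.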