version 2004.2 (Modified)
Preliminary Note
If you do not deal with cross-platform development, you can skip this section.
On computers, floating point arithmetic is more a technology than a mathematical science. For example, you learned in school that one-third (1/3) can be written with an infinite number of threes after the decimal point. A computer, on the other hand, does not know this and must calculate the expression. Likewise, you know conceptually that three times one-third equals one; a computer calculates the expression to get the result. Depending on the type of computer you use, one-third is calculated with a limited number of significant digits. This number of digits is called the "precision" of the machine.
On older 68K-based Macintosh computers, the precision is 19; this means that 1/3 is calculated with 19 significant digits. On Windows and Power Macintosh, the precision is 15, so 1/3 is calculated with 15 significant digits. If you display the expression 1/3 in the Debugger window of 4D, you get 0.3333333333333333333 on a 68K-based Macintosh and something like 0.3333333333333333148 on Windows or Power Macintosh. The last three digits differ because the precision on Windows and Power Macintosh is lower than on the 68K-based Macintosh. Yet, if you display the expression (1/3)*3, the result is 1 on both machines.
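You can observe this behavior directly from a 4D method; the following is a minimal sketch using the standard ALERT and String commands (the exact digits shown depend on your platform and on the default conversion format):

	$third:=1/3  ` 4D stores a finite binary approximation of 1/3
	ALERT(String($third))  ` a finite run of 3s, not an infinite expansion
	ALERT(String($third*3))  ` displays 1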
If your floating point computations deal with the number of square feet in your backyard, you will say "Fine with me!" because you do not care about the digits after the decimal point. On the other hand, if you are filling out an IRS form, you may, in certain circumstances, care about the accuracy of your computer. However, remember that 19 or 15 significant digits are quite sufficient even if you manage billions of dollars of revenue.
Why does the value 1/3 seem different on 68K-based Macintosh and on Windows or Power Macintosh?
On 68K-based Macintosh, the operating system stores real numbers in 10 bytes (80 bits), while on Windows and Power Macintosh, it stores them in 8 bytes (64 bits). The 10-byte extended format carries a 64-bit significand, good for about 19 significant decimal digits (64 × log10(2) ≈ 19.3); the 8-byte IEEE double carries a 53-bit significand, good for about 15 (53 × log10(2) ≈ 15.95). This is why real numbers have up to 19 significant digits on 68K-based Macintosh and up to 15 on Windows and Power Macintosh.
So, why does the expression (1/3)*3 return 1 on both machines?
A computer can only make approximate computations. Therefore, when comparing or computing numbers, a computer does not treat real numbers as mathematical objects but as approximate values. In our example, 0.3333... multiplied by 3 gives 0.9999...; the difference between 0.9999... and 1 is so small that the machine considers the result equal to 1, and consequently returns 1. For details on this subject, see the discussion of the command SET REAL COMPARISON LEVEL.
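The sketch below makes the comparison explicit; the epsilon passed to SET REAL COMPARISON LEVEL is its documented default of 10^-6, so the call is shown only to make the setting visible:

	` Two reals compare as equal when they differ by less than
	` the real comparison level (10^-6 by default)
	SET REAL COMPARISON LEVEL(0.000001)
	If(((1/3)*3)=1)  ` True: the rounding error is below that level
		ALERT("(1/3)*3 compares equal to 1")
	End if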
Real numbers thus have a dual behavior, and we must distinguish between the following (a brief sketch follows the list):
How they are calculated and compared
How they are displayed on the screen or printer
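Here is a brief sketch of that distinction, using an arbitrary example display format:

	$value:=1/3  ` internally, $value carries the full machine precision
	ALERT(String($value;"##0.000"))  ` the format controls only the display: 0.333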
Originally, 4D handled real numbers using the standard 10-byte data type provided by the operating system of the 68K-based Macintosh, so real values stored in the data file on disk are saved in this format. To maintain compatibility between the 68K, Power Macintosh, and Windows versions of 4D, the 4D data files still hold real values in the 10-byte data type. Because floating point arithmetic is performed on Windows and Power Macintosh using the 8-byte format, 4D converts the values from 10 bytes to 8 bytes, and vice versa. Therefore, if you load a record containing real values saved on a 68K-based Macintosh onto Windows or Power Macintosh, some precision can be lost (from 19 down to 15 significant digits). If you load a record containing real values saved on Windows or Power Macintosh onto a 68K-based Macintosh, there is no loss of precision. Basically, if you use a database on both 68K and Power Macintosh or Windows, count on floating point arithmetic with 15 significant digits, not 19.
Using the SET DATABASE PARAMETER command, you can set the number of trailing digits to be skipped (4 by default) when simplifying the display of real numbers.
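A call along these lines adjusts that setting; note that the selector name used here is an assumption, so check the SET DATABASE PARAMETER documentation for the exact constant in your version:

	` "Real display precision" is assumed to be the selector name;
	` the value is the number of trailing digits skipped on display
	SET DATABASE PARAMETER(Real display precision;4)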