Quote:

>> You're not losing precision - you're misunderstanding the precision

>> of floating point math.

>> The error you have indicated here is approximately 2^-53, which is

>> the normalized precision of double. This is a natural consequence

>> of the fact that double has a 53-bit mantissa.

>> Double::Epsilon is the smallest value that Double can represent

>> (4.94065645841247e-324). This is not the same as the precision that

>> a calculation performed on double is guaranteed to provide.

>> -cd

> Alright, I am ok with this. Now, shouldn't the (==) and (!=) operator

> take that into account? If I am building a new type and I know that I

> cannot precisely represent a number beyond a certain number of bits,

> I should not say that 2 numbers are not equal if I cannot guarantee

> that they are different?

The problem is that floating point is an inexact representation, unlike

integer, so saying that two floating point numbers are "equal" is rarely

useful. Instead, what you typically need to know is whether two floating

point numbers are equivalent, within the precision requirements of your

problem domain. Of course, only you know what those precision requirements

are - there's no way a library could provide one. Equality, as implemented

by the hardware (the System.Double class is just a facade - all the real

work is implemented directly in silicon) simply compares the bit-patterns of

two numbers. If all the bits are the same, the numbers are "equal".

(Before someone pipes up - yes, there are special cases where that's not

true - NaN == NaN is never evaluated as true even if the two NaNs have

identical bit patterns).

Quote:

> So, in my case, if I have a variable a = 5E-6 and I compare it to z,

> .NET will tell me that they are different even though they are

> different in the 54th bit? I have written so many lines of code

> assuming (you know what they say about this word) that if I compare 2

> double precision numbers, this stuff is taken care of by the

> libraries. Do I now have to go through all of my code to see where I

> compare 2 double precision numbers and change it to something like:

> If (Math::Abs(a-z) < 1E-53)

> That sounds absurd, right?

It may sound absurd, but that's exactly what you have to do, if "equality"

is important to your program. Because floating point is an inexact

representation, you rarely want to compare for equality - you want to

compare for equivalence within the precision requirements of your program.

Quote:

> Also, why should 5*1E-6 cause an error. If I was writing a library

> and I saw that the numbers that I need to represent have integral

> mantissas, I will not do any special conversions. I would just

> multiply the mantissas and shift the exponent accordingly.... Am I

> missing something or do I sound really stupid here?

Floating point formats (at least on modern CPUs) are always based on binary

representations. No negative power of 10 can be accurately represented in

binary, so anytime you convert a decimal number to a floating point format,

the result is inexact. As Larry already replied, only numbers that can be

represented as a rational number with a power-of-two denominator can be

represented accurately in floating point.

You might want to find one of the classic works on computer programming,

such as Donald Knuth's "The Art of Computer Programming" (volume 2, chapter 4)

and read up on the fundamentals and theory of floating point arithmetic.

-cd