epsilon intrinsic 
 epsilon intrinsic

Hello,
I thought that the epsilon intrinsic should return "machine zero", i.e.
the number defined as
"the largest number eps such that 1+eps > 1" - that's the definition I was
taught at school.

However, this little program proved me wrong:

program machzero
integer,parameter:: rp = kind(1.e0)
real(rp):: acc,acc0,fac,sto
! checking for machine zero - the largest number eps such that 1._rp + eps > 1._rp
acc = 1._rp
acc0 = acc
fac = 0.5_rp
do while (fac < 1._rp)
  acc = acc0*fac
  sto = 1._rp + acc
  if (sto > 1._rp) then
    acc0 = acc
  else
    fac = 1-(1-fac)/2
  end if
end do
print *,'machine zero = ',acc0
print *,'fortran supplied = ',epsilon(1._rp)
end program

Compiled with g95 -ffloat-store
It shows that epsilon(1._rp) is actually twice this number.

Could anyone explain to me why epsilon is defined as twice the machine
zero?

Jaroslav



Tue, 23 Dec 2008 15:43:59 GMT  
 epsilon intrinsic

Quote:

> Hello,
> I thought that the epsilon intrinsic should return "machine zero", i.e.
> the number defined as
> "the largest number eps such that 1+eps > 1" - that's the definition I was
> taught at school.

Well, no. You mean the _smallest_ positive number eps such that
  1. + eps > 1.

But epsilon does not return this. Epsilon(0.0) is instead the smallest
positive difference you can obtain when you subtract 1.0 from a larger
real number. In other words,
 x - 1.0 >= epsilon(0.)  ! for every real x > 1.0

For instance, with the typical 32-bit representation, where 24 bits are used
for the precision, epsilon(0.0) is 2.**(-23), which has an exact
representation as a Fortran real number.
If you add that amount to 1.0, the result increases; if you add a smaller
number, the result depends on how it is rounded when the sum is carried out.
So a good compiler should round sums with numbers between 2.**(-24) and
2.**(-23) up to 1.0 + 2.**(-23), and this explains why adding numbers like
0.9 * 2.**(-23) will also change the result.

In other words, the concept of machine zero is easy to understand for
fixed point numbers (fixed precision), but there are some tricky traps
for floating point numbers.
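
To make that concrete, here is a minimal sketch (my own illustration, assuming
IEEE single precision and round-to-nearest; a compiler that keeps intermediates
in extended precision can change what it prints):

program eps_rounding
  implicit none
  real :: one, x
  one = 1.0
  ! epsilon(1.0) is 2.**(-23) for IEEE single precision
  print *, 'epsilon          = ', epsilon(1.0), 2.0**(-23)
  x = one + 0.9*epsilon(1.0)   ! 0.9*eps > 2.**(-24), so the sum rounds up
  print *, '1 + 0.9*eps > 1 ? ', x > one
  x = one + 0.4*epsilon(1.0)   ! 0.4*eps < 2.**(-24), so the sum rounds back to 1
  print *, '1 + 0.4*eps > 1 ? ', x > one
end program eps_rounding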

Quote:

> However, this little program proved me wrong:

> program machzero
> integer,parameter:: rp = kind(1.e0)
> real(rp):: acc,acc0,fac,sto

Here I miss the logic of the code (what are acc, acc0, fac and sto supposed
to be? Some kind of accumulation and storage?).

Quote:
> Compiled with g95 -ffloat-store
> It shows that epsilon(1._rp) is actually twice this number.

And with the Intel Fortran compiler you get this funny result
 machine zero =   1.1102230E-16
 fortran supplied =   1.1920929E-07

Quote:
> Could anyone explain to me why epsilon is defined as twice the machine
> zero?

Well, that would be easy. But with the intel compiler there is an even
more puzzling result! I just missed the main idea behind your code, so
I'll leave the puzzle to other people.


Tue, 23 Dec 2008 20:22:30 GMT  
 epsilon intrinsic

Ugo wrote:

Quote:

> > Compiled with g95 -ffloat-store
> > It shows that epsilon(1._rp) is actually twice this number.

I have not studied the original code in detail, but it is easy to
get it wrong leading to a factor 0.5 or 2 in the answer :)

Quote:
> And with the Intel Fortran compiler you get this funny result
>  machine zero =   1.1102230E-16
>  fortran supplied =   1.1920929E-07

> > Could anyone explain to me why epsilon is defined as twice the machine
> > zero?

> Well, that would be easy. But with the intel compiler there is an even
> more puzzling result! I just missed the main idea behind your code, so
> I'll leave the puzzle to other people.

The puzzle above is simply explained, but it is an example of a very
annoying aspect of today's computing:

The compiler can optimise the little program that was posted to such an
extent that intermediate results are left in the extended precision
registers that are used for the actual arithmetic.

Therefore, the computations in the loop are done with a higher precision
than you expect from the source code, leading to misleading results.

I guess that with "real" programs, which are far too large for this kind
of optimisation, the issue is much less important, but I keep feeling
uneasy about it. The program above, or variations thereof, is the only
example I know of that really fails because of this phenomenon.

In more realistically sized programs the results usually differ in the
last few decimals from platform to platform. Annoying of course,
but not as fatal as this failure.
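
For what it is worth, here is a minimal sketch of the effect (my own example,
not the posted program; it assumes x87-style extended-precision expression
evaluation, so with SSE arithmetic or -ffloat-store you should see no
difference between the two lines):

program extended_demo
  implicit none
  real :: tiny_term, stored
  tiny_term = 2.0**(-30)     ! far below epsilon(1.0), so 1.0 + tiny_term
                             ! is exactly 1.0 once rounded to single
  stored = 1.0 + tiny_term   ! the assignment forces rounding to single
  print *, 'stored sum      > 1 ? ', stored > 1.0          ! .false.
  print *, 'in-register sum > 1 ? ', 1.0 + tiny_term > 1.0
  ! the last comparison may print T on x87 hardware, because it can be done
  ! on the 80-bit intermediate before it is ever rounded to single
end program extended_demo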

Regards,

Arjen



Tue, 23 Dec 2008 20:48:34 GMT  
 epsilon intrinsic
No point in re-inventing the wheel. The following Fortran code
is available at Netlib and its results match the intrinsic Epsilon.

Skip Knoble

      DOUBLE PRECISION FUNCTION D1MACH ()
      INTEGER IDUM
!-----------------------------------------------------------------------
! This routine computes the unit roundoff of the machine.
! This is defined as the smallest positive machine number
! u such that  1.0 + u .ne. 1.0
!
! Subroutines/functions called by D1MACH.. None
!-----------------------------------------------------------------------
      DOUBLE PRECISION U, COMP
      U = 1.0D0
 10   U = U*0.5D0
      CALL DUMSUM(1.0D0, U, COMP)
      IF (COMP .NE. 1.0D0) GO TO 10
      D1MACH = U*2.0D0
      RETURN
!----------------------- End of Function D1MACH ------------------------
      END
      SUBROUTINE DUMSUM(A,B,C)
!     Routine to force normal storing of A + B, for D1MACH.
      DOUBLE PRECISION A, B, C
      C = A + B
      RETURN
      END





Tue, 23 Dec 2008 20:54:22 GMT  
 epsilon intrinsic

Quote:

> Ugo wrote:

>>> Compiled with g95 -ffloat-store
>>> It shows that epsilon(1._rp) is actually twice this number.

> I have not studied the original code in detail, but it is easy to
> get it wrong leading to a factor 0.5 or 2 in the answer :)

>> And with the Intel Fortran compiler you get this funny result
>>  machine zero =   1.1102230E-16
>>  fortran supplied =   1.1920929E-07

>>> Could anyone explain to me why epsilon is defined as twice the machine
>>> zero?
>> Well, that would be easy. But with the intel compiler there is an even
>> more puzzling result! I just missed the main idea behind your code, so
>> I'll leave the puzzle to other people.

> The puzzle above is simply explained, but it is an example of a very
> annoying aspect of today's computing:

> The compiler can optimise the little program that was posted to such an
> extent that intermediate results are left in the extended precision
> registers that are used for the actual arithmetic.

> Therefore, the computations in the loop are done with a higher precision
> than you expect from the source code, leading to misleading results.

> I guess that with "real" programs, which are far too large for this kind
> of optimisation, the issue is much less important, but I keep feeling
> uneasy about it. The program above, or variations thereof, is the only
> example I know of that really fails because of this phenomenon.

> In more realistically sized programs the results usually differ in the
> last few decimals from platform to platform. Annoying of course,
> but not as fatal as this failure.

Using compilers released during the last 5 years, and hardware platforms
of the last 7 years, you have a choice whether to generate code which
evaluates single precision expressions in double (sometimes extended
double) precision.  The fact that 32-bit PC compilers generally default
to extra precision evaluation does not require you to use that option
(unless you agree with the frequently expressed opinion that no compiler
of the last 5 years should be considered).
I can remember programming practice of 25 years ago, but I don't
consider the practices of 5 years ago "today's computing."  I do have
one of those old favorite Fortran compilers still installed on my
Windows laptop, but I use it mostly as reference material.


Tue, 23 Dec 2008 21:14:22 GMT  
 epsilon intrinsic

Quote:

> No point in re-inventing the wheel. The following Fortran code
> is available at Netlib and its results match the intrinsic Epsilon.

> Skip Knoble

>       DOUBLE PRECISION FUNCTION D1MACH ()
>       INTEGER IDUM
> !-----------------------------------------------------------------------
> ! This routine computes the unit roundoff of the machine.
> ! This is defined as the smallest positive machine number
> ! u such that  1.0 + u .ne. 1.0
> !
> ! Subroutines/functions called by D1MACH.. None
> !-----------------------------------------------------------------------
>       DOUBLE PRECISION U, COMP
>       U = 1.0D0
>  10   U = U*0.5D0
>       CALL DUMSUM(1.0D0, U, COMP)
>       IF (COMP .NE. 1.0D0) GO TO 10
>       D1MACH = U*2.0D0
>       RETURN
> !----------------------- End of Function D1MACH ------------------------
>       END
>       SUBROUTINE DUMSUM(A,B,C)
> !     Routine to force normal storing of A + B, for D1MACH.
>       DOUBLE PRECISION A, B, C
>       C = A + B
>       RETURN
>       END

You would still require attention to compilation options to make this
work.  If you set your compiler to extra precision and in-lining
optimization, possibly with compile-time evaluation of the loop
termination expression, you will still have the "problem" which you
expected to avoid.


Tue, 23 Dec 2008 21:18:32 GMT  
 epsilon intrinsic
Tim:  Thanks. I think the original author (Hindmarsh?)
added the DUMSUM to avoid these difficulties.
I don't know of any compiler or options on an IEEE platform where
this code does not work; that does not mean there are none.

Of course the whole point is to use the F90 Epsilon intrinsic.

Skip




Tue, 23 Dec 2008 21:40:56 GMT  
 epsilon intrinsic

Quote:

> No point in re-inventing the wheel. The following Fortran code
> is available at Netlib and its results match the intrinsic Epsilon.

> Skip Knoble

>       DOUBLE PRECISION FUNCTION D1MACH ()
>       INTEGER IDUM
> !-----------------------------------------------------------------------
> ! This routine computes the unit roundoff of the machine.
> ! This is defined as the smallest positive machine number
> ! u such that  1.0 + u .ne. 1.0
> !

Thanks for the reference.
I wonder whether the above is true. As Ugo wrote, the Fortran epsilon
intrinsic is not what I thought, but "the smallest nonzero difference
between 1 and a larger number" (thanks Ugo).
It is 2^-23 in single precision according to the Fortran standard, which
defines it in terms of the number model for real data. My program computed
approximately 2^-24 - thus, it seems to me that something like
1._rp + epsilon(1._rp)*0.75 == 1._rp + epsilon(1._rp)
should hold for both double and single precision.
This expression produces .true. with both g95 and Intel,
with rp = kind(1.e0) or kind(1.d0)
(with ifort, you must use -mp to maintain precision).
Thus, strictly speaking, the comment in D1MACH is incorrect.
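
A minimal self-contained check of this (a sketch; whether it prints T can
still depend on compile options that allow extended-precision evaluation, as
discussed above):

program check_eps
  implicit none
  integer, parameter :: sp = kind(1.e0), dp = kind(1.d0)
  ! 0.75*epsilon is more than half the spacing at 1, so the sum rounds up
  print *, 1._sp + 0.75_sp*epsilon(1._sp) == 1._sp + epsilon(1._sp)
  print *, 1._dp + 0.75_dp*epsilon(1._dp) == 1._dp + epsilon(1._dp)
  ! the standard's definition: epsilon(x) = radix(x)**(1-digits(x))
  print *, epsilon(1._sp) == real(radix(1._sp), sp)**(1 - digits(1._sp))
end program check_eps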

As D1MACH comes from Netlib and, I think, is used by both BLAS and LAPACK,
I'm not sure whether I'm committing a sort of heresy...

Jaroslav



Tue, 23 Dec 2008 22:53:30 GMT  
 epsilon intrinsic
Jaroslav,  I respectfully disagree - I think:-)

I ran the code using the following ifort (v9.1.032)
on an opteron:

ifort testdumach.f90 -O3 -xW -static

The test source code I used may be found at:
http://ftp.cac.psu.edu/pub/ger/fortran/hdk/testdumach.f90

The output from running using above compile is:

  Dumach=  2.220446049250313E-016
 Epsilon(x)=  2.220446049250313E-016
 Dumach() = Epsilon(x)

Skip


-|(with ifort, you must use -mp to maintain precision)



Tue, 23 Dec 2008 23:20:14 GMT  
 epsilon intrinsic

Quote:

> Jaroslav,  I respectfully disagree - I think:-)

> I ran the code using the following ifort (v9.1.032)
> on an opteron:

> ifort testdumach.f90 -O3 -xW -static

With -xW, the Intel compiler uses SSE2 instructions rather than X87,
and you don't get the inconsistencies associated with X87.  In 9.1, -mp
is "deprecated" and there is a new -fp_model switch to give you finer
control over the floating point model with potentially less performance
impact than -mp had.

Nevertheless, I agree with the recommendations to use the standard
intrinsics such as EPSILON, RRSPACING, etc. rather than attempting to
compute these values at run-time, as there could still be optimization
and instruction ordering effects that change the result.
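
For reference, a small sketch (my own) of querying the model directly rather
than computing anything at run time:

program model_constants
  implicit none
  print *, 'epsilon   = ', epsilon(1.d0)
  print *, 'spacing@1 = ', spacing(1.d0)    ! absolute spacing of model numbers at 1
  print *, 'rrspacing = ', rrspacing(1.d0)  ! reciprocal of the relative spacing
  print *, 'tiny/huge = ', tiny(1.d0), huge(1.d0)
  print *, 'digits    = ', digits(1.d0), '   radix = ', radix(1.d0)
end program model_constants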

Steve



Tue, 23 Dec 2008 23:53:09 GMT  
 epsilon intrinsic
Thanks, Steve. It works with or without the -xW option - on the Opteron
or on the Dell Pentium 4 clusters.

Skip





Wed, 24 Dec 2008 00:05:48 GMT  
 epsilon intrinsic

Quote:

> Jaroslav,  I respectfully disagree - I think:-)

Disagree with what? If D1MACH produces the same result as epsilon
(your little program shows that),
then the description in D1MACH is incorrect in saying that it is the
"smallest positive machine number u such that  1.0 + u .ne. 1.0"
(my little program shows that, as does the simple expression in my previous
post), because u = 0.75*epsilon satisfies this condition.

Quote:
> -|(with ifort, you must use -mp to maintain precision)

or do you disagree with this? OK, I should have used "may" instead of "must".

Perhaps D1MACH should be corrected...
Jaroslav



Wed, 24 Dec 2008 02:33:34 GMT  
 epsilon intrinsic
 [question about the definition of epsilon()]

Others have adequately answered the question you asked. Let me just
throw in a side comment that you didn't ask about. The complications in
making sure that your compiler doesn't "optimize" the epsilon
computation in a way that gets drastically wrong results (such as off by
8 orders of magnitude or so, much less just a factor of 2) are an
excellent reason why you should not try to compute it in those kinds of
ways.

That's part of why f90 has the epsilon intrinsic. It has no funny
caveats to worry about, other than of course the fundamental one that
you have to be using f90 or later. But I don't advise writing new code
in f77 anyway...and it is getting darned hard to find compilers that are
still supported and do only f77. For a while g77 was a major one, but,
well... I'm not sure of the exact status of g77 support any more.

For old code.... if I'm using old code from someone else and it still
works, I'm likely to leave it alone, following the philosophy of not
fixing things that aren't broken. But if it actually does break and
needs fixing, I'm likely to fix it by substituting the f90 intrinsic,
since I'll invariably be using the code with f90 compilers. For example,
I'm sure I have a copy of D1MACH converted to use f90 intrinsics
somewhere around here (and I've seen other people with the same).
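
Something like this, perhaps (a sketch only - the name and result type are
kept for drop-in use, and it assumes the callers really want the value that
EPSILON returns, which is what the Netlib routine above ends up computing):

      DOUBLE PRECISION FUNCTION D1MACH ()
!     F90 replacement: just return the processor's epsilon for doubles.
      D1MACH = EPSILON(1.0D0)
      RETURN
      END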

--
Richard Maine                     | Good judgment comes from experience;
email: my first.last at org.domain| experience comes from bad judgment.
org: nasa, domain: gov            |       -- Mark Twain



Wed, 24 Dec 2008 02:43:11 GMT  
 epsilon intrinsic


Quote:

> That's part of why f90 has the epsilon intrinsic. It has no funny
> caveats to worry about, other than of course the fundamental one that
> you have to be using f90 or later. But I don't advise writing new code
> in f77 anyway...and it is getting darned hard to find compilers that are
> still supported and do only f77. For a while g77 was a major one, but,
> well... I'm not sure of the exact status of g77 support any more.

g77 is available in gcc-3.4.6 (and older).  If a bug is found in g77,
I doubt that it will ever get fixed because (I believe) 3.4.6 is the
last release of the 3.4.x branch.

gfortran replaced g77 in gcc-4.x where I recommend using at least
gcc-4.1.1 if not the mainline for gfortran.

--
Steve
http://troutmask.apl.washington.edu/~kargl/



Wed, 24 Dec 2008 02:59:18 GMT  
 epsilon intrinsic

Quote:


>> Jaroslav,  I respectfully disagree - I think:-)

> Disagree with what? If D1MACH produces the same result as epsilon
> (your little program shows that)
> then the description in D1MACH is incorrect - saying that it is
> "smallest positive machine number u such that  1.0 + u .ne. 1.0 "
> (my little program shows that, or the simple expression in my previous
> post)
> because u = 0.75*epsilon satisifies this condition.

1.0d0 + 0.75*epsilon(1.0d0) is identically equal to
1.d0+epsilon(1.0d0) if the rounding mode is round to nearest.

What D1MACH computes is actually better described as:

NEAREST(1.0d0,1.0d0)-1.0d0

That is, it's the magnitude of the difference between 1.0d0
and the smallest representable number greater than 1.0d0.
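
A quick check of that description (a sketch; NEAREST itself is exact, so
there are no rounding-mode caveats here):

program nearest_check
  implicit none
  double precision :: up, down
  up   = nearest(1.0d0,  1.0d0) - 1.0d0   ! spacing to the next number above 1
  down = 1.0d0 - nearest(1.0d0, -1.0d0)   ! spacing to the next number below 1
  print *, up,   up == epsilon(1.0d0)     ! T: this is what D1MACH returns
  print *, down, down == 0.5d0*epsilon(1.0d0)
end program nearest_check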

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies."   --  C. A. R. Hoare



Wed, 24 Dec 2008 04:53:33 GMT  
 