Internal Precision Vs. Unformatted Precision 
 Internal Precision Vs. Unformatted Precision

Tracking down a bug at work, I've discovered something interesting.  For
my Solaris 2.5 platform with the Sun fortran 4.2, the compiler computes
maths internally to a different precision than it displays with an
unformatted write.  For example,

A = B - C

where A,B,C are REAL*4, B is an input, and C is the product of other
REAL*4s expressed to the limit of REAL*4 precision.

Over several iterations of computation, write(6,*) a,b,c results in

1.26e-6 1.0     1.0
1.26e-6 1.0     1.0
1.26e-6 1.0     1.0
1.26e-6 1.0     1.0
1.26e-6 1.0     1.0
0.0             1.0     1.0

That is, while B and C appear with no formatting to be 1.0 and 1.0,
they aren't 1.0 to the compiler's internal maths.  And so only when the
internal precision says the difference is zero does the write of A show
zero.

Does anyone here have any more insight into this dilemma?  It makes me
nervous that such glaring inconsistencies could exist.

If any of y'all know how to force common precision, RSVP.  Or any other
bright ideas.  TIA,

--
Dan Stephenson

(Remove the nospam from my email address before replying.)



Wed, 18 Jun 1902 08:00:00 GMT  
 Internal Precision Vs. Unformatted Precision
Quote:
>Tracking down a bug at work, I've discovered something interesting.  For
>my Solaris 2.5 platform with the Sun FORTRAN 4.2, the compiler computes
>maths internally to a different precision than it displays with an
>unformatted write.  For example,

Your examples are formatted list-directed writes, not unformatted writes.
Unformatted data is the same as the internal representation, no bits are
lost going either way.  The same is not true for formatted list-directed
writes, which is what your example write statement uses.

[...]

Quote:
>Does anyone here have any more insight into this dilemma?  It makes me
>nervous that such glaring inconsistencies could exist.

>If any of y'all know how to force common precision, RSVP.  Or any other
>bright ideas.  

I'm not exactly sure what you think is the dilemma, or the inconsistency.
Perhaps you just need to write the data with formats so that you can
control the number of digits displayed?  If you really need bit-by-bit
consistency between the internal and external representation, then you may
need to use unformatted I/O.  If you just need more digits, then use a
format.

$.02 -Ron Shepard



Wed, 18 Jun 1902 08:00:00 GMT  
 Internal Precision Vs. Unformatted Precision


Quote:
>Tracking down a bug at work, I've discovered something interesting.  For
>my Solaris 2.5 platform with the Sun FORTRAN 4.2, the compiler computes
>maths internally to a different precision than it displays with an
>unformatted write.  For example,

>A = B - C

>where A,B,C are REAL*4, B is an input, and C is the product of other
>REAL*4s expressed to the limit of REAL*4 precision.

>Over several iterations of computation, write(6,*) a,b,c results in

>1.26e-6     1.0     1.0
>1.26e-6     1.0     1.0
>1.26e-6     1.0     1.0
>1.26e-6     1.0     1.0
>1.26e-6     1.0     1.0
>0.0         1.0     1.0

>That is, while B and C appear with no formatting to be 1.0 and 1.0,
>they aren't 1.0 to the compiler's internal maths.  And so only when the
>internal precision says the difference is zero does the write of A show
>zero.

>Does anyone here have any more insight into this dilemma?  It makes me
>nervous that such glaring inconsistencies could exist.

>If any of y'all know how to force common precision, RSVP.  Or any other
>bright ideas.  TIA,

>--
>Dan Stephenson

Many Fortran implementations, including Sun's f77, print fewer digits
than are needed to distinguish all internally representable values.
Printing enough digits to distinguish between all internally
representable values requires printing more decimal digits than can be
represented accurately, possibly giving users a false sense of the
accuracy of their results.  For example, the decimal accuracy of IEEE
single-precision is less than 7 digits, but 9 decimal digits are
required to distinguish between all representable single-precision
values.

One way to force common precision is to use formatted output instead of
list-directed output.  At least 9 decimal digits are required to
distinguish between all representable single-precision values.  At least
17 decimal digits are required to distinguish between all representable
double-precision values.  At least 35 decimal digits are required to
distinguish between all representable REAL*16 values.

                                        Sincerely,
                                        Bob Corbett



Wed, 18 Jun 1902 08:00:00 GMT  
 Internal Precision Vs. Unformatted Precision

Quote:

> Tracking down a bug at work, I've discovered something interesting.  For
> my Solaris 2.5 platform with the Sun FORTRAN 4.2, the compiler computes
> maths internally to a different precision than it displays with an
> unformatted write.  For example,

> A = B - C

> where A,B,C are REAL*4, B is an input, and C is the product of other
> REAL*4s expressed to the limit of REAL*4 precision.

> Over several iterations of computation, write(6,*) a,b,c results in

> 1.26e-6 1.0     1.0
> 1.26e-6 1.0     1.0
> 1.26e-6 1.0     1.0
> 1.26e-6 1.0     1.0
> 1.26e-6 1.0     1.0
> 0.0             1.0     1.0

> That is, while B and C appear with no formatting to be 1.0 and 1.0,
> they aren't 1.0 to the compiler's internal maths.  And so only when the
> internal precision says the difference is zero does the write of A show
> zero.

> Does anyone here have any more insight into this dilemma?  It makes me
> nervous that such glaring inconsistencies could exist.

> If any of y'all know how to force common precision, RSVP.  Or any other
> bright ideas.  TIA,

> --
> Dan Stephenson

> (Remove the nospam from my email address before replying.)

This behavior is IEEE compliant.  Sun Fortran uses IEEE arithmetic; your
numbers are rounded when they are converted for printing.
( Try something like
                     WRITE(6,*) 1.0 / 3.0  and
                     WRITE(6,*) 1.0D0 / 3.0D0
and you'll see the difference due to the precision of the calculation
and the conversion for printing )


Wed, 18 Jun 1902 08:00:00 GMT  
 Internal Precision Vs. Unformatted Precision


Quote:
> In article


> >If any of y'all know how to force common precision, RSVP.  Or any other
> >bright ideas.  

> I'm not exactly sure what you think is the dilemma, or the inconsistency.
> Perhaps you just need to write the data with formats so that you can
> control the number of digits displayed?  If you really need bit-by-bit
> consistency between the internal and external representation, then you may
> need to use unformatted I/O.  If you just need more digits, then use a
> format.

> $.02 -Ron Shepard

Hmm.  If the (im)precision occurred in the debugger, THAT at least should
not be.  That's the point of debuggers, after all.

--
Dan Stephenson

(Remove the nospam from my email address before replying.)



Wed, 18 Jun 1902 08:00:00 GMT  
 
 [ 5 posts ] 

 

 