> ----------

> >> The line

> >> idum = 1664525L * idum + 1013904223L;

> >> is just a random number generator using the Linear Congruential Method.

> >> By masking

> >> jflmsk & idum

> >> we get a 23-bit (random) number.

> >> The IEEE representation for 32-bit floating-point numbers has the layout

> >> seeeeeeeefffffffffffffffffffffff (8 e's and 23 f's).

> >> s is the sign bit, the eight e's are the biased exponent, and the

> >> twenty-three f's are the fraction.

> >> The exponent bias is 127. For 0 < e < 255, the number represented is

> >> (-1)**s * 2**(e - 127) * 1.f where ** means exponentiation.

> >> If s is 0 and e is 127, then the number is 1.f, i.e. a number in the range

> >> [1.0, 2.0).

> >> Now in hex 127 is 0x7f. Putting it into the floating-point format we get

> >> 0x3f800000.

> >> So, the line

> >> unsigned long itemp = (jflone | (jflmsk & idum)); //(*)

> >> creates a floating-point number with biased exponent 127 and fraction the

> >> lower 23 bits of idum, i.e. a random floating-point number in

> >> the range [1.0, 2.0).

> >> The reason for using the range [1.0, 2.0) is that the IEEE floating-point

> >> numbers are uniformly distributed in that interval.

> >> The last line subtracts 1 to get a result in the range [0.0, 1.0).

> >> Carsten Hansen

> >> ----------

> >> > Hello all,

> >> > Here is a program I got from the internet.

> >> > static unsigned long jflone = 0x3f800000;

> >> > static unsigned long jflmsk = 0x007fffff;

> >> > idum = 1664525L*idum + 1013904223L;

> >> > itemp = (jflone | (jflmsk & idum)); //(*)

> >> > rd = (*(float *)&itemp)-1.0;

> >> > The author said that the program transforms idum into a value

> >> > between 0.0 and 1.0,

> >> > where jflone and jflmsk are based on the IEEE representation of 32-bit

> >> > floating-point numbers.

> >> > Does anyone understand the program? How should line (*) be understood?

> >> > Thanks in advance.

> > Carsten, your explanation of float is slightly flawed, I think. I spent

> > more than a little time on this. Consider..

> > /*

> > All I know about IEEE floating point, by Joe Wright.

> > This information was obtained by inspection of the operation of

> > DJGPP (GNU C 2.7.2.1) on my x86 PC. I have no IEEE documentation.

> > The Intel architecture (DPMI) is 32-bit and little endian.

> > (64-bit double)

> > bit 63: 1-bit sign (1 == negative)

> > bits 62..52: 11-bit exponent (unsigned)

> > bits 51..0: 53-bit mantissa (high bit not stored)

> > (32-bit float)

> > bit 31: 1-bit sign (1 == negative)

> > bits 30..23: 8-bit exponent (unsigned)

> > bits 22..0: 24-bit mantissa (high bit not stored)

> > Floating point numbers are fractions (less than 1), expressed in the

> > mantissa, multiplied by a power of 2 expressed in the exponent.

> > The mantissa consists of bits to the right of a binary point. The

> > value of the mantissa is always positive.

> > A note about binary fractions: .1 == 1/2, .01 == 1/4, .001 == 1/8,

> > etc. The value 5 would be .101 (5/8) multiplied by the third power

> > of 2 (8), such that 5/8 * 8 == 5.

> > Normalization: This is the final step of FP ops which shifts the

> > (non-zero) mantissa left until its high bit (b23) is 1, decrementing

> > the exponent accordingly. Because we know that this high bit will

> > always be one, we don't have to reserve space for it in the float

> > object. Its place is actually occupied by the low order bit of the

> > exponent.

> > stored  math

> > 255 129 Inf (Mantissa == .10000000 00000000 00000000)

> > 254 128 FLT_MAX (Mantissa == .11111111 11111111 11111111)

> > 253 127

> > 252 126

> > ---------

> > 128 2

> > 127 1

> > 126 0

> > 125 -1

> > 124 -2

> > ---------

> > 3 -123

> > 2 -124

> > 1 -125 FLT_MIN (Mantissa == .10000000 00000000 00000000)

> > 0 -126 Zero (Mantissa == .10000000 00000000 00000000)

> > The 8-bit exponent is stored with a range of 255..0. A bias of 126

> > is subtracted from the stored value to arrive at the mathematical

> > value of the exponent (See table above).

> > stored  math

> > 2047 1025 Inf (Mantissa == .10000 00000000 ... 00000000)

> > 2046 1024 DBL_MAX (Mantissa == .11111 11111111 ... 11111111)

> > 2045 1023

> > 2044 1022

> > ----------

> > 1024 2

> > 1023 1

> > 1022 0

> > 1021 -1

> > 1020 -2

> > ----------

> > 3 -1019

> > 2 -1020

> > 1 -1021 DBL_MIN (Mantissa == .10000 00000000 ... 00000000)

> > 0 -1022 Zero (Mantissa == .10000 00000000 ... 00000000)

> > Double Precision Floating Point (double) is similar but wider, with

> > an 11-bit exponent and a 53-bit mantissa. The exponent range is

> > 2047..0 and the bias is 1022.

> > There are some 'special' representations:

> > Inf: All bits of the exponent are 1 and all mantissa bits 0.

> > NaN: All bits of the exponent are 1 and any mantissa bit 1.

> > NaN generates a floating point exception on my machine.

> > Zero: All bits are 0.

> > Sub-Normal: All exponent bits are 0 and any mantissa bit 1.

> > A sub-normal float converted to double will be normalized.

> > */

> > 0x3f800000

> > 00111111 10000000 00000000 00000000

> > Exp = 127 (1)

> > 00000001

> > Man = .10000000 00000000 00000000

> > 1.00000000e+00

> > 0x007fffff

> > 00000000 01111111 11111111 11111111

> > Exp = 0 (-126)

> > 10000010

> > Man = .11111111 11111111 11111111

> > 1.17549421e-38

> > --

> > "Everything should be made as simple as possible, but not simpler."

> > --- Albert Einstein ---

> ------------

> I think it is you who are wrong. The bias for the exponent of single

> precision floating-point numbers is 127.

> First that is what the Standard says. Yes, I have a copy of the ANSI/IEEE

> Std 754-1985 Standard for Binary Floating-Point Arithmetic.

> On page 9 there is a table specifying the exponent bias. For Single it is

> 127. For Double it is 1023.

> Moreover, my description follows what is in paragraph 3.2.1 about the Single

> format.

> Another reference is Hennessy and Patterson, Computer Architecture: A

> Quantitative Approach, Second Edition, p. A-16.

> Second, take the number 0x3f800000. Here the sign is 0. The biased exponent

> is 127. The fractional part is 0. According to the formula I gave this

> number is

> (-1)**s * 2**(e - 127) * 1.f = 1.0

> I would suggest you enter 0x3f800000 into memory, read it back as a

> single precision floating-point number, and print it out (you can use the

> original poster's program). The answer will be 1.0.

Let me apologize for calling you wrong. I'm sorry. It would seem the