
Run time error (#define) challenge !!
Quote:
>Please email if you can
>-----------------------
>Hi, I have declared this
> #define x 0x60000L
> #define y 0x4000
> #define z ((y * 2) + x)
>the value for z should be 0x68000 but the compiler generated 0x58000 !! why
The answer rests in your fine language reference manual.
First of all, after preprocessing, your expression looks like this:
((0x4000 * 2) + 0x60000L)
The constant 0x4000 fits into the range of an int, so it acquires that type.
However, multiplying 0x4000 by 2 (2 itself also having type int) produces the
value 0x8000, or 32768, which does not necessarily fit into an int! The type
int is only required to cover the range -32767 to 32767. Many C
implementations provide a larger range, but implementations with 16-bit ints
are still around.
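If you want to see what range your implementation actually provides, the
macros in <limits.h> will tell you. A minimal sketch (the values printed
depend entirely on your compiler):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* A 16-bit-int implementation typically prints -32768 and 32767. */
        printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
        return 0;
    }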
What happens when you overflow the range of an int is undefined behavior;
anything may happen. Some expected result may be produced, or some unexpected
result may be produced, or your program (or entire machine) may crash.
It would appear that your machine uses two's complement arithmetic, and
that it treats overflows by simply ``wrapping around'', so that the out-of-range
result 32768 (one higher than the highest int value, 32767) actually becomes
-32768 (the lowest int value), or -0x8000. Adding -0x8000 to the long int value
0x60000 produces 0x58000, which is your mysterious, incorrect answer.
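Spelled out step by step, here is a sketch of that one possible outcome.
(Since the overflow itself is undefined behavior, nothing guarantees this;
the intermediate values assume a 16-bit, two's complement int.)

    #include <stdio.h>

    int main(void)
    {
        int  t = 0x4000 * 2;     /* overflows a 16-bit int; here it wraps
                                    around to -0x8000, i.e. -32768        */
        long z = t + 0x60000L;   /* 0x60000 + (-0x8000) == 0x58000        */
        printf("z = 0x%lX\n", (unsigned long) z);
        return 0;
    }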
To produce the expected answer, you must ensure that the multiplication by
2 takes place using long integer arithmetic. To do this, make at least one
of the two operands long, by adding the suffix with which you appear to be
already familiar:
(0x4000L * 2) + 0x60000L
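Applied to your original macros, that might look like the following sketch;
putting the suffix on y in the #define itself keeps every use of it long:

    #include <stdio.h>

    #define x 0x60000L
    #define y 0x4000L              /* suffix forces long arithmetic */
    #define z ((y * 2) + x)

    int main(void)
    {
        printf("z = 0x%lX\n", (unsigned long) z);   /* prints z = 0x68000 */
        return 0;
    }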