APL Integer Arithmetic Question 
 APL Integer Arithmetic Question

Recently I began learning and using APL, and having worked with many of
the traditional compiled languages, I'm curious about the way APL
handles integer arithmetic overflow.  I'll use an example to help
explain my question.

First I define two vectors:

A <- 100000 200000 300000
B <- 200000 400000 800000

When I check the data type of these vectors, I find (as I expect) that
they are stored as 32-bit integers.  (The interpreter I have includes a
facility for determining the storage scheme of variables.  This is what
I mean by checking the data type.)

If I now divide A by B, I find that the data type of the result is IEEE
floating point.  This I understand: the interpreter probably promotes A
and B to floating point prior to performing the division.  In any case,
the result *must* be floating point to store the (in general)
non-integer quotient.

I also find that if I add A and B, the resulting data type is still a
32-bit integer.  This I understand also: adding an integer to an integer
yields an integer, and the interpreter can take advantage of this fact
and most likely uses fast integer machine instructions rather than
floating point.  No matter how it actually performs the addition it can
store the result as an integer vector.

I now wonder what happens if the sum of two integer vectors exceeds the
largest value that can be stored in a 32-bit integer.  If I keep adding B
to A like this:

A <- A + B
A <- A + B
      .
      .
      .

I find that, eventually, A becomes stored as floating point; the
interpreter was smart enough to promote A's data type when the sum no
longer fit into a 32-bit integer.
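
For what it's worth, a quick check (ordinary C, nothing to do with the
interpreter) of where that point is for the A and B above: the third
element is the first to pass 2147483647, after 2684 additions.

    #include <stdint.h>
    #include <stdio.h>

    /* Quick check, not interpreter code: how many additions of B fit
     * before some element of A exceeds a signed 32-bit integer? */
    int main(void)
    {
        int64_t a[3] = {100000, 200000, 300000};
        const int64_t b[3] = {200000, 400000, 800000};
        for (int k = 1; ; k++) {
            int overflow = 0;
            for (int i = 0; i < 3; i++) {
                a[i] += b[i];
                if (a[i] > INT32_MAX)
                    overflow = 1;
            }
            if (overflow) {
                printf("promotion needed after %d additions\n", k);  /* 2684 */
                return 0;
            }
        }
    }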

So after this long-winded description, here's my question:  How is this
behavior typically implemented in APL interpreters?  Does the
interpreter perform all arithmetic in floating point and convert back to
integers if the magnitude of the result will fit in an integer?  Or will
it attempt to perform integer arithmetic and trap overflows, starting
over using floating point if any occur?  Or am I way off in my thinking
and a completely different method is used?

I tried the examples above with the APL.68000 interpreter, but I assume
(perhaps incorrectly) that this behavior is required of any APL
interpreter (although perhaps with a different word length).

Thanks very much.

Louis Giglio



Sat, 24 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question

Quote:
> ...
> So after this long-winded description, here's my question:  How is this
> behavior typically implemented in APL interpreters?  Does the
> interpreter perform all arithmetic in floating point and convert back to
> integers if the magnitude of the result will fit in an integer?  Or will
> it attempt to perform integer arithmetic and trap overflows, starting
> over using floating point if any occur?  Or am I way off in my thinking
> and a completely different method is used?
> I tried the examples above with the APL.68000 interpreter, but I assume
> (perhaps incorrectly) that this behavior is required of any APL
> interpreter (although perhaps with a different word length).

The automatic and silent promotion to floating point on integer
overflow is more a tradition than a requirement, being first
implemented in APL\360.  (APL\360 is the first widely used
implementation and is the base for all other APL dialects.)  
The behaviour is an attempt to provide the appearance of numbers
as abstract rather than machine entities.

J, a dialect of APL, uses both methods described in your question:
Where practical, promotion to floating point is implemented by
trapping the integer overflow, with the operation restarted from
scratch in floating point; where not, the operation is done in
floating point with attempted demotion back into integers.  
When promotion _does not_ occur (expected to be by far the majority
of cases in actual use), the execution time of the second approach
divided by that of the first approach, on the PC line of machines,
is between 2 and 6, depending on whether there is an FPU (tending
toward 6 when there is no FPU).
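
For concreteness, here is a minimal C sketch (invented names, not J's
actual source) of the second approach: do the arithmetic in floating
point, then try to demote the result back to integers.

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: compute x+y in floating point, then demote the result
     * to 32-bit integers if every element turns out to be an exact,
     * in-range whole number.  Returns 1 if iz holds the result, else 0
     * and the caller keeps the double result in dz. */
    static int add_float_then_demote(const int32_t *x, const int32_t *y,
                                     double *dz, int32_t *iz, size_t n)
    {
        int demotable = 1;
        for (size_t i = 0; i < n; i++) {
            dz[i] = (double)x[i] + (double)y[i];
            if (dz[i] < INT32_MIN || dz[i] > INT32_MAX || dz[i] != floor(dz[i]))
                demotable = 0;
        }
        if (!demotable)
            return 0;
        for (size_t i = 0; i < n; i++)
            iz[i] = (int32_t)dz[i];
        return 1;
    }

The extra floating-point work plus the demotion scan is roughly what the
factor of 2 to 6 quoted above is paying for.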



Sun, 25 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question

Quote:

>[snip]

>So after this long-winded description, here's my question:  How is this
>behavior typically implemented in APL interpreters?  Does the
>interpreter perform all arithmetic in floating point and convert back to
>integers if the magnitude of the result will fit in an integer?  Or will
>it attempt to perform integer arithmetic and trap overflows, starting
>over using floating point if any occur?  Or am I way off in my thinking
>and a completely different method is used?

[snip]

I can speak only with regard to APL*PLUS/PC, but I think most interpreters
work the same way: the interpreter first attempts the operation as
integer. If an overflow occurs anywhere in the array, it starts the
whole array over again, converting integers to float and giving a floating
result. The worst-case scenario is where it doesn't discover until the
last element that it's going to have to 'promote' the result, but in
general, the time spent doing the wasted integer arithmetic is negligible
compared to the floating point arithmetic.
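
A rough C sketch of that integer-first strategy (illustrative only, with
invented names; a real interpreter of that era would more likely detect
overflow from the processor's flags than by widening):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: try the addition in 32-bit integers; on the first
     * overflow, throw the partial result away and redo the whole array
     * in floating point.  Returns 1 if iz is valid, 0 if dz is. */
    static int add_int_else_float(const int32_t *x, const int32_t *y,
                                  int32_t *iz, double *dz, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            int64_t s = (int64_t)x[i] + (int64_t)y[i];   /* widened to spot overflow */
            if (s < INT32_MIN || s > INT32_MAX) {
                for (size_t j = 0; j < n; j++)           /* start over, promoted */
                    dz[j] = (double)x[j] + (double)y[j];
                return 0;
            }
            iz[i] = (int32_t)s;
        }
        return 1;                                        /* everything fit */
    }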



Sun, 25 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question
Louis,

Alan Kay said that APL was the first object-oriented language.
As such, APL has an abstract class (that is, one with no instances) called
Number, which could be implemented as the single concrete class
DoubleFloat (or, some might argue, Complex).
Yuk, bits represented as IEEE double precision floats!
I implemented a numeric vector only APL calculator in 1979 in TRS-80
BASIC. (Don't be so quick to laugh; it was faster and much cooler
than my HP21 calculator.)
Most APL systems implement numbers as subclasses of Number:
Bit (1-bit), Integer (32-bit), and DoubleFloat (64-bit IEEE).
Integer arithmetic is done with integers (more to avoid floating-point
errors than for speed), trapping overflow and converting to DoubleFloat
dynamically.
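
A small C sketch of that layout, just to make the idea concrete (the
names are mine, not from any actual interpreter):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: one array object whose concrete representation can
     * change underneath it, like the Bit/Integer/DoubleFloat subclasses. */
    typedef enum { NUM_BIT, NUM_INT32, NUM_FLOAT64 } numtype;

    typedef struct {
        numtype type;    /* which representation the data currently uses */
        size_t  count;   /* number of elements                           */
        void   *data;    /* packed bits, int32_t[], or double[]          */
    } numarray;

    /* On integer overflow the interpreter re-represents the same abstract
     * values as 64-bit floats; the array's identity does not change. */
    static void promote_to_float(numarray *a, double *newbuf)
    {
        const int32_t *old = (const int32_t *)a->data;
        for (size_t i = 0; i < a->count; i++)
            newbuf[i] = (double)old[i];
        a->data = newbuf;
        a->type = NUM_FLOAT64;
    }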
Alan



Sun, 25 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question

: The automatic and silent promotion to floating point on integer
: overflow is more a tradition than a requirement, being first
: implemented in APL\360.  (APL\360 is the first widely used
: implementation and is the base for all other APL dialects.)  
: The behaviour is an attempt to provide the appearance of numbers
: as abstract rather than machine entities.

Well, I'd argue that it is a requirement.  The key to the fundamental
design philosophy of APL is that it's a language for human-machine
communication based on the human rather than the machine viewpoint,
rather a radical departure from the languages of the time.  One of the
most important steps towards this goal was the notion of only two
datatypes, numbers and characters; it meant the user didn't have to
understand the "machine view" of numbers as subdivided into various
"types" of numbers, i.e. bit/integer/FP, single/double precision, etc.  
In other words, that "appearance of numbers as abstract... entities"
isn't an incidental nicety; it's a basic part of the language.  An APL
that included integer overflow errors would be fundamentally flawed.

Of course, integer arithmetic is not a requirement; the requirement is
that the user see "arithmetic" and not need to know about "integer
arithmetic" as distinct from any other kind.  So a valid APL could use
only IEEE FP to represent all numbers, and do FP arithmetic exclusively.  
But an interpreter that DOES do integer arithmetic IS required to trap
and handle integer overflow.

--

"Sacred cows make the tastiest hamburger." -- Abbie Hoffman



Fri, 30 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question

Eric Landau writes on Monday, August 14:

Quote:

> : The automatic and silent promotion to floating point on integer
> : overflow is more a tradition than a requirement, being first
> : implemented in APL\360.  (APL\360 is the first widely used
> : implementation and is the base for all other APL dialects.)
> : The behaviour is an attempt to provide the appearance of numbers
> : as abstract rather than machine entities.
> Well, I'd argue that it is a requirement.  The key to the fundamental
> design philosophy of APL is that it's a language for human-machine
> communication based on the human rather than the machine viewpoint,
> rather a radical departure from the languages of the time.  One of the
> most important steps towards this goal was the notion of only two
> datatypes, numbers and characters; it meant the user didn't have to
> understand the "machine view" of numbers as subdivided into various
> "types" of numbers, i.e. bit/integer/FP, single/double precision, etc.
> In other words, that "appearance of numbers as abstract... entities"
> isn't an incidental nicety; it's a basic part of the language.  An APL
> that included integer overflow errors would be fundamentally flawed.
> Of course, integer arithmetic is not a requirement; the requirement is
> that the user see "arithmetic" and not need to know about "integer
> arithmetic" as distinct from any other kind.  So a valid APL could use
> only IEEE FP to represent all numbers, and do FP arithmetic exclusively.
> But an interpreter that DOES do integer arithmetic IS required to trap
> and handle integer overflow.

I disagree.  An APL dialect that signals integer overflow would
be different, not fundamentally flawed, or even necessarily flawed.  
(At this point, I should make clear that J does not signal integer
overflow, instead promoting automatically into internal floating
point numbers, as do several other APL dialects in common use.)
For example, a dialect with particular emphasis on database access
might be designed to signal integer overflow for exceptional handling,
rather than automatically promoting to floating point and having the
value eventually propagated into the database.  The value may be the
same abstract entity, but the database would be doubled in size ...

Moreover, the behaviour is just an attempt to provide an appearance,
and is really too imperfect for any insistence on requirements.  
For example, the inclusion in the language of negative numbers and
primitives such as exponentiation argues strongly for the "abstract
entities" to be complex numbers.  Is a dialect that does not have
complex numbers not a "valid APL"?



Sat, 31 Jan 1998 03:00:00 GMT  
 APL Integer Arithmetic Question
This discussion is interesting for me since I have recently been bitten by
such silent "promotions" within the interpreter. I was using partition in
APL2, which requires the mask on the left side to be all integer. Just last
week I started using masks with very large numbers (i.e. > -1+2*31) and, much
to my surprise, the partition failed. I later realized that this is an adverse
side-effect of these type conversions; the only solution is to rebuild the mask
so as to bring it under the cut-off.

I wonder whether it might be possible to turn around the difficulty more
elegantly, i.e., within the interpreter. For operations like addition, which
do not care about whether the numbers are integers or FP, type conversions
are adequate, but how would you handle cases where the function requires that
some of its arguments be integer?
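
One way an interpreter could handle that (a sketch of the idea only, not
APL2's actual rule) is to accept a floating-point argument wherever an
integer is required, provided it is a whole number within comparison
tolerance and small enough to represent; a mask element beyond 2*31-1
would still fail the range test, which is essentially what you ran into.

    #include <math.h>
    #include <stdint.h>

    /* Sketch only: can this double stand in for a 32-bit integer argument?
     * Accept it if it is a whole number within tolerance AND in range. */
    static int as_int32(double v, double tol, int32_t *out)
    {
        double r = floor(v + 0.5);              /* nearest whole number */
        if (fabs(v - r) > tol * fabs(v))        /* not integral within tolerance */
            return 0;
        if (r < INT32_MIN || r > INT32_MAX)     /* e.g. a mask value > -1+2*31 */
            return 0;
        *out = (int32_t)r;
        return 1;
    }

The tolerance part lets harmlessly float-typed masks through; the range
part is the one that cannot be worked around without a wider integer type.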

Regards,

Olivier Lefevre, NYU Medical School, NYC



Sat, 31 Jan 1998 03:00:00 GMT  
 