** LAZY LANGUAGES FASTER?? ** 

Are lazy languages faster than eager languages?

My intuition says that a lazy language would be a lot faster than an
eager language, simply because the eager language evaluates all of its
arguments regardless of whether they are all needed by the function.

The lazy language, by evaluating only what is necessary, appears to
save time.

****************************************************************************
   David E. Nettles

   1000 Woodbury Road                  Phone: (516) 682-8464
   Woodbury, NY 11797                    FAX: (516) 682-8022
   MS D12/237
****************************************************************************



Tue, 10 Jan 1995 21:48:57 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:

>Are lazy languages faster than eager languages?
>My intuition says that a lazy language would be a lot faster than an
>eager language simply because the eager language evaluates all of its
>arguments regardless of whether they are all needed by the function.
>The lazy language, by evaluating only what is necessary, appears to
>save time.

This reminded me of something else, on a somewhat related topic, which
I thought I'd bring up.  Has anybody thought about the
question of _why_ CBV is more efficient?  I mean, for a theoretician,
there seems to be no really good answer.  But I'd like to be able to
argue for efficiency on purely theoretical grounds, and since I'm
interested in programming-languages research, that means predicting
the efficiency of programming languages.  Of course this depends on
all sorts of things - the quality of the implementation is paramount.
But there ought to be something which theory can say other than the
standard "asymptotic complexity" arguments - after all,

      almost all compiler optimization comes down to shaving
      constant factors, eh?

So, anyway, on to CBV versus CBN:

(1) Looking at abstract machines, it seems like CBN should be the
    naturally more efficient evaluation mechanism.

(2) I had this conjecture that maybe it was the "simplicity of the
    standard reduction function" - but here, too, CBN wins - the CBV
    standard reduction sequence definitions are rather more
    complicated than those for CBN.

In the end, it came down for me to something disgustingly applied -
the frequency of expressions which are weak-head-normalizable to
closure-free objects (i.e. which compute to data-values) is very high
in normal programs - so CBV wins because it "represents" them
efficiently - as the values they denote.

In other words, the cost of manipulating "thunks" overwhelms the
savings from not evaluating unneeded arguments.  This seems like an
awfully specious argument, from a theoretical standpoint, and I sure
would like to hear something more firm.

--chet--



Thu, 12 Jan 1995 03:02:52 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:

>My intuition says that a lazy language would be a lot faster than an
>eager language simply because the eager language evaluates all of its
>arguments regardless of whether they are all needed by the function.
>The lazy language, by evaluating only what is necessary, appears to
>save time.

Lazy evaluation does indeed save time by not evaluating arguments
that are not needed. However, eager evaluation saves time by never
evaluating the same argument twice.

In eager languages, all arguments are evaluated ONCE, regardless of
whether they are actually needed.

In lazy languages, an argument is evaluated once for every time that it
is used. If the argument is used N times, it will be evaluated N times.
So if the argument is not used, it will not be evaluated,
   if the argument is used once, it will be evaluated once,
   if the argument is used twice, it will be evaluated *TWICE*, etc.

For example, consider the function (\x.\y.xx) (where \ is lambda),
applied to arguments A and B which evaluate to a and b, resp.

(i)  lazy evaluation
     (\x.\y.xx)AB reduces to AA, which reduces to aa

(ii) eager evaluation
     (\x.\y.xx)AB reduces to (\x.\y.xx)ab, which reduces to aa

(i) saves time by not evaluating B. On the other hand, (ii) saves time
by evaluating A only once, whereas in (i) A has to be evaluated twice.
Which of the two is faster depends on the cost of evaluating A and B.
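The trade-off can be sketched in Python (illustrative only, not from any real implementation), with unevaluated arguments modelled as zero-argument functions:

```python
# Sketch: call-by-name vs. call-by-value, with argument expressions
# represented as zero-argument Python functions. All names are
# illustrative.

evals = []  # records which argument expressions actually run

def make_arg(name, value):
    def arg():
        evals.append(name)
        return value
    return arg

def cbn(x, y):
    # call-by-name: the body uses the expression x twice and y never,
    # so x is evaluated twice and y not at all
    return x() + x()

def cbv(x, y):
    # call-by-value: both arguments are evaluated exactly once, up front
    xv, yv = x(), y()
    return xv + xv

A, B = make_arg("A", 1), make_arg("B", 10)

evals.clear(); cbn(A, B)
assert evals == ["A", "A"]   # A twice, B never

evals.clear(); cbv(A, B)
assert evals == ["A", "B"]   # each exactly once
```

Which strategy wins in this sketch depends entirely on the relative cost of running A and B, which is exactly the point above.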

Erik




Fri, 13 Jan 1995 18:25:55 GMT  
 ** LAZY LANGUAGES FASTER?? **

|> In lazy languages, an argument is evaluated once for every time that it
|> is used.

No, in a lazy implementation (and laziness is an implementation-related
concept -- the corresponding semantic concept is nonstrictness, as Nikhil
pointed out earlier) an argument to a function is evaluated *at most once*.
A lazy implementation is required to save the result of evaluation for
later use. What *has* to be done for each use of the argument (in the
absence of strictness analysis) is a check of whether the argument is
already evaluated, i.e. whether we should evaluate it or use the saved
result of an earlier evaluation.
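That "evaluate at most once, check on every use" behaviour can be sketched with a memoizing thunk (an illustrative Python sketch, not any particular implementation):

```python
# Sketch of call-by-need: a thunk that evaluates its suspended
# expression at most once; every later use just checks a flag and
# returns the saved result. Names are illustrative only.

class Thunk:
    def __init__(self, expr):
        self.expr = expr        # zero-argument function: the suspended computation
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:  # the per-use check described above
            self.value = self.expr()
            self.expr = None    # drop the closure so it can be collected
            self.evaluated = True
        return self.value

count = 0
def expensive():
    global count
    count += 1
    return 42

t = Thunk(expensive)
assert t.force() == 42
assert t.force() == 42
assert count == 1   # evaluated once, despite two uses
```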

Karl-Filip
---------------------------------------------------------------------
Make the frequent case fast, and the fast case frequent!



Fri, 13 Jan 1995 21:07:09 GMT  
 ** LAZY LANGUAGES FASTER?? **

|> Anybody thought about the question of _why_ CBV is more efficient?

[Stuff deleted...]

|> In the end, it came down for me to something disgustingly applied -
|> the frequency of expressions which are weak-head-normalizable to
|> closure-free objects (i.e. which compute to data-values) is very high
|> in normal programs - so CBV wins because it "represents" them
|> efficiently - as the values they denote.

Well, I would like to put it in a slightly different way:

(1) In CBN, the value of a variable may or may not already
    be evaluated. A strict primitive, like integer add, needs its
    arguments in evaluated form. Hence in an expression like `x+y',
    both arguments need to be tested, and perhaps evaluated, before
    the addition is performed. This incurs two costs relative to CBV:

    (a) The run time test is not needed in CBV, since variables are
        only allowed to denote fully evaluated objects.

    (b) A variable must be able to represent not only all objects of
        its type, but also all possible ways of evaluating such objects
        (all possible closures). Thus if a variable is implemented as a
        machine word, either there will be fewer objects of its type
        than bit patterns in a machine word (which is a pain in the
        behind when it comes to ints or floats), or some (or all) values
        in the type have to be stored in the heap and represented as
        pointers (all values must be boxed) which wastes storage and
        instructions (to move data between the heap and the CPU).

(2) Optimizations such as register allocation and static instruction
    scheduling depend on the compiler being able to figure out what the
    program will do when run. Every conditional branch is one more place
    where the compiler has to assume the worst. Thus these carry indirect
    costs as well as the direct cost of executing them.

(3) It may be expensive to allocate, initialize and in general manipulate
    the closures. Especially since these are generally in the heap, they
    may well destroy locality, resulting in a thrashing data cache.

Unfortunately, effects such as (1b), (2) and the locality loss in (3) are
not accounted for by standard complexity theory. They just slip between
your fingers if you're only counting the number of operations a program
will execute, since on a modern computer the cost of an operation in
general depends on its context: what operations precede and follow it in
the dynamic instruction stream, when, where and by whom its operands were
computed, and when, where and by whom its result will be used.

|> I mean, for a theoretician, there seems to be no really good answer.

I think that is a sure sign that there is a lot of new and interesting
theory to be done in this field!

Karl-Filip
----------------------------------------------------------------
Make the frequent case fast, and the fast case frequent!



Sat, 14 Jan 1995 00:57:28 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:


>Lazy evaluation does indeed save time by not evaluating arguments
>that are not needed. However, eager evaluation saves time by never
>evaluating the same argument twice.

>In eager languages, all arguments are evaluated ONCE, regardless of
>whether they are actually needed.

>In lazy languages, an argument is evaluated once for every time that it
>is used. If the argument is used N times, it will be evaluated N times.
>So if the argument is not used, it will not be evaluated,
>   if the argument is used once, it will be evaluated once,
>   if the argument is used twice, it will be evaluated *TWICE*, etc.

Excuse me, I see your point; however, I don't really know of any serious
lazy evaluator that doesn't take advantage of formal arguments that
are shared, updating them where necessary when the argument is finally evaluated.

There was a little problem with your example, however.

Quote:
>  (\x.\y.xx)AB which reduces to AA which reduces to aa

   Assuming that A ==> a.
However, with truly lazy evaluation you have
    (\x.\y.x x) A B ==> A A ==> a A
To go any further, "a" would have to engulf "A" and force its evaluation.
Just making a lazy point.

Anyway, very naive lambda calculus evaluators may operate the way you assume,
but most are implemented with an environment, in which uses of a variable
point to its environment entry.  Once evaluated, the environment entry(ies) are
updated to reflect the evaluation throughout.  Therefore, arguments that are
shared are evaluated only once, not twice, three times, etc.

IMHO, however, lazy evaluation is not a god that will do us *great* benefit.
Instead, I believe it is a useful tool in applications where it works
best.  For example, when expressing a problem with infinite data structures,
or where avoidance of _|_ (bottom) is concise.  (That post showing an implementation
of lazy lists in SML was a good argument in this favor.)
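As a rough sketch of the infinite-data-structure point (Python, not SML, with generators standing in for lazy lists):

```python
# Illustrative only: a generator plays the role of a lazy list, so the
# conceptually infinite structure costs nothing until demanded.
from itertools import count, islice

naturals = count(0)  # conceptually the infinite list 0, 1, 2, ...

# Only the five demanded elements are ever computed.
first_five = list(islice(naturals, 5))
assert first_five == [0, 1, 2, 3, 4]
```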

In the same light, Prolog is a good language where left-to-right resolution
and unification fit the problem best. Adding other things (which everybody
complains about) slows it down, or doesn't work well.
SML (aside from the appalling syntax) is a good
language where the need for eager evaluation fits the problem well.
I.e., if you need a screwdriver, don't reach for the hammer or the vice-grips.

Of course, one of our biggest problems is that we *think* we need standards
   (and of course, we have and are developing *many* of them ;-).
We would all feel more comfortable if we could do everything in one language.
It's sort of like pulling off the highway into the McDonald's in North Carolina,
knowing that you can still get the same burger you got in South Dakota.
It's comforting.
However, I think PL/1 is a good example of where this philosophy
has led us in the past.

Personally, I think there will and should always be new languages solving
specific problems; good tools.  This argument over whether lazy evaluation
is better than eager evaluation is horse hockey.  People are always picking some
eager-evaluation problem that has appalling lazy behavior, like matrix
multiplication -- obviously a problem better suited to eager evaluation and
updatable arrays -- i.e. handing the nail to the guy with the vice-grips.

And the lazy advocates are always picking that damn ((\x.\y.y) _|_ A) example --
handing the {*filter*} to the guy with the wrench.
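For what it's worth, the point of that example can be sketched in Python (thunks as zero-argument functions; illustrative only):

```python
# Sketch of ((\x.\y.y) _|_ A): under a non-strict strategy the
# nonterminating argument is passed unevaluated and never forced.

def bottom():
    # stand-in for _|_ (nontermination)
    while True:
        pass

def k2(x, y):
    # second projection, call-by-name style: x is ignored entirely,
    # so its thunk is never called
    return y()

# Terminates, because bottom() never runs.
assert k2(bottom, lambda: "A") == "A"
```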

One of these days, we will learn to integrate all these tools, methods,
evaluation strategies so that we use the proper ones for the problem(s)
at hand. Instead of always trying to invent more "general-purpose" wheels.

Later,
-Polar



Fri, 13 Jan 1995 20:39:47 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:


>    Lazy evaluation does indeed save time by not evaluating arguments
>    that are not needed. However, eager evaluation saves time by never
>    evaluating the same argument twice.
>    In eager languages, all arguments are evaluated ONCE,
>    regardless of whether they are actually needed.

This assumes implementation via string reduction.
With string reduction, lazy programs execute very slowly.
(Did somebody mention the speed of continental drift?) :-)

With graph reduction, lazy evaluation means evaluation _at most_ once.
The challenge of laziness is to find a way to implement graph
reduction without imposing a large constant-factor overhead.
------------------------------------------

Tulane University       New Orleans, Louisiana  USA



Sat, 14 Jan 1995 05:59:24 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:

>>My intuition says that a lazy language would be a lot faster than an
>>eager language simply because the eager language evaluates all of its
>>arguments regardless of whether they are all needed by the function.
>Lazy evaluation does indeed save time by not evaluating arguments
>that are not needed. However, eager evaluation saves time by never
>evaluating the same argument twice.

  *Non-strict* evaluation saves time by not evaluating arguments which are not
needed. *Lazy* evaluation saves time by evaluating each argument at most once.
Eager evaluation evaluates each argument exactly once.

  It is the mechanism by which lazy evaluation is implemented that slows it
down. The speedup from not evaluating arguments is often negligible, because
people mostly don't write functions that ignore their arguments.

John



Sat, 14 Jan 1995 09:21:37 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:

>   *Non-strict* evaluation saves time by not evaluating arguments which are not
> needed.

Pardon my nit-picking, but this is not true.  I refer you to my earlier posting
(last week?) in which I pointed out that evaluators CAN work on unnecessary
arguments while preserving non-strictness.




Sat, 14 Jan 1995 22:08:01 GMT  
 ** LAZY LANGUAGES FASTER?? **

     *Non-strict* evaluation saves time by not evaluating arguments which are not
   needed. *Lazy* evaluation saves time by evaluating each argument at most once.
   Eager evaluation evaluates each argument exactly once.

     It is the mechanism by which lazy evaluation is implemented that slows it
   down. The speedup from not evaluating arguments is often negligible, because
   people mostly don't write functions that ignore their arguments.

I think there are two views towards strict evaluation in a purely
functional framework:

 (1) it constrains the program, letting the compiler generate
     better code (a constant factor speedup)

 (2) it affects the semantics of functions in the presence of
     errors and non-termination

Mostly what I care about is (1). I care about (2) only when I am
debugging. Therefore, for debugging, I want strict evaluation.  For
using my code, I want the most efficient evaluation possible.
Usually, I suspect that will be strict evaluation, but on parallel
machines, it could be something else.

Altogether, it therefore seems prudent to write programs so that they
depend as little as possible on whether the language is strict, eager,
non-strict, or lazy. But if it came down to a choice, I'd suspect that
eager evaluation is a practical necessity for many applications, and
languages should provide it at least in some form.

                                        Thomas.



Sun, 15 Jan 1995 04:01:01 GMT  
 ** LAZY LANGUAGES FASTER?? **

Quote:

>>   *Non-strict* evaluation saves time by not evaluating arguments which are not
>> needed.
>Pardon my nit-picking, but this is not true.  I refer you to my earlier posting
>(last week?) in which I pointed out that evaluators CAN work on unnecessary
>arguments while preserving non-strictness.

  My statement is true if you are implementing non-strict evaluation on a
sequential machine, in which case speculative evaluation can cause
non-termination unless you can prove that the argument being speculatively
evaluated was needed, and at the very least speculatively evaluating an
unneeded argument will waste time. Your nit-picking is pardoned :-).

John



Sun, 15 Jan 1995 09:38:10 GMT  
 