Lazy languages considered questionable 

Hello,

I have read many a paper on functional languages that begins with the
premise that lazy evaluation is good (that is, better than strict
evaluation).   For some time I accepted this premise basically on
'faith', but more and more I am questioning this assumption.  This
note is meant to start a technical dialog on this question, which
I hope will help resolve the issue in my mind.    (But you can see
my present opinion in the title of this message.)

I think everyone agrees that lazy evaluation is harder to implement
efficiently on conventional computers, but that is not the kind of
problem I wish to discuss (one could imagine a good compiler removing
this difficulty).   On the other hand, since lazy evaluation is more
difficult to implement, we need good reasons for going to the extra
trouble.   Thus my first question is

        1) What are the advantages of lazy evaluation that make
           it worth the extra trouble to implement?

To the best of my knowledge the only advantage of lazy evaluation
is the fact that it allows the manipulation of potentially infinite
data structures.   Now this advantage is only worth it if it actually
gets used.    I have only seen two basic uses of this feature.

        1) Generators:   With infinite lists you can generate
                a sequence without having to specify bounds.
                The classic example of this is a sieve function that
                generates the infinite stream of primes.

        2) I/O:   This seems to be the use driving the need for
                lazy evaluation.
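For concreteness, here is the classic sieve written as a sketch in Python, whose generators provide exactly this kind of demand-driven, potentially infinite stream even in a strict language (an assumption of this illustration, not something from the original post):

```python
from itertools import count, islice

def sieve(candidates):
    """Classic (if inefficient) sieve over a lazy stream of candidates."""
    p = next(candidates)
    yield p
    # filter the rest lazily; nothing downstream is computed until demanded
    yield from sieve(x for x in candidates if x % p != 0)

primes = sieve(count(2))            # the (conceptually) infinite stream of primes
first_ten = list(islice(primes, 10))
```

Only as many primes are ever computed as the consumer demands, with no bound written anywhere in the producer.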

I consider the first use insufficient to warrant the need for lazy
languages (since it seems that generators can be programmed in reasonably
elegant ways without lazy evaluation).   My experience with lazy
lists for I/O has left me unimpressed.   Most I/O involves a handshaking
protocol which depends upon causality relationships (that is, the request
causes the system to send a reply).  It seems that this relationship
is at BEST implicit in lazy list I/O, and thus the code is very convoluted
and hard to read.    In addition, the semantics of the lazy code no longer
has its simple functional reading but must be understood in the context
of the exact evaluation order.  This is just the kind of detail functional
languages were designed to remove!!

The above two examples are the only ones I could come up with.  If
anyone has others, or wishes to dispute my argument that generators
are not a sufficient reason for laziness or that lazy I/O creates
more problems than it solves please jump right in.

---------------------------------------------------------------------
Thus at present I can find no compelling reason to go to the extra
effort to implement lazy languages.   In addition I have a reason
for NOT using them.   When I use a lazy language AND really use
the fact that it is lazy, I must constantly be worried about the
order of evaluation.   On the other hand, for 99% of all programs
(that is, programs that have the same interpretation in a strict
and a lazy interpreter) I don't have to worry about the order of
evaluation.   Thus it seems to me that since we can get what we
want done in a more abstract environment, where the order of evaluation
basically doesn't matter, that is what we should do.

What's more, that is in fact what is done right now.  Even lazy
language programmers mostly write code that would work under
strict evaluation.  When such a programmer does write something that
requires laziness, he has to check very carefully that all the functions
he calls with his infinite data structure will actually terminate
with that input.  To do this he has to examine the actual code
(breaking modularity).   The only reason this is OK at present
is that laziness simply isn't used that often.  If it ever
were, the number of 'termination bugs' would likely skyrocket.

The spirit of functional languages has been 'simplify, simplify'.  Thus
imperative languages were considered too complex because the sharing
conditions caused by pointers were hard to describe and reason about.
Thus pointers were eliminated from the language (even though this caused
a NOTICEABLE loss of expressive power).  I assert that lazy evaluation
causes a similar problem.  Namely, it requires the programmer to worry
about evaluation details that MOST of the time don't matter.  Thus it
seems that laziness should be abandoned unless it has clear advantages
(which, from what I have been able to surmise, it does not).

-------------------------------------------------------------------
As stated before, the above is intended to start a FRIENDLY technical
discussion about the merits of lazy evaluation.   I am quite willing
to change my opinion if someone has additional information that I did
not consider or has better insight into the problem than I.

I look forward to the discussion.

Vance



Sun, 01 Jan 1995 22:28:20 GMT  
 Lazy languages considered questionable

[...]

Quote:
>Thus at present I can find no compelling reason to go to the extra
>effort to implement lazy languages.

[...]

As it happens, I gave a talk on lazy evaluation vs. eager evaluation
and strictness analysis just today (as part of my Honours degree).

It's late, so I'll just summarize:

Lazy evaluation:

- may avoid evaluating arguments (efficient)

- allows (conceptually) infinite data structures (very nice programming
  technique)

- non-strict

        - this allows functions such as   if-then-else
                                          short-circuit boolean operators
                                          case statements
          which are inherently non-strict to fit in with the rest of the
          language

        - hence allows certain program transformations which are invalidated by
          eager evaluation

BUT

- extra overhead for parameter passing (inefficient)

Generally lazy evaluation is a much more natural semantics for an
applicative programming language. (Someone else will be able to explain
this much better than I have).

So the ideal approach is for the language to have lazy evaluation semantics, but
for the compiler to perform strictness analysis and optimize to eager
evaluation where the two are equivalent, since call-by-value will in
general be more efficient. Most parts of most programs do not require
lazy evaluation, but when you do need it, it's much easier if it is
a natural part of the language.

Quote:
>In addition I have a reason
>for NOT using them.   When I use a lazy language AND really use
>the fact that it is lazy, I must constantly be worried about the
>order of evaluation.  

I don't have this problem at all. With a lazy language I know that a
function will only be evaluated if its result is needed, so why should
I worry about the order of evaluation?

Maybe you could explain this point further.

--

This .signature VIRUS is a self-referential statement that is true - but
you will only be able to consistently believe it if you copy it to your own
.signature file!



Mon, 02 Jan 1995 00:11:25 GMT  
 Lazy languages considered questionable

Quote:
>>In addition I have a reason
>>for NOT using them.   When I use a lazy language AND really use
>>the fact that it is lazy, I must constantly be worried about the
>>order of evaluation.  
>I don't have this problem at all. With a lazy language I know that a
>function will only be evaluated if its result is needed, so why should
>I worry about the order of evaluation?

Think of it this way.  You are a programmer and want to use a large
code library that was written by someone else.  In it is a function
called FOO: List[Nat] -> List[Nat].  Now you extensively use infinite
data structures, and at least in theory FOO can be written so that it
operates on infinite data structures.  The problem is that you
don't know for certain that the writer has programmed it this way.

Now you say, wait a second, the 'user's manual' that came with
the library will state that FOO works on lists of infinite
length.   Problem solved.  But what about more complicated data structures
that can be infinite in many different ways?   Describing in the
user's manual which infinite structures FOO converges on is quite
a bit more difficult.

It boils down to this: in a strict language all datatypes are finite,
and thus only very weak assumptions are needed to ensure convergence
of a function ('if' and 'case' need to be lazy, and that's about it).
In lazy languages infinite structures are allowed.  Unfortunately, many
functions that work fine for any finite structure fail for infinite
structures.  Thus the programmer needs to keep track of this whenever
he uses an infinite structure.

Now you say, 'That's not a big deal.'  I disagree.  It is only a simple
matter now because, frankly, we don't use laziness much.  Pointers are
simple when you only use them once or twice in a program.  If laziness
is REALLY worth it, it will be used extensively, and thus the above
problem could get as bad as pointers are in imperative languages.

Vance



Mon, 02 Jan 1995 02:33:48 GMT  
 Lazy languages considered questionable

Quote:
>It boils down to this: in a strict language all datatypes are finite
>and thus only very weak assumptions are needed to insure convergence
>of a function ('if' and 'case' need to be lazy, and thats about it).
>In lazy languages infinite structures are allowed.  Unfortuately many
>functions that work fine for any finite structure fail for infinite
>structures.  Thus the programmer needs to keep track of this whenever
>he uses a infinite structure.  

I second this.  See our paper "Exact Real Arithmetic: A Case Study in Higher
Order Programming" (L&FP 86) for another example of this.  If you represent real
numbers as infinite sequences, it's amazingly easy (once you get a correct
representation) to get a partially correct implementation of multiplication
and division.  It's significantly more subtle to get one that converges
for all inputs.  Getting an implementation that performs competitively
with other (e.g. variable precision interval based) representations may
be possible, but is currently an unsolved problem.  And one that requires
very careful reasoning about how evaluation proceeds, something I find
very unintuitive.  (Different natural algorithms may evaluate different
parts of the data structure.  The natural algorithms all tend to evaluate
way too much, leading to anything from divergence to poor asymptotic
complexity. For some of the details, see the paper.)

Exact real arithmetic may not be the world's most important application.
But I would consider it the canonical mathematical example of higher-order
data.  The fact that lazy evaluation doesn't help here seems disconcerting.

Hans-J. Boehm



Mon, 02 Jan 1995 08:09:20 GMT  
 Lazy languages considered questionable

Quote:
>To the best of my knowledge the only advantage of lazy evaluation
>is the fact that it allows the manipulation of potentially infinite
>data structures.   Now this advantage is only worth it if it actually
>gets used.    I have only seen two basic uses of this feature.
>    1) Generators:   With infinite lists you can generate
>            a sequence without having to specify bounds.
>            The classic example of this is a sieve function that
>            generates the infinite stream of primes.
>    2) I/O:   This seems to the be the use driving the need for
>            lazy evaluation.  

  In my (somewhat limited) experience, the most important application of
laziness is in situations where a problem can be most simply described by
a number of independent processes communicating via lazy streams,
particularly where feedback is involved. The primes sieve is one example.
Another similar, but more convincing, example to my mind is the Hamming
numbers example (quoted from "A theory of nondeterminism, parallelism,
and concurrency" by M. Broy, "Theoretical Computer Science" vol 45, pp 1-61,
example on page 9).
  A program is required which generates the infinite stream of all numbers
 greater than 1 of the form 2^i*3^j*5^k in ascending order.
  funct streammult = lambda n,s: (n * first s) & streammult(n, rest(s)),
  funct merge = lambda s1,s2: if first s1 <= first s2
                        then first s1 & merge(rest s1, s2)
                        else first s2 & merge( s1, rest s2) fi,
  stream s1 = streammult( 5, 1 & s1 ),
  stream s2 = merge(streammult( 3, 1 & s2 ), s1 ),
  stream s3 = merge( streammult( 2, 1 & s3 ), s2 )
 ---
 To me, this is nicer than other solutions (eg Dijkstra, "A discipline of
programming", p. 129) I have seen or could construct myself. That the stream is
infinite is not important - it would be an equally useful technique for a
finite stream. Broy's paper also has other examples of lazy streams which
convinced me of their usefulness. (It also uses some concurrency features
which are not available in most functional languages.)
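For comparison, here is a strict-language sketch of the same Hamming-numbers program in Python, closer in spirit to an imperative solution: the explicit index bookkeeping below is precisely the plumbing the lazy feedback streams let Broy omit (the coding style here is an assumption of this sketch, not taken from Broy's or Dijkstra's text):

```python
def hamming():
    """Yield 1, 2, 3, 4, 5, 6, 8, ... (numbers of the form 2^i * 3^j * 5^k)."""
    h = [1]
    i2 = i3 = i5 = 0                       # read positions into h, one per multiplier
    while True:
        yield h[-1]
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # advance every index whose candidate was just emitted (avoids duplicates)
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1

gen = hamming()
first_ten = [next(gen) for _ in range(10)]
```

The feedback loop of the stream version reappears here as the buffer `h` that the generator both writes and reads.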

Quote:
>I consider the first use insufficient to warrant the need for lazy
>languages (since it seems that generators can be programmed in reasonably
>elegant ways without lazy evaluation).   My experience with lazy
>lists for I/O have left me unimpressed.   Most I/O involves a handshaking
>protocol which depends upon causality relationships (that is the request
>causes the system to send a reply).  It seems that this relationship
>is at BEST implicit in lazy list I/O and thus code is very convoluted
>and hard to read.    In addition the semantics of the lazy code no longer
>has its simple functional reading but must be understood in the context
>of the exact evaluation order.  This is just the kind of detail function
>language we designed to remove!!

    I'm not convinced of this. I think that we can cleanly separate components
which communicate by lazy streams, although it is necessary to ensure that the
system does not deadlock by requiring something that is not yet available.
In my experience, this reflects essential logical (eg causality) conditions
in the models.
    I still have serious reservations about functional languages, my main
reservation being in the area of I/O (particularly interactive I/O), and
while streams are perhaps the best solution to this problem, it still
seems very weak and tedious compared with conventional languages.

Quote:
>Vance

     dan.


Mon, 02 Jan 1995 08:43:19 GMT  
 Lazy languages considered questionable

Quote:
>    1) What are the advantages lazy evaluation that make
>       it worth the extra trouble to implement?

  In a friendly manner, I dispute the claim that lazy evaluation is harder to
implement. TIM [Fairbairn & Wray] and the Krivine machine (Curien) both
implement lazy languages (combinators and lambda calculus respectively) in an
almost trivial manner. Similarly, purely strict languages may be implemented
very easily. It's only when you combine them that you get problems. Since
functions like + which are fairly popular in the brainwashed Von Neumann /
Backus procedural programming oligarchy cabal are strict, we must implement at
least some strictness. There is no compelling reason other than efficiency of
implementation on available microprocessors that these functions must be
strict, but that's another argument.

Quote:
>    1) Generators:   With infinite lists you can generate
>            a sequence without having to specify bounds.
>            The classic example of this is a sieve function that
>            generates the infinite stream of primes.

  This seems to be underestimated by people who haven't had a lot of
experience. It's much easier to write an infinite loop than one that
terminates after a specified time or when a specific condition is met. An
example in Miranda - the list of numbers from n to m:

from n m = [], n > m
         = n:(from (n+1) m), otherwise

Now the list of numbers from n:

from n = n:(from (n+1))

  As you can see, the infinite list is easier to produce because you don't
have to worry about the code to make it finite. Now if we write all of our
functions with similar disregard for finiteness, we only write half the code
that we would have. This is a good thing, because we have only half the
opportunities to make errors.
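The same contrast can be sketched in Python, a strict language with built-in generators (an illustrative assumption, not part of the original Miranda argument): the finite version must thread and test a bound, while the infinite one simply has none.

```python
from itertools import islice

def from_to(n, m):
    # finite: the bound m has to be carried along and tested at every step
    while n <= m:
        yield n
        n += 1

def from_(n):
    # infinite: no bound, no termination test -- roughly half the code
    while True:
        yield n
        n += 1

finite = list(from_to(3, 6))
prefix = list(islice(from_(3), 4))      # the consumer decides how much to take
```

As in the Miranda version, finiteness concerns move out of the producer and into the consumer.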

Quote:
>In addition the semantics of the lazy code no longer
>has its simple functional reading but must be understood in the context
>of the exact evaluation order.

  I would agree that for I/O, lazy evaluation order is a problem, but then
again I haven't implemented enough interactive lazy functional programs to be
qualified to comment. I'm certain with experience I'd get used to it. However,
when I/O is not involved, evaluation order becomes of less concern than the
gravitational pull of the moon.

Quote:
>When I use a lazy language AND really use
>the fact that it is lazy, I must constantly be worried about the
>order of evaluation.   On the other hand, 99% of all programs
>(that is programs that have the same interpretation in a strict
>and lazy interpreter) I don't have to worry about the order of
>evaluation.  

  Of course you have to. As another poster pointed out, if-then-else (and, in C,
the ?: operator) are the kind of things whose evaluation order you have to worry
about. Furthermore, you have trained yourself not to write procedures with
arguments whose evaluation order matters.

Quote:
>What's more, that is in fact what is done right now.  Even lazy
>language programmers mostly write code that would work under
>strict evaluation.

  Well yes, of course. No-one goes around aiming to write code which doesn't
get evaluated. But the from example above is a counter-example.

Quote:
>When he does write something that requires
>laziness, he has to check very carefully that all the functions
>he calls with his infinite data structure will actually terminate
>with that input.  

  To tell you the truth I never considered such a thing. There aren't so many
functions which use all of an infinite data structure. One is sum, another is
user output. Certainly, I know which things are finite and which are infinite,
but it is not the sort of thing which keeps me awake at night or even delays
me for a split second.

John



Mon, 02 Jan 1995 08:42:11 GMT  
 Lazy languages considered questionable

Quote:
>Now you say 'that's not a big deal.  I disagree.  It is only a simple
>matter now because frankly we don't use laziness much. Pointers are
>simple when you only use the once or twice in a program.  If laziness
>is REALLY worth it, it will be used extensively, and thus the above
>problem could get as bad as pointer are in imperative languages.  

  It occurs to me that Unix pipes behave similarly to lazy functions on infinite
lists. Certainly, if I say "yes fred | less", I will achieve the expected
result, i.e. as many pages of fred as I want. On the other hand, if I say
"yes fred | sort | less", I will get nothing, because the sort will not
terminate. Now no-one ever worries about Unix pipes being difficult, though
they use them all the time. Why should laziness be any different?

John



Mon, 02 Jan 1995 14:46:48 GMT  
 Lazy languages considered questionable

Quote:
(Vance Morrison) writes:
> ...
>    1) What are the advantages lazy evaluation that make
>       it worth the extra trouble to implement?
> ...

Fergus Henderson gave a good account of the usual reasons. The obvious one not
mentioned in Vance's posting is that lazy evaluation is more general because it
can give answers where eager evaluation fails to terminate.

I just want to add a very mundane example to illustrate how lazy evaluation can
allow you to write clearer programs, because you don't need explicit control
over expression evaluation (expressions are evaluated "as necessary").

This Miranda definition expresses a well-known algorithm for multiplying
without using multiplication (except multiplication and division by 2, which
are easy in binary).

mult::num->num->num
||precondition: n a natural number
||postcondition: mult x n = x*n

mult x n = 0,           if n = 0
         = y,           if n > 0 & n mod 2 = 0
         = y+x,         otherwise
            where y = 2*(mult x (n div 2))

Obviously the "where" part is a convenient notation, saving some typing and
clarifying the structure (by making explicit the fact that the same value y is
calculated for both the 2nd and 3rd lines). But if y is evaluated eagerly, then
this function as written diverges. Therefore for an eager implementation extra
information is needed to say which cases y is calculated for.

(I'm teaching with the eager language Hope this autumn, so if anyone can prove
me wrong with an eager implementation of mult that really is just as simple as
my Miranda one I'd be grateful to have it!)
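One way to see the difficulty in a strict language is to make the delayed binding explicit with a thunk; here is a sketch in Python rather than Hope (an assumption of this illustration), where the shared `y` must be wrapped in a lambda and forced per case:

```python
def mult(x, n):
    """x * n using only doubling, halving, and addition (n a natural number)."""
    # In a lazy language, `y = 2 * mult(x, n // 2)` could be written here
    # unconditionally.  Evaluated strictly, that line would recurse before
    # the n == 0 case is ever tested, so it diverges; instead we delay y
    # behind a lambda and force it only in the cases that need it.
    y = lambda: 2 * mult(x, n // 2)
    if n == 0:
        return 0
    if n % 2 == 0:
        return y()
    return y() + x
```

The extra `lambda`/`()` noise is exactly the "extra explicit information needed to control the evaluation" the post describes.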

Don Sanella posted some examples in ML of how to implement infinite data
structures, using functions from the unit type. His point, made very clearly,
was that infinite data structures are implementable eagerly in a natural way.
However, I think you can also see my point in his examples, because there is
always just a little extra explicit information needed to control the
evaluation.

Vance Morrison also mentions problems with lazy list I/O, saying that
evaluation order seemed to be important. I don't know his examples, but I would
hazard the guess that the problems are with functional I/O (i.e. turning the
side effects of I/O into functional programming) rather than with laziness as
such.

Steve Vickers



Mon, 02 Jan 1995 18:09:59 GMT  
 Lazy languages considered questionable

VM>   I think everyone agrees that lazy evaluation is harder to implement
VM>   efficiently on conventional computers but that is not the kind of
VM>   problem I wish to discuss (one could imagine a good compiler removing
VM>   this difficulty).   On the other hand, since it is more difficult

I don't agree. I once implemented a strict language and later changed
the strategy to lazy. The change was not trivial, but it was not difficult
either. [Efficiency was reasonable in both cases.]

VM>   to implement Lazy evaluation we have to have good reasons for going
VM>   to this extra trouble.   Thus my first question is
VM>
VM>     1) What are the advantages lazy evaluation that make
VM>        it worth the extra trouble to implement?

This is simply the wrong question, because you neither have to
implement the languages you want to use, nor do you have to use the
languages you implement.

VM>   To the best of my knowledge the only advantage of lazy evaluation
VM>   is the fact that it allows the manipulation of potentially infinite
VM>   data structures.   Now this advantage is only worth it if it actually

It is not the only advantage (not even the most important one),
it is only the most apparent difference.

Related to the infinite data structures is the use of circular
programs, you can sometimes write programs in an attribute grammar
style. Richard Bird wrote a paper about this
(Acta Informatica 21, 239-250 (1984)).
Example: Suppose you have a list of natural numbers and you want to
subtract the least element of the list from all elements. In a lazy
language, you can do it all in one list traversal:

normalize ls = res
               where (m,res) = norm m ls
norm m [] = (0,[])
norm m (x:xs) = (min x m',x-m : res)
                where (m',res) = norm m xs

The first argument of "norm" is like an inherited attribute,
it has no value yet when "norm" is called from "normalize".
But when "norm" comes back, it returns (synthesized attribute)
the changed list and the minimum of the list
and the local declaration in "normalize" puts the two attributes together.
This only works, because the subtractions (x-m) in the result of
"norm" are delayed, their result is not needed for "norm" to proceed.

VM>   [...]
VM>   Thus at present I can find no compelling reason to go to the extra
VM>   effort to implement lazy languages.   In addition I have a reason
VM>   for NOT using them.   When I use a lazy language AND really use
VM>   the fact that it is lazy, I must constantly be worried about the
VM>   order of evaluation.   On the other hand, 99% of all programs

"Constantly" is an exaggeration.
I dare say you have to be worried more often in strict languages ---
you simply don't realize this when you have grown up with languages
like Pascal, C, LISP etc. which always first evaluate the arguments
of a function call. And with 'worried' I not only mean "Oops, this could
sometimes fail to terminate!" but also (and more often) "Oh no, this is
horribly inefficient!".

VM>   (that is programs that have the same interpretation in a strict
VM>   and lazy interpreter) I don't have to worry about the order of
VM>   evaluation.   Thus it seems to me that since we can get what we
VM>   want done in a more abstract environment where the order of evaluation
VM>   basically doesn't matter, that is what we should do.

In other words: You simply choose the language that is most suitable for your
application.  There is nothing wrong with this principle.  I don't
believe in the universally-best-for-all-problems language anyway.

VM>   [...]
VM>   The spirit of function languages has been 'simplify, simplify'.  Thus
VM>   imperative languages were considered to complex because the sharing
VM>   conditions cause by pointers were hard to describe and reason about.
VM>   Thus they were eliminated from the language (even though it caused
VM>   a NOTICEABLE loss of expressive power).  I assert that lazy evaluation
VM>   causes a similar problem.  Namely it requires the programmer to worry
VM>   about evaluation details that MOST of the time don't matter.  Thus it
VM>   seems that laziness should be abandoned unless it has clear advantages
VM>   (which from what I have been able to surmise, it does not).

In lambda-calculus, lazy evaluation has an important theoretical
property: it is normalizing.
This means that if you prove by purely equational reasoning
that an expression is equal to a value, then this expression will
reduce to this value under lazy evaluation. For strict evaluation,
you additionally have to prove termination.
In this respect, lazy evaluation provides a simplification.

Unfortunately, this property is lost in most lazy languages,
because they are closer to orthogonal Term Rewriting Systems than to
lambda-calculus.
--


University of Edinburgh            UUCP:     ..!mcsun!uknet!dcs!smk

EH9 3JZ                            Tel:      (44)-31-650-5139
SCOTLAND                           Fax:      (44)-31-667-7209



Mon, 02 Jan 1995 20:09:43 GMT  
 Lazy languages considered questionable

Vance Morrison poses a very interesting question:

        1) What are the advantages lazy evaluation that make
           it worth the extra trouble to implement?

I agree that, if it were just for infinite data structures,
laziness would be justified only in very special circumstances.
But I think there is more to laziness than that.

I like lazy functional programming because it gives me a simple but powerful
way to reason about programs. Most of us would employ routinely the
following three rules of reasoning:

1. An application (\x.e) e' is equivalent to e with e' substituted for x.

2. In an expression (let x = e' in e), substituting e' for some
   occurrence of x in e does not change the meaning of the program.

3. An expression (let x = e' in e), where x does not occur in e, is
   equivalent to just e.

As Karl-Philip Faxen noted, in lazy functional languages these rules are
always true, barring occasional complaints of a type-checker (if the
language is typed, that is). In strict functional languages, these rules are
true only if e' is a _value_.  This is considerably less useful, because it
means one cannot reason unconditionally about program fragments. To find out
whether an expression has a value, I need to know global information.

The bottom line is that, while strict functional programs are often as easy
to write as lazy functional programs, lazy functional programs are
considerably simpler to manipulate. Depending on the way you program,
this might be worth the added implementation overhead.

-- Martin Odersky

I can exchange the left- and right-hand sides of a declaration without
changing the meaning of a program, and I can substitute equals for
equals.  (I believe now that similarly simple reasoning techniques can be
developed for imperative programs, but that is another matter.)



Mon, 02 Jan 1995 21:47:49 GMT  
 Lazy languages considered questionable
Hello again,

Quote:
>>        1) Generators:   With infinite lists you can generate
>>                a sequence without having to specify bounds.
>>                The classic example of this is a sieve function that
>>                generates the infinite stream of primes.

In my original post I stated that I did not believe generators warranted
the effort of making languages lazy.  I have gotten replies suggesting
that generators are in fact very useful, with examples of this.
Let me clarify why I said what I did.

Actually I would agree that generators are useful, and that placing
arbitrary bounds on the code just to ensure termination complicates things
unnecessarily.  But generators CAN be created in a strict language by using
closures to defer the evaluation.   The difference between this approach
and a lazy language is that the finite data structure and its infinite
counterpart ARE DIFFERENT DATATYPES.  Thus the typing system ensures that
you only use infinite data structures where it is legal to do so.  In
languages that support subtyping or auto-coercions, you can have the
infinite data structure be a supertype of the finite one (and thus you
can use a finite structure with any function that wants an infinite one).

It seems to me that such a design can be very elegant and can be implemented
in a strict language.  (Another possibility is to add a function 'delay'
as in Scheme, but then we lose the distinction between the infinite
and the finite.)

I guess what is at the root of my suspicion of lazy languages is that
I believe there is a fundamental distinction between the finite and
the infinite (the infinite takes much more care), and that this distinction
should not be suppressed, but rather made as obvious as possible
(this is why I like type theory instead of set theory).  Current lazy
evaluation languages blur this distinction, and I believe that is bad.
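A minimal sketch of the design described above, in Python (all names here are hypothetical, chosen for illustration): a Stream type built from closures, deliberately distinct from the finite list type, with an explicit coercion from lists in the supertype direction.

```python
class Stream:
    """A possibly-infinite stream: a head plus a thunk producing the tail.
    Deliberately a different type from list, so the finite/infinite
    distinction is visible in the types."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail = tail_thunk          # evaluation deferred until demanded

    def take(self, n):
        """Force at most the first n elements into an ordinary (finite) list."""
        out, s = [], self
        while n > 0 and s is not None:
            out.append(s.head)
            s = s._tail()
            n -= 1
        return out

def from_list(xs):
    """Coerce a finite list into a Stream (finite usable where infinite is wanted)."""
    if not xs:
        return None
    return Stream(xs[0], lambda: from_list(xs[1:]))

def naturals(n=0):
    """An infinite Stream; only as much is built as take() demands."""
    return Stream(n, lambda: naturals(n + 1))
```

Functions written against Stream announce in their type that they may receive something infinite; plain list-processing code cannot be handed one by accident.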

Vance



Mon, 02 Jan 1995 21:50:55 GMT  
 Lazy languages considered questionable

Quote:


>>Now you say 'that's not a big deal.  I disagree.  It is only a simple
>>matter now because frankly we don't use laziness much. Pointers are
>>simple when you only use the once or twice in a program.  If laziness
>>is REALLY worth it, it will be used extensively, and thus the above
>>problem could get as bad as pointer are in imperative languages.  
>  It occurs to me that Unix pipes behave similarly to lazy functions on infinite
>lists. Certainly, if I say "yes fred | less", I will achieve the expected
>result, i.e. as many pages of fred as I want. On the other hand, if I say
>"yes fred | sort | less", I will get nothing because the sort will not
>terminate. Now no-one ever worries about Unix pipes being difficult, though
>they use them all the time. Why should laziness be any different?

There are two points to make.  First, most programs do NOT produce
infinite output on finite input.  Thus for MOST programs you need not
worry about whether the execution of pipes is interleaved or not. In
fact, on MS-DOS pipes are NOT interleaved, and most people never notice
the difference.   Thus the simple semantics of pipes, namely

        1) First program generates output
        2) This output is sent to next program
                ....
        n) This output is sent to next program
        n+1) This output is sent to terminal

suffices.  Notice that this explanation does not get into the exact
order UNIX actually uses, yet it is sufficient for 99% of all pipe
programming.  (It abstracts away order of evaluation details.)

If you asked non-gurus what "yes fred | sort | less" would produce, they
may not know, because it depends upon details that they never bothered
to learn or deal with.  Thus most people would call the above pipe
'tricky' and hard to understand.  So pipes ARE easy, but lazy pipes
are more difficult (which is my point).
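The pipe analogy replays directly in a lazy language such as Haskell, where composing functions on lazy lists behaves like a pipeline.  A minimal sketch (the names here are mine, chosen for illustration):

```haskell
import Data.List (sort)

-- "yes fred": an infinite stream of identical lines.
yesFred :: [String]
yesFred = repeat "fred"

-- "yes fred | less": fine, because 'take' only demands a finite prefix.
firstPage :: [String]
firstPage = take 24 yesFred

-- "yes fred | sort | less" would never produce output, because sort
-- must see ALL of its (infinite) input before emitting anything:
--   badPage = take 24 (sort yesFred)   -- diverges, so left commented out

main :: IO ()
main = mapM_ putStrLn (take 3 firstPage)
```

The commented-out line is exactly the "tricky" pipe: whether a lazy pipeline produces anything depends on how much of its input each stage demands, which is the order-of-evaluation detail the simple pipe semantics abstracts away.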

The second point is that UNIX pipe 'programs' (almost never exceeding one
line) are at LEAST several orders of magnitude less complicated than
a typical program.  Let's face it, most code that can fit on one page
can be made easy to understand; the real trick is whether it is still easy
to understand when it fills 10000 pages.

Vance



Mon, 02 Jan 1995 22:06:46 GMT  
 Lazy languages considered questionable

(V. Morrison)

Quote:
>I have read many a paper on functional languages that begin as a
>premise that lazy evaluation is good (that is better than strict
>evaluation).   For some time I accepted this premise basically on
>'faith' but more and more I am questioning this assumption.

        I find myself in the same boat.  As someone with an interest in pure
functional programming gained from some small amount of experience in LISP
and SCHEME, my first attraction to _pure_ functional languages was their
potential to be optimized, especially on massively parallel machines.  One of
these days we'll hit the 'von Neumann barrier' with respect to the speed of
compiled and optimized imperative programs, and the next step after that seems
to be parallel languages.  Existing 'parallel' imperative languages are
_weak_.  One has to specify what tasks to execute, which ones to wait for, etc.
You might as well be programming in assembly code at that point -- but with
a referentially transparent program, this parallelization (optimization?) is
almost trivial (once you get past the hardware requirements).

(V. Morrison)

Quote:
> [...] I have only seen two basic uses of this feature.

>        1) Generators:   With infinite lists you can generate
>                a sequence without having to specify bounds.
>                [...]

>        2) I/O:   This seems to be the use driving the need for
>                lazy evaluation.

        I think here we ought to add: '3) Parallel optimization'

        A lazy list can be implemented as a stream very efficiently, allowing
the producer of the list to execute in parallel with the consumer.  For very
long lists, this would not only save memory (=time w/gc), but improve
performance dramatically, if used sparingly.

        IMHO, we need languages in which 'lazy' (aka 'normal order') evaluation
is primitive and in which 'eager' (aka 'applicative order') evaluation is also
primitive.  Such languages would be our best bet if they gave the programmer
the ability to specify which order is to be used in a given situation, and if,
where the programmer has supplied no such information, they chose the most
efficient evaluation order themselves (this can be done via profiling, to
avoid needing to know the number of processors/processes available before
coding).

        In other words, why not recognize the value of normal order evaluation
in some contexts and the value of applicative order evaluation in others, and
allow them to co-exist?  Using only normal order evaluation seems like
overkill, and an unnecessary burden on the run-time system (which would then,
even under a parallel model, need to assign processors).  On the other hand,
considerable ability to optimize code is lost by insisting on allowing only
applicative order evaluation.
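Haskell ended up taking roughly this position: lazy by default, with `seq` (and strictness annotations) to force applicative order where the programmer knows it pays off.  A sketch of the classic case, using nothing beyond the Prelude:

```haskell
-- A lazy left fold builds a chain of suspended additions (thunks);
-- forcing the accumulator with seq before recursing keeps it evaluated.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where go acc []     = acc
        go acc (x:xs) = go (acc + x) xs            -- acc stays a thunk

sumStrict :: [Int] -> Int
sumStrict = go 0
  where go acc []     = acc
        go acc (x:xs) = let acc' = acc + x
                        in acc' `seq` go acc' xs   -- force before recursing

main :: IO ()
main = print (sumStrict [1..1000000])              -- prints 500000500000
```

Both compute the same value; only the space behaviour differs, which is exactly the kind of per-situation choice being asked for here.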

Oh!!  BTW, lazy lists also allow the programmer to code things that s/he would
need 'static' variables for in 'C' -- like a random number generator -- since
there's no place for the 'seed' in a purely applicative order program.  A lazy
program could define randoms() as follows to supply an infinite list of random
numbers...

        randoms() --> randoms(start_seed);   % start_seed is a constant
        randoms(n) --> lazy [ nh | randoms(nh) ]
                       where nh == some_seed_hasher(n);

Does anyone know of another way to do this in a purely functional language?
If so, I'd _love_ to hear it... Or should we add: '4) this kind of thing' to
the list of things that 'lazy' evaluation buys for us?
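In Haskell the same idea is essentially one line: `iterate` threads the seed through the hashing step, and laziness keeps the 'static variable' implicit.  A sketch, using the Park-Miller linear congruential constants as a stand-in for `some_seed_hasher` (any seed-hashing function would do):

```haskell
-- One step of a linear congruential generator (Park-Miller constants).
nextSeed :: Integer -> Integer
nextSeed n = (16807 * n) `mod` 2147483647

-- A lazy infinite list of pseudo-random numbers from a starting seed.
-- No mutable state: the seed is threaded through iterate.
randoms :: Integer -> [Integer]
randoms seed = iterate nextSeed (nextSeed seed)

main :: IO ()
main = print (take 5 (randoms 1))   -- first two values: 16807, 282475249
```

The strict purely functional alternative is to thread the seed explicitly through every call (or hide that plumbing in a state monad), which is precisely the bookkeeping the lazy list makes invisible.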

        Just my $.02...
                -- Scot



Mon, 02 Jan 1995 23:22:09 GMT  
 Lazy languages considered questionable

Quote:

>    Exact real arithmetic may not be the world's
>    most important application.  But I would consider it
>    the canonical mathematical example of higher-order data.
>    The fact that lazy evaluation doesn't help here seems disconcerting.

I cannot even imagine what kind of domain
could represent the set of real numbers.
What would be the basis and partial order?

I would expect the uncountability of real numbers
to be a real stumbling block.  How can you imagine
coming up with reasonably efficient algorithms
for the basic functions when there don't even exist
inefficient generate-and-test algorithms?

------------------------------------------

Tulane University       New Orleans, Louisiana  USA



Tue, 03 Jan 1995 04:07:46 GMT  
 Lazy languages considered questionable

Quote:


>>        Exact real arithmetic may not be the world's
>>        most important application.  But I would consider it
>>        the canonical mathematical example of higher-order data.
>>        The fact that lazy evaluation doesn't help here seems disconcerting.
>I cannot even imagine what kind of domain
>could represent the set of real numbers.
>What would be the basis and partial order?
>I would expect the uncountability of real numbers
>to be a real stumbling block.  How can you imagine
>coming up with reasonably efficient algorithms
>for the basic functions when there don't even exist
>inefficient generate-and-test algorithms?

I apologize for my original sloppiness.  What the paper and I actually
refer to are what are variously called the recursive reals, constructive
reals, computable reals, or representable reals, i.e. the set of real
numbers x such that there is a computable function f from positive
integers to rationals, such that for all n, |x - f(n)| < 1/n.

There are countably many such numbers.  Arithmetic operations, trig functions,
etc. are computable on them.  Equality is not in general computable.
Thus for practical purposes, you can perform arithmetic without cumulative
rounding errors, but comparisons can only be performed to a specified
accuracy.
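The definition translates directly into code: a constructive real is a function from a precision n to a rational approximation within 1/n.  A minimal, deliberately inefficient sketch (the names are mine, not taken from any of the implementations mentioned):

```haskell
import Data.Ratio ((%))

-- A constructive real x: for each n >= 1, a rational within 1/n of x.
type CReal = Integer -> Rational

-- Any rational is trivially a constructive real (error is always 0).
fromRat :: Rational -> CReal
fromRat q _ = q

-- |x - f(2n)| < 1/(2n) and |y - g(2n)| < 1/(2n)
-- give |(x+y) - (f(2n) + g(2n))| < 1/n.
addR :: CReal -> CReal -> CReal
addR f g n = f (2 * n) + g (2 * n)

-- Equality is not computable; comparison only to a stated accuracy.
-- True means the two reals are within roughly 2/n of each other.
approxEq :: Integer -> CReal -> CReal -> Bool
approxEq n f g = abs (f n - g n) <= 2 % n

main :: IO ()
main = print (addR (fromRat (1 % 3)) (fromRat (1 % 6)) 100)  -- prints 1 % 2
```

Note that this representation needs no lazy evaluation at all: the function-from-precision view does the job in a strict language, which is the disconcerting point raised above about lazy digit sequences.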

There are no implementations that are competitive with floating point
arithmetic in performance, certainly.  We have some implementations that
are appreciably better than what's needed for a desk calculator (when
run on a reasonably modern workstation).  But it's very hard to do even
that well with a representation based on lazy sequences of digits or
lazily evaluated infinite continued fractions.

Hans-J. Boehm



Tue, 03 Jan 1995 05:45:32 GMT  
 