Laziness 
 Laziness

The debate of laziness vs. strictness is an old one, with both sides
having their arguments: laziness offers a more powerful programming
style, for example, while strictness gives the programmer "more
control" - and I am sure there are other points. Let's not reopen that
argument.

I'm interested in opinions about full laziness like Haskell has, versus
"optional" laziness like Scheme's delay constructs.

The delay construct is a syntactic annoyance, since we have to
explicitly delay lazy arguments to functions rather than having the
language do so for us.

A third option is to allow data structure and function definitions to
have a "lazy" option for each field.

What's the current public opinion on the relative utility of these three
mechanisms - full laziness, optional laziness with delay, and optional
laziness with "real language support"?

TIA,

ABW



Sat, 13 Jan 2001 03:00:00 GMT  
 Laziness

Quote:
>What's the current public opinion on the relative utility of these three
>mechanisms - full laziness, optional laziness with delay, and optional
>laziness with "real language support"?

Haskell has optional strictness via the ! annotation. See section 6.3 of
the _Gentle Intro..._.
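
For illustration, here is roughly what that annotation looks like in a
data declaration (my own sketch, not an example from the tutorial):

  -- A field marked with ! is evaluated when the constructor is applied,
  -- so a Pair never stores an unevaluated thunk in that position.
  data Pair = Pair !Int Int

  -- The first field is forced as soon as the pair is constructed, the
  -- second is not: evaluating  Pair undefined 2  fails, while
  -- evaluating  Pair 2 undefined  does not.
  example :: Pair
  example = Pair (1 + 1) 2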

Also, Goguen et al.'s OBJ3 allows the programmer to specify lazy, strict,
or some combination for each operator the programmer defines.  See section
2.4.4 of _Introducing OBJ_ at http://www-cse.ucsd.edu/users/goguen/pubs/

John



Sat, 13 Jan 2001 03:00:00 GMT  
 Laziness
Hello!



Quote:
>[...]
>What's the current public opinion on the relative utility of these three
>mechanisms - full laziness, optional laziness with delay, and optional
>laziness with "real language support"?

I'm currently more on the Haskell side. As such, Haskell has ways
to make functions strict
  (strict f _|_ = _|_ ; strict f x = f x  for x <> _|_),
and has strictness annotations for data constructors. So Haskell
is lazy by default, and strict if you want to enforce strictness.
Of course, the compiler is free to evaluate functions strictly even
without making them strict explicitly, WHEN that does not change
the semantics. That's called strictness analysis and is implemented
at least in GHC.
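
That strict function is essentially the standard strict application
operator ($!), which can be defined with seq - a sketch from memory,
not anything new:

  -- Strict application: force the argument to weak head normal form
  -- before passing it to the function.  If x is _|_, so is the result.
  strictApply :: (a -> b) -> a -> b
  strictApply f x = x `seq` f x      -- this is what the Prelude's ($!) does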

Regards, Felix.



Sat, 13 Jan 2001 03:00:00 GMT  
 Laziness

Quote:

> So Haskell
> is lazy by default, and strict if you want to enforce strictness.
> Of course, the compiler is free to evaluate functions strictly even
> without making them strict explicitly, WHEN that does not change
> the semantics. That's called strictness analysis and is implemented
> at least in GHC.

There is a proposed extension to SML, implemented in SML/NJ, that allows
lazy annotations on data constructors. I prefer the strict-by-default,
lazy-when-you-want-it approach. Things like space usage and pattern
matching are easier to reason about in a strict language. The fact that
you need to do strictness analysis in the first place to improve program
performance says something about the utility of laziness in the common case.


Sat, 13 Jan 2001 03:00:00 GMT  
 Laziness

Quote:


>>[...]

>>What's the current public opinion on the relative utility of these three
>>mechanisms - full laziness, optional laziness with delay, and optional
>>laziness with "real language support"?

>I'm currently more on the Haskell side.

Thanks for your vote; but what's the reasoning behind it?

Quote:
>As such, Haskell has ways
>to make functions strict
>  (strict f _|_ = _|_ ; strict f x = f x  for x <> _|_),
>and has strictness annotations for data constructors. So Haskell
>is lazy by default, and strict if you want to enforce strictness.

So why is that approach better than making things strict by default
and lazy if specified?

Making things lazy by default leads to space leaks and performance problems,
which seem to be quite difficult to debug with current tools and methods;
it also makes the operational semantics more complicated, which makes
it much harder to apply traditional debugging approaches.
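
A standard example of the kind of problem I mean (just a sketch; foldl'
is the strict left fold from Data.List):

  import Data.List (foldl')

  -- Lazy by default: foldl builds a chain of (((0+1)+2)+...) thunks a
  -- million deep before any addition happens, which can blow the heap.
  leaky :: Integer
  leaky = foldl (+) 0 [1 .. 1000000]

  -- Forcing the accumulator at each step keeps the space usage constant.
  fine :: Integer
  fine = foldl' (+) 0 [1 .. 1000000]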

Making things strict by default leads to a different set of problems,
of course, but I've yet to see any convincing argument that these
problems are worse than the ones mentioned above.

Quote:
>Of course, the compiler is free to evaluate functions strictly even
>without making them strict explicitly, WHEN that does not change
>the semantics. That's called strictness analysis and is implemented
>at least in GHC.

Yes, but GHC doesn't infer strictness of data structures
(or at least if it does, it doesn't do a good job of it),
and this can often make a very significant difference to efficiency.
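
For example (an illustrative sketch): an accumulator kept in a data
structure with lazy fields can leak even under a strict fold, because
nothing forces the fields themselves:

  import Data.List (foldl')

  -- foldl' forces the Acc constructor at each step, but the fields stay
  -- lazy: the sums inside grow as chains of (+) thunks unless the fields
  -- are declared strict.
  data Acc = Acc Int Int
  -- data Acc = Acc !Int !Int   -- strict fields keep the sums evaluated

  sumAndCount :: [Int] -> Acc
  sumAndCount = foldl' step (Acc 0 0)
    where step (Acc s n) x = Acc (s + x) (n + 1)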

--

WWW: <http://www.cs.mu.oz.au/~fjh>



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness

Quote:

> The debate of laziness vs. strictness is an old one, ...

Just a point on terminology: "laziness" has to do with operational
semantics, "strictness" with denotational semantics, and in that
sense the two are not directly comparable.  Laziness is just one possible
way to implement non-strict semantics.  You can even implement
non-strictness with eager evaluation (using parallelism).
Most of the benefits claimed for laziness are really about
non-strictness, and not specifically about laziness.
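
A tiny example of the distinction (my own sketch):

  -- Non-strict semantics says this is 1: const never uses its second
  -- argument.  Lazy evaluation gets there by never starting the big sum;
  -- an eager parallel implementation could start it and simply not wait
  -- for it.  Either way, the meaning of the program is 1.
  one :: Int
  one = const 1 (sum [1 .. 100000000 :: Integer])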

Rishiyur Nikhil



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness

Quote:

> >As such, Haskell has ways
> >to make functions strict
> >  (strict f _|_ = _|_ ; strict f x = f x  for x <> _|_),
> >and has strictness annotations for data constructors. So Haskell
> >is lazy by default, and strict if you want to enforce strictness.
> So why is that approach better than making things strict by default
> and lazy if specified?

To generalise: is it best to default to the more powerful behaviour
and let the programmer annotate for more efficient behaviour, or to
default to efficient behaviour and have the programmer annotate for
more powerful behaviour when it's needed?

Personally, I'd expect the second approach to be superior, since it
means that power is "activated" only where it's needed, whereas with
the first approach, programmers may not bother to go back and figure
out where they can optimise.

ABW



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness

Quote:
> To generalise: is it best to default to the more powerful behaviour
> and let the programmer annotate for more efficient behaviour, or to
> default to efficient behaviour and have the programmer annotate for
> more powerful behaviour when it's needed?

> Personally, I'd expect the second approach to be superior, since it
> means that power is "activated" only where it's needed, whereas with
> the first approach, programmers may not bother to go back and figure
> out where they can optimise.

Only, this is a wrong generalisation for the non-strict/strict
discussion.  The "power" of non-strict languages is in part the
referential transparency, which requires a global assumption.  You
cannot just enable it "here and there".  Of course, if you were only
thinking about the operational property of laziness (postponing some
calculations), then you're right, but this IMHO misses the major point
of non-strict languages.

But even without this, your argument could be used to argue for
writing everything in assembler instead of a high-level language.

As I think most would agree, non-strict semantics simplifies
*denotational* reasoning about programs and facilitates building truly
reusable components.  In contrast, strict semantics simplifies
reasoning about *operational* properties.
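
A standard illustration of the reusable-components point (my own
sketch): generic pieces compose without doing more work than the final
consumer demands.

  -- head, filter and the enumeration are each fully general; non-strict
  -- semantics lets their composition stop as soon as the first prime is
  -- found, so only finitely many candidates are ever examined.
  firstPrimeAbove :: Integer -> Integer
  firstPrimeAbove n = head (filter isPrime [n + 1 ..])
    where
      isPrime k = k > 1 && all (\d -> k `mod` d /= 0) [2 .. k - 1]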

It seems to me that the biggest problem for lazy implementation of
non-strict languages today is the memory behaviour, which not only can
be hard to understand, but which also can be rather sensitive to minor
changes in the program.

/Tommy



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness
Quote:

>The debate of laziness vs. strictness is an old one, with both sides
>having their arguments: laziness offers a more powerful programming
>style, for example, while strictness gives the programmer "more
>control" - and I am sure there are other points. Let's not reopen that
>argument.

>I'm interested in opinions about full laziness like Haskell has, versus
>"optional" laziness like Scheme's delay constructs.

>The delay construct is a syntactic annoyance, since we have to
>explicitly delay lazy arguments to functions rather than having the
>language do so for us.

>A third option is to allow data structure and function definitions to
>have a "lazy" option for each field.

>What's the current public opinion on the relative utility of these three
>mechanisms - full laziness, optional laziness with delay, and optional
>laziness with "real language support"?

There is of course a fourth option:  to have a "strict" option for each field,
instead of the "lazy" option, so that the default is full laziness.  And a
fifth option: to have the option for each application of a function or
constructor.

The original Hope language was strict (eager) and had a single lazy
constructor "lcons".  When we defined Hope+ we made it lazy, but added
annotations for each argument to require it to be evaluated (either eagerly or
as far as head normal form - two different annotations). It's very rare to
want to be eager, or even just strict, so it seemed natural for the default to
be lazy and the overrides to be on particular applications rather than on the
function and constructor definitions.

Thinking in demand driven terms, what these annotations do is introduce extra
demands.
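
In Haskell terms (a rough analogue, not Hope+ syntax), the two strengths
of demand roughly correspond to forcing to weak head normal form with seq
and forcing completely with deepseq from the Control.DeepSeq library:

  import Control.DeepSeq (deepseq)

  toWhnf, toNormalForm :: [Int] -> Int
  toWhnf xs       = xs `seq` 0        -- demands only the outermost constructor
  toNormalForm xs = xs `deepseq` 0    -- demands the whole spine and every element

So toWhnf (1 : undefined) is 0, while toNormalForm (1 : undefined) is undefined.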

Of course this suffers from the disadvantage of the second approach described
above, that it's syntactically intrusive (although much less so than "delay").

Perhaps with a modern functional language like Haskell, which has a reasonable
approach (monads) to interfacing with the non-declarative real world, a fully
lazy approach without support for forcing strictness and eagerness in odd
places will be adequate - I don't know, I've never seen anyone try to write an
OS or an RDBMS in Haskell - but a decade or more ago we were using much cruder
techniques, and annotations to force evaluation were needed for "heavy duty"
programming (and maybe they still are). It did seem clear back then that we
couldn't program an RDBMS or an OS in an eager functional language without
spending so much time outside the functional language in some sort of support
shell that we might as well not start from a functional language in the first
place.  Of course today eager languages like ML are perfectly viable (because
they've ditched the extreme purist claptrap that used to be popular) but they
weren't an alternative (no complete implementations) in those days.

So I'll be interested in seeing other people's answers on this, to see if the
world really has moved on since I was a functional programmer.

Tom Thomson



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness

Quote:
> To generalise: is it best to default to the more powerful behaviour
> and let the programmer annotate for more efficient behaviour, or to
> default to efficient behaviour and have the programmer annotate for
> more powerful behaviour when it's needed?

> Personally, I'd expect the second approach to be superior, since it
> means that power is "activated" only where it's needed, whereas with
> the first approach, programmers may not bother to go back and figure
> out where they can optimise.

The Elegant compiler generator system and programming language
has explicit laziness.
That is, there is a predefined type Lazy(T) to represent closures.
Because Elegant is an imperative language, it matters how often
you evaluate a closure. Therefore, there is another type,
Closure(T). Lazy(T) stabilizes after one evaluation; Closure(T)
never stabilizes.
There is even a third type, Incr(T), that keeps track of its own
consistency. That is, if the value from a previous evaluation
is (or might be) out of date, it will be reevaluated. You might call
this 'spreadsheet evaluation'.
All evaluation is demand driven, of course.
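
A rough Haskell model of the Lazy(T)/Closure(T) distinction (illustrative
only, not Elegant's implementation; Incr(T) is omitted):

  -- An ordinary Haskell thunk behaves like Lazy(T): it is evaluated at
  -- most once and the result is shared afterwards.
  stabilizes :: Integer
  stabilizes = product [1 .. 20]

  -- Closure(T) can be modelled as a function from (), which re-runs the
  -- computation on every demand and never caches the result.
  type Closure a = () -> a

  reRuns :: Closure Integer
  reRuns = \() -> product [1 .. 20]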

Our experience is that you do not need laziness in most cases.
But if you need it, it is VERY convenient. Therefore, we made strictness
the default.

There is one exception to this rule, and that is the sub-language
covering attribute grammars. Attributes are always lazy.
You may add strict arguments to production rules though.
These are available at parse time.

For more info go to:

http://www.research.philips.com/generalinfo/special/elegant/elegant.html

Lex Augusteijn

--------------------------

Name:           dr.ir. Lex Augusteijn
Address:        WL 1.1.13
                Philips Research Laboratories
                Prof. Holstlaan 4
                5656 AA Eindhoven
                The Netherlands
Phone:          (+31 40 27)43938
Fax:            (+31 40 27)44004



Sun, 14 Jan 2001 03:00:00 GMT  
 Laziness

Quote:

> The delay construct is a syntactic annoyance, since we have to
> explicitly delay lazy arguments to functions rather than having the
> language do so for us.

This is IMHO not the problem - if we have a language which gives us
both, we have to decide which one we want in each situation.

The real annoyance is IMHO that you have to force the delayed values
again (or delay the whole outer expression) when using them:

You can say (+ 2 5), but not (+ 2 (delay (+ 2 3))).
And even if you could, should it mean 7, or (delay 7), or
something like
(lambda () (+ 2 ((lambda () (+ 2 3))))) ?
I don't even have an idea which solution should be preferred.
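
To put the typing problem in Haskell-ish terms (a sketch; Delayed, Delay
and force here are illustrative stand-ins for Scheme's promise type,
delay and force):

  -- A promise has a different type from the value it will produce, so
  -- the consumer must force it explicitly.
  newtype Delayed a = Delay (() -> a)

  force :: Delayed a -> a
  force (Delay f) = f ()

  -- Rejected by the type checker: (+) wants an Int, not a Delayed Int.
  -- bad = 2 + Delay (\() -> 2 + 3)

  -- The caller has to force the promise, mirroring Scheme's (force ...):
  good :: Int
  good = 2 + force (Delay (\() -> 2 + 3))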

Ralf



Sun, 14 Jan 2001 03:00:00 GMT  
 