Charles L #1 / 14
 Naive question
Hi folks. It's time for my periodic naive question. As a reminder, I am a novice in the area, and may remain so for a while, but I am trying to learn about functional languages as a sort of hobby. Anyway, here are some questions which will undoubtedly be a little simple-minded.

(1) I think I understand strict vs. non-strict. If I let "bot" mean the bottom sign, or some calculation that is somehow incorrect, then f(bot) = bot in a strict language, because the arguments are always evaluated whether or not they are needed by the function body. So, what's the difference between eager and lazy evaluation? Is one preferable over the other? Advantages? Disadvantages?

(2) I've been perusing books which have functions that can be written as f(x, y, z), where I assume that x, y, and z can themselves be functions. Now, sometimes I see it written f x y z without parentheses. Why does that happen? Especially since standard low-level math always uses parentheses.

I think that's about it. You can respond via e-mail or post. -- Charles Lin
Tue, 12 Sep 1995 12:31:43 GMT
Torben AEgidius Mogens #2 / 14
 Naive question
Quote:
> Hi folks. It's time for my periodic naive question. As a reminder,
> I am a novice in the area, and may remain so for a while, but I am
> trying to learn about functional languages as a sort of a hobby.
> Anyway, here are some questions which will undoubtedly be a little
> simple minded.
> (1) I think I understand strict vs. non-strict. If I let "bot" mean
>     the bottom sign, or some calculation that is somehow incorrect,
>     then f(bot) = bot in a strict language because the arguments
>     are always evaluated whether or not it is needed by the function
>     body.
>     So, what's the difference between eager and lazy evaluation? Is
>     one preferable over the other? Advantages? Disadvantages?
The advantages of strict evaluation are that it is slightly more efficient with current technology and that knowing the exact order of evaluation makes it easy to integrate with side-effects and exceptions. Standard ML is a prime example of a strict functional language with side-effects and exceptions.

The advantages of lazy evaluation are referential transparency and the ability to handle infinite data structures. Referential transparency means (in this context) that you can replace a function definition by its body, with parameters replaced by the actual arguments, without changing the meaning of the program. In a strict language you can make a program terminate more often by doing so, and if side-effects or exceptions are introduced these may occur in a different order.

Since the idea of not evaluating arguments until they are needed also applies to constructor arguments, it is possible to define a conceptually infinite data structure and then only use part of it. The usual example is the prime sieve: the infinite list of integers is scanned and composite numbers are thrown out. But it really has more use for modular programming. Assume we want to make a chess program with adaptive search depth. In a strict language we would have to integrate the building of the search tree with the scanning of it, both to ensure that we don't generate parts we don't need and to avoid having to store the entire tree before scanning it. In a lazy language you can make one function generate an infinite search tree and another function scan it. The lazy evaluation mechanism ensures both that we don't generate the parts we don't need and that we don't store the entire tree in memory at any one time: no part is generated before it is used, and garbage collection can reclaim the parts that have already been examined. Note that this property is an advantage even when a complete finite (but large) tree is needed.
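For concreteness, the prime sieve over a conceptually infinite list can be sketched with Python generators, which give demand-driven streams much like lazy lists; all names here are my own, purely illustrative:

```python
from itertools import islice

def ints_from(n):
    """Conceptually infinite stream of integers n, n+1, n+2, ..."""
    while True:
        yield n
        n += 1

def sieve(stream):
    """Sieve of Eratosthenes over a lazy stream of integers.
    Nothing is computed until a consumer demands the next prime."""
    p = next(stream)
    yield p
    yield from sieve(x for x in stream if x % p != 0)

primes = sieve(ints_from(2))
print(list(islice(primes, 10)))  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Only as many integers are generated as the consumer demands, and already-consumed cells can be reclaimed -- the same two properties claimed above for the lazy search tree.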
Quote:
> (2) I've been perusing books which have functions that can
>     be written as f(x, y, z) where I assume that x, y, and z
>     can themselves be functions. Now, sometimes I see it written
>     f x y z without parentheses. Why does that happen? Especially,
>     since standard low-level math always uses parentheses.
f(x,y,z) can actually be written f (x,y,z) in most functional languages, as it denotes a function applied to one argument (which is a three-tuple). In general one writes the function followed by its argument (possibly with a space in between for lexical reasons). Thus f(x) is the same as f (x) and f x. Here the parenthesis is just like a parenthesis in arithmetic expressions and is really only needed when we want to group things. Thus f (x+y) is different from f x+y, which most functional languages would parse as equivalent to (f x)+y.

When writing f x y z, f is actually a function that takes an x, then returns another function (f x), which takes a y and returns a function ((f x) y) that takes a z and returns (((f x) y) z). The type of f would be f : tx -> ty -> tz -> tr, whereas the other function would have type f : tx*ty*tz -> tr (or f : (tx,ty,tz) -> tr, depending on which language you use).

The main advantage of the non-tupled form (which is called the curried form, after the logician Curry) is that you don't have to give f all its arguments at once. For example, you can define

  add x y = x+y

  map f [] = []
  map f (a :: l) = (f a) :: (map f l)

and then write map (add 2) [1,2,3] and get [3,4,5], or even map (map (add 2)) [[1,2],[3],[4,5,6]] and get [[3,4],[5],[6,7,8]]. In most implementations the tupled (or uncurried) form is slightly more efficient than the curried form, but in some implementations (for example Miranda^(TM)) the curried form wins out. Many programmers prefer the curried form both for its added flexibility and for the simpler notation.
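The curried add/map example above can be imitated in Python (illustrative only; the real ML notation is in the post) with nested closures standing in for curried functions:

```python
def add(x):
    """Curried addition: add(2) is itself a function still waiting for y."""
    def add_x(y):
        return x + y
    return add_x

def curried_map(f):
    """Curried map: curried_map(f) is a function waiting for the list."""
    def map_f(xs):
        return [f(a) for a in xs]
    return map_f

# Partial application: (add 2) and (map (add 2)) are first-class values.
print(curried_map(add(2))([1, 2, 3]))                              # -> [3, 4, 5]
print(curried_map(curried_map(add(2)))([[1, 2], [3], [4, 5, 6]]))  # -> [[3, 4], [5], [6, 7, 8]]
```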
Tue, 12 Sep 1995 19:48:21 GMT
Don Sannel #3 / 14
 Naive question
Quote:
> The advantages of strict evaluation are that it is slightly more
> efficient with current technology and that knowing the exact order of
> evaluation makes it easy to integrate with side-effects and
> exceptions.
Don't forget: ... and pattern matching for defining functions by cases.

Quote:
> Standard ML is a prime example of a strict functional
> language with side-effects and exceptions.
> The advantages of lazy evaluation are referential transparency and the
> ability to handle infinite data-structures.
This suggests that strict languages are unable to handle infinite data structures. This is not true -- there is a standard way to code any lazy data structure (infinite or non-infinite) using functions. Once that is done, the standard examples used to illustrate the advantages of infinite data structures (e.g. sieve of Eratosthenes) come out looking much the same in a strict language as they do in a lazy language. Don Sannella Univ. of Edinburgh
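The "standard way" of coding lazy data structures with functions is to delay evaluation explicitly with thunks (zero-argument functions). A sketch in Python, purely illustrative -- any strict language with first-class functions works the same way:

```python
# A lazy list is either None or a pair (head, tail_thunk), where
# tail_thunk is a zero-argument function that builds the rest of
# the list only when it is actually called.

def ints_from(n):
    return (n, lambda: ints_from(n + 1))

def lfilter(pred, lazy):
    head, tail = lazy
    while not pred(head):          # force cells until one passes
        head, tail = tail()
    return (head, lambda: lfilter(pred, tail()))

def sieve(lazy):
    p, tail = lazy
    return (p, lambda: sieve(lfilter(lambda x: x % p != 0, tail())))

def take(n, lazy):
    out = []
    while n > 0:
        head, tail = lazy
        out.append(head)
        lazy = tail()              # force the next cell on demand
        n -= 1
    return out

print(take(10, sieve(ints_from(2))))  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

As Sannella says, the sieve comes out looking much the same as in a lazy language; the explicit lambdas are the only overhead.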
Wed, 13 Sep 1995 00:34:17 GMT
Fergus James HENDERS #4 / 14
 Naive question
Quote:
>> The advantages of strict evaluation are that it is slightly more
>> efficient with current technology and that knowing the exact order of
>> evaluation makes it easy to integrate with side-effects and
>> exceptions.
>Don't forget: ... and pattern matching for defining functions by cases.
I don't understand. Why does strict evaluation help with pattern matching? --
This .signature virus is a self-referential statement that is true - but you will only be able to consistently believe it if you copy it to your own .signature file!
Wed, 13 Sep 1995 09:11:50 GMT
Don Sannel #5 / 14
 Naive question
Quote:
> >> The advantages of strict evaluation are that it is slightly more
> >> efficient with current technology and that knowing the exact order of
> >> evaluation makes it easy to integrate with side-effects and
> >> exceptions.
> >Don't forget: ... and pattern matching for defining functions by cases.
> I don't understand.
> Why does strict evaluation help with pattern matching?
Adding pattern matching for function definitions to a lazy language compromises laziness. Consider the following example (I suppose there is a simpler example showing the same thing):

  f true  true  x     = 1
  f false x     false = 2
  f x     false true  = 3
  f false false true  = 4
  f true  false false = 5

Note that the above patterns are non-overlapping and exhaustive. Suppose we are given an expression (f a b c) to evaluate, where any (or all) of a, b, c may diverge. No matter which one of a, b, c one chooses to evaluate first, there's a chance that evaluating it will turn out to be unnecessary. Without some scheme for evaluating all three in parallel, the case selection might even diverge while attempting to complete an unnecessary computation. Performing computation which turns out to be unnecessary (especially if it leads to divergence) seems suspiciously un-lazy to me ... With strict evaluation there is a well-defined order of evaluation, so this issue simply doesn't arise. Don Sannella Univ. of Edinburgh
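The worry can be made concrete in a toy Python sketch (the names and the exception-as-bottom trick are mine, purely illustrative): a diverging argument is modelled as a thunk that raises when forced, and a sequential matcher forces arguments in a fixed left-to-right order before selecting a clause.

```python
class Diverge(Exception):
    """Stands in for a computation that never terminates."""

def bottom():
    raise Diverge

def const(v):
    return lambda: v

# The clauses above, matched top to bottom:
#   f true  true  x     = 1        f false false true  = 4
#   f false x     false = 2        f true  false false = 5
#   f x     false true  = 3
def f_seq(a, b, c):
    # A sequential (non-parallel) implementation forces the arguments
    # in some fixed order before clause selection; here, left to right.
    av, bv, cv = a(), b(), c()
    if av and bv:
        return 1
    if not av and not cv:
        return 2
    if not bv and cv:
        return 3
    if not av and not bv and cv:
        return 4
    return 5

print(f_seq(const(True), const(False), const(False)))  # -> 5
try:
    # Clause 3 alone would answer 3 here, yet forcing a diverges:
    f_seq(bottom, const(False), const(True))
except Diverge:
    print("diverged while forcing an unnecessary argument")
```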
Thu, 14 Sep 1995 01:04:50 GMT
Charles L #6 / 14
 Naive question
Let me follow up on the f(x, y, z) representation of functions vs. the f x y z representation. In the first case, the function might be represented by a type signature such as A x B x C -> D, while the second version would be written as A -> B -> C -> D. I get the feeling that the curried version, despite letting you construct functions from it, can only do so left to right, which seems like a limitation to me. It would seem better to be able to do something like the following:

  g(x, z) = f(x, val_1, z)

Or possibly even:

  g(x) = f(x, val_1, x)

Or even:

  g(x, y) = f( x + y, val_1, x * y )

That way you gain a lot of flexibility in creating functions of fewer variables. In each of the cases above, g is being defined in terms of a function, f, which has been predefined. Perhaps there is already a way of doing this that I am unaware of. Perhaps just the standard way of declaring functions does this. E-mail or post. -- Charles Lin
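For what it's worth, the "standard way of declaring functions" does give exactly this: each g above is just an ordinary definition in terms of f. A sketch in Python, where f and val_1 are hypothetical placeholders standing in for the post's predefined function and value:

```python
def f(x, y, z):
    # Placeholder for the predefined three-argument function f.
    return x + 10 * y + 100 * z

val_1 = 7

# Each g fixes or combines arguments of f in an arbitrary position --
# not just left to right, as partial application of a curried f would.
def g1(x, z):
    return f(x, val_1, z)

def g2(x):
    return f(x, val_1, x)

def g3(x, y):
    return f(x + y, val_1, x * y)

print(g1(1, 2))  # -> f(1, 7, 2) = 271
print(g2(3))     # -> f(3, 7, 3) = 373
print(g3(1, 2))  # -> f(3, 7, 2) = 273
```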
Fri, 15 Sep 1995 16:35:06 GMT
Fergus James HENDERS #7 / 14
 Naive question
Quote:
>> I don't understand.
>> Why does strict evaluation help with pattern matching?
>Adding pattern matching for function definitions to a lazy language
>compromises laziness. Consider the following example (I suppose there
>is a simpler example showing the same thing):
> f true true x = 1
> f false x false = 2
> f x false true = 3
> f false false true = 4
> f true false false = 5
I'll give you a simpler example:

  True  `or` _     = True
  _     `or` True  = True
  False `or` False = False

Quote:
>Note that the above patterns are non-overlapping and exhaustive.
>Suppose we are given an expression (f a b c) to evaluate, where any (or all)
>of a, b, c may diverge. No matter which one of a, b, c one chooses to evaluate
>first, there's a chance that evaluating it will turn out to be unnecessary.
>Without some scheme for evaluating all three in parallel, the case selection
>might even diverge while attempting to complete an unnecessary computation.
>Performing computation which turns out to be unnecessary (especially if it
>leads to divergence) seems suspiciously un-lazy to me ...
Ah, now I understand. It's not the pattern matching itself that compromises laziness, it's the usual non-parallel implementation that causes the problem ;-). But then again, a simple 'if...then...else' raises the same problem. In the expression

  if x then y else z

if it turns out that y = z then it is not necessary to evaluate x. So it would be just as valid to claim that 'if...then...else' compromises laziness, because there's a chance that evaluating x will turn out to be unnecessary, and the computation might even diverge while attempting to complete an unnecessary computation. Hmmm, eliminating 'if...then...else' seems to rule out most useful programs... --
This .signature virus is a self-referential statement that is true - but you will only be able to consistently believe it if you copy it to your own .signature file!
Fri, 15 Sep 1995 20:23:20 GMT
Frank Silberma #8 / 14
 Naive question
Quote:
>> Why does strict evaluation help with pattern matching?
Quote:
> Adding pattern matching for function definitions
> to a lazy language compromises laziness.
"Laziness" sometimes means nonstrictness; at other times it refers to a more specific technique that avoids overcomputation via graph reduction. I think we must be careful not to confuse the two meanings.

Quote:
> Consider:
>   f true true x = 1
>   f false x false = 2
>   f x false true = 3
>   f false false true = 4
>   f true false false = 5
> Suppose we evaluate (f a b c).
> No matter which one of a, b, c we evaluate first,
> there's a chance that it will turn out to be unnecessary.
Indeed, there is no way to avoid all overcomputation. Non-strict evaluation is nonetheless perfectly feasible: we need merely evaluate the arguments in parallel (e.g. by interleaving evaluation).

Quote:
> With strict evaluation there is a well-defined order
> of evaluation so this issue simply doesn't arise.
This comment is both false and irrelevant. It is false because strict evaluation does not necessarily imply a well-defined order of evaluation. Strictness merely requires that all three arguments be evaluated before a result is given; it doesn't require them to be evaluated in any specific order. It is irrelevant because the function above is not strict ("f BOTTOM false true" equals 3, not "BOTTOM"), so strict evaluation will not compute it. If the language is defined operationally via applicative-order evaluation, then the pattern matching must be viewed as syntactic sugar. ------------------------------------------
Tulane University New Orleans, Louisiana USA
Sat, 16 Sep 1995 00:57:22 GMT
Bj|rn Lisp #9 / 14
 Naive question
Quote: (Fergus James HENDERSON) writes:
(...discussion of pattern matching & laziness deleted...)

Quote:
>But then again, a simple 'if...then...else' raises the same problem.
>In the expression
>  if x then y else z
>if it turns out that y = z then it is not necessary to evaluate x.
>So it would be just as valid to claim that 'if...then...else' compromises
>laziness, because there's a chance that evaluating x will turn out
>to be unnecessary and the computation might even diverge while attempting to
>complete an unnecessary computation.
>Hmmm, eliminating 'if...then...else' seems to rule out most useful programs...
Sure, the usual (domain-theoretical) if-then-else function, defined by ("_" represents bottom):

  if true  then x else y = x
  if false then x else y = y
  if _     then x else y = _

is not "equationally complete", in the sense that there are equations involving if-then-else that hold (like: if z then x else x = x) but that we will not be able to "prove" by computation using this function. The reason to still use this definition of if-then-else is of course that it represents a reasonable tradeoff between expressiveness and efficiency: since it is strict in the first argument, there is an efficient evaluation strategy that first evaluates the first argument and then selects one of the others depending on the outcome.

Note that we can perfectly well implement a "more defined" if-then-else given by the equations

  if true  then x else y = x
  if false then x else y = y
  if _     then x else x = x
  if _     then x else y = _   when x and y are distinct

(For simplicity we restrict the discussion to the case where x, y belong to a flat domain: otherwise the definition above does not yield a monotone function.) A programming language with if-then-else adhering to this definition will have better termination properties than a language using the first definition, but the implementation must be less efficient, since now *all* arguments are non-strict. It seems to me that a "parallel" evaluation strategy is needed to implement this function. ("Parallel" = the different arguments are evaluated a little at a time, and as soon as we match a left-hand side of one of the first three equations we return the corresponding right-hand side.)

Note that even a simple arithmetical operation like multiplication is really subject to the same "problem", since the identities 0*x = x*0 = 0 hold, and thus the usual strict evaluation of multiplication may cause nontermination in cases where zero could have been returned!
Again, it is possible to devise a non-strict implementation of multiplication that respects this identity and returns zero as soon as one argument becomes zero, but for obvious efficiency reasons this is usually avoided.... Bjorn Lisper
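The interleaving idea can be sketched as a toy in Python (illustrative only; computations are modelled as step-at-a-time generators, since real divergence cannot be detected, and the step bound is an artifact of the toy): each argument is advanced one step per round, and the product returns 0 as soon as either argument delivers 0.

```python
def value(v, steps=0):
    """A computation that runs `steps` steps, then delivers its value."""
    def run():
        for _ in range(steps):
            yield None        # still working
        yield v               # finished
    return run()

def diverge():
    """A computation that never finishes."""
    while True:
        yield None

def par_mult(a, b, max_steps=1000):
    """Non-strict multiplication by interleaving: return 0 as soon as
    either computation yields 0, without waiting for the other."""
    ra = rb = None
    done_a = done_b = False
    for _ in range(max_steps):
        if not done_a:
            x = next(a)
            if x is not None:
                ra, done_a = x, True
                if ra == 0:
                    return 0          # 0 * anything = 0, even bottom
        if not done_b:
            y = next(b)
            if y is not None:
                rb, done_b = y, True
                if rb == 0:
                    return 0
        if done_a and done_b:
            return ra * rb
    raise RuntimeError("both arguments still running after max_steps")

print(par_mult(value(0), diverge()))      # -> 0, despite the divergent factor
print(par_mult(value(6, steps=5), value(7)))  # -> 42
```

A strict multiplication would loop forever on the first call; the price, as noted above, is bookkeeping for every multiplication, which is why implementations avoid it.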
Mon, 18 Sep 1995 00:31:19 GMT
Frank Silberma #10 / 14
 Naive question
Bj|rn Lisper:

Quote:
> Note that we perfectly well can implement a "more defined"
> if-then-else given by the equations
>   if true  then x else y = x
>   if false then x else y = y
>   if _     then x else x = x
>   if _     then x else y = _   when x and y are distinct
> (For simplicity we restrict the discussion to the case
> where x, y belong to a flat domain: otherwise the definition
> above does not yield a monotone function.)
This restriction is demanded by more than mere simplicity. To implement it, the function _must_ be monotone. ------------------------------------------
Tulane University New Orleans, Louisiana USA
Mon, 18 Sep 1995 04:02:01 GMT
Bj|rn Lisp #11 / 14
 Naive question
Quote: (Frank Silbermann) writes:
>Bj|rn Lisper:
>> Note that we perfectly well can implement a "more defined"
>> if-then-else given by the equations
>>   if true  then x else y = x
>>   if false then x else y = y
>>   if _     then x else x = x
>>   if _     then x else y = _   when x and y are distinct
>> (For simplicity we restrict the discussion to the case
>> where x, y belong to a flat domain: otherwise the definition
>> above does not yield a monotone function.)
>This restriction is demanded by more than mere simplicity.
>To implement it, the function _must_ be monotone.
Yes, of course; that is exactly why I restricted the discussion to flat domains. When x, y belong to a non-flat domain the last two equations must be replaced by

  if _ then x else y = l.u.b.(x,y)

to yield a monotone function. Interestingly, this seems implementable for most non-flat domains that occur in practice. Whenever "if _ then x else y" occurs in an environment where only l.u.b.(x,y) is required, only the "fully defined parts" of l.u.b.(x,y) need to be evaluated. Say, for instance, that x and y are pairs. An instance of the equation above is

  if _ then <x1,x2> else <x1,_> = <x1,_>.

Thus, first(if _ then <x1,x2> else <x1,_>) = x1. A parallel, lazy evaluation strategy that evaluates the if-then-else branches in parallel and stops whenever first(...) is fully defined will evaluate the left-hand side into the right-hand side. Bjorn Lisper
Mon, 18 Sep 1995 16:43:57 GMT
Bj|rn Lisp #12 / 14
 Naive question
Quote:
>if _ then x else y = l.u.b.(x,y)
Oops! I mean g.l.b.(x,y). Sorry for the confusion. Bjorn Lisper
Mon, 18 Sep 1995 20:04:04 GMT
Karl-Filip Fax #13 / 14
 Naive question
|> Bj|rn Lisper:
|>
|> > Note that we perfectly well can implement a "more defined"
|> > if-then-else given by the equations
|> >
|> >   if true  then x else y = x
|> >   if false then x else y = y
|> >   if _     then x else x = x
|> >   if _     then x else y = _   when x and y are distinct
|> >
|> > (For simplicity we restrict the discussion to the case
|> > where x, y belong to a flat domain: otherwise the definition
|> > above does not yield a monotone function.)
|>
|> This restriction is demanded by more than mere simplicity.
|> To implement it, the function _must_ be monotone.

Yes, this is correct. On the other hand, there is a generalization of the rule for the case when x and y are nonflat:

  if true  then x else y => x
  if false then x else y => y
  if _     then x else y => GLB(x,y)

Now, if x and y are of a flat domain, then this reduces to the earlier definition.
------------------------------------------------------------
Make the frequent case fast, and the fast case frequent!
Mon, 18 Sep 1995 18:22:51 GMT
Sergio Ant #14 / 14
 Naive question
Quote:
> This suggests that strict languages are unable to handle infinite data
> structures. This is not true -- there is a standard way to code any lazy data
> structure (infinite or non-infinite) using functions. Once that is done, the
> standard examples used to illustrate the advantages of infinite data structures
> (e.g. sieve of Eratosthenes) come out looking much the same in a strict
> language as they do in a lazy language.
There is also a way to code lazy data structures without using functions. It is based on a transformation of rewrite programs that guarantees normalization in the transformed program using leftmost-innermost evaluation. The code looks simpler than the standard way using functions, but I don't know whether it is more or less efficient. The sieve of Eratosthenes example follows. If someone times it, I would like to know the results.

datatype lazy_list
  = nil
  | cons of int * lazy_list
  | ints_from of int
  | show of int * lazy_list
  | sieve of lazy_list
  | primes
  | filter of lazy_list * int

fun head_norm (ints_from A) = cons (A, ints_from (A+1))
  | head_norm (show (0, _)) = nil
  | head_norm (show (A, cons (B, C))) = cons (B, show (A-1, C))
  | head_norm (show (A, B)) = head_norm (show (A, head_norm B))
  | head_norm (sieve (cons (A, B))) = cons (A, (sieve (filter (B, A))))
  | head_norm (sieve A) = head_norm (sieve (head_norm A))
  | head_norm primes = head_norm (sieve (ints_from 2))
  | head_norm (filter (cons (A, B), C)) =
      if A mod C = 0 then head_norm (filter (B, C))
      else cons (A, filter (B, C))
  | head_norm (filter (A, B)) = head_norm (filter (head_norm A, B))

fun normalize nil = nil
  | normalize (cons (A, B)) = cons (A, normalize B)  (* A is a normal form *)
  | normalize A = normalize (head_norm A);

(* System.Control.Print.printDepth := 500;
   System.Control.Print.stringDepth := 5000;
   normalize (show (100, primes)); *)

Sergio Antoy
Dept. of Computer Science
Portland State University
P.O. Box 751
Portland, OR 97207
voice +1 (503) 725-3009
fax +1 (503) 725-3211
Tue, 26 Sep 1995 07:20:09 GMT