SICP and "Loop Invariants"
Author 
Message 
John Clonts #1 / 7

SICP and "Loop Invariants"
Hello Folks, I'm on SICP exercise 1.16 (yes I have regressed), which I must quote here:

"Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product ab^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an __invariant quantity__ that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)"

(Code below to show I'm honest :) I managed to hack through it (somehow I don't think hacking is a Scheme tradition), but I don't get the point about the invariant. What would I see in the problem that would make me think of this magic ab^n quantity? Well, I see that a goes 1 -> ans as b^n goes b^n -> b^0. But I don't understand the general notion of loop invariant. Any helpful further explanation would be greatly appreciated...

Cheers, John

(define (fastexp2 base expn)
  (feiter base expn 1))

(define (feiter b n a)
  (if (= n 0)
      a
      (if (even? n)
          (feiter (square b) (/ n 2) a)
          (feiter b (- n 1) (* a b)))))
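For what it's worth, here is a little Python translation I used to convince myself that the quantity really never changes between calls (the extra `expected` parameter is mine, added just for the check):

```python
# Python translation of feiter (a sketch; names mirror the Scheme above).
# At every call, a * b**n equals the original base**expn -- that is the
# invariant quantity the hint is talking about.

def feiter(b, n, a, expected):
    assert a * b ** n == expected, "invariant broken!"
    if n == 0:
        return a
    if n % 2 == 0:
        return feiter(b * b, n // 2, a, expected)  # square the base, halve n
    return feiter(b, n - 1, a * b, expected)       # fold one b into a

def fastexp2(base, expn):
    return feiter(base, expn, 1, base ** expn)

print(fastexp2(3, 5))  # 243, with the invariant checked at each step
```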

Sat, 07 Sep 2002 03:00:00 GMT 


Bruce Hoult #2 / 7

SICP and "Loop Invariants"
Quote:
> I managed to hack through it (somehow I don't think hacking is a Scheme
> tradition), but I don't get the point about the invariant. What would I
> see in the problem that would make me think of this magic ab^n
> quantity? Well, I see that a goes 1 -> ans as b^n goes b^n -> b^0. But I
> don't understand the general notion of loop invariant.
Are you happy enough to at least convince yourself whether or not a claimed loop invariant is in fact invariant? The process of inventing the loop invariant is one of the classic great inductive leaps that all true science requires -- you just have to be looking at the problem and going "it's chaos, it's chaos, it makes no sense" and then suddenly get an "aha!" that tells you that some property is always satisfied.

One good way to come up with loop invariants is to use the notion of "weakest precondition". A google search turns up lots of stuff, and a few seconds scanning shows this one to be quite promising... <http://www.ics.org/memOnly/conference/papers/94_592.html> ... but really it's hard to go past just hunting down the original paper in the library:

  E. W. Dijkstra, "Guarded Commands, Nondeterminacy and Formal Derivation of Programs", Communications of the ACM, Vol. 18 No. 8, pp. 453-457, 1975.

-- Bruce

Sat, 07 Sep 2002 03:00:00 GMT 


Rolf-Thomas Happ #3 / 7

SICP and "Loop Invariants"
Bruce Hoult:
Quote:
> One good way to come up with loop invariants is to use the notion of
> "weakest precondition". A google search turns up lots of stuff and a few
I recommend

  E. W. Dijkstra: A Discipline of Programming [a CS classic, I think]
  David Gries: The Science of Programming [textbook]

They introduce the weakest precondition semantics of an imperative model language and use it in deriving programs (or program fragments) from their correctness proofs. In particular, they cover the design of loop invariants and loops. (Programmers who really want to master and apply this technique will perhaps prefer Gries' textbook. Reading both won't hurt, of course.)

rthappe

Sat, 07 Sep 2002 03:00:00 GMT 


Max Hailperin #4 / 7

SICP and "Loop Invariants"
[How could one come up with the invariant quantity ab^n, or loop invariants more generally?]

All the pointers to Dijkstra and Gries are good, but you may also want a more heuristic answer. We try to give one in our book, Concrete Abstractions (*). It goes roughly like this: The iteration strategy is "by doing a little bit of work first, transform your problem into a smaller one with the same solution. Then solve the resulting smaller problem."

So consider an example exponentiation problem, and try to apply the above strategy by hand. Let's say we pick 3^5. Writing that out, so we can see how much work there is to do, we get

  3 * 3 * 3 * 3 * 3

That is a problem with four multiplications. What would be a smaller problem? One with only three multiplications. But we need the solution to the smaller problem to be the same as the solution to the original problem. So we write down

  3 * 3 * 3 * 3 * 3 = __ * __ * __ * __

Now what can we fill the four blanks of the smaller problem with? We can't just squeeze the five numbers into the four blanks. The strategy tells us we need to do a little bit of the work first. Since our work consists of multiplications, that suggests that we use multiplication to in effect put two numbers into one blank -- we'll put their product in. So we get

  3 * 3 * 3 * 3 * 3 = 9 * 3 * 3 * 3

Continuing in this way, we get

  = 27 * 3 * 3
  = 81 * 3
  = 243

at which point we've got our answer. Now, in order to code this up, we need to write a general procedure that can handle any of the problems in this sequence of equivalent problems -- or any other of the same general form. So what do they have in common? We can't just write a procedure foo that we would use as (foo 3 3 3 3 3) and then as (foo 27 3 3 3 3), because we want to stick with a fixed number of arguments. So how can we summarize each line in the above progression of problems in terms of a fixed number of parameters?
We notice that in each line, there is one special number, and then the rest are all the same as each other. So rather than listing all the 3s, we can just count how many, and summarize our problems as things like

  9 times three threes
or
  27 times two threes
or
  81 times one three.

In other words, they are all of the form ab^n. So we write a general procedure for computing ab^n, given a, b, and n. Having done so, we are still left with our original problem of computing b^n. But we recognize that this can be done as 1*b^n, and so can write the exponentiation procedure as a one-line call to the more general procedure.

This heuristic approach to finding invariants doesn't work for arbitrarily hairy problems and arbitrarily non-insightful problem solvers, but our experience is that it is a good starting point.

Max Hailperin
Associate Professor of Computer Science
Gustavus Adolphus College
800 W. College Ave.
St. Peter, MN 56082 USA
http://www.gustavus.edu/~max/

(*) Concrete Abstractions: An Introduction to Computer Science Using Scheme, by Max Hailperin, Barbara Kaiser, and Karl Knight, Brooks/Cole Publishing Company, 1999. See Chapter 3, "Iteration and Invariants." For more information on the book, see http://www.gustavus.edu/~max/concrete-abstractions.html
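The general procedure that the progression above leads to can be sketched in a few lines (a Python sketch; the function names are mine, not from the book):

```python
# A sketch of the heuristic above (names are mine): each step folds one
# more factor of b into a, so a * b**n is the same on every iteration --
# it always equals the original b**n.

def power_product(a, b, n):
    """Compute a * b**n by repeatedly moving one b into a."""
    while n > 0:
        a = a * b      # 3*3*3*3*3 -> 9*3*3*3 -> 27*3*3 -> 81*3 -> 243
        n = n - 1
    return a

def power(b, n):
    # The original problem b**n is just the special case a = 1.
    return power_product(1, b, n)

print(power(3, 5))  # 243
```

(This is the linear version that the heuristic produces directly; exercise 1.16's squaring trick is the same idea with a faster state transformation.)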

Sat, 07 Sep 2002 03:00:00 GMT 


John Clonts #5 / 7

SICP and "Loop Invariants"
Thank you for that terrific explanation! I understand it perfectly now (well, at least until I have to apply it to a harder problem -- we'll see <g>) Cheers, John jclontsATmastnetDOTnet
Quote:
> [How could one come up with the invariant quantity ab^n, or loop
> invariants more generally?]
> [snip]

Sat, 07 Sep 2002 03:00:00 GMT 


Joe Marshal #6 / 7

SICP and "Loop Invariants"
Quote:
> I managed to hack through it (somehow I don't think hacking is a Scheme
> tradition), but I don't get the point about the invariant.
Hey, hacking is the raison d'etre of Scheme!

Quote:
> What would I see in the problem that would make me think of this magic
> ab^n quantity? Well, I see that a goes 1 -> ans as b^n goes b^n -> b^0.
> But I don't understand the general notion of loop invariant.
> Any helpful further explanation would be greatly appreciated...
Let's consider computing 3^5. We'll be making these calls to feiter:

  (feiter  3 5   1)
  (feiter  3 4   3)
  (feiter  9 2   3)
  (feiter 81 1   3)
  (feiter 81 0 243)

Now you've observed that a gets bigger and becomes the answer as n gets smaller, but let's consider FEITER in a different light. Suppose FEITER were a `black box'. Someone wrote it; we don't know why or what it does. But we do notice this: the `answer' to (feiter 3 5 1) is 243, the `answer' to (feiter 9 2 3) is 243, the `answer' to (feiter 3 4 3) is 243, etc. etc. They all compute the exact same thing! So we scratch our heads a little and eventually figure out that (feiter a b c) computes (a^b)*c. (Yes, this requires a bit of thought, but with practice you will become more proficient at it.)

Let's generalize: whenever we have an iterative LOOP, each call to the LOOP computes the *exact same thing* as the previous call. That's the `loop invariant'. In Scheme, it is relatively easy to find the loop invariant: it is a function of the arguments to the looping construct (assuming no side effects). In other languages, where the arguments to a loop are implied, it can be much more difficult.

-- ~jrm
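The black-box observation is easy to check numerically (a Python sketch over the trace above; the helper name is mine):

```python
# Every call in the trace of (feiter 3 5 1) computes the same value:
# b**n * a.  Checking the 'black box' observation numerically.

def invariant(b, n, a):
    return b ** n * a

trace = [(3, 5, 1), (3, 4, 3), (9, 2, 3), (81, 1, 3), (81, 0, 243)]
values = [invariant(b, n, a) for (b, n, a) in trace]
print(values)  # every entry is 243
assert all(v == 243 for v in values)
```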

Sat, 07 Sep 2002 03:00:00 GMT 


Gary T. Leavens #7 / 7

SICP and "Loop Invariants"
Quote:
>Bruce Hoult:
>> One good way to come up with loop invariants is to use the notion of
>> "weakest precondition". A google search turns up lots of stuff and a few
>
>I recommend
> E. W. Dijkstra: A Discipline of Programming [a CS classic, I think]
> David Gries: The Science of Programming [textbook]
>
>They introduce the weakest precondition semantics of an imperative
>model language and use it in deriving programs (or program fragments)
>from their correctness proofs. In particular, they cover the design
>of loop invariants and loops. (Programmers who really want to master
>and apply this technique will perhaps prefer Gries' textbook. Reading
>both won't hurt, of course.)
I would also recommend a wonderful little book by Cohen:

  Edward Cohen, Programming in the 1990s: An Introduction to the Calculation of Programs, Springer-Verlag, 1990.

Among other techniques, this has several heuristics for inventing loop invariants.

Enjoy,

Gary T. Leavens
Department of Computer Science, Iowa State University
229 Atanasoff Hall, Ames, Iowa 50011-1040 USA
URL: http://www.cs.iastate.edu/~leavens

Sun, 08 Sep 2002 03:00:00 GMT 


