
Here is a simple problem that was posted on comp.lang.object.

|> One of the companies that I worked for (using Eiffel and OO as the main
|> programming language) had a very successful interviewing methodology.  A
|> part of it was giving a candidate some little programming task in C (or
|> any other language of choice) and asking him/her to write the most
|> efficient and pretty-looking solution. (See example below). Practically,
|> it turned out that those who did the C task best, were later best Eiffel
|> and OO programmers (after training in Eiffel, of course).

|> Example (not actual):

|> You are given an array of numbers:
|> (a(1), a(2), ..., a(k), a(k+1), ..., a(k+m))

|> Problem: convert it to array

|> (a(k+1), a(k+2), ..., a(k+m), a(1), a(2), ..., a(k))

|> with number of operations O(k+m) and not using any additional array or
|> list or any other massive memory. One or several scalar variables are
|> OK.

|> In plain words, the task is to swap two parts of the array efficiently.

I posted my quick solution below.  Anyone who wants to solve it
themselves is welcome to ignore the rest of this message.
-----------------------------------

First attempt:  Save the 0th item.  Move the kth item to the 0th
spot.  Move 1 thru k-1 forward one.  Move the k+1th item to the 1st
spot.  Move 2 thru k forward one.  Etc.

The first k items are moved m times, the last m items are moved
once each.
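
That first attempt, sketched in C so it can be compared with what follows (the function name is mine, not from the thread) — the inner shift makes it O(k*m), which misses the O(k+m) requirement:

```c
#include <stddef.h>

/* The first attempt: bring each item of the second part to its spot
 * in turn, shifting the whole first part up by one each time.  The
 * first k items move m times each, so this is O(k*m), not O(k+m). */
void rotate_slow(int *a, size_t k, size_t m)
{
    for (size_t j = 0; j < m; j++) {
        int saved = a[k + j];             /* next item of the second part */
        for (size_t i = k + j; i > j; i--)
            a[i] = a[i - 1];              /* shift the first part up one */
        a[j] = saved;                     /* drop it at the front */
    }
}
```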

OK, that's quick to figure but not very good.  Next try.  Each item
goes a particular place.

for 0 <= i < k, item i goes to i+m
for k <= i < m+k, item i goes to i-k

So item 0 goes to m,
item m goes to 2m or m-k (depending on whether m < k),
item m-k goes to 2m-k or m-2k,
etc.

If m and k are mutually prime it works.  If they aren't, we'll get
back to the beginning before everything is moved.  Say m=2k, then
we'd only move 3 items.  Or say m and k are both even but m/2 and k/2
are mutually prime, then we'd move half the items.  OK, it's the
greatest common divisor that matters.  Got it.  We have to do it
that many times, starting at 0, 1, 2, etc.
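
In C, that cycle-leader scheme might look like this (a rough sketch, not a transcription of the Forth below; `rotate` and `gcd` are names of my choosing):

```c
#include <stddef.h>

/* Greatest common divisor, Euclid's method. */
static size_t gcd(size_t a, size_t b)
{
    while (b != 0) {
        size_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Move the first k items of a[0..k+m-1] to the end (and the last m
 * to the front) by following gcd(k,m) cycles, each started one slot
 * further along.  Every item is fetched and stored exactly once. */
void rotate(int *a, size_t k, size_t m)
{
    size_t cycles = gcd(k, m);
    for (size_t start = 0; start < cycles; start++) {
        int val = a[start];              /* remember the cycle leader */
        size_t i = start;
        do {
            /* destination: items below k move up by m, the rest down by k */
            size_t j = (i < k) ? i + m : i - k;
            int tmp = a[j];
            a[j] = val;                  /* drop the carried value in place */
            val = tmp;                   /* pick up the one it displaced */
            i = j;
        } while (i != start);
    }
}
```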

{
VALUE 1ST  ( remember the 1st one )
VALUE KTH  ( remember the kth one)
VALUE MTH  ( remember the mth one)

DEFER M!      (             "                   )
DEFER MCELLS  (             "                   )
DEFER MSWAP   (          "   maybe faster to save in data structure?)
DEFER MDROP   (             "                   )

: New-index ( i -- i' )   ( get next item, m up or k down )
DUP KTH < IF
MTH + ELSE
KTH - THEN ;

: Cycle ( a i -- )       ( run through once until back at 1st spot)

( val i' )  BEGIN New-index

( val'i')     DUP 1ST = UNTIL
( val'i )   R> DROP MDROP ;
( not ideal, better to save 1st value, shuffle through copying each )
( next one into previous one, saving the original for last.  )
( 1st try, it at least works.)

( greatest common divisor routine GCD omitted, easily available,)
( reposted weekly by Gordon Charlton in comp.lang.forth 8-)

: Shuffle ( a k m -- )   ( repeat Cycle with new items until done)
TO MTH
( a k )    TO KTH
( a )      KTH MTH GCD 0 DO
( a )          DUP I Cycle
( a )          1 MCELLS + LOOP DROP ;

( revised Cycle )
: Cycle ( a i -- )       ( run through once until back at 1st spot)

( val i )      BEGIN DUP >R New-index
( val i')        DUP 1ST <> WHILE

( val i')      REPEAT
( val i )      DROP 2R> MCELLS + M! ;

( avoids the MSWAPs, probably a little more efficient, probably not )
( worth the effort to code. )

}

OK, it wasn't hard to do, but counting the thinking time at the start
I'm already 30 minutes into it and don't have time for a good rewrite
and documentation in an hour total.  Given time, I'd break Cycle down
into smaller pieces that would be easier to understand.  I'll make a
start at it:

{
: Not-at-beginning? ( i -- i f )
DUP 1ST <> ;

: Get-new-value ( a i -- val )

: Save-at-old-spot ( val a i -- )
MCELLS + M! ;

: Cycle ( a i -- )       ( run through once until back at 1st spot)
( a i )        TO 1ST  DUP >R  0
( a 0 )        Get-new-value 1ST
( val i )      BEGIN DUP >R New-index
( val i')        Not-at-beginning? WHILE

( val i')      REPEAT
( val i )      DROP  2R> Save-at-old-spot ;

}

It would be more readable with variables to replace the nameless
locals.  I like to do without variables in case I later want to use
the code as a template for something re-entrant, but the variables
seem easier to read, and I've already used some.  I can fake it.

{
: Keep-base-addr  POSTPONE >R  ; IMMEDIATE

: Keep-old-index  POSTPONE DUP
POSTPONE >R  ; IMMEDIATE
: Get-old-index   POSTPONE R>  ; IMMEDIATE
: Get-base&index  POSTPONE 2R> ; IMMEDIATE
: Get-new-index   OVER ;

: New-value
POSTPONE Get-new-value ; IMMEDIATE
: Put-at-old-spot
POSTPONE Save-at-old-spot ; IMMEDIATE
: Put-1st-value-at-last-spot
POSTPONE Get-base&index POSTPONE Save-at-old-spot ; IMMEDIATE

: Cycle ( a i -- )       ( run through once until back at 1st spot)
( a i )        TO 1ST  Keep-base-addr  0
( a 0 )        Get-new-value 1ST
( val i )      BEGIN Keep-old-index New-index
( val i')        Not-at-beginning? WHILE
( val i')          New-value
( val i'val')      Put-at-old-spot
( val i')      REPEAT
( val i )      DROP Put-1st-value-at-last-spot ;

}

Now the algorithm is revealed, but the code is hidden -- it would be
harder to change effectively.  The pieces aren't factored for re-use.
This might possibly be better code to show management, but not better
code to use.  Oh well.  I usually do better at readable code when I
check it the next day, instead of immediately.  My hour is more than
up, I'll stop now.

Sat, 10 Jan 1998 03:00:00 GMT


> Here is a simple problem that was posted on comp.lang.object.

> |> One of the companies that I worked for (using Eiffel and OO as the main
> |> programming language) had a very successful interviewing methodology.  A
> |> part of it was giving a candidate some little programming task in C (or
> |> any other language of choice) and asking him/her to write the most
> |> efficient and pretty-looking solution. (See example below). Practically,
> |> it turned out that those who did the C task best, were later best Eiffel
> |> and OO programmers (after training in Eiffel, of course).

> |> Example (not actual):

> |> You are given an array of numbers:
> |> (a(1), a(2), ..., a(k), a(k+1), ..., a(k+m))

> |> Problem: convert it to array

> |> (a(k+1), a(k+2), ..., a(k+m), a(1), a(2), ..., a(k))

> |> with number of operations O(k+m) and not using any additional array or
> |> list or any other massive memory. One or several scalar variables are
> |> OK.

> |> In plain words, the task is to swap two parts of the array efficiently.

> I posted my quick solution below.  Anyone who wants to solve it
> themselves is welcome to ignore the rest of this message.
> -----------------------------------

Hi, my 2cents worth...

How about stacking a(1) to a(k) and then storing back into the array,
thereby getting ( a(k), a(k-1),..., a(2), a(1), a(k+1), ...., a(k+m)).
Then do the same for a(k+1) to a(k+m) yielding ( a(k), a(k-1), ...,
a(k+m), ..., a(k+1)).  We then do the same thing for the whole array this
time so that we get
(a(k+1), ...., a(k+m), a(1), ..., a(k)).

I used the stack in a similar way for dsp convolution operations.
Works for me! ;-)
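
Luc's push-then-pop is just a reversal, and the same effect can be had in place with pairwise swaps, which sidesteps the extra storage.  A hedged C sketch of the three-reversal form (names mine):

```c
#include <stddef.h>

/* Reverse a[0..n-1] in place with pairwise swaps. */
static void reverse(int *a, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = 0, j = n - 1; i < j; i++, j--) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}

/* Swap the first k and last m items by three reversals: reverse
 * each piece, then reverse the whole array.  Each item is fetched
 * and stored twice, and no scratch array is needed. */
void swap_parts(int *a, size_t k, size_t m)
{
    reverse(a, k);        /* a(k) .. a(1), a(k+1) .. a(k+m) */
    reverse(a + k, m);    /* a(k) .. a(1), a(k+m) .. a(k+1) */
    reverse(a, k + m);    /* a(k+1) .. a(k+m), a(1) .. a(k) */
}
```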

Cheers!

--
Luc Lefebvre                       |Opinions expressed are my own and
project engineer                   |do not reflect those of my
Aerospace Medical Research Unit    |employer, or anybody else for
McGill University                  |that matter...
                                   |PGP public key available upon
                                   |request.

Sun, 11 Jan 1998 03:00:00 GMT

(Luc Lefebvre) writes:

>> Here is a simple problem that was posted on comp.lang.object.
>> ....
>> I posted my quick solution below.  Anyone who wants to solve it
>> themselves is welcome to ignore the rest of this message.
>> -----------------------------------
>Hi, my 2cents worth...

>How about stacking a(1) to a(k) and then storing back into the array,
>thereby getting ( a(k), a(k-1),..., a(2), a(1), a(k+1), ...., a(k+m)).
>Then do the same for a(k+1) to a(k+m) yielding ( a(k), a(k-1), ...,
>a(k+m), ..., a(k+1)).  We then do the same thing for the whole array this
>time so that we get
>(a(k+1), ...., a(k+m), a(1), ..., a(k)).
>   I used the stack in a similar way for dsp convolution operations.
>Works for me! ;-)

Yes, someone else posted this method in comp.software.eng .  They said it was
elegant, and it is.

It takes 2 fetches and 2 stores for every item, which might make it slower than
mine with one fetch and one store.  But it's a lot easier to see that it's right.
It's surely easier to document and easier for someone new to pick up.  In a
particular implementation (stack direction known, TOS address known) you might be
able to do a whole lot of stores with a single MOVE, which could make it the
fastest.  You need  m+k spaces free in memory, unless you use a less
memory-intensive way to reverse your arrays (which you can do easily, if it
matters).

Sun, 11 Jan 1998 03:00:00 GMT
Here is an update on that problem and the proposed solutions.

First, Luc's solution has been proposed with C code.

Second, a simplified (and wrong) version of my solution was proposed
with C code.  It works only if k and m are relatively prime.

Half a dozen others announced that this solution was wrong, none has
corrected it.  Several of them said that this was the solution they
got, and they checked it enough to see that it was wrong.  They feel
this is worth points.  They said that no one can be expected to solve
the problem unless he's seen the solution before, so particular
attention should be paid to the quality of the actual code and
documentation produced.

One presented a new solution, a recursive one.  No code.  Basically, he
says assume m<=k.  Then you can move the last m items to the front by
pairwise swapping.  If m=k then you're done.  Otherwise, you now need to
switch the 2 parts of the last k -- the first k-m and the last m.
Repeat the process until either the two sides are equal or one side is
gone.  I think he misstated it slightly, but he had the idea clearly in
mind.  He wasn't sure whether it was linear in k+m but thought it was,
and asked for others to tell him.
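
Since no code was posted for that recursive solution, here is one hedged C reading of it (function names mine; this is the scheme sometimes called a block swap, not necessarily exactly what he had in mind):

```c
#include <stddef.h>

/* Swap two equal-length blocks element by element. */
static void swap_blocks(int *x, int *y, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        int t = x[i];
        x[i] = y[i];
        y[i] = t;
    }
}

/* Exchange a left part of length k with a right part of length m.
 * Each pass places the smaller block in its final position and
 * recurses on what remains, so the total work is O(k+m). */
void swap_parts_rec(int *a, size_t k, size_t m)
{
    if (k == 0 || m == 0)
        return;
    if (k == m) {
        swap_blocks(a, a + k, k);            /* equal halves: done */
    } else if (k > m) {
        swap_blocks(a, a + k, m);            /* last m now in front */
        swap_parts_rec(a + m, k - m, m);     /* fix up the tail */
    } else {
        swap_blocks(a, a + m, k);            /* first k now at the end */
        swap_parts_rec(a, k, m - k);         /* fix up the front */
    }
}
```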

A new discussion has broken out in comp.software-eng, one of the groups
that has been echoing this.  The question at issue is what is required
to teach CS majors.  All are agreed that CS majors should be able to
produce working code, and also should know more.  Many have agreed that
one requirement for graduation should be production of one or several
working 1000+ line programs.  The suggestion has been made that CS
majors should also be required to spend some time maintaining a 50,000+
line program.

Sun, 11 Jan 1998 03:00:00 GMT


> (Luc Lefebvre) writes:

> >> Here is a simple problem that was posted on comp.lang.object.
> >> ....
> >> I posted my quick solution below.  Anyone who wants to solve it
> >> themselves is welcome to ignore the rest of this message.
> >> -----------------------------------
> >Hi, my 2cents worth...

> >How about stacking a(1) to a(k) and then storing back into the array,
> >thereby getting ( a(k), a(k-1),..., a(2), a(1), a(k+1), ...., a(k+m)).
> >Then do the same for a(k+1) to a(k+m) yielding ( a(k), a(k-1), ...,
> >a(k+m), ..., a(k+1)).  We then do the same thing for the whole array this
> >time so that we get
> >(a(k+1), ...., a(k+m), a(1), ..., a(k)).

> >   I used the stack in a similar way for dsp convolution operations.
> >Works for me! ;-)

> Yes, someone else posted this method in comp.software.eng .  They said it was
> elegant, and it is.

> It takes 2 fetches and 2 stores for every item, which might make it slower than
> mine with one fetch and one store.  But it's a lot easier to see that it's right.
> It's surely easier to document and easier for someone new to pick up.  In a
> particular implementation (stack direction known, TOS address known) you might be
> able to do a whole lot of stores with a single MOVE, which could make it the
> fastest.  You need  m+k spaces free in memory, unless you use a less
> memory-intensive way to reverse your arrays (which you can do easily, if it
> matters).

The trouble is, putting so many items on a stack violates
the terms of the problem--it is "a massive use of memory".

I think the problem should be done recursively. A little
thought tells you that if m>k, T(m,k) = ak + T(m-k,k).
That is, we divide and conquer working on smaller and
smaller (contiguous) subsets of the array until m-nk < k.
Then you do the opposite, and so the method will switch
back and forth from downward to upward percolation until
it is done (one or other index = 0). I think the solution is
of O(k+m) but don't have time to prove it.

--
Julian V. Noble

Mon, 12 Jan 1998 03:00:00 GMT

(Julian V. Noble) writes:
>> >How about stacking a(1) to a(k) and then storing back into the array,
>> >thereby getting ( a(k), a(k-1),..., a(2), a(1), a(k+1), ...., a(k+m)).
>> >Then do the same for a(k+1) to a(k+m) yielding ( a(k), a(k-1), ...,
>> >a(k+m), ..., a(k+1)).  We then do the same thing for the whole array this
>> >time so that we get
>> >(a(k+1), ...., a(k+m), a(1), ..., a(k)).
>> >   I used the stack in a similar way for dsp convolution operations.
>> >Works for me! ;-)
>> Yes, someone else posted this method in comp.software.eng .  They said it
>> was elegant, and it is.

>> It takes 2 fetches and 2 stores for every item, which might make it slower
>> than mine with one fetch and one store.  But it's a lot easier to see that
>> it's right. It's surely easier to document and easier for someone new to
>> pick up.  In a  particular implementation (stack direction known, TOS
>> address known) you might be  able to do a whole lot of stores with a single
>> MOVE, which could make it the fastest.  You need  m+k spaces free in memory,
>> unless you use a less memory-intensive way to reverse your arrays (which you
>> can do easily, if it matters).
>The trouble is, putting so many items on a stack violates
>the terms of the problem--it is "a massive use of memory".

Yes.  But he doesn't have to keep them on the stack.  He can pick them up two
at a time when he reverses them, and get by with never more than 2 stack items.
That's a trivial change.

>I think the problem should be done recursively. A little
>thought tells you that if m>k, T(m,k) = ak + T(m-k,k).
>That is, we divide and conquer working on smaller and
>smaller (contiguous) subsets of the array until m-nk < k.
>Then you do the opposite, and so the method will switch
>back and forth from downward to upward percolation until
>it is done (one or other index = 0). I think the solution is
>of O(k+m) but don't have time to prove it.

That works too.  I haven't figured out the timing.

For k=m every item gets moved once.
For k=1 every item but the first gets moved twice.
I think there's some combination of m and k so that some particular item gets
moved every other recurse.  In the worst case, on each recurse you swap some
i items and half of them will need to be moved again.  Since at least half of
the moved items go to the right place each time, then 2(m+k) is an upper
bound on the number of swaps, and k=1 is probably the worst case.

So it works.

I think my method is the fastest, but it isn't the easiest to write or to read,
and it isn't the most compact.

By now on comp.lang.object, another half dozen people have explained that the
wrong solution is wrong, and several of them have found ways similar to mine
that make it work.  A couple more have presented the recursive solution, and
there's been a little discussion about the timing for the recursive case, with
no resolution.

Mon, 12 Jan 1998 03:00:00 GMT

writes:

>Just for the record, here is the fixed version of my swap function.
>If m and n are not relatively prime it now spots that "current" has
>returned to "base", and increments both.  Try it for m = n = 6.

OK, I'll do that too.

{ .( swapping parts of something that's indexed like an array ) }

We want to be able to take something that's organized into m+k
storage units and move the first k of them into the last k spaces,
maintaining their order, while moving the last m of them into the
first m spaces, likewise maintaining their order.

First we set up a method to allow any size or shape of
storage unit, and any structure that allows a single index to
determine the unit.  This might be overkill; it would be simple
enough to come back and rewrite a bit at the beginning to change
things around.

{
DEFER GET-VALUE ( index -- value )
DEFER PUT-VALUE ( value index -- )

}

GET-VALUE and PUT-VALUE must be changed for different data-types or
structures.  In some cases the values won't fit on the stack, and
some sort of pointer to a pair of temporary locations must be used.

DEFER ASIZE  ( n -- size of n storage units )

Here is code that works for data structures whose items are all the
same arbitrary size, in order:

VALUE 1SIZE
( !!put # characters in a single element here --> ) 1 ( <--)
TO 1SIZE

: A'SIZE 1SIZE * ;

: A'! ( val addr ) 1SIZE 0 DO TUCK C! LOOP DROP ;
' A'SIZE IS ASIZE

' A'! IS A!

Finally, here's code to work on arrays, with data items the same size
and in order:

( !!put address of zeroth item here --> ) TEST-CASE ( <-- )
: AINDEX' ASIZE BASE-ADDR + ;
: GET-VALUE' ( index -- value )

: PUT-VALUE' ( value index -- )
AINDEX A! ;

' AINDEX' IS AINDEX
' GET-VALUE' IS GET-VALUE
' PUT-VALUE' IS PUT-VALUE

So to use this for a character string, put the beginning string

To use it for an integer array, put the beginning array address into

To use it for a floating point array, put the beginning array address

Etc.  If you want to use this for very complicated data structures
then I'll assume you know what you're doing.

OK, with that out of the way, the following code will work on
any data structures at all, provided the correct operations are
attached. The operations it needs are GET-VALUE and PUT-VALUE .
These must stack 2 values.

{
VALUE KTH
VALUE MTH

}

The algorithm is simple and obvious -- we pick up the 0th item and
move it to the kth spot, pick up the kth item and move it to the
k-mth spot, pick up the m-kth item and move it either to the k-2mth
or the 2k-mth spot, depending, and so on.  (We could just add k each
time and do modulo(m+k) in case it got too big.  That avoids a
conditional which is good in general, but would seem cryptic to a
worse-than-average reviewer.)  Eventually we'll cycle back to the 0th
spot.  If m and k are relatively prime then we're done.  Otherwise we
must repeat starting at the 1st spot.  We can tell how many times to
go through the cycle by finding the greatest common divisor of k
and m.  The length of any one cycle is (m+k)/GCD(m,k) .

So we'll want the greatest common divisor.

{
: GCD ( i j -- n )    ( traditional method -- Euclid? )
2DUP < IF SWAP THEN BEGIN
( i j )     TUCK MOD ?DUP 0= UNTIL ;

: Next-site ( j -- j' )
( j )   DUP KTH < IF  MTH + ELSE KTH - THEN ;

: Copy-j-to-i ( i j -- j )
( i j )       TUCK >R >R GET-VALUE ( get new value )
( valj)       R> PUT-VALUE R> ;    ( put at old site )

: 1cycle ( i n -- )
( i n )          1- OVER GET-VALUE -ROT  ( save 1st value )
( val0 i n-1 )   0 DO
( val0 i )         DUP Next-site Copy-j-to-i LOOP
( val0 j')       PUT-VALUE ;          ( put 1st val at last site )

: Mswap ( k m -- )
OVER 0= OVER 0= OR IF 2DROP ELSE   ( quit if either zero )
TO MTH
( k )     TO KTH
( )       MTH KTH GCD                      ( get GCD )
( gcd)    MTH KTH + OVER / SWAP            ( get cycle length )
( n gcd ) 0 DO I OVER 1cycle LOOP DROP THEN ;

}

In retrospect, nearly half the code and more than half the
explanation are devoted to re-use.  The original problem took 10
additional minutes to polish; the re-use code took 30 minutes, and
extra documentation took another 40.  If I added extensive error
facilities to detect various sorts of nonsensical re-use attempts,
that would add probably another 300%+ to each.  It couldn't prevent
complex errors, and would tend to prevent people from using data
structures that I didn't think of ahead of time.  And the potential
for bugs in the error-checking code would be at least 3 times that in
the code that did the work.

Maybe it would be better to just solve a sample problem, without
overgeneralizing, and let anybody who reads it figure out for himself
how to solve his own problem.

Amortize your time -- If an extra 30 minutes now saves you 30 minutes
next year, you've lost on the deal.  If 30 minutes now saves you 30
minutes next year and 30 minutes the year after that and so on, and
you discount your time at 50% per year, then you break even provided
you live forever.  Should you amortize your time at 50% a year?
Considering how fast skills and languages become obsolescent, 50% a
month might be more appropriate.

Maybe it would work better to write quick, and document the 2nd time
you need to read it, and rewrite for re-use the 3rd time you need to
re-use it.  The time you lose figuring out what you did and
rewriting, might be more than made up by the time you gain on code
that nobody looks at ever again.

Tue, 13 Jan 1998 03:00:00 GMT
Since even a bubble sort has its place, I submit a version of
Jet's not very good array swap:

( array swap 950727 Leo Wong + )
( ASWAP [ i k m a ] in Pygmy Forth)
( split consecutive elements of an integer array into two)
( pieces and swap the pieces)
( method: imitation of Forth's ROT and -ROT)
( i is the first element of the lower piece)
( k is the last element of the lower piece)
( m is the length of the higher piece)
( a is the array)

CREATE A'S  1000 2* ALLOT       ( test array)
: A  ( u - a)  2* A'S + ;       ( used in testing)

: -CLOCKWISE  ( i k m a - i k m a u flag: true if k-i+1 < m)

: SETUP  ( i k m a - 'i 'k+m 'i+ #)  PUSH  + 2* SWAP 2*

: UHAUL>  ( 'i 'k+m 'i+ # m)

2DROP 2DROP ;
: WHEEL>  ( i k m a m)  PUSH SETUP POP UHAUL> ;
: <UHAUL  ( 'i 'k+m 'i+ # k-i+1)

NEXT  2DROP 2DROP ;
: <WHEEL  ( i k m a k-i+1)  PUSH SETUP POP <UHAUL ;
: ASWAP  ( i k m a)  -CLOCKWISE IF <WHEEL ELSE WHEEL> THEN ;

-

New York State Department of Civil Service
Albany, NY  12239

Tue, 13 Jan 1998 03:00:00 GMT
( I hope I'm not too late for the party.  The problem is
a classic.  I suspect that the desired answer is something
like this:
)

: exchange                                      ( a a' -- )
dup c@ >R  over c@  swap c!                  ( a)
R> swap c!                                   ( )
;

: reverse                                       ( a k -- )
1- chars over +                         ( a a')
begin
2dup u<
while
2dup exchange
1 chars under+
1 chars -
repeat                                  2drop
;

: rotate                                        ( a m k -- )
>R                                           ( a m)
over R@ reverse                              ( a m)
2dup R> /string reverse                      ( a m)
reverse                                      ( )
;

s" 123abcdef" 2dup 3 rotate type cr

\S

How it works:

Left hand above the right.

/
---
---
---
---

\
---
---
---
---

Turn over left hand

---
---
---
---
\

\
---
---
---
---

Turn over right hand

---
---
---
---
\

---
---
---
---
/

Turn over both hands

\
---
---
---
---

/
---
---
---
---

--
Procedamus in pace -- Wil

Tue, 13 Jan 1998 03:00:00 GMT
As an experiment, I coded the not very good algorithm with
variables.  Speed difference seems small.  Neither version
is optimized.  Looks like a place for VALUES.

( array swap 950728 + )
( ASWAP [ i k m a ] in Pygmy FORTH, with VARIABLES)
( split consecutive elements of an integer array into two)
( pieces and swap the pieces)
( method imitates Forth's ROT and -ROT)
( i is the first element of the lower piece)
( k is the last element of the lower piece)
( m is the length of the higher piece)
( a is the array)

VARIABLE 'i
VARIABLE 'i+
VARIABLE 'k+m
VARIABLE u     ( movelength)

CREATE A'S  10000 2* ALLOT  ( test array)
: A  ( u - a)  2* A'S + ;   ( used in testing)

: -CLOCKWISE  ( i k m a - i k m a u flag: true if k-i+1 < m)

: SETUP  ( i k m a)  PUSH  + 2* SWAP 2*

: UHAUL>  ( m)

: WHEEL>  ( i k m a m)  PUSH SETUP POP UHAUL> ;

: <UHAUL  ( k-i+1)

: <WHEEL  ( i k m a k-i+1)  PUSH SETUP POP <UHAUL ;

: ASWAP  ( i k m a)  -CLOCKWISE IF <WHEEL ELSE WHEEL> THEN ;

-

New York State Department of Civil Service
Albany, NY  12239

Tue, 13 Jan 1998 03:00:00 GMT
Baden's solution is a lesson in seeing.  Thanks, Wil.

-

New York State Department of Civil Service
Albany, NY  12239

Tue, 13 Jan 1998 03:00:00 GMT
Jonah Thomas wrote Re: Ratio of good/bad developers

[ code snipped ]

| In retrospect, nearly half the code and more than half the
| explanation are devoted to re-use.  The original problem took 10
| additional minutes to polish; the re-use code took 30 minutes, and
| extra documentation took another 40.  If I added extensive error
| facilities to detect various sorts of nonsensical re-use attempts,
| that would add probably another 300%+ to each.  It couldn't prevent
| complex errors, and would tend to prevent people from using data
| structures that I didn't think of ahead of time.  And the potential
| for bugs in the error-checking code would be at least 3 times that in
| the code that did the work.

As I understand C.H. Moore's philosophy from 'The Evolution of Forth'
your best bet is to solve your own problems, not the ones you anticipate
somebody else has / will have. Thus there is no intention to generate
reusable code: 'The main enemy of simplicity was, in his [Moore -mhx] view,
the siren call of generality that led programmers to speculate on future
needs and provide for them.'

| Maybe it would be better to just solve a sample problem, without
| overgeneralizing, and let anybody who reads it figure out for himself
| how to solve his own problem.

I agree with this, but probably wouldn't agree on your definition of
'sample problem' and 'overgeneralizing'.

[ economics deleted ]

-marcel

Wed, 14 Jan 1998 03:00:00 GMT


>( greatest common divisor routine GCD omitted, easily available,)
>( reposted weekly by Gordon Charlton in comp.lang.forth 8-)

You can also see it on the WWW at

http://taygeta.oc.nps.navy.mil/forth_intro/stackflo.htm

It is one of two routines I know without having to look up. The other is

I bet this makes a _lot_ of sense in comp.object :-)

For what it is worth, I would not move the elements around. I would fix it
so that it looked like they have moved.

: nothing ;

VARIABLE mapping

' nothing mapping !

: ARRAY ( size) CREATE CELLS ALLOT

Now all I have to do is to change "mapping" from doing "nothing" to doing
something appropriate.

(Yes I know there are problems with the code as is, but you get the
idea...)

Gordon.

Wed, 14 Jan 1998 03:00:00 GMT

(Gordon Charlton) writes:
>For what it is worth, I would not move the elements around. I would fix it
>so that it looked like they have moved.
>: nothing ;
>VARIABLE mapping
>' nothing mapping !
>: ARRAY ( size) CREATE CELLS ALLOT

>Now all I have to do is to change "mapping" from doing "nothing" to doing
>something appropriate.
>(Yes I know there are problems with the code as is, but you get the
>idea...)
>Gordon.

: swapped ( i -- i' ) DUP KTH < IF MTH + ELSE KTH - THEN ;

' swapped mapping !

Very slick.

The trouble is, you haven't actually swapped them around.  What happens if
later they need to be swapped again some other way?  The function gets more
and more complicated.

: Make-switch CREATE , ,

Well, that's getting there.  You'd probably want a linked list of k m pairs
to do sequentially.  The more swapping gets done, the worse the performance
gets.

Still, I like it a lot.  You get the job done in constant time,
independent of array size, instead of O(m+k).  It's very original.  I'm not
sure it would improve your chance to be hired.  8-)
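
Gordon's idea of remapping indices instead of moving data can be sketched in C; the struct and names here are mine, not from the post, and the mapping rule just mirrors the Forth word `swapped`:

```c
#include <stddef.h>

/* A "virtually swapped" array: reads go through an index mapping
 * instead of physically moving the data. */
struct varray {
    int   *data;
    size_t k, m;      /* the two part sizes of the pending swap */
};

/* Same rule as the Forth word `swapped`: indices below k shift up
 * by m, the rest shift down by k. */
static size_t mapped(const struct varray *v, size_t i)
{
    return (i < v->k) ? i + v->m : i - v->k;
}

int vget(const struct varray *v, size_t i)
{
    return v->data[mapped(v, i)];
}
```

Each further "swap" would compose another mapping on top of this one, which is exactly the objection above: access cost grows with the number of pending swaps unless they are occasionally flattened back into the data.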

Wed, 14 Jan 1998 03:00:00 GMT
|> Jonah Thomas wrote Re: Ratio of good/bad developers
|>
|> [ code snipped ]
|>
|> | In retrospect, nearly half the code and more than half the
|> | explanation are devoted to re-use.  The original problem took 10
|> | additional minutes to polish; the re-use code took 30 minutes, and
|> | extra documentation took another 40.  If I added extensive error
|> | facilities to detect various sorts of nonsensical re-use attempts,
|> | that would add probably another 300%+ to each.  It couldn't prevent
|> | complex errors, and would tend to prevent people from using data
|> | structures that I didn't think of ahead of time.  And the potential
|> | for bugs in the error-checking code would be at least 3 times that in
|> | the code that did the work.
|>
|> As I understand C.H. Moore's philosophy from 'The Evolution of Forth'
|> your best bet is to solve your own problems, not the ones you anticipate
|> somebody else has / will have. Thus there is no intention to generate
|> reusable code: 'The main enemy of simplicity was, in his [Moore -mhx] view,
|> the siren call of generality that led programmers to speculate on future
|> needs and provide for them.'
|>
|> | Maybe it would be better to just solve a sample problem, without
|> | overgeneralizing, and let anybody who reads it figure out for himself
|> | how to solve his own problem.
|>
|> I agree with this, but probably wouldn't agree on your definition of
|> 'sample problem' and 'overgeneralizing'.
|>
|> [ economics deleted ]
|>
|> -marcel

Hi,
Targeting re-usability is like looking for the Holy Grail.  You
can get close but never quite reach it.  Several problems are related
to this.  The biggest is the problem of indexing or locating the
code to re-use.  One can't re-use what one can't find.
The next is that, once found, one must determine if the code is
appropriate to the task.  This is a real tough one.  Many seem to like
re-use with all the works hidden ( OO style ).  This makes the
job of testing for appropriateness almost impossible for all but the
simplest of re-usable code.
I always recommend that one only re-use code that you can understand.
This takes some of the advantage out of re-use but, depending on the
application, can save lives.  I use re-usable code mostly when the
risk is small; otherwise I write my own.
Dwight

Wed, 14 Jan 1998 03:00:00 GMT
