Hence the name? 
 Hence the name?

Ok, I'm curious. What exactly is dynamic about our Dynamic Language?


Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:

> Ok, I'm curious. What exactly is dynamic about our Dynamic Language?

Well, first, there are the things that make C more dynamic than fortran:
you can write recursive functions (stack allocation) and you can allocate
storage on the heap.

Then there are the things that make C++ more dynamic than C: you can
create sets of related classes, and create variables that can refer to
classes that didn't exist at the time the code containing the variable was
compiled, and have that code call virtual functions that do the right
thing.

To that, Dylan adds:

- functions whose activation frame survives after the function returns,
breaking free of the last in, first out mentality of function calls.

- ability to create classes at runtime

- ability to create generic functions at runtime

- ability to create functions at runtime, using curry, apply and others to
splice together existing primitives into new functions (see the sketch just
after this list)

- ability to add functions created at runtime into new or existing generic
functions
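
For instance, here's a minimal sketch of the curry point.  curry and
compose are standard Dylan functions; the names add5, double and f are
made up for the example:

-----------------------------------------------------------
// build new functions at runtime by splicing existing ones together
define constant add5   = curry(\+, 5);           // add5(x)   is 5 + x
define constant double = curry(\*, 2);           // double(x) is 2 * x
define constant f      = compose(double, add5);  // f(x)      is 2 * (x + 5)

format-out("%d\n", f(10));  // prints 30
-----------------------------------------------------------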

At the same time that you can do all this, you can make your program
pretty much as static as you want.  Dynamic things are flexible, but in
general slow.  Static is fast and inflexible.  Dylan tries to let you
choose exactly what you want static and exactly what you want dynamic, to
get the best of both worlds.

Here's a little example that I wrote a few years ago (originally for the
MacMarlais interpreter) that I think illustrates the point.

------------------------ fib.dylan -----------------------
define method fib (n :: <integer>) => (f :: <integer>);
  let result =
    if (n < 2)
      1
    else
      fib (n - 2) + fib (n - 1)
    end;
  result
end;

define method main(appname, #rest arguments)
  format-out("fib(40) = %d\n", fib(40));
end method main;
-------------------------- fibc.c -------------------------
#include <stdio.h>

int fib(int n){
  int result;
  if (n < 2)
    result = 1;
  else
    result = fib(n - 2) + fib(n - 1);
  return result;

}

int main(){
  printf("fib(40) = %d\n", fib(40));
  return 0;
}

-----------------------------------------------------------

On my 200 MHz Pentium Pro linux machine, I compiled the C program with gcc
and the Dylan program with d2c.

Language   Time (sec)
========   ==========
C            32.51
Dylan        35.19

C is about 8% faster.  The only reason for this is that d2c passes an
extra, hidden, argument to fib() each time, for its own purposes (not
actually necessary in this program).  Perhaps one day we can avoid this.

Now, change the Dylan program to this:

------------------------ fib.dylan -----------------------
define open generic fib (n :: <integer>) => (f :: <integer>);

define method fib (n :: <integer>) => (f :: <integer>);
  let result =
    if (n < 2)
      1
    else
      fib (n - 2) + fib (n - 1)
    end;
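  // memoize: install a method that applies only when the argument
  // is exactly this n, returning the already-computed result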
  add-method (fib, method (arg == n) => (f :: <integer>); result end);
  result
end;

define method main(appname, #rest arguments)
  format-out("fib(40) = %d\n", fib(40));
end method main;
-----------------------------------------------------------

What I've done is explicitly declare the "fib" generic function and make
it open -- which means that I'm now allowed to add methods to it at
runtime.  By default the generic function for "fib" is sealed.
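
Side by side, the two spellings look like this (a sketch; "g" and "h" are
placeholder names):

-----------------------------------------------------------
define sealed generic g (n :: <integer>) => (f :: <integer>);  // compiler may bind calls statically
define open generic h (n :: <integer>) => (f :: <integer>);    // methods may be added at runtime
-----------------------------------------------------------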

If that's all I did, the program would run slower (in fact 132.8 seconds),
but I've also made use of several of Dylan's dynamic features that you
can't get in C or C++ or Java.

I've:

- created a method at runtime,
- made that method refer to variables ("n" and "result") after the
function containing them exits, and
- added the new method to the generic function at runtime.

The method itself is one that you can't write in C or C++ or Java.  The
method will be called not when the actual argument is an integer or a
float or a character, but only when the argument is an integer with a
*specific* value.
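
Written out statically, the method that add-method installs is roughly
this (taking n = 10 as an example):

-----------------------------------------------------------
// equivalent, in source form, to what add-method creates at runtime
define method fib (n == 10) => (f :: <integer>)
  89
end;
-----------------------------------------------------------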

The result of this?

Language   Time (sec)
========   ==========
C            32.51
Dylan        35.19
Dylan #2      0.04

It's about 1000 times faster!

Of course this is a highly unrealistic example.  But there are points here
which bias it in favour of Dylan, and points which bias it in favour of C.

In favour of C:

- you could use a different but equivalent technique to get the same or
better speedup:

-------------------------- fibc.c -------------------------
#include <stdio.h>

int memo[1000] = {0};
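/* memo[n] caches fib(n); 0 means "not yet computed" (all results are >= 1) */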

int fib(int n){
  int result;
  if (memo[n])
    result = memo[n];
  else {
    if (n < 2)
      result = 1;
    else
      result = fib(n - 2) + fib(n - 1);
    memo[n] = result;
  }
  return result;

}

int main(){
  printf("fib(40) = %d\n", fib(40));
  return 0;
}

-----------------------------------------------------------

This runs in 0.00 seconds according to my machine :-)

In favour of Dylan:

- you could use the same technique as C to get the same speedup as C.

- the Dylan method changes the program logic less.

- in more complicated cases -- in particular if there were more than one
argument to "fib", or if the arguments were not small integers -- the
C technique breaks down and you'd have to use hash tables etc., but the
Dylan one still works fine, with *no* change.
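
For instance, a two-argument version of the trick might look like this
sketch (binomial coefficients instead of Fibonacci; "choose" is a made-up
name, and I haven't timed this one):

-----------------------------------------------------------
define open generic choose (n :: <integer>, k :: <integer>) => (c :: <integer>);

define method choose (n :: <integer>, k :: <integer>) => (c :: <integer>)
  let result =
    if (k = 0 | k = n)
      1
    else
      choose(n - 1, k - 1) + choose(n - 1, k)
    end;
  // same memoization as fib: one singleton method per exact (n, k) pair
  add-method(choose, method (a == n, b == k) => (c :: <integer>); result end);
  result
end;
-----------------------------------------------------------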

We've gone a little afield, but I hope this has helped to answer your
question...

-- Bruce



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?
[snip]
Quote:
> Of course this is a highly unrealistic example.

[snip]

I agree. Even minimal knowledge of algorithm analysis should reveal to
any reader just how pathetic your first implementations of fib() were.
But let's forget that.

I'm in the process of learning Dylan and I'd like to know:
- how much memory will the Dylan example using add-method take?
- what is the asymptotic complexity of method lookup?
I do realize that the answers may be implementation specific.

---
Vesa Karvonen



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?


Quote:
> - what is the asymptotic complexity of method lookup?
> I do realize that the answers may be implementation specific.

Theoretically speaking, it's possible to perform multi-method dispatch in
constant time if all potentially applicable methods are known at compile
time. (This ignores singletons and some other fancy types.)

In the current version of d2c, I think that method dispatch (at worst) runs
in linear time relative to the number of applicable methods. I don't know
about Harlequin Dylan.

Cheers,
Eric



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:



> > Ok, I'm curious. What exactly is dynamic about our Dynamic Language?

 I think the out-of-the-ordinary dynamic aspects are the ability to add and
 remove definitions of classes and methods (with some constraints) at runtime.

 The ability to dynamically allocate memory and recurse is hardly
 "earth-shattering" for a language conceived in the late 1980s and early
 1990s.  Who doesn't put that stuff into a "general use" language at this
 point?  Even Fortran. ;-)

Quote:
> Then there are the things that make C++ more dynamic than C: you can
> create sets of related classes, and create variables that can refer to
> classes that didn't exist at the time the code containing the variable was
> compiled,

 Ah, but those classes have to exist at link time.  Just because the linker
 may be dynamic doesn't necessarily make "subclasses" dynamic.  That variable
 still has to refer to a member of that static class hierarchy (minus
 deliberate attempts to deceive your C++ runtime environment).

 Ditto with templates.  Just because you may instantiate them "just in time"
 at link time doesn't quite make them dynamic.

Quote:

> - functions whose activation frame survives after the function returns,
> breaking free of the last in, first out mentality of function calls.

 It is still a stack.  If you have a function return a function, that doesn't
 really change the fact that the last function invoked is going to be the
 first to return.  What may not happen is the disposal of all the invocation's
 variables.  If the returned function uses them, they still exist (i.e. they
 may be allocated on the heap).

 Similarly, there is no "call with continuation" [so not quite as "dynamic"
 as some others].  It is only the local-variables part of the activation
 frame that is heap-stored.

 For a "different" twist, consider restarts: being able to take an exception
 and continue from the point where the exception was signalled.  Still LIFO,
 though.
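
 A quick sketch of what does survive (all names made up):

-----------------------------------------------------------
define function make-counter () => (next :: <function>)
  let count = 0;             // lives in make-counter's activation frame...
  method () => (c :: <integer>)
    count := count + 1;      // ...but is captured here, so it moves to the heap
    count
  end
end;

define constant counter = make-counter();  // make-counter has returned,
counter();  // => 1                        // yet "count" is still alive
counter();  // => 2
-----------------------------------------------------------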

Quote:
> - ability to create functions at runtime, using curry, apply and others to
> splice together existing primitives into new functions.

 Like closures, languages like ML have this capability also.  They have a
 very "static" feel to them.  Things happen dynamically, but you need to
 have all of your ducks lined up in a row before runtime. :-)

Quote:
> pretty much as static as you want.  Dynamic things are flexible, but in
> general slow.  Static is fast and inflexible.  Dylan tries to let you
> choose exactly what you want static and exactly what you want dynamic, to
> get the best of both worlds.

 Dylan isn't "dynamic" to the extent that you can define a function that
 performs an arbitrary task at runtime and execute it (i.e., from Lisp-related
 languages: eval, or from Smalltalk: perform).  That is part of the price of
 wanting to be in both "worlds".  So it isn't the ultimate in radical
 dynamism. :-)

 So in some sense it is the fact that Dylan isn't so dynamic that is important
 also.  [ I don't think that notion was fully developed when they penned the
 language's name.  Of course, the fact that "Dylan" is a "real" name was
 likely of significant importance to its choice.  So dynamic's "DY" is why it
 is prominent. ]

---

Lyman



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?

...

Quote:
> I'm in the process of learning Dylan and I'd like to know:
> - how much memory will the Dylan example using add-method take?
> - what is the asymptotic complexity of method lookup?
> I do realize that the answers may be implementation specific.

  How about: the answers ARE implementation specific? :-)  The Dylan
  reference manual places no constraints on these.

  As a close approximation, I suspect it is the space that

        define function foo ( x )  x end;

  takes, plus (the number passed to fib) times (the size of a closure data
  structure holding one integer value and a pointer to the above function),
  plus some overhead to enter each value/closure pointer pair into a hash
  table.

  In other words the overhead is higher than just having an array of
  integers.  If you were space constrained you'd want to do it the way the
  demonstrated C version does.

  For the second question: in the context of a singleton method I'd suspect
  something close to a hash table lookup (the specifics depend on the
  hasher).  Perhaps something around O(1), but quite likely with a high
  constant.  For a general generic function lookup I'll defer to folks who
  munge the internals.  I'd suspect that linearization is a major
  contributor.

---

Lyman



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:

>> - what is the asymptotic complexity of method lookup?
>> I do realize that the answers may be implementation specific.

>Theoretically speaking, it's possible to perform multi-method
>dispatch in constant time if all potentially applicable methods are
>known at compile time. (This ignores singletons and some other fancy
>types.) In the current version of d2c, I think that method dispatch
>(at worst) runs in linear time relative to the number of applicable
>methods.

Does this mean that there's utility to reading Chambers and Chen's
"Efficient Multiple and Predicate Dispatching"[*] beyond the merely
educational?  

I had assumed that method dispatch would have already been the focus
of large amounts of optimization effort, since Dylan encourages a
highly polymorphic style of programming. Further, this is interesting
because a) the paper is short and the algorithm straightforward, and
b) it suggests that we will see big gains.

I'm willing to try implementing a version of that algorithm for d2c, if
that piece of the compiler doesn't have evil tricky interdependencies,
and if it hasn't been considered and rejected already. (It *looks* like
everything is in d2c/runtime/dylan/func.dylan, but I await corrections.)

[*] <http://www.cs.washington.edu/research/projects/cecil/www/Papers/dispatching.html>
(As a general note, I've found everything on the Cecil site to be highly
interesting and relevant reading, as Dylan and Cecil seem very close in
design space.)

Neel



Sat, 26 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:

> >In the current version of d2c, I think that method dispatch
> >(at worst) runs in linear time relative to the number of applicable
> >methods.

> Does this mean that there's utility to reading Chambers and Chen's
> "Efficient Multiple and Predicate Dispatching"[*] beyond the merely
> educational?  

> I had assumed that method dispatch would have already been the focus
> of large amounts of optimization effort, since Dylan encourages a
> highly polymorphic style of programming.

I'm sure Harlequin has put a lot of effort into their compiler :-)  but
it's a bit slow in GD at the moment -- I did a version of fib() that had
singleton specialisers for 0 and 1 as well as for the general case, and it
was taking around 500 clock cycles on my machine for the GF dispatch.
That compares to 25 clock cycles for the simple recursive function, and
about 75 clock cycles for GF dispatch when I opened the GF but only had
one method installed in it.
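
That version was essentially this (a reconstructed sketch; the timing
harness is omitted):

-----------------------------------------------------------
define open generic fib (n :: <integer>) => (f :: <integer>);

// singleton specialisers for the base cases
define method fib (n == 0) => (f :: <integer>) 1 end;
define method fib (n == 1) => (f :: <integer>) 1 end;

// the general case
define method fib (n :: <integer>) => (f :: <integer>)
  fib(n - 2) + fib(n - 1)
end;
-----------------------------------------------------------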

Quote:
> I'm willing to try implementing a version of that algorithm for d2c, if
> that piece of the compiler doesn't have evil tricky interdependencies,
> and if it hasn't been considered and rejected already.

Please!

I don't know about interdependencies.  The word "magic" *is* in that
source file in a few places.  Whether that just refers to the compiler
automatically generating calls to those functions or to something else I
don't know.

Quote:
> (It *looks* like
> everything is in d2c/runtime/dylan/func.dylan, but I await corrections.)

As far as *I* know, gf-call, gf-call-one-arg,
internal-sorted-applicable-methods, cached-sorted-applicable-methods etc
are it.

I haven't actually seen the compiler use gf-call-one-arg.  Finding out why
is on my list, as is finding out why it doesn't generate static
discriminator functions when I think it should.  Full GF dispatch is
being used far too much at present :-(  I would have thought that fib()
would use gf-call-one-arg, for example, but its GF slot contains
dylanZdylan_visceraZgf_call_FUN, not
dylanZdylan_visceraZgf_call_one_arg_FUN.

Quote:
> As a general note, I've found everything on
> the Cecil site to be highly interesting and relevant reading, as Dylan
> and Cecil seem very close in design space.)

Yes, there's some good stuff there.  I'm trying to read it all myself.

-- Bruce



Sun, 27 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:


> [snip]
> > Of course this is a highly unrealistic example.
> [snip]

> I agree. Even minimal knowledge of algorithm analysis should reveal to
> any reader just how pathetic your first implementations of fib() were.
> But let's forget that.

There are better ways to calculate Fibonacci numbers.  But there are
other, more realistic, tasks that naturally require multi-way recursion,
and having an easy way to record the results of previous computations is
useful.

Quote:
> I'm in the process of learning Dylan and I'd like to know that
> - how much memory will the Dylan example using add-method take?

I'm not sure.  A quick look at the generated C code shows 40 bytes for the
function itself, plus 48 bytes for each lexical variable closed over.  Each
memoized method closes over two variables ("n" and "result"), so that's
about 136 bytes per method; fib(40) installs one method per distinct n, for
around 6 KB total, plus a lookup table entry in the Generic Function itself.

Quote:
> - what is the asymptotic complexity of method lookup?

Probably more than it should be, right now :-)

-- Bruce



Sun, 27 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:

> Theoretically speaking, it's possible to perform multi-method dispatch in
> constant time if all potentially applicable methods are known at compile
> time. (This ignores singletons and some other fancy types.)

Even better, it is theoretically possible to do so even without
knowing the applicable methods at compile time. This generates some
overhead for runtime class generation and method addition, but these
operations are uncommon enough that we can tolerate it.

Andreas

--
"We show that all proposed quantum bit commitment schemes are insecure because
the sender, Alice, can almost always cheat successfully by using an
Einstein-Podolsky-Rosen type of attack and delaying her measurement until she
opens her commitment." ( http://xxx.lanl.gov/abs/quant-ph/9603004 )



Sun, 27 Jan 2002 03:00:00 GMT  
 Hence the name?

Quote:


> > Theoretically speaking, it's possible to perform multi-method dispatch in
> > constant time if all potentially applicable methods are known at compile
> > time. (This ignores singletons and some other fancy types.)

> Even better, it is theoretically possible to do so even without
> knowing the applicable methods at compile time. This generates some
> overhead for runtime class generation and method addition, but these
> operations are uncommon enough that we can tolerate it.

Do you have a reference to a paper (or web page :) that explains this more
fully?
--



Sun, 27 Jan 2002 03:00:00 GMT  
 Hence the name?

[multi-method dispatch in constant time]

Quote:
> Do you have a reference to a paper (or web page :) that explains this more
> fully?

The dispatch algorithm is quite easy: for an n-way dispatch, use an
n-dimensional table with the types of the arguments as indices to look
up the right method.
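
As a toy illustration (the names are mine, and this ignores how class ids
get assigned), the lookup itself is just an array reference:

-----------------------------------------------------------
// toy 2-way dispatch: the pair of argument class ids indexes a 2-D table
define constant $class-count = 4;

// entries start out #f; every (id1, id2) pair must be installed before use
define constant $method-table :: <array>
  = make(<array>, dimensions: list($class-count, $class-count));

define function install-method
    (id1 :: <integer>, id2 :: <integer>, m :: <function>)
 => (m :: <function>)
  $method-table[id1, id2] := m
end;

define function lookup-method
    (id1 :: <integer>, id2 :: <integer>) => (m :: <function>)
  // constant time: one index computation and one load per dispatch
  $method-table[id1, id2]
end;
-----------------------------------------------------------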

The trick is in representing that table with small amounts of memory,
and generating these compressed tables efficiently. One paper can be
found here:

http://www.acm.org/pubs/citations/journals/toplas/1998-20-1/p116-duja...

"Fast algorithms for compressed multimethod dispatch table generation"
Dujardin, Amiel, Simon, ACM TOPLAS, Vol. 20, No 1.

We're not actually doing this in Gwydion Dylan at the moment, but I've
always wanted to implement it. If someone is faster than me, I won't
complain ;).

Andreas

--
"We show that all proposed quantum bit commitment schemes are insecure because
the sender, Alice, can almost always cheat successfully by using an
Einstein-Podolsky-Rosen type of attack and delaying her measurement until she
opens her commitment." ( http://xxx.lanl.gov/abs/quant-ph/9603004 )



Mon, 28 Jan 2002 03:00:00 GMT  
 