Threaded Interpreted Languages 
 Threaded Interpreted Languages

[Apologies if this went off on the net prematurely... - I tried to kill
the earlier version].


writes:

|> >Steve Knight who writes applications for Hewlett Packard reckons that, for
|> >a given effort, he can write a POP-11 program for many applications that
|> >runs twice as fast as the equivalent C. Being able to use appropriate
|> >data-structures for the -problem-, knowing that he does not have to worry
|> >about reclaiming them, a particularly serious problem in an interactive
|> >program that may run ad infinitum, slowly clogging up its virtual memory as
|> >it does.
|>
|> I have yet to be convinced that garbage collection is a requirement
|> for efficient programming.  I have never found it difficult to decide
|> when I am done with an object and free() it.  Perhaps if you could
|> describe a situation where automatic GC is a requirement, and not merely
|> a convenience it would help me to understand.

Are we getting into one of these "real programmers" arguments here. "Real
programmers don't use floating point - they write their fixed point routines
in assembly code and ought to know the range of the quantities they are
computing with". (Maybe they should...???)

But computing contains plenty of examples of things which programmers -knew-
they should throw away, but which turn out vastly to improve a product if they
don't. Compiler writers -knew- that all information about the names and types
of local variables was useless once they had generated object code. But this
information is -just- what is needed to provide a good debugging
-environment-. If the decision about what is or is not garbage is taken
automatically, then adaptation of a product to market requirements in which
what was previously garbage becomes gold is -much- easier.

And programmers do make mistakes, particularly in programs that have a
convoluted use of data, like compilers, algebraic manipulators and solid
modellers or indeed any application where there is a variety of different
-things- you can hang on trees. I notice that computer vision, now that it has
turned to C, is much less adventurous in the variety of entities it tries to
handle than in the old LISP/Pop-2 days.

Failing to return store leads to fragmentation and performance that
declines with time. Returning store that is -actually- in use leads to bizarre
bugs.
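
The second failure mode is easy to make concrete. A minimal, purely
illustrative C++ fragment (the names here are made up, not from any real
program):

    #include <cstdio>
    #include <cstring>

    int main() {
        char *label = new char[16];
        std::strcpy(label, "tree-node");
        delete[] label;               // store returned while a reference still exists
        std::printf("%s\n", label);   // use after free: undefined behaviour -- "bizarre bugs"
        return 0;
    }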

Of course there are other approaches. For example reference counting can be
implemented fairly painlessly in a language like C++ that allows user
definition of the (overloaded) assignment operation. But for my money
languages should provide such support automatically.
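
As a rough sketch of that C++ approach (class and member names invented
purely for illustration, not taken from any existing library):

    // A hypothetical reference-counted handle: the user-defined assignment
    // operator is what lets the count be maintained automatically.
    class Handle {
        struct Rep { int data; unsigned refs; };
        Rep *rep;
        void release() { if (--rep->refs == 0) delete rep; }
    public:
        explicit Handle(int v) : rep(new Rep) { rep->data = v; rep->refs = 1; }
        Handle(const Handle &other) : rep(other.rep) { ++rep->refs; }
        Handle &operator=(const Handle &other) {
            ++other.rep->refs;      // increment first so self-assignment is safe
            release();              // drop our old representation
            rep = other.rep;
            return *this;
        }
        ~Handle() { release(); }
        int value() const { return rep->data; }
    };

    int main() {
        Handle a(1), b(2);
        b = a;          // b's old Rep is freed here; a's Rep now has two owners
        return b.value();
    }

The store behind a Rep is reclaimed as soon as the last Handle referring to
it is assigned over or destroyed -- no explicit free() by the programmer.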

Robin Popplestone.



Tue, 26 Mar 1996 22:15:51 GMT  
 Threaded Interpreted Languages

| Are we getting into one of these "real programmers" arguments here. "Real
| programmers don't use floating point - they write their fixed point routines
| in assembly code and ought to know the range of the quantities they are
| computing with". (Maybe they should...???)

"Real Programmers" don't use floating point. This make the life of the
"Real Chip Desighner" much easier, and the overall product much less
(price/performance)-y.

We'll se how this will work for MuP21/F21.

--

Disclaimer: All opinions are mine.



Wed, 27 Mar 1996 02:29:09 GMT  
 Threaded Interpreted Languages

Quote:

>[Apologies if this went off on the net prematurely... - I tried to kill
>the earlier version].


>writes:

>|> >Steve Knight who writes applications for Hewlett Packard reckons that, for
>|> >a given effort, he can write a POP-11 program for many applications that
>|> >runs twice as fast as the equivalent C. Being able to use appropriate
>|> >data-structures for the -problem-, knowing that he does not have to worry
>|> >about reclaiming them, a particularly serious problem in an interactive
>|> >program that may run ad infinitum, slowly clogging up its virtual memory as
>|> >it does.
>|>
>|> I have yet to be convinced that garbage collection is a requirement
>|> for efficient programming.  I have never found it difficult to decide
>|> when I am done with an object and free() it.  Perhaps if you could
>|> describe a situation where automatic GC is a requirement, and not merely
>|> a convenience it would help me to understand.

>Are we getting into one of these "real programmers" arguments here. "Real
>programmers don't use floating point - they write their fixed point routines
>in assembly code and ought to know the range of the quantities they are
>computing with". (Maybe they should...???)

Actually, no.  I am interested in finding out just what advantages
GC actually might give me (who have so far had no problems keeping
track of such things myself) for the performance penalty that is
usually paid for such systems.  I have not said "GC is worthless",
I have asked "what is it's worth?".  So far the only answer I have
ever gotten to this question is "you don't have to keep track of
your dynamically allocated objects".  I have never found this to
be problematic -- thus never seen any actual ADVANTAGE to be had
from using GC.  I'm just trying to see if someone else can give
me a good reason why GC would be better (and worth the performance
hit) than doing it myself.

Quote:
>But computing contains plenty of examples of things which programmers -knew-
>they should throw away which turn out vastly to improve a product if they
>don't. Compiler writers -knew- that all information about the names and types
>of local variables was useless once they had generated object code. But this
>information is -just- what is needed to provide a good debugging
>-environment-. If the decision about what is or is not garbage is taken
>automatically, then adaptation of a product to market requirements in which
>what was previously garbage becomes gold is -much- easier.

This is a separate issue, and headed off in another direction.  From what
I understand, the decision made by GC is "can I throw this away yet?" not
"should I keep this because someone might want it later?"  A GC system
would throw away the variable names as soon as nothing referred to them
too.  (If it even applied in the situation you are describing -- where
we are no longer talking about an in-memory process, but rather a
decision about what to write to disk & when.)

Quote:
>Failing to return store leads to fragmentation and performance that
>declines with time. Returning store that is -actually- in use leads to bizarre
>bugs.

Yes.  As I said, the POTENTIAL advantage of GC seems to lie in
situations where it is difficult or impossible to determine when
you are done with an object.

I just have not run across any such situations, and asked for
some examples that might illustrate to me the realization of
the potential to be found in GC.

For example, in the application I am working on right now, the
student is on numerous lists (general roster, list of students
at a single site, list of students asking a question...)  But
I >always< know when to free the memory, as he is either registered
or not registered.  When he signs out, he is removed from all lists
and the memory freed.  No problem.  What >advantage< would GC give
me that is worth the performance hit it usually gives?  (You can
use other situations as illustration if the one I offered presents
no real advantage for GC.)
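
For concreteness, a rough sketch of the sign-out bookkeeping described
above (the names are invented; this is not the actual application code):

    #include <list>
    #include <string>

    struct Student { std::string name; };

    // The lists mentioned above, each holding non-owning pointers.
    std::list<Student*> roster;            // general roster
    std::list<Student*> site_students;     // students at a single site
    std::list<Student*> question_queue;    // students asking a question

    void sign_out(Student *s) {
        roster.remove(s);                  // unhook from every list first...
        site_students.remove(s);
        question_queue.remove(s);
        delete s;                          // ...then free: nothing refers to s now
    }

    int main() {
        Student *s = new Student{"example"};
        roster.push_back(s);
        site_students.push_back(s);
        sign_out(s);                       // removed from all lists and freed
        return 0;
    }

Because the student is either registered or not, sign-out is the one place
where freeing happens -- the situation where manual management is
straightforward.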

Quote:
>Of course there are other approaches. For example reference counting can be
>implemented fairly painlessly in a language like C++ that allows user
>definition of the (overloaded) assignment operation. But for my money
>languages should provide such support automatically.

Reference counting support?  Or some more advanced (& behind
the scenes) GC method?

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Philosopher and plowman                 |
each must know his part                 |       -Richard Hartman
closer to the heart!                    |



Sun, 31 Mar 1996 02:08:21 GMT  
 Threaded Interpreted Languages

Quote:

> Actually, no.  I am interested in finding out just what advantages
> GC actually might give me (who have so far had no problems keeping
> track of such things myself) for the performance penalty that is
> usually paid for such systems.  I have not said "GC is worthless",
> I have asked "what is it's worth?".  So far the only answer I have
> ever gotten to this question is "you don't have to keep track of
> your dynamically allocated objects".  I have never found this to
> be problematic -- thus never seen any actual ADVANTAGE to be had
> from using GC.  I'm just trying to see if someone else can give
> me a good reason why GC would be better (and worth the performance
> hit) than doing it myself.

My experience is that the performance penalty is not all one suffers with automatic garbage collection in POPLOG.

POPLOG may well collect up garbage and free the memory for its own use, but it does not appear to be very altruistic with respect to other processes running on the machine. When making a garbage collection, POPLOG actually INCREASES the total amount of memory it is using, so forcing other processes running on the machine into an "out of swap space" situation and crashing them.

Further to this; I have found that POPLOG's own behaviour becomes somewhat unpredictable and irrational when swap space runs low, with a distinct tendency for conditional branches to default to the else path regardless of the evaluation performed, and without signalling any error.

Allowing the programmer to manage the utilisation of resources herself, can save an awful lot of heartache!

Helen.



Sun, 31 Mar 1996 19:52:31 GMT  
 Threaded Interpreted Languages
Helen writes:

Quote:
> My experience is that the performance penalty is not all one suffers with automatic
> garbage collection in POPLOG.
> POPLOG may well collect up garbage and free the memory for its own use, but it does
> not appear to be very altruistic with respect to other processes running on the
> machine. When making a garbage collection, POPLOG actually INCREASES the total
> amount of memory it is using, so forcing other processes running on the machine
> into an "out of swap space" situation and crashing them.

You can force Poplog to do an in-place garbage collection; see REF * system/pop_gc_copy:

pop_gc_copy -> BOOL                                           [variable]
BOOL -> pop_gc_copy
        If this variable is true (the default), then garbage collections
        use  a  'copying'  algorithm,  which  temporarily  requires  the
        allocation of extra memory in which to copy all non-locked  heap
        structures; otherwise (or  if the required  extra memory is  not
        available), a 'non-copying' algorithm is used.
            The non-copying algorithm is generally 25% - 50% slower than
        the copying one (although in  some situations it may be  faster,
        and  therefore  worth  setting  this  variable  <false>).  (Note
        however that at  certain times the  system needs to  be able  to
        shift heap structures  up in  memory, and  requires the  copying
        algorithm to do this; thus  copying collections may still  occur
        when -pop_gc_copy- is <false>.)

Quote:
> Further to this; I have found that POPLOG's own behaviour becomes somewhat
> unpredictable and irrational when swap space runs low, with a distinct tendency
> for conditional branches to default to the else path regardless of the evaluation
> performed, and without signalling any error.

Can you give an example of this?

Quote:
> Allowing the programmer to manage the utilisation of resources herself, can save
> an awful lot of heartache!

You can reduce the amount of garbage created by use of the procedures
sys_grbg_list and sys_grbg_destpair (REF * fastprocs/sys_grbg_list).

Cheers


PS. can we have an occasional newline in text :-).



Mon, 01 Apr 1996 18:51:50 GMT  
 Threaded Interpreted Languages

Quote:
> Further to this; I have found that POPLOG's own behaviour becomes
> somewhat unpredictable and irrational when swap space runs low, with a
> distinct tendency for conditional branches to default to the else path
> regardless of the evaluation performed, and without signalling any
> error.

This is obviously nonsense! Do you have a piece of code that does
this? In 10 lines? In 100? I doubt it!

It's easy to force Poplog to use a huge amount of heap for testing
purposes:

    false -> popmemlim;
    vars whopper = initv(5e7); ;;; 50 million long words of store

The above two lines will *not* affect the execution of *any*
program! (except that garbage collections may take a few seconds ;)

Ian.



Tue, 02 Apr 1996 02:17:32 GMT  
 