Sun posts Animorphic Smalltalk system 
 Sun posts Animorphic Smalltalk system

Sun finally let out the Smalltalk system it bought (the origin point for
their HotSpot VM technology) - see

http://www.cs.ucsb.edu/projects/strongtalk/download1.1.html

The interesting thing is that VisualWorks release 7 is so much faster
than this technology - it seems that the inlining technique buys you
less on intel now than it did back in 1996.  See

http://www.cincomsmalltalk.com:8080/CincomSmalltalkWiki/VW+7+Faster+t...

for some quick benchmark results.  If this is where Sun still is for
VM tech, they might well be barking up the wrong tree



Sun, 19 Dec 2004 06:41:19 GMT  
 Sun posts Animorphic Smalltalk system

 |
 | The interesting thing is that VisualWorks release 7 is so much
 | faster than [animorphic smalltalk] technology - it seems that
 | the inlining technique buys you less on intel now than it did
 | back in 1996.  See
 |
 | http://www.cincomsmalltalk.com:8080/CincomSmalltalkWiki/VW+7+Faster+t...
 |
 | for some quick benchmark results.  If this is where Sun still
 | is for VM tech, they might well be barking up the wrong tree

Translation: VisualWorks (VW) is faster than a 6-7 year old
research project.  Obviously this isn't "where Sun still is for
VM tech" since Java runs rings around Smalltalk.

Is VW still 4x-80x slower than Java at numerical codes?  And
less able to inline methods (i.e., slower overall)?  Unable to
remove even the simplest no-ops?  And ported to 1/10th the
number of operating systems?  With a development environment
that looks like it came from the 1980s (when 2-bit displays
were still popular)?  With zero support for securely running
untrusted code (applets, servlets)?  And no VisualWorks for
mobile devices?

Meanwhile, Java keeps on lapping Smalltalk performance-wise.
Just today:

  Java runs like the clappers on HP Superdome:
    http://www.theregister.co.uk/content/61/25995.html

FYI, "runs like the clappers" == "very fast".

Jam (address rot13 encoded)



Sun, 19 Dec 2004 10:54:03 GMT  
 Sun posts Animorphic Smalltalk system
I am sorry, James, but you asked for it  :)

I copied the Test class from the Strongtalk Tour into VW7 and executed the
tests from the Strongtalk Tour, based on #simpleTest: and  #notSoSimpleTest:

Here are the results for my machine (AMD KII-233, Win2k, 256MB)

Test benchmark: [ Test simpleTest: 10000000 ]

Strongtalk:
12418 11835 482 458 476 499 471 467 533 529  Best Time = 458
VW7:
2014 2013 2070 2066 2062 2091 2035 2050 2033 2103  Best Time = 2013

Strongtalk's speedup factor is 4.4, almost identical to the one reported in
the Tour (4.6)

Test benchmark: [ Test notSoSimpleTest: 10000000 ]

Strongtalk:
10072 716 691 753 792 691 686 735 669 765  Best Time = 669
VW7:
19300 19218 19253 19930 19284 19414 19441 20135 17056 17218  Best Time = 17056

Strongtalk's speedup factor in this case is 25.5 !! (the Tour reports a
factor of 9 (on a more modern machine))
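
For reference, judging from the output above, the Tour's benchmark: harness
just times ten consecutive runs of the block and reports the best one. A
minimal equivalent (the Tour's actual Test class may differ in detail) looks
roughly like this:

Test class>>benchmark: aBlock
    "Time ten consecutive runs of aBlock and report the best result."
    | times |
    times := (1 to: 10) collect: [:i | Time millisecondsToRun: aBlock].
    times do: [:t | Transcript show: t printString; show: ' '].
    Transcript
        show: 'Best Time = ';
        show: (times inject: times first into: [:a :b | a min: b]) printString;
        cr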

Plus, given the facts that
a) Strongtalk is not a commercial product, but a six-year-old beta, obviously
not optimized for current processors
b) While their test is obviously meant to illustrate their strength, it is
IMNSHO a much better test (more representative of real-life Smalltalk apps)
than a microbenchmark that avoids message sends and block
creation/evaluation
c) The above impressive results are benchmarked against a VW VM that has
been optimized significantly in the last several years (the exact same tests
are about 25% faster in VW7 than in 3.1d)

...we should acknowledge that some impressive work has been done and that
perhaps there are things to learn from it, instead of bashing it.

Yes, VW's arithmetic primitives are better optimized, but I think their
point has been made.






Sun, 19 Dec 2004 11:32:40 GMT  
 Sun posts Animorphic Smalltalk system

Quote:

>I am sorry, James, but you asked for it  :)

>I copied the Test class from the Strongtalk Tour into VW7 and executed the
>tests from the Strongtalk Tour, based on #simpleTest: and  #notSoSimpleTest:

>Here are the results for my machine (AMD KII-233, Win2k, 256MB)

>Test benchmark: [ Test simpleTest: 10000000 ]

>Strongtalk:
>12418 11835 482 458 476 499 471 467 533 529  Best Time = 458
>VW7:
>2014 2013 2070 2066 2062 2091 2035 2050 2033 2103  Best Time = 2013

>Strongtalk's speedup factor is 4.4, almost identical to the one reported in
>the Tour (4.6)

seems I ran tests it wasn't good at - you ran ones it excels at.  This
is part of why micro-benchmarks aren't all that useful.  

The odd thing is, I had the code doing between 10,000 and 100,000
iterations on each micro-benchmark.  It probably depends on whether or
not the test in question hits the sweet spot of the engine.

Quote:

>Test benchmark: [ Test notSoSimpleTest: 10000000 ]

>Strongtalk:
>10072 716 691 753 792 691 686 735 669 765  Best Time = 669
>VW7:
>19300 19218 19253 19930 19284 19414 19441 20135 17056 17218  Best Time =
>17056

>Strongtalk's speedup factor in this case is 25.5 !! (the Tour reports a
>factor of 9 (on a more modern machine))

it's truly fascinating, because on the OrderedCollection write test,
over 100,000 iterations of executing #addLast:, that didn't happen.
Like I said, micro-benchmarks show wild things.
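
For context, that write test is essentially just the following shape (the
exact harness I use differs, but this is the core of it):

| coll |
coll := OrderedCollection new.
Time millisecondsToRun:
    [100000 timesRepeat: [coll addLast: 42]]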

Quote:

>Plus, given the facts that
>a) Strongtalk is not a commercial product, but a six-year-old beta, obviously
>not optimized for current processors
>b) While their test is obviously meant to illustrate their strength, it is
>IMNSHO a much better test (more representative of real-life Smalltalk apps)
>than a microbenchmark that avoids message sends and block
>creation/evaluation

uhh, their test - which I just looked at - does this at base:

array at: 1 put: 3

which is no more realistic than

myCollection addLast: someObject

I don't see a useful functional difference, except for the fact that
OrderedCollection is probably more commonly used, and I rarely see ST
code that indexes.

In any case, both are artificial, and IMHO, equally so

Quote:
>c) The above impressive results are benchmarked against a VW VM that has
>been optimized significantly in the last several years (the exact same tests
>are about 25% faster in VW7 than in 3.1d)

>...we should acknowledge that some impressive work has been done and that
>perhaps there are things to learn from it, instead of bashing it.

yes, I admit that.  It just seems less impressive than I was afraid it
would be.



Sun, 19 Dec 2004 12:25:05 GMT  
 Sun posts Animorphic Smalltalk system
Having looked at the two benchmarks, I have to say that
#notSoSimpleTest: is utterly bogus.  A Smalltalk programmer
writing code that way should be shot.  What it does is

-- allocate a one-element array
-- execute N times the code:

[a at: 1 put: 3] value

yes, it runs that faster - a lot faster
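
Reconstructed from that description (my paraphrase, not the Tour's literal
source), the whole thing amounts to:

Test class>>notSoSimpleTest: n
    "Paraphrase: a one-element array, written into n times through a
     block that is created and evaluated on every iteration."
    | a |
    a := Array new: 1.
    n timesRepeat: [[a at: 1 put: 3] value]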

The other one - a far more realistic snippet, btw - speeds up as it
goes, but the aggregate of this:

Time millisecondsToRun: [Tester benchmark: [ Tester simpleTest:
10000000 ]]

is only slightly more time in VW:

7129 ms over 10 million runs, as opposed to

6377 ms over 10 million runs in Strongtalk.

So it takes the Strongtalk system a long time to actually amortize the
inlining in a way that makes a difference.  And with the other
benchmarks, it appears that it doesn't amortize the cost well either.




Sun, 19 Dec 2004 13:07:08 GMT  
 Sun posts Animorphic Smalltalk system



Quote:

> seems I ran tests it wasn't good at - you ran ones it excels at.  This
> is part of why micro-benchmarks aren't all that useful.

> The odd thing is, I had the code doing between 10,000 and 100,000
> iterations on each micro-benchmark.  It probably depends on whether or
> not the test in question hits the sweet spot of the engine or not.

Yes, and I certainly agree that there are a lot of applications that don't
do a lot of repetitive stuff, where the profile-driven optimizations are not
such a clear win.
But there are also many applications that do just that - server-type or
numeric ones come to mind. And if, as a developer, you want to program in a
nice, abstract (Smalltalk) way, you cannot avoid real blocks (just take #do:
blocks) or many small methods, and the heavy inlining comes in very handy.
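
To make that concrete, take an ordinary well-factored loop (an illustration
of mine, not something from the Tour):

| points total |
points := (1 to: 1000) collect: [:i | i @ (i * 2)].
total := 0.
"Every #x and #y send, and the #do: block itself, are exactly the kind
 of small sends a type-feedback inliner can collapse into a plain loop."
points do: [:each | total := total + each x + each y].
total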

Quote:

> >Test benchmark: [ Test notSoSimpleTest: 10000000 ]

> >Strongtalk:
> >10072 716 691 753 792 691 686 735 669 765  Best Time = 669
> >VW7:
> >19300 19218 19253 19930 19284 19414 19441 20135 17056 17218  Best Time =
> >17056

> >Strongtalk's speedup factor in this case is 25.5 !! (the Tour reports a
> >factor of 9 (on a more modern machine))

> it's truly fascinating, because on the OrderedCollection write test,
> over 100,000 iterations of executing #addLast:, that didn't happen.
> Like I said, micro-benchmarks show wild things.

:)


Quote:

> >Plus, given the facts that
> >a) Strongtalk is not a commercial product, but a six-year-old beta,
> >obviously not optimized for current processors
> >b) While their test is obviously meant to illustrate their strength, it is
> >IMNSHO a much better test (more representative of real-life Smalltalk apps)
> >than a microbenchmark that avoids message sends and block
> >creation/evaluation

> uhh, their test - which I just looked at - does this at base:

> array at: 1 put: 3

> which is no more realistic than

> myCollection addLast: someObject

> I don't see a useful functional difference, except for the fact that
> OrderedCollection is probably more commonly used, and I rarely see ST
> code that indexes.

> In any case, both are artificial, and IMHO, equally so

They certainly are; I was mainly referring to (unoptimized) blocks, which
are very frequently used in real life.

I should also add, for the Java programmers out there, that apparently not
all the optimizations from the Animorphic VM were included in HotSpot - I
suspect because Java's restrictions disallow them. So I doubt that inlining
is as successful for Java.



Sun, 19 Dec 2004 13:09:52 GMT  
 Sun posts Animorphic Smalltalk system

Quote:

> Having looked at the two benchmarks, I have to say that
> #notSoSimpleTest: is utterly bogus.  A Smalltalk programmer
> writing code that way should be shot.  What it does is

> -- allocate a one-element array
> -- execute N times the code:

> [a at: 1 put: 3] value

> yes, it runs that faster - a lot faster

James, did you go through the introductory tour the Strongtalk image offers?

It clearly states that this benchmark shows off the strength of
Strongtalk's optimization: optimizing away message sends and block
invocations, thus allowing Smalltalkers to code in a more factored
(= more natural) way.

Even though the inner at:put: usage is bogus for real programs, the tour
points out that at:put: is not a primitive there. Very interesting...

Anyway, that code concisely demonstrates the point they wanted to make -
nothing bogus about that.
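
To illustrate what that means: in such a system Array>>at:put: is ordinary
Smalltalk, conceptually something like the sketch below (my illustration,
not Strongtalk's actual source), with only the final raw store left to the
VM - which is why the compiler can inline the send and optimize the bounds
check away:

at: index put: anObject
    "Illustrative sketch only: bounds check written in Smalltalk,
     low-level store delegated to the VM."
    (index between: 1 and: self size)
        ifFalse: [^self error: 'index out of bounds'].
    ^self basicAt: index put: anObject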

Quote:

> The other one, a far more realistic snippet, btw - it speeds up as it
> goes - but the aggregation of this:

> Time millisecondsToRun: [Tester benchmark: [ Tester simpleTest:
> 10000000 ]]

> is only slightly more time in VW

> 7129 ms over 10 million runs as opposed to

> 6377 over 10 million runs in Strongtalk

> So it takes the Strongtalk system a long time to actually amortize the
> inlining in a way that makes a difference.  And with the other
> benchmarks, it appears that it doesn't amortize the cost well either.

Again: read the tour.

Strongtalk can be deployed with a compiled code db, so the optimization
cost is at _testing_ time, not at deployed runtime. So IMO it amortizes
the costs very nicely.

This is why I asked for your test code: the standard
Time>>millisecondsToRun: pattern hides the advantages of the Strongtalk
optimization scheme.
Doing multiple runs until optimized and then taking the time seems fair
to me, considering the compiled code db mentioned above.
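
In other words, measure along these lines (just a sketch of the pattern):

"Warm up until the adaptive compiler has recompiled the hot code,
 then time a single run."
3 timesRepeat: [Test simpleTest: 10000000].
Time millisecondsToRun: [Test simpleTest: 10000000]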

Ah, those micro benchmarks.

Cheers!

Reinout
-------



Sun, 19 Dec 2004 13:33:48 GMT  
 Sun posts Animorphic Smalltalk system

Quote:
> I am sorry, James, but you asked for it  :)

> I copied the Test class from the Strongtalk Tour into VW7 and executed the
> tests from the Strongtalk Tour, based on #simpleTest: and #notSoSimpleTest:

> Here are the results for my machine (AMD KII-233, Win2k, 256MB)

> Test benchmark: [ Test simpleTest: 10000000 ]

> Strongtalk:
> 12418 11835 482 458 476 499 471 467 533 529  Best Time = 458
> VW7:
> 2014 2013 2070 2066 2062 2091 2035 2050 2033 2103  Best Time = 2013

> Strongtalk's speedup factor is 4.4, almost identical to the one reported in
> the Tour (4.6)

> Test benchmark: [ Test notSoSimpleTest: 10000000 ]

> Strongtalk:
> 10072 716 691 753 792 691 686 735 669 765  Best Time = 669
> VW7:
> 19300 19218 19253 19930 19284 19414 19441 20135 17056 17218  Best Time =
> 17056

> Strongtalk's speedup factor in this case is 25.5 !! (the Tour reports a
> factor of 9 (on a more modern machine))

Ahhh. Now we're talking.

Florin,

Thanks for pointing out this specific benchmark. Here is what I got...

====
Strongtalk - simpleTest: 10000000
1305ms -> 1328ms -> 49ms  (three runs to inline...)

SmallScript - simpleTest: 10_000_000
172ms -> repeats ...
====
Strongtalk - notSoSimpleTest: 10000000
715ms -> 81ms (two runs to inline...)

SmallScript - notSoSimpleTest: 10_000_000
448ms -> repeats ...
====

For the #simpleTest:, once the inlining kicks in Strongtalk becomes 3.5
times faster. Until it kicks in, Strongtalk is roughly 1/8 the speed.

For the #notSoSimpleTest:, once the inlining kicks in Strongtalk becomes 5.5
times faster. Until it kicks in, Strongtalk is roughly 0.6 the speed.

This is the "inlining" mechanics we were looking for. The fact that it took
10-20 million executions to gain the speed is not so cool -- but the
inlining results are :-).

As I mentioned in another post, it is the inlining aspects of the VM of the
Strongtalk system that are most interesting to me [for which, as far as I
know, sources are not available]. However, there have been many papers on
this topic.

This was an area I was very interested in between 1994 and 1995 while
working on a PowerPC VM design -- but I never got the opportunity to fully
implement it.

The SmallScript VM is already configured for optimizing/re-jitting methods
with inlining based on heuristically gathered data. I.e., the VM gathers the
necessary data now, and the jit can be invoked to call back into the
SmallScript layer itself to generate executable code for a method. This is
the mechanism by which .NET code is cross-jitted from SmallScript opcodes.

I have just been way too busy to do all the *really hard work* to incorporate
adaptive inlining of native x86 code, beyond some trivial things. It is
something that is pretty high on my list once other tasks [like getting a
final release out w/.NET facilities] get taken care of.

-- Dave S. [www.smallscript.org]



Sun, 19 Dec 2004 13:39:41 GMT  
 Sun posts Animorphic Smalltalk system

Quote:

> So it takes the Strongtalk system a long time to actually amortize the
> inlining in a way that makes a difference.  And with the other
> benchmarks, it appears that it doesn't amortize the cost well either.

For long-running systems (e.g., web servers), the "slow" learning time of
the VM may amount to only a tiny part of the VM's up-time.  There may be
value here.

Is the Strongtalk VM open sourced, BTW?  If so, how open :-)



Sun, 19 Dec 2004 15:04:16 GMT  
 Sun posts Animorphic Smalltalk system

: Sun finally let out the Smalltalk system it bought

: http://www.cs.ucsb.edu/projects/strongtalk/download1.1.html

: The interesting thing is that VisualWorks release 7 is so much faster
: than this technology - [...]

Not exactly what Sun claims:

``Performance: It executes Smalltalk much faster than any other Smalltalk
  implementation, using an advanced inlining compiler based on
  type-feedback technology.''

 - http://www.cs.ucsb.edu/projects/strongtalk/
--
__________



Sun, 19 Dec 2004 15:45:40 GMT  
 Sun posts Animorphic Smalltalk system

Quote:
>seems I ran tests it wasn't good at - you ran ones it excels at.  This
>is part of why micro-benchmarks aren't all that useful.  

Sorry James, caught you here: you found them useful enough to use them two
messages ago to trumpet how fast VW7 is compared to Strongtalk.

Quote:
>In any case, both are artificial, and IMHO, equally so

So let's agree on a useful benchmark and run that one, OK?

--

GnuPG 1024D/E0989E8B 0016 F679 F38D 5946 4ECD  1986 F303 937F E098 9E8B
Cogito ergo evigilo



Sun, 19 Dec 2004 17:14:58 GMT  
 Sun posts Animorphic Smalltalk system

Quote:

>>seems I ran tests it wasn't good at - you ran ones it excels at.  This
>>is part of why micro-benchmarks aren't all that useful.  

>Sorry James, caught you here: you found them useful enough to use them two
>messages ago to fanfare how fast VW7 is compared to Strongtalk.

well, except that I didn't pick my benchmarks.  I ran the subset of my
normal benchmark suite that I could get running in Strongtalk easily.
For instance, I didn't run the db and socket ones, and there was one
Stream test I had trouble getting to work in Strongtalk.  

So I don't think your criticism is entirely fair.  

Quote:

>>In any case, both are artificial, and IMHO, equally so

>So let's agree on a useful benchmark and run that one, OK?



Sun, 19 Dec 2004 19:53:47 GMT  
 Sun posts Animorphic Smalltalk system
my point was that the code snippet is bogus.  It does illustrate
something useful, but not with code that would normally be written.






Sun, 19 Dec 2004 19:55:01 GMT  
 Sun posts Animorphic Smalltalk system
On Wed, 03 Jul 2002 17:04:16 +1000, Bruce Badger wrote:

Quote:


>> So it takes the Strongtalk system a long time to actually amortize the
>> inlining in a way that makes a difference.  And with the other
>> benchmarks, it appears that it doesn't amortize the cost well either.

>For long running systems (e.g. web servers) the "slow" learning time of
>the VM may amount to only a tiny part of the VMs up-time.  There may be
>value here.

>Is the Strongtalk VM open sourced, BTW?  If so, how open :-)

The Strongtalk image seems to be under a BSD-ish license, but the VM
is closed off.


Sun, 19 Dec 2004 19:55:37 GMT  
 