The Logic Programming Paradigm for the First Language 

Quote:

>Fellow LP-ers,
>...
> [good stuff, deleted]

>.....
>Finally, a comment on a regularly raised furphy: that LP systems are slow. The
>best prolog compilers generate code that is a small linear factor slower than
>C compilers on simple programs - a factor of 1.5 or 2 at the outside. Where
>recursion is involved, prolog is often FASTER than C. Moreover, LP optimisation
>techniques are still developing rapidly, and will likely soon outstrip C on
>most, but especially parallel, architectures.
>--
>Bob McKay                   Phone:  +61 6 268 8169      fax: +61 6 268 8581
>Dept. Computer Science,ADFA, Northcott Dve, Campbell ACT 2600 AUSTRALIA    

Very interesting!
I wonder if you could provide pointers to the Prolog compiler you are referring to?

A factor of 1.5--2 times slower than C is really not a big deal (i.e. not so slow),
given the pace at which CPU speed is increasing nowadays. But I am afraid the factor
of 1.5--2 is the best-case scenario; unfortunately, Prolog programs are usually still
an order of magnitude slower than their C counterparts. An example is the N-queens
problem. A decent Prolog program for N-queens is 15 times slower than a non-optimized
C program which mimics the Prolog algorithm. The measurement is with Quintus 3.3.1,
on a Sparc2 IPX. Solving all solutions of 11-queens, the C program takes 4.6 seconds,
while the Quintus program takes 58 seconds.

I assume N-queens is a very typical problem Prolog is considered good at. Correct me
if I am wrong.

LP has much catching up to do when it comes to comparing program speed with
an imperative language. I think Prolog is hopeless: the language simply doesn't
provide the information necessary to make a program run fast on today's general-purpose
machines. Is there a Prolog machine coming out soon, before the language
vanishes? I doubt it.

Comments welcome. I am not a regular reader of this group, so you might want to
send e-mail to my account. I may be able to entertain flames :-) it is summer time!

-Zheng Lin      
Computer Science
University of Toronto



Tue, 09 Jan 1996 22:26:35 GMT  
 The Logic Programming Paradigm for the First Language

Quote:

> Very interesting!
> I wonder if you could provide pointers to the Prolog compiler you are referring to?

Check out the article in the Jan. 1992 issue of IEEE Computer magazine:
"High-Performance Logic Programming with the Aquarius Prolog Compiler".

So far, Prolog implementors have concentrated more on functionality than
on speed.  This is now changing.  BTW, there are plenty of systems out
there for which speed is a secondary consideration.  For example, all
scripting languages.

Quote:

> I think Prolog is hopeless: the language simply doesn't
> provide the information necessary to get a program to run fast on today's general
> purpose machines.

This is just plain incorrect, as any serious implementor of Prolog
knows.

Quote:
> Is there a Prolog machine coming out soon, before the language
> vanishes? I doubt it.

Prolog machines are not the panacea.  They may help; so-called
"general-purpose" machines are actually optimized for C and for
floating point computation.  What would help Prolog is a machine
that is optimized for pointer chasing operations.  But Prolog can
run plenty fast even without such a machine.

Quote:

> -Zheng Lin
> Computer Science
> University of Toronto


Peter Van Roy


Sun, 04 Feb 1996 15:22:57 GMT  
 The Logic Programming Paradigm for the First Language

Quote:
>[...]
>Prolog machines are not the panacea.  They may help; so-called
>"general-purpose" machines are actually optimized for C and for
>floating point computation.  What would help Prolog is a machine
>that is optimized for pointer chasing operations.  But Prolog can
>run plenty fast even without such a machine.

What would a machine which is "optimized for pointer chasing
operations" look like? How does this differ from the M680X0
architecture or the IBM 370? Would the Multics hardware (e.g.
Honeywell DPS 8/70M, circa 1980) qualify as "optimized for pointer
chasing operations"?

Lindsey Spratt



Mon, 05 Feb 1996 03:53:46 GMT  
 The Logic Programming Paradigm for the First Language

   >Prolog machines are not the panacea.  They may help; so-called
   >"general-purpose" machines are actually optimized for C and for
   >floating point computation.  What would help Prolog is a machine
   >that is optimized for pointer chasing operations.

   What would a machine which is "optimized for pointer chasing
   operations" look like? How does this differ from the M680X0
   architecture or the IBM 370? Would the Multics hardware (e.g.
   Honeywell DPS 8/70M, circa 1980) qualify as "optimized for pointer
   chasing operations"?

Crucially, Prolog implementations need to distinguish efficiently at
runtime between different kinds of value: compound terms (the
Prolog equivalent of records), references, and atomic objects (numbers
of various kinds, symbolic constants, other implementation-defined
types). Modern implementations also want to recognize efficiently
references to suspended goals, constraints, etc. So what is needed is
not just pointer chasing, but pointer chasing with type dispatch on
some subfield of the pointer. For instance, on many current 32-bit
implementations the two low-order bits of a word are used as a
first-level type field to distinguish between the most frequent cases
(e.g. reference, compound term, list) and the rest of the types. This
type dispatch is at the heart of the Prolog engine, but typical 32-bit
architectures do not support it well.

--
Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636



Mon, 05 Feb 1996 11:31:59 GMT  
 The Logic Programming Paradigm for the First Language

Quote:

>[...]
>Crucially, Prolog implementations need to distinguish efficiently at
>runtime between different kinds of value: compound terms (the
>Prolog equivalent of records), references and atomic objects (numbers
>of various kinds, symbolic constants, other implementation-defined
>types). Modern implementations also want to recognize efficiently
>references to suspended goals, constraints, etc. So what is needed is
>not just pointer chasing, but pointer chasing with type dispatch on
>some subfield of the pointer. For instance, on many current 32-bit
>implementations the two low-order bits of a word are used as a
>first-level type field to distinguish between the most frequent cases
>(eg. reference, compound term, list) and the rest of the types. This
>type dispatch is at the heart of the Prolog engine, but typical 32 bit
>architectures do not support it well.

This sounds like a "tagged" architecture. Isn't this what LISP machines do?
Why is a PROLOG machine a better idea than a LISP machine, particularly
if the nature of the architectural specialization is so similar? (People
seem to have decided that LISP machines are not as good an idea as
just relying on general-purpose machines, since the latter
improve at a much faster rate than the former.)

        -Lindsey



Wed, 07 Feb 1996 00:11:20 GMT  
 The Logic Programming Paradigm for the First Language

Quote:

>> I think Prolog is hopeless: the language simply doesn't provide the
>> information necessary to get a program to run fast on today's general
>> purpose machines.

>This is just plain incorrect, as any serious implementor of Prolog
>knows.

But it's certainly true that a language that provided more information
would make it easier for the implementation to generate more efficient
code.  Things like type systems, mode systems, and module systems would
all help, especially when (as is often the case) global analysis of the
entire program is impractical.

For example, according to the Aquarius Prolog manual, Aquarius produces
the best code only when the programmer specifies some additional
information in the form of a mode declaration.  The lack of a proper
module system in Prolog makes it difficult for compilers to check
that the mode declarations specified by the programmer are indeed
correct.  This means that if the programmer specifies an incorrect
mode declaration, the compiler may generate incorrect code.
It's difficult for programmers to be sure that all their mode declarations
are correct, especially after modifying some part of a large program.
Therefore programmers tend not to use mode declarations and as a result
compilers tend to generate code that is not as efficient as it could be.

In early versions of Prolog, it was quite legitimate to use
        read(X), retract(X)
to retract an arbitrary clause.  However, this sort of code makes
compilation much more difficult, at the very least forcing the
system to keep a copy of the original clauses around, and imposing
unwarranted complexity and overhead on compiled code.  As a result
the language evolved: now we have static and dynamic predicates.

In ISO Prolog, it is still quite legitimate to use
        read(X), call(X)
to call an arbitrary predicate with arbitrary arguments.  This sort
of code makes program analysis much more difficult, essentially
defeating any attempt at global analysis, and imposes unwarranted
complexity and overhead.  (It also means that we may end up with a
module system that is complex and ugly and sacrifices encapsulation.)

For these reasons and others, I think it's quite fair to say that Prolog
is a language which is difficult to implement efficiently.

--



Wed, 07 Feb 1996 11:05:53 GMT  
 The Logic Programming Paradigm for the First Language

   >[...]
   >Crucially, Prolog implementations need to distinguish efficiently at
   >runtime between different kinds of value: compound terms (the
   >Prolog equivalent of records), references and atomic objects (numbers
   >of various kinds, symbolic constants, other implementation-defined
   >types).
   This sounds like a "tagged" architecture. Isn't this what LISP machines do?
   Why is a PROLOG machine a better idea than a LISP machine, particularly
   if the nature of the architectural specialization is so similar? (People
   seem to have decided that LISP machines are not as good an idea as
   just relying on general purpose architecture machines, since the latter
   improve at a much faster rate than the former.)

The *language* implementation needs tags, as Lisp does. Tags can
either be handled by specialized hardware, as in the Lisp machines, or
by software, as in current Prolog and Lisp implementations for
general-purpose machines. Both Symbolics and TI had Prolog
implementations for their machines that took good advantage of the
tagging hardware. However, Prolog has particular requirements, e.g.
variable references, which were not supported transparently by those
architectures (LispM invisible pointers were *almost* ideal for
references, but unfortunately they were a bit "too invisible", making
it more expensive to determine whether a cell contains a reference --
as opposed to following it automatically -- than it should be).

The question of general-purpose vs. specialized architectures is
decided partly by engineering, partly by economics and partly by
fashion. For dynamically-typed languages like Prolog, Lisp or
Smalltalk, current conditions along those dimensions favor
general-purpose architectures. But this could conceivably swing the
other way in the future, as it has swung back and forth with previous
machine generations. What I think Peter Van Roy had in mind was not
specialized architectures, but a modest amount of architectural
support for the tag dispatch and manipulation operations that are such
an important fraction of execution time for dynamically-typed
languages and software systems (e.g. OODBs).

--
Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636



Wed, 07 Feb 1996 07:53:52 GMT  
 The Logic Programming Paradigm for the First Language

Quote:
>machine generations. What I think Peter Van Roy had in mind was not
>specialized architectures, but a modest amount of architectural
>support for the tag dispatch and manipulation operations that are such
>an important fraction of execution time for dynamically-typed
>languages and software systems (eg. OODBs).

I seem to remember reading that Sun's SPARC architecture includes a
limited amount of support for tagged datatypes. Are there any Prolog
implementations that take advantage of this?

Daniel Cohen              Department of Computer Science

Tel: +44 71 975 5245/4/9  Mile End Road, London E1 4NS, UK
Fax: +44 81 980 6533      *** Glory, Glory, Hallelujah ***



Fri, 09 Feb 1996 19:35:37 GMT  
 The Logic Programming Paradigm for the First Language
 ...
|>
|> I seem to remember reading that Sun's SPARC architecture includes a
|> limited amount of support for tagged datatypes. Are there any Prolog
|> implementations that take advantage of this?
|>

Yes - but if my memory serves me well, it has a 10-bit tag and 22-bit data.
The tag's too wide and the data too narrow!

    What I'd *really* like is a 4-bit tag and 28-bit data and a 'switch on
tag' instruction (or even 3-bit tags! - to reduce the code volume somewhat!).

   Joe



Fri, 09 Feb 1996 20:56:02 GMT  
 The Logic Programming Paradigm for the First Language

Quote:

>[...]
>I seem to remember reading that Sun's SPARC architecture includes a
>limited amount of support for tagged datatypes. Are there any Prolog
>implementations that take advantage of this?

Don't know about Prolog implementations but I seem to remember from
one drunken rendezvous with Andy Sizer that Harlequin's Common Lisp
implementation uses it. Maybe it was just him (or me) musing... difficult
to tell with Andy (or me).

Andrew Dinn
-----------------------------
Our Motto - A Proper Lisp Now



Fri, 09 Feb 1996 22:08:10 GMT  
 The Logic Programming Paradigm for the First Language

Quote:

> What would a machine which is "optimized for pointer chasing
> operations" look like? How does this differ from the M680X0
> architecture or the IBM 370. Would the Multics hardware (e.g.
> Honeywell DPS 8/70M, circa 1980) qualify as "optimized for pointer
> chasing operations"?

> Lindsey Spratt

Uh oh!  This is a big can of worms.  What you need is a machine
that can handle memory latency without slowing down.  Two ways
in which that can be built into existing machines are:

1. Asynchronous load.  That is, the load instruction initiates
   a load operation, and the machine goes on executing instructions.
   The next generation of killer micros (e.g., Alpha) will do this
   as a matter of course.

2. Lightweight multithreading.  For example, single cycle context
   switches between different register sets.  Getting this to work
   without screwing up the pipeline is slightly tricky, but doable.
   See the ALEWIFE machine at MIT that uses a modified SPARC to
   switch between four threads--the large register set is chopped
   into four pieces.

The MC680X0 family does neither of these.
Your compiler must know all about this, of course.

Peter



Sat, 10 Feb 1996 02:07:11 GMT  
 