Definable operators (was: Problems with Hardware, Languages, and Compilers) 
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:
>[....  I used IMP72, a language where
>you could add any operations you wanted, and it was awful. -John]

Why was it awful?  It seems to me that mathematicians would be quite
frustrated if they were restricted to using + - * and / in their
papers...  Granted, it would require some discipline (what language
feature doesn't? :-), but we seem to be pretty good at understanding
traditional math notation where it's normal to define appropriate
operators for the objects under consideration.  But I've never used a
language that allowed operator definition....
--

Computer Facilities Director -- Northwest Center for Environmental Education
http://www.cs.washington.edu/homes/rrogers/nceepg.html
[IMP72 let you stick BNF in your programs so you could add any syntax
you wanted.  The problem was that everyone did, so before you could
read an IMP72 program you first had to figure out what language it was
written in.  Experience with C++ tells me that even operator
overloading can easily lead to unmaintainable code if you can no
longer tell at a glance what each operator does. -John]

--




Thu, 26 Aug 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>>[....  I used IMP72, a language where
>>you could add any operations you wanted, and it was awful. -John]

>But I've never used a
>language that allowed operator definition....

Try Haskell.  Besides being a side-effect-free lazy evaluation language,
Haskell has a lot of syntactic sugar.  In particular, you can define
pretty much arbitrary operators.  Any token which consists entirely of
operator characters (+-*/%<>= and a few others) is considered an infix
operator by the parser.  You also set the associativity and precedence:

  -- Declare <<< to be a precedence 7 left-associative operator:
  infixl 7 <<<
  -- Type declaration: <<< takes two integers and returns an
  -- integral number of the type of the first argument:
  (<<<) :: (Integral a, Integral b) => a -> b -> a
  -- The actual function: left bit shift:
  x <<< b = x * 2^b

You can also use a regular prefix function as an infix operator if you
enclose its name in backticks.  For example, the standard environment
includes a two-argument div function which performs integer division.
You can use it like (45 `div` 3).
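
For instance (a minimal sketch; `average` is just a made-up name, not
part of the standard environment):

  -- A named function used infix via backticks, and an operator used
  -- prefix by wrapping it in parentheses:
  average :: Int -> Int -> Int
  average a b = (a + b) `div` 2

  total :: Int
  total = foldr (+) 0 [1, 2, 3]   -- (+) passed as an ordinary function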

Quote:
>[IMP72 let you stick BNF in your programs so you could add any syntax
>you wanted.  The problem was that everyone did, so before you could
>read an IMP72 program you first had to figure out what language it was
>written in.  Experience with C++ tells me that even operator
>overloading can easily lead to unmaintainable code if you can no
>longer tell at a glance what each operator does. -John]

I suspect that Haskell doesn't lead to as many problems.  There are
some fairly strict limitations on operators in Haskell.  In particular,
overloading of operators (of all functions, really) is quite limited:
you can only overload for types that are instances of the same class
(Integral above is a class), so when you use an operator you pretty
much know what you're going to get, unlike in C++.  Also, since named
functions can be used as operators, people tend to prefer named
functions over operators whose meaning can't easily be discerned.
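
For instance (a sketch with a made-up class and operator; the point is
only that the class fixes what the operator can mean):

  -- (+++) exists only for instances of Seq, so every use site means
  -- "append two sequences" -- there is no unrelated overloading.
  infixr 5 +++

  class Seq s where
    (+++) :: s a -> s a -> s a

  instance Seq [] where
    (+++) = (++)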

Haskell also has a few other cool parsing features - in particular,
indentation-based grouping.  All of Haskell's syntactic features seem
to be very well thought out - they allow for pretty dense code which
is still very readable.  If you want to check out Haskell, I'd
recommend starting with HUGS
(http://www.cs.nott.ac.uk/Department/Staff/mpj/hugs.html), a simple
but complete Haskell interpreter (actually it compiles to bytecode,
but it's still *much* slower than a real compiler).
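
As a tiny illustration of the layout rule (nothing is assumed here
beyond the standard sqrt):

  -- Indentation alone groups the two local definitions under "where";
  -- no braces or semicolons are needed.
  distance :: Double -> Double -> Double
  distance x y = sqrt (sqx + sqy)
    where
      sqx = x * x
      sqy = y * y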

Seth
--




Mon, 30 Aug 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Do you think the definable operators make things worse than, say,
poorly chosen identifiers?  It seems to me that the risks to code
quality from allowing definable operators are about the same, but the
potential benefits are quite large.  While I certainly would not
advocate allowing arbitrary syntax definition, I have been toying with
the idea of providing a large set of operators (~ those available in
TeX or FrameMaker's equation editor) which could be overloaded
appropriately.  How frightened does that make you?  :-)

In fact, I might go as far as to say that

(mnvs \oplus k1 \dot k2) \otimes ijq4

is better code than

frop (glort ( foo (mnvs, k1), k2), ijq4)

if only because it's easier for humans to parse, and with a restricted
set of operators, you're forced to choose something at least a little
better than "frop."

Quote:
> [IMP72 let you stick BNF in your programs so you could add any
> syntax you wanted.  The problem was that everyone did, so before you
> could read an IMP72 program you first had to figure out what
> language it was written in.  Experience with C++ tells me that even
> operator overloading can easily lead to unmaintainable code if you
> can no longer tell at a glance what each operator does. -John]

--

Computer Facilities Director -- Northwest Center for Environmental Education
http://www.cs.washington.edu/homes/rrogers/nceepg.html
--




Mon, 30 Aug 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

> >[....  I used IMP72, a language where
> >you could add any operations you wanted, and it was awful. -John]

> Why was it awful?  It seems to me that mathematicians would be quite
> frustrated if they were restricted to using + - * and / in their
> papers...  Granted, it would require some discipline (what language
> feature doesn't? :-), but we seem to be pretty good at understanding
> traditional math notation where it's normal to define appropriate
> operators for the objects under consideration.  But I've never used a
> language that allowed operator definition....

Having been trained as a mathematician ...

You're correct that mathematicians invent new terminology and
notations all the time. (In fact, there are times when I think that's
what mathematics is.) But, when writing a paper, there is a standard
protocol for this:

 1. First, if it's really a new terminology, you have to introduce it
and explain it in the paper prior to using it. (E.g. "A group is
defined as ...".)

 2. If you are using notation that's introduced in another paper,
generally one which will be known to your readers, you can introduce
the notation by explicitly referring to the paper. As things mature a
little more, notations can be introduced by referring to text
books. (E.g. "In this paper, we use the definition of group as found
in ...".)

 3. Finally, there reaches a point where you can assume that all
researchers in the area of interest will be familiar with the
terminology and no reference is needed. (E.g. "Consider the group of
automorphisms of ...".) This is typically the point where the concept
is included in undergraduate or first-year graduate level texts.

It is interesting that in a research paper, you must indicate what
concepts you are including from other references.  This is different
from, for example, the C/C++ "#include xyz.h", which doesn't tell you
that xyz.h might include a definition of a class foo.

It seems to me that these stages should correspond to some sort of
maturing of software concepts starting with ones that are defined in a
given program, moving to ones that are part of a (standard) library,
and finally(?), to ones that are part of the programming language.

 Chris

Dr. Christopher L. Reedy, Mail Stop Z667
Mitretek Systems, 7525 Colshire Drive, McLean, VA 22102-7400

--




Tue, 31 Aug 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:
>[....  I used IMP72, a language where
>you could add any operations you wanted, and it was awful. -John]


>But I've never used a language that allowed operator definition....


|> Try Haskell.  Besides being a side-effect-free lazy evaluation language,
|> Haskell has a lot of syntactic sugar.  In particular, you can define
|> pretty much arbitrary operators.  Any token which consists entirely of
|> operator characters (+-*/%<>= and a few others) is considered an infix
|> operator by the parser.  You also set the associativity and precedence:
|>
|>   -- Declare <<< to be a precedence 7 left-associative operator:
|>   infixl 7 <<<
|>   -- Type declaration: <<< takes two integers and returns an
|>   -- integral number of the type of the first argument:
|>   (<<<) :: (Integral a, Integral b) => a -> b -> a
|>   -- The actual function: left bit shift:
|>   x <<< b = x * 2^b

Algol 68 did, and was by no means the first.  But Haskell has made a
very old mistake in being too general - consider the problems about
parsing a mixture of left- and right-associative operators of the same
priority.  Even worse, consider varying commutativity and
distributivity.  The 1960s experience was that allowing user-defined
operators (including redefinition) was fine, as was allowing
user-defined precedences for textually new ones, but beyond that lies
madness.
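
For what it's worth, here is a sketch of what Haskell actually does
with such a mixture, using two made-up operators: the unparenthesized
form is simply rejected rather than parsed one way or the other.

  infixl 6 <+>
  infixr 6 <->

  (<+>), (<->) :: Int -> Int -> Int
  (<+>) = (+)
  (<->) = (-)

  ok :: Int
  ok = (1 <+> 2) <-> 3     -- fine: the parentheses decide the grouping

  -- bad = 1 <+> 2 <-> 3   -- rejected: infixl and infixr operators of
  --                       -- the same precedence may not be mixed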

Nick Maclaren,
University of Cambridge Computer Laboratory,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.

Tel.:  +44 1223 334761    Fax:  +44 1223 334679
--




Thu, 02 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>Do you think the definable operators make things worse than, say,
>poorly chosen identifiers?  It seems to me that the risks to code
>quality from allowing definable operators are about the same, but the
>potential benefits are quite large.

Yes, I think that definable operators are worse than poorly chosen
value reference names.  Definable operators share the "what is that?"
property of poorly chosen names, but they also have a "what is it
operating on?" problem.

Frankly, I see no "large" potential benefits.  What I see is misguided
generalization from */+-.

Infix operators have two big problems.  One is that infix
presumes/asserts two operands, which is often way too restrictive even
for the four basic arithmetic ops.  The other is that infix requires
precedence rules, which no one agrees on.

Combine that with a bit of "definable operators can only use special
characters", and you get incomprehensible operator overloading.  ("+"
is often a poorly chosen name, even if the language doesn't let you
choose a good one.)

Quote:
>In fact, I might go as far as to say that
>(mnvs \oplus k1 \dot k2) \otimes ijq4
>is better code than
>frop (glort ( foo (mnvs, k1), k2), ijq4)
>if only because it's easier for humans to parse, and with a restricted
>set of operators, your're forced to chose something at least a little
>better than "frop."

Since different humans will parse the first in different ways, the
asserted "ease" is getting in the way of communication.
(Interestingly enough, precedence advocates don't actually trust
precedence - as the above examples show.)

Since the operation in question really is best described as frop (if
it isn't, the example can be dismissed as a strawman), and has no
relationship to anything like "+", it's unclear why keeping me from
using frop and forcing me to use "+" is a good thing.  The fact that
we have agreed on a set of glyphs for several common arithmetic
operations does not imply that those glyphs are good names for other
operations.

-andy
--




Thu, 02 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>In fact, I might go as far as to say that

>(mnvs \oplus k1 \dot k2) \otimes ijq4

>is better code than

>frop (glort ( foo (mnvs, k1), k2), ijq4)

>if only because it's easier for humans to parse

You must be joking, not to mention biasing the question.

The bias: you should have used the same names for both, e.g.
  (mnvs \foo k1 \glort k2) \frop ijq4

And this human did not find it easier to parse the above, i.e. is that
  ((mnvs \foo k1) \glort k2) \frop ijq4
or
  (mnvs \foo (k1 \glort k2)) \frop ijq4

(Ok, more familiarity with TeX might have helped me here - I haven't a clue
what its precedence rules are).

Quote:
> with a restricted set of operators, you're forced to choose
> something at least a little better than "frop."

Well sure, if you cannot invent new operators but only overload the
existing ones, it is no longer any harder to parse than before
(though understanding the semantics may have suffered).

But if you can invent new operator names
  (a) the language cannot force you to make those sensible
  (b) unless the precedence rules are *very* simple - e.g. one level and always
      left-to-right (like your example seems to imply) or always lowest
      precedence and left-to-right associative (like fortran 90) - you have
      upped the ante on merely parsing the expression, and on code maintenance.
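
A small sketch (in Haskell, reusing the made-up names from the
example): explicit fixity declarations are one way to pin down which of
the two parses above is meant, though the reader still has to go and
look the declarations up.

  -- Placeholder bodies, just so the sketch compiles; the point is the
  -- fixity declarations, which force the grouping
  --   ((mnvs `foo` k1) `glort` k2) `frop` ijq4
  foo, glort, frop :: Int -> Int -> Int
  foo   = (+)
  glort = (*)
  frop  = (-)

  infixl 7 `foo`
  infixl 6 `glort`
  infixl 5 `frop`

  result :: Int
  result = mnvs `foo` k1 `glort` k2 `frop` ijq4
    where
      mnvs = 1
      k1   = 2
      k2   = 3
      ijq4 = 4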

Cheers,
...........................Malcolm Cohen, NAG Ltd., Oxford, U.K.

--




Thu, 02 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>> [ re: user-defined infix operators ]

>Algol 68 did, and was by no means the first.  But Haskell has made a
>very old mistake in being too general - consider the problems about
>parsing a mixture of left- and right-associative operators of the same
>priority.

Haskell treats this condition (correctly, IMO) as a parse error.  (I
think ML does the same thing).

Quote:
> Even worse, consider varying commutativity and distributivity.

Distributivity and commutativity are semantic properties, not
syntactic ones; I don't see why this should affect parsing.  Haskell
does allow you to specify an operator as non-associative -- so that (X
`op` Y `op` Z) will be flagged as an error without further
parenthesization -- which is sometimes useful for operators that
aren't associative.
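
For example (a sketch with a made-up operator):

  -- === is declared non-associative, so chains must be parenthesized.
  infix 4 ===

  (===) :: Eq a => a -> a -> Bool
  (===) = (==)

  fine :: Bool
  fine = (1 === 2) === False    -- explicit grouping is accepted

  -- bad = 1 === 2 === False    -- rejected: === is non-associative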

Quote:
> The 1960s experience was that allowing user-defined operators
> (including redefinition) was fine, as was allowing user-defined
> precedences for textually new ones, but beyond that lies madness.

I think the 1990s experience has been somewhat different :-) Judicious
use of user-defined infix operators has been very beneficial to
clarity of exposition in most of the Haskell code I've written and
read.

--Joe English


--




Sat, 04 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>Yes, I think that definable operators are worse than poorly chosen
>value reference names.  Definable operators share the "what is that?"
>property of poorly chosen names, but they also have a "what is it
>operating on?" problem.

In languages such as Haskell and Prolog, which both allow alphabetic
character sequences in operators, there is no reason why operators
need have the "what is that?" property of poorly chosen names.
And the "what is it operating on" problem is usually easily
resolved by looking at the operator declaration.

Quote:
>Frankly, I see no "large" potential benefits.

Have you ever tried to write a combinator parser without using infix operators?
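
(For readers who haven't: below is a toy sketch, not any particular
library's API, of why the infix names help -- the grammar-like
definitions read almost like the grammar itself.)

  -- A toy parser type and two combinators, defined as operators.
  newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

  infixr 3 <&>   -- sequencing
  infixr 2 <|>   -- alternation

  (<&>) :: Parser a -> Parser b -> Parser (a, b)
  p <&> q = Parser $ \s ->
    case runParser p s of
      Nothing      -> Nothing
      Just (a, s') -> case runParser q s' of
                        Nothing       -> Nothing
                        Just (b, s'') -> Just ((a, b), s'')

  (<|>) :: Parser a -> Parser a -> Parser a
  p <|> q = Parser $ \s ->
    case runParser p s of
      Nothing -> runParser q s
      result  -> result

  char :: Char -> Parser Char
  char c = Parser $ \s ->
    case s of
      (x:xs) | x == c -> Just (c, xs)
      _               -> Nothing

  -- 'a' or 'b', followed by '0' or '1' -- close to how the grammar
  -- itself would be written:
  token :: Parser (Char, Char)
  token = (char 'a' <|> char 'b') <&> (char '0' <|> char '1')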

--

WWW: <http://www.cs.mu.oz.au/~fjh>

--




Sat, 04 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

 > Yes, I think that definable operators are worse than poorly chosen
 > value reference names.  Definable operators share the "what is that?"
 > property of poorly chosen names, but they also have a "what is it
 > operating on?" problem.

Clear operator names help a lot and may even make the program clearer.
For instance, I once cooperated in the creation of a set of Algol 68
operators for numerical analysis.  For example, the integral from
0 to 1 of exp(x)^2 dx with an absolute error bound of 1.0e-5 and a
relative error bound of 1.0e-6 would be written as:

    c := 'range' (0, 1) 'integral' (('real' x) 'real': exp(x) ** 2)
                        'abserr' 1.0e-5 'relerr' 1.0e-6;

Adding "'method' gauss(10)" would modify it to a 10-point gaussian
integration.

Why is the first not clearer than (also in Algol 68)

    c := integrate(0, 1, ('real' x) 'real': exp(x) ** 2, 1.0e-5, 1.0e-6)

which at least needs some comments to be made clear?

A good choice for the priorities is of course important.  A bad choice
can be just as misleading as a badly chosen identifier.

--
dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn  amsterdam, nederland; http://www.cwi.nl/~dik/
--




Sat, 04 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:
> with a restricted set of operators, you're forced to choose
> something at least a little better than "frop."


: Well sure, if you cannot invent new operators but only overload the
: existing ones, it is no longer any harder to parse than before
: (though understanding the semantics may have suffered).

: But if you can invent new operator names
:   (a) the language cannot force you to make those sensible

This is true, but then the language can't force you to choose
sensible variable/procedure/function names either.  So what is so
terrible about this?

--
Antoon Pardon    Brussels Free University Computing Centre
--




Sat, 04 Sep 1999 03:00:00 GMT  
 Definable operators (was: Problems with Hardware, Languages, and Compilers)

Quote:

>>Yes, I think that definable operators are worse than poorly chosen
>>value reference names.  Definable operators share the "what is that?"
>>property of poorly chosen names, but they also have a "what is it
>>operating on?" problem.

>In languages such as Haskell and Prolog, which both allow alphabetic
>character sequences in operators, there is no reason why operators
>need have the "what is that?" property of poorly chosen names.
>And the "what is it operating on" problem is usually easily
>resolved by looking at the operator declaration.

I am afraid that isn't true if you allow user-defined association
properties (i.e. associativity, commutativity, priority and overloading).
In order to work out what is going on, the user must at least be able to
parse the expression, which is why Algol 68 made the priority a property
of the operator SYMBOL and not of the operator DEFINITION.

The key is to add operator definitions in a limited, well-defined and
clean fashion.

Quote:
>>Frankly, I see no "large" potential benefits.
>Have you ever tried to write a combinator parser without using infix
>operators?

It also makes certain programs (such as many numerical ones) a couple of
orders of magnitude easier to follow.  Consider some of the statistical
and numerical formulae that take half a page of A4 in matrix notation.
And now write those without using matrix operators :-(
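
For example, even with a naive list-of-lists matrix type (a rough
sketch; a real library would of course do much better):

  import Data.List (transpose)

  type Matrix = [[Double]]

  infixl 7 |*|   -- matrix product
  infixl 6 |+|   -- element-wise sum

  (|+|) :: Matrix -> Matrix -> Matrix
  a |+| b = zipWith (zipWith (+)) a b

  (|*|) :: Matrix -> Matrix -> Matrix
  a |*| b = [ [ sum (zipWith (*) row col) | col <- transpose b ]
            | row <- a ]

  -- Something like  p = a |*| x |+| x |*| b  then reads much closer
  -- to the formula P = AX + XB than nested add/multiply calls do.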

Nick Maclaren,
University of Cambridge Computer Laboratory,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.

Tel.:  +44 1223 334761    Fax:  +44 1223 334679
--




Tue, 07 Sep 1999 03:00:00 GMT  
 