What is this newsgroup for? (actually syntactic redundancy) 

I thought I would cross-post to comp.lang.pop, since some of the
readers of that group may be interested in these issues (which
are familiar in discussions of how Pop-11 syntax should evolve).


> Date: 22 Feb 93 02:25:27 GMT
> Organization: University of California, Berkeley

> >is anyone working on an alternative syntax for Dylan where I could
> >write 2+2 instead of ( + 2 2 ) ? Syntax really does matter in the
> >general acceptance of a language, and there aren't enough LISP application
> >developers out there to make Dylan a success with its current syntax,
> >IMHO.

> People often say this, and I still don't believe it.

Is that because you have done surveys of the variety of cognitive
styles of programmers out there, to find out what they are or are not
able to learn and use easily?

I have met quite a number of programmers in industry, previously
used to things like Pascal, Fortran, C, etc. who tried using Lisp,
disliked it intensely, and then fairly effortlessly managed to learn
*and like* a lisp-like language with a syntax more like Pascal.
(Actually the similarity was deceptive, since Pascal is a far more
impoverished language.)

But I have not done systematic surveys to find out actual
proportions, or actual numbers. Without proper research intuitions
are not worth much.

> ...I know that some
> Lisp-haters *say* it's the syntax they don't like, but that's only because
> they don't understand the semantic issues well enough to articulate their
> *real* dislike, which is for the functional programming style.

I think that's a misdiagnosis, for reasons elaborated below.

> ...Disguising
> Lisp with a different syntax won't solve that problem.  My evidence is that
> Logo, which is precisely Lisp disguised with an imperative-language syntax,
> has not taken over the world either.

Logo has its own problems. See below.

It's not a matter of haters and lovers. If these issues were merely
emotional ones then convergence on the basis of evidence and
argument would never be possible. (Some may think it isn't possible,
and we might as well all agree to go our different ways. I don't
share such pessimism.)

When considering syntactic alternatives, it is important to
distinguish several questions of different sorts. Some of them are
objective questions, not matters of emotional reaction.

There are of course some aesthetic questions -- and here personal
preferences will play a strong role: I find the lisp-like syntax
aesthetically very pleasing, and mathematically very elegant (much
better in scheme than in Common Lisp though, because of several
fundamental flaws in the design of CL, such as distinguishing
function values and other values of variables.)

But these aesthetic questions are probably among the *least*
important when discussing the design of a language intended for
serious use by many people, especially if it is intended to be used
for large applications.  So people who object to a construct because
they think it is "ugly" are addressing an irrelevant issue if an
engineering tool is being designed, rather than a toy for private
use. After all, tastes regarding what is ugly can change. (Much
great music was found ugly by those who heard it first.)

Then there are questions about ease of implementation. These are
objective questions concerning the complexity of code required for
e.g. parsing. For example, having very few syntax words, and not
having to deal with precedences of operators, or, in other words,
having scopes of all expressions explicitly delimited by brackets
makes parsing very much easier for a machine, and simplifies the
task of the compiler writer. It also simplifies the task of
providing support in a text editor, writing cross-reference
utilities, etc. So, on that sort of criterion, Lisp syntax wins
against languages that allow complex expressions with lots of
infix operators, including user-definable operators with different
precedences, or languages that have lots of different syntactic
forms for different purposes. But Lisp-like syntax may lose on other
criteria than ease of implementation.
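
To make the ease-of-implementation point concrete, here is a minimal
s-expression reader in Python (a sketch of my own, not any particular
Lisp's reader):

```python
def tokenize(src):
    # Surround parentheses with spaces, then split on whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Read one expression from the front of the token list.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return tok

print(parse(tokenize("(+ 2 (* 3 4))")))
# -> ['+', '2', ['*', '3', '4']]
```

A dozen lines suffice because every scope is explicitly delimited. An
infix grammar with user-definable operators and precedences needs a
table-driven precedence parser on top of this, which is where the
compiler writer's extra work goes.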

When you consider advantages for (a) human learners (b) people who
have to develop and maintain programs of various sorts, far more
subtle issues arise, and these can, for *some* purposes outweigh the
advantages from the point of view of the compiler writer or
developer of support tools.

It's very easy to ignore the cognitive processing tasks in the user
of a language. E.g. some Logo implementors made the mistake of
thinking that having to type extra symbols, like commas and
parentheses, was a serious impediment for children, and consequently
expected poor little brains to be able to parse things like

    print first butfirst last replyto lastsentence

and even worse constructs using functions of more than one argument.
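
A sketch of why this loads the reader's brain: with no brackets, such
an expression can only be grouped by already knowing the arity of
every function. The Python below (the arity table is invented for
illustration) does mechanically what the child's brain is asked to do:

```python
# Hypothetical arity table for some Logo-like primitives.
ARITY = {"print": 1, "first": 1, "butfirst": 1, "last": 1,
         "replyto": 1, "lastsentence": 0, "sentence": 2}

def parse(tokens):
    # Consume one expression; grouping is fully determined by arities.
    head = tokens.pop(0)
    return [head] + [parse(tokens) for _ in range(ARITY.get(head, 0))]

print(parse("print first butfirst last replyto lastsentence".split()))
```

Once two-argument functions like sentence appear in the stream, the
reader must carry exactly this arity bookkeeping in their head to
recover the grouping the machine computes here.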

This was a serious error in cognitive ergonomics. Some versions of
Logo (e.g. that implemented in Edinburgh by Danny Bobrow in 1973)
allowed brackets, commas, etc., and were consequently far more
usable.

In the case of lisp, having to learn very few syntax rules is a
great plus for the absolute beginner. But it is very easy to
overrate the importance of this. What makes the first week's
learning easy can seriously slow down the first six months'
learning.

Problems come (a) when the learner has to read programs written by
others, where there's very little redundancy to help the semantics
stand out, (b) when the learner makes semantic errors (which can
occur with syntactically perfect programs) and the lack of
redundancy in the syntax makes it very hard for the compiler to give
help (which would be easier with a bigger variety of opening and
closing keywords and other intermediate words, e.g. "elseif",
"then", "else" etc.) E.g. balanced but wrong brackets in a complex
multi-way conditional can produce odd effects that are hard to
track down.
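
The point can be shown with a minimal reader (again a Python sketch of
my own, not a real Lisp reader): both token streams below are
"syntactically perfect", yet in the second a clause of the multi-way
conditional has been silently swallowed by its neighbour:

```python
def read(tokens):
    # Minimal s-expression reader: any balanced bracketing is accepted.
    tok = tokens.pop(0)
    if tok != "(":
        return tok
    expr = []
    while tokens[0] != ")":
        expr.append(read(tokens))
    tokens.pop(0)  # discard the closing ")"
    return expr

good = read("( cond ( p a ) ( q b ) ( t c ) )".split())
bad  = read("( cond ( p a ) ( q b ( t c ) ) )".split())
print(good)  # three clauses
print(bad)   # two clauses: the (t c) clause is now inside the (q ...) clause
```

The compiler sees nothing wrong with the second version; with distinct
opening and closing keywords for the branches it could have objected.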

Automatic indentation programs help the reader detect oddities,
but we need empirical research to help us decide whether it's better
to provide help in the form of patterns of white space or help in
the form of more varied symbol patterns.

(And the answer may well be different for different kinds of
programmers, and for different sorts of programs. The issues I am
pointing to matter less for programs that are easily broken up into
very small procedures, each of which is easy for a human to grasp
as a whole.)

Nobody knows very much about how human brains work, but the syntax
of natural languages, whose complexity does not prevent even infants
learning them far more easily than most adolescents learn simpler
formal languages like first order predicate calculus, suggests that
for most human brains a highly redundant syntax is more useful than
a simple and elegant syntax, despite the added complexity of
parsing in the former.

NOTE: this is an *empirical* question on which it is possible to do
experiments with different kinds of learners doing different kinds
of tasks. However, the experiments would be quite costly if done
properly because the important issue is not what happens in the
first few minutes or hours, but what happens over months or years.

When considering large software systems maintained over a long term
by changing groups of people additional problems arise. People may
have to look at code that they have not written, in order to fix
bugs or implement extensions. Even when there are comments, and
system documentation, they may be incomplete or inaccurate.

It is therefore quite important that the language structures
facilitate comprehension. Again, it is an objective question what
does and what doesn't do this, for particular sorts of individuals
and particular sorts of tasks.

I *conjecture* that if errors and costs are to be minimised it is
important for the syntax to be designed so that higher level
semantic differences map onto higher level syntactic patterns, so
that someone looking at code can fairly quickly get an idea what
sort of thing is going on, and also fairly quickly see discrepancies
between expected patterns and what is in the code.

For example: syntactic patterns
    (a) that quickly reveal that there's iteration over numbers,
    over lists, over locations in N-dimensional arrays, etc. or that
    there's a long multi-branch conditional, and

    (b) that quickly show the scopes of the various sub-expressions,

may be more effective than a type of syntax where a single key-word
indicates what sort of thing is going on and hunting for a matching
")" among large numbers of occurrences is needed to find the scope.

The latter problem can be reduced by lining up matching opening and
closing brackets vertically, as some C programmers do, and most lisp
programmers seem to dislike doing.

But even that can break down if opener and closer are too far apart:
matching keywords may then help the visual search e.g. if ... endif,
while ... endwhile, or if .... fi, while .... eliwh. (Actually I
think reversing keywords is more error prone as it requires the
learnt "lexicon" to be larger and requires more time to be spent in
training the brain to recognize the unfamiliar, and possibly
unpronounceable, character sequences.)

If there are more different opening and closing pairs distinguished,
then there's less need to rely on such things as counting, or
vertical alignment, to ensure that the right scopes are perceived.
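
A small sketch of the benefit: a checker that knows distinct
opener/closer pairs can report *which* construct is wrongly closed at
the point of the mistake, whereas with a single uniform ")" the same
slip only shows up later as a bracket miscount. (The keyword set is
illustrative, loosely Pop-11-like, not any real language's.)

```python
PAIRS = {"if": "endif", "while": "endwhile", "for": "endfor"}
CLOSERS = set(PAIRS.values())

def check(tokens):
    # Return None if every closer matches its opener, else an error message.
    stack = []
    for tok in tokens:
        if tok in PAIRS:
            stack.append(tok)
        elif tok in CLOSERS:
            if not stack or PAIRS[stack[-1]] != tok:
                expected = PAIRS[stack[-1]] if stack else "nothing"
                return f"found {tok} where {expected} was expected"
            stack.pop()
    return None

# The wrong closer is reported exactly where the scopes were confused:
print(check("while if x endwhile endif".split()))
```

With only one kind of closer the first three quarters of this checker
collapse into a counter, and so does the quality of its diagnosis.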

Then there will be fewer errors when people make changes, e.g.
errors like adding some step to the end of the wrong expression, or
deleting the wrong expression while editing, or moving the wrong
expression to a different place. (I guess we've all seen these and
similar errors producing obscure bugs.)

In this respect, I think learning to be an expert software engineer
is a bit like learning to be a good musical sightreader. Standard
musical notation has evolved to encourage the visual detection of
patterns of various sorts, even though a simpler more economical
notation might have been mathematically equivalent. Similarly, a
good programming language should make it easier for trained people
to see the right patterns quickly, so that they make fewer mistakes,
and solve problems faster.

(I suspect that languages that try to do everything with recursion
and provide no looping syntax are inadequate for large scale software
engineering because they lose out in terms of cognitive ergonomics
since they don't provide sufficient syntactic variation, even though
they may be mathematically perfectly adequate.)

Anyhow, I've resisted giving specific examples from my own favourite
language, because there are too many empirical unknowns. But I hope
it's clear that the *sorts* of considerations offered here are of a
type that indicate lines of objective analysis and scientific
research to replace religious wars between adherents of particular
forms of syntax, even if the detailed conjectures I've made turn out
to be wrong.

Ultimately, we may have to wait for much deeper theories in
cognitive science before we can resolve some of the issues.

I have been involved in the development of the syntax of Pop-11
between about 1975 and 1992. Many of the decisions taken were a
result of observing problems of learners and users at various stages
of sophistication and trying to find ways to help them. We did not
have the resources to do extended empirical tests, or to try out
alternatives, and we made a number of mistakes, including taking
decisions that were wrong because we did not foresee some of the
future developments. (E.g. we followed Pop-2 in making dynamic
scoping of variables the default. This still causes problems when
people forget to declare local variables as lexical, and then get
unwanted interactions between procedures.)

The one thing I am pretty sure of after years of teaching and
managing software development, is that *verbosity* in a language is
not necessarily a fault if it aids the syntactic redundancy for the
human perceiver. But exactly what sort of syntactic redundancy is
most useful is another question.

> I know that the Apple people strongly believe this myth about syntax and
> are talking about an alternate syntax, but I'm convinced that having two
> notations will kill Dylan absolutely.  It'll be too hard to implement;
> people will start inventing hybrid notations that don't quite work;

Perhaps you are thinking of the problems of languages that have both
interpreters and compilers that don't quite match up?

Actually, if one syntax is made to compile into the other, as can be
done with macros, there should be no serious problem. If two
different forms of syntax are independently compiled to a lower
level then it may be much harder to ensure that the intended
equivalences are achieved.
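
A toy version of that translation step, assuming the alternative
(infix) surface syntax is simply rewritten into the underlying prefix
form that the one shared compiler accepts (parentheses and unary
operators are omitted to keep the sketch minimal):

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_prefix(tokens, min_prec=1):
    # Precedence-climbing parse of a flat infix token list into prefix lists.
    lhs = tokens.pop(0)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        rhs = to_prefix(tokens, PREC[op] + 1)
        lhs = [op, lhs, rhs]
    return lhs

print(to_prefix("2 + 3 * 4".split()))
# -> ['+', '2', ['*', '3', '4']]
```

Because the output *is* the prefix form, the equivalence between the
two notations holds by construction, which is just what is lost when
the two syntaxes are compiled independently.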

> ...there
> will be a lot of confusion; anyone who wants to be sure of being able to
> read other people's programs will have to work twice as hard to learn the
> language.

This is in any case a problem with all versions of lisp (and Pop-11)
that allow extensions to be defined by users, e.g. via macros.

(However, insofar as those syntactic extensions introduce higher
level syntactic constructs that support the pattern recognition task
for the human perceiver, the risks may be outweighed by the
benefits.)

> It's the worst idea in the entire language.

Or perhaps it's essential if the language is ever to be used for
large scale software engineering?

Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, England

Phone: +44-(0)21-414-3711       Fax:   +44-(0)21-414-4281

Fri, 18 Aug 1995 08:44:22 GMT  