Newbie question 
 Newbie question


Quote:

> Even to bind a variable as in (let ((x 3)) ...) is invalid if the "..."
> needs to access an outer x.   But we don't say you can't access both
> x's at once without a change to the language, we just write
>  (let ((x 2))
>    (flet ((outer-x () x))
>      (let ((x 3))
>        (+ x (outer-x)))))

[BTW, why would you want a flet here? what's wrong with:

(let ((x 2))
 (let ((outer-x x)
       (x 3))
   (+ x outer-x)))

?]



Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question
...

Quote:
>         (defvar ,name #',name)
...
>     (declaim (inline send))
>     (defun send (thing message &rest args)
>       (declare (dynamic-extent args))
>       (apply thing message args))

"Message" is a function. Therefore, shouldn't that last line read

         (apply  message thing args )

 or alternatively just make send a macro since this is really just
 "sugar" to hide the generic function. [ I presume that defmethod was
 being used to define the "methods" ]

       (defmacro send (thing message &rest args)
         ;; reconstructed body (see P.S.): just sugar that rearranges the
         ;; call into an ordinary generic-function invocation
         `(,message ,thing ,@args))

 Another approach (as opposed to layering on top of generic functions) would
 be for objects to have "message" slots. and then "send" would look more like

       (defmacro send (thing message &rest args)
         ;; reconstructed body (see P.S.): fetch the function out of the
         ;; object's message slot via its reader, then call it
         `(funcall (,message ,thing) ,thing ,@args))

 The slot definition being something akin to

           (msg :initform #'(lambda (...args..) ... body ... )
                :type  function
                :reader msg
                :allocation :class )

 Where you'd define the messages "inside" the class definition, similar to
 Java's style.  So you truly would have "members that are functions".
 You'd lose the "next-method" mechanism, so this isn't quite what you'd
 have to do.  Or even prefer to do. :-)
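
 For concreteness, a minimal sketch of a class in that style (all the names
 here are made up, and note the naive SEND above evaluates THING twice):

       (defclass talker ()
         ((greet :initform #'(lambda (self name)
                               (format t "~a greets ~a~%" self name))
                 :type function
                 :reader greet
                 :allocation :class)))

       ;; (let ((obj (make-instance 'talker)))
       ;;   (send obj greet "Fred"))   ; prints "#<TALKER ...> greets Fred"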

 Additionally, if you didn't want to "overload" a name multiple times
 within a single class definition, this should work.  However, there is
 a disconnect between a desire for a "function name" with multiple
 argument signatures and CLOS, so this isn't really "new".  That's not
 really about "message passing", but language support for "name
 mangling".  Those are two different things.  Smalltalk doesn't have
 "name mangling" and I'm pretty sure most consider it "message
 passing". :-)

P.S.  Netscape insists on printing backquotes as forward quotes.  I cut and
      pasted back into a Listener, so I think there really are backquotes
      leading the body of those macros.  If not, please replace at your end.

--

Lyman



Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:


> > Even to bind a variable as in (let ((x 3)) ...) is invalid if the "..."
> > needs to access an outer x.   But we don't say you can't access both
> > x's at once without a change to the language, we just write
> >  (let ((x 2))
> >    (flet ((outer-x () x))
> >      (let ((x 3))
> >        (+ x (outer-x)))))

> [BTW, why would you want a flet here? what's wrong with:

> (let ((x 2))
>  (let ((outer-x x)
>        (x 3))
>    (+ x outer-x)))

If you ask that, you have to ask why I have bindings at all.
I meant this to stand for the more general case of:

 (let ((x ...))
    ... lots of code that might read or assign x ...
    (flet ((outer-x () x)
           (set-outer-x (y) (setq x y)))
      (let ((x ...))
        ... more code that reads or writes x or outer-x...
      ))
    ... more code reading or assigning x ...)

My point was not that you could copy x around but that in fact there
are in-language ways of referring to the original x.  Just in case
you needed to.
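
To make that concrete, here is a small runnable instance of the same shape:

 (let ((x 2))
   (flet ((outer-x () x)
          (set-outer-x (y) (setq x y)))
     (let ((x 3))
       (set-outer-x (+ x (outer-x)))  ; sets the *outer* x to 3 + 2 = 5
       (list x (outer-x)))))          ; => (3 5)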



Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> Most problems could be solved by people in one way or the other, but
> then what is the point of standardising stuff.

This is a good question.  Here is my take on the answer:

Standardizing is what you do when you have not only just solved it yourself
but shared your solution around and you have good "current practice"
reason to think you have the "de facto" standard so that you can document
it to create normative effect.

Standardizing is what you do when you have many people doing certain things
in ways that those people all mutually agree should be brought together
under one cover, and preferably have figured out what cover that should be.
Sometimes, with enough community spirit behind it, it's sufficient for them
to just wish the standards body would pick one, but mostly that's a bad
idea.  We had to do it with CLOS because it was so central to everything
else.  But, for example, we did not do that with FFI vs RPC because it wasn't
the committee's role to say what industry should decide--it's industry's
role to say what it wants and it is standardization efforts' role to react
to that.

Even CLIM, as an example, is not in my opinion "de facto standard enough"
to become a real standard because (a) not everyone accepts it as window
system of choice and (b) even those who use it feel it should change in
many ways that being standard would keep it from doing.

Quote:
> When something is
> likely to be widely used it's worth standardising.

Not that my word is the final one, but I don't really agree.  I think
it's mostly worth standardizing when it's been in use either in its
present form or comparable form from more than one vendor or widely
among users (e.g., as with DEFSYSTEM).

It's not really that I think NO other such things are
possible.  Sometimes you can get a feature in that doesn't hurt things
and helps some others.  But largely I don't say that's the "purpose"
of standardizing; it's just more something you tolerate sometimes
because you can.

Quote:
> I may be entirely
> wrong, but I suspect that if CL incorporated a standard, succinct
> syntax for associating types with symbols and checking the types of
> objects then it would get used a lot more than the current, somewhat
> verbose, ways of doing this.

I don't understand.  Declarations do this.  What's verbose is to
repeat the type with every use.  But in any case, since you can
personally extend the language to have the feature, it's easy for you
to gather together a community to speak to this point if you think
that's so. I doubt it is so.  But I would be convinced by numbers.
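
For instance (a minimal sketch, with made-up names): one global declamation
attaches the type to the variable for all of its uses, and CHECK-TYPE covers
the run-time side succinctly:

 (declaim (type (integer 0) *request-count*))  ; declared once, covers all uses
 (defvar *request-count* 0)

 (defun note-requests (n)
   (check-type n (integer 1))                  ; succinct run-time type check
   (incf *request-count* n))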

Note that I *would* be amenable to a proposal for a more general
mechanism that allowed users to associate generally additional
declaration information at any point in a code-walk (as was in CLTL2)
and to retrieve that under other circumstances.  We knew a lot of
people wanted that and didn't withdraw it for ANSI CL out of spite--we
just couldn't make the mechanics work in a way we were confident about
in the time allotted because it was untested.  There is a great
temptation in standards work to standardize things that are not
tested, but it's pretty risky.  Sometimes you get lucky, as with CLOS
and the condition system.  Sometimes you end up doing very flaky things.
A number of things in CL that are painful and weird (like the type
inheritance rules for arrays with various element types and storage
modes) are traceable to last-minute decisions that were not heavily
tested in practice before the original language design was rolled out.
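
From memory, the withdrawn CLTL2 interface had roughly this shape (sec. 8.5;
these names are not in ANSI CL, and the details varied between drafts):

 ;; Teach the walker about a new declaration...
 (define-declaration author (decl-spec env)
   (declare (ignore env))
   (values :declare (cons 'author (cdr decl-spec))))

 ;; ...and later, mid-walk, recover it from an environment object:
 ;; (declaration-information 'author env)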



Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> There is a slightly trivial but unfortunate problem with this in that
> Symbolics CL (and perhaps also the other LispM flavours of Lisp) uses
> a VALUES declaration in a different way -- namely to tell the system
> what the *names* of the value(s) that the function returns are.

Oh, drat.  You're right.  I said something else about values and was wrong.
Thanks for (even inadvertently) correcting me.


Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:


> > The point is probably this: A C++/Java compiler cannot catch all
> > errors, especially not design or logical errors, but at least it
> > catches most simple errors like typos, passing the wrong number of
> > arguments, passing a wrong argument, etc.

As do many Lisp compilers.

Quote:
> > With existing Lisp
> > implementations many such errors are detected only at runtime even
> > when declarations are used.

Most commercial Lisp compilers I've dealt with will catch misspelled
function names, wrong number of arguments [except with APPLY, which
C mostly can't even do as gracefully as Lisp, and surely can't catch
errors in any better than Lisp], etc.

Quote:
> > This is less problematic with mainline
> > code which is likely to be run by the developer anyway, but typos in
> > sections of the source code that are less frequently run have the
> > habit of crashing in the hands of a user, or the QA department if
> > you're lucky. Yes, you should test all your code, but the kind of bug
> > we're talking about is often introduced by changes that are so
> > 'obvious' that many developers don't imagine a bug may have been
> > introduced.

> Again, I want to say: this is a good theoretical point, but do you
> know of any evidence that it causes large Lisp systems to be less
> robust than large C++ systems.  I know of none, but I have not looked
> that hard.

I think in fact just the opposite.  Speaking only anecdotally here,
it's assumed that type matching means things work.  I'm not so sure.
It gives one almost a false sense of confidence:

This code:

 (defvar *foo* (- most-positive-fixnum 1))
 (defun foo () (* *foo* 2))

works fine undeclared in Lisp but in C the equivalent code, properly
type declared, would do modular arithmetic.  The types would match
but the effect would be wrong.  Now, in "properly type-declared code"
you might see that the function was declared fixnum all over the place
but that wouldn't make it right--it would just mean you were asking
the compiler to trust you that the data was not going to be out of
bounds, which isn't a good thing to trust in this case.  The CMU
compiler actually will probably put in a type-check to make sure
that the declaration is not violated, but such type checks do cost
and many people feel pressured not to have them.  Further, and this is
the really insidious thing about type checks in practice, there is
ENORMOUS pressure to turn
 (defun foo (x) (+ x 2))
into
 (defun foo (x) (declare (fixnum x)) (+ x 2))
to make it "more efficient" as if somehow the generic (+ x 2) was
in fact less efficient.  (+ x 2) is maximally efficient when you don't
know if x is going to be a fixnum or not.  Adding the declaration makes
it more efficient ONLY IF you happen to know x will not be other than
a fixnum; if you don't know that, it isn't "more efficient", but rather
it is "broken".  The real problem with type declarations is not the
mathematical proof techniques associated with them, it is the willingness
to ignore or hand-wave away the very real societal tendancies of people
to force people with access to type declarations to over-aggressively
apply narrow type boundaries to problems, turning every program in the
world into a metaphor for the Y2K bug becuase each such program has its
own little time bomb in it waiting for data to become big enough to
overflow and cause a problem.  To say that people don't overaggressively
seek these little "shortcuts" (sacrificing the future for the present)
is to deny that there is any cost to dealing with Y2K, and to somehow
say that "good programmers would never make shortsighted decisions".



Wed, 24 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> [on the subject of detecting bugs early via static type checking]


> > Again, I want to say: this is a good theoretical point, but do you
> > know of any evidence that it causes large Lisp systems to be less
> > robust than large C++ systems.  I know of none, but I have not
> > looked that hard.

The evidence I have is from my own experience over the past three
years. Again and again, bugs found by QA were the result of
programming errors which would not have passed the compiler had this
been C++ or Java.

Before anybody jumps on me again for all the wrong reasons: I am not
trying to put down Lisp here, I am impressed by the language. Anybody
who has seriously used C++, however, cannot fail to notice that
existing commercial implementations of CL (unlike CMU CL apparently)
miss many opportunities to detect errors early by not making full use
of type information. I agree with Erik that the errors that go
undetected are usually not the kind of bugs that are difficult to
fix. But isn't it precisely in the area of routine, repetitive tasks
where computers are supposed to help us out? I am fully prepared to
take responsibility for deep, logical errors, but I appreciate tools
that help me detect simple blunders quickly.

Quote:
> I think in fact just the opposite.  Speaking only anecdotally here,
> it's assumed that type matching means things work.  I'm not so sure.

Is a program that passes type checks guaranteed to be correct? Of
course not! Seems I still haven't been able to make myself
understood. This is not about proving programs correct, and I
certainly don't regard static type checking as a silver bullet. Static
type checking will not guarantee correct programs; it can, however,
very easily detect simple programming errors. Why? Because the simple
errors I'm talking about have a high probability of violating type
checks.
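
An example of the kind of blunder I mean, which a type-aware compiler
(CMU CL, say) can flag at compile time from the declarations alone:

 (defun scale (x)
   (declare (type double-float x))
   (* x 1.5d0))

 (defun careless-caller ()
   (scale "1.5"))  ; a string where a double-float is required --
                   ; detectable without ever running the code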

Quote:
> [section about disadvantages of type declarations in arithmetic
> functions omitted]

The type declarations I have in mind were of a very different
kind. The system I am working on is based on a complex,
object-oriented information model. Functions, mostly generic
functions, operate on instances of well-defined types from the
model. Passing an instance of a non-conforming type is almost
certainly an error. These errors are usually easy to fix when looking
at the symptoms, but they are a major frustration when they slip
through initial tests and make it into QA or beyond. What makes this
so frustrating is that it would be so easy for a good compiler to
detect. [Yes, I'm complaining about a vendor's implementation, but
then again, you seem to be defending this weakness as perfectly
acceptable. In that light, I don't think my response is inappropriate
in this newsgroup.]

Joachim



Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> If you care that deeply about static checking of things like
> argument counts, I'm pretty surprised you haven't taken one of the
> publically-available who-calls things and modified it to warn you
> about all this stuff.

Sounds interesting. What are these 'who-calls things' and where can I
find more details about them?

Joachim



Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> Sounds interesting. What are these 'who-calls things' and where can I
> find more details about them?

They are tools that let you ask questions like `who calls this
function'.  I think to do this really right you need a code-walker,
and I'm not sure that these things really have one. Of course, a
code-walker can really do any static type-checking you want anyway. I
think there is at least one at the CMU archive.
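
A toy version of the idea -- really a `who-mentions', since without a real
code-walker it can't tell a call from any other reference:

 (defun symbols-in (form)
   "Crudely collect every symbol mentioned anywhere in FORM."
   (let ((syms '()))
     (labels ((walk (f)
                (cond ((symbolp f) (push f syms))
                      ((consp f) (walk (car f))
                                 (walk (cdr f))))))
       (walk form))
     (remove-duplicates syms)))

 (defun who-calls (target file)
   "Names of the top-level DEFUNs in FILE that mention TARGET at all."
   (with-open-file (in file)
     (loop for form = (read in nil in)
           until (eq form in)
           when (and (consp form)
                     (eq (first form) 'defun)
                     (member target (symbols-in (cddr form))))
             collect (second form))))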

--tim



Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:
> I think I might have the code somewhere to the old teco-based indenter.
> Maybe sometime for grins I'll publish it.  Teco was a wonder to behold.

To make the circle round, are there Teco docs available to write a
Teco emulator for Emacs? ;-)

--

If there are aliens, they play Go. -- Lasker



Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> > I think in fact just the opposite.  Speaking only anecdotally here,
> > it's assumed that type matching means things work.  I'm not so sure.

> Is a program that passes type checks guaranteed to be correct? Of
> course not!

I think you mean to say "of course it is not provably correct" but it isn't
so clear whether you mean to say "of course it is known not to be correct
by the person who compiled it".  I allege (absent proof) that a large
number of people believe "absence of compilation warnings" means "correct",
and further that "compilation warning" means "user is directed to change
his program in a way that muffles warning".  I think both of these are
bad practices.

I also can't help but feel that people who think they are promised that the
compiler will statically catch a certain class of errors feel more
comfortable failing to QA their programs.

You are making a great deal of your arguments about how you personally
would use compiler information.  I am making arguments not about how I
personally would use compiler information, but how I believe people
really do use compiler information.  Neither of us has the data to back up
our claims, so it will have to just rest there.

Quote:
> > [section about disadvantages of type declarations in arithmetic
> > functions omitted]

> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error. These errors are usually easy to fix when looking
> at the symptoms, but they are a major frustration when they slip
> through initial tests and make it into QA or beyond. What makes this
> so frustrating is that it would be so easy for a good compiler to
> detect. [Yes, I'm complaining about a vendor's implementation, but
> then again, you seem to be defending this weakness as perfectly
> acceptable. In that light, I don't think my response is inappropriate
> in this newsgroup.]

You are welcome to think that, however I will keep saying you are asking
the wrong place every time I have the energy to say it.

I believe it will HARM the Lisp community to require it.  It will only
make it harder than it already is to reach "CL-hood" for an
implementation, and make for there to be fewer Lisps.  I allege (and
for this there is considerable data) that it is HARD to get a CL
together and there are lots of people who decline to try.  It is
important to the community and important to the users that there be
vendors able to make implementations of known quality, but it is less
important that every vendor be required to be at the same quality
because it is quality/price upon which people compete.  If someone
wants to market a high-quality Lisp, at corresponding cost, they can
and should do that, offering whatever you want.  But it is just not
necessary for everyone to do this.  And certainly you don't want to
legislate that all implementations must have a certain quality because,
like the pressure/volume constraint for gases, that effectively
legislates that all implementations have a certain price.

Pluralism is about tolerating people doing and needing different things
than you.  The net is pluralistic.  The market is pluralistic.  The thing
that destroys pluralism is the insistence that not only one thing but all
things in the market must meet your needs.  That destroys diversity.
And that, I feel, is bad.



Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:


> > I think I might have the code somewhere to the old teco-based indenter.
> > Maybe sometime for grins I'll publish it.  Teco was a wonder to behold.

> To make the circle round, are there Teco docs available to write a
> Teco emulator for Emacs? ;-)

There is, but I'm not sure about the intellectual property ownership.  Note
well, you'd need ITS Teco, not DEC Teco.  DEC Teco was a pale shadow of ITS
Teco and could never possibly have accommodated Emacs.  ITS Teco was to DEC
Teco like CL is to Lisp 1.5.  Every character was a command, and many
characters did different things based on the number of arguments they got
[either one or two].  DEC Teco had only a fraction of this.  Not to mention
many fewer q-registers.  ITS Teco had one q-register [built-in storage name]
per keyboard key in ASCII+control+meta, plus an extended namespace of
variables, and on and on.  It would be a lot of work.  Not sure who to ask.
Maybe try alt.sys.pdp10, actually, rather than the teco newsgroup; I'm not
sure most people on the teco group would know what ITS Teco was.  The
document you want to ask for is called "TECORD >".  Anyone who doesn't
recognize it under that name doesn't know the right one.


Thu, 25 Oct 2001 03:00:00 GMT  
 Newbie question

Quote:

> The type declarations I have in mind were of a very different
> kind. The system I am working on is based on a complex,
> object-oriented information model. Functions, mostly generic
> functions, operate on instances of well-defined types from the
> model. Passing an instance of a non-conforming type is almost
> certainly an error. These errors are usually easy to fix when looking

Hmm, what in your context are "non-conforming types"?  Conforming to
what?  To the available methods as in:

(defclass foo () ())
(defclass bar () ())

(defgeneric frobme (instance arg))

(defmethod frobme ((instance foo) arg)
  (frobnicate instance arg))

(defun foobar ()
  (let ((x (make-instance 'bar)))
    (frobme x 42) ; Major lossage will occur since x is not of type foo?
    ...))

In that case I don't think issuing warnings is a sound tactic for a
general purpose tool like a compiler, since the compiler cannot know,
and shouldn't assume whether an applicable method for BAR will be
available at run-time.  If in your particular application the set of
known methods is available at some specific point in time, you might
be able to walk your code at that point, and determine whether all
calls of FROBME will have applicable methods (though this will
probably involve a fair amount of analysis I'd imagine).
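
For instance, using only standard CL, such a point-in-time check might look
like this (:DUMMY-ARG just stands in for FROBME's second argument):

(defun frobme-handles-p (class-name)
  "True if FROBME has a method applicable to an instance of CLASS-NAME."
  (let ((probe (make-instance class-name)))  ; assumes instances are cheap to make
    (not (null (compute-applicable-methods
                #'frobme (list probe :dummy-arg))))))

;; (frobme-handles-p 'foo) => T
;; (frobme-handles-p 'bar) => NIL -- FOOBAR's call would signal
;;                                   NO-APPLICABLE-METHOD at run time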

If I've misunderstood your problem, I'm sorry, and would be interested to
see a simplified example of your exact problem, so that maybe something
could be worked out for it.  I'm not arguing against doing more analysis
on programs, to make them more robust.  In fact I think CL is one of the
most powerful languages in that area, allowing the programmer, as Tim
Bradshaw noted, to easily implement a whole host of interesting analysis
tools, to grovel over code.

I'm mostly arguing against the call for these analyses to be included
into general-purpose tools like the compiler or even (though you
specifically didn't call for that) into the language.  The problem I
see here is that most of these analyses are only useful or tractable,
if you make some project and/or programmer-specific assumptions
somewhere.  So if you include them into an implementation, you'll
either have to alienate one part of your community, by putting out
spurious warnings, or another part of your community, by not warning
against clear (to them) errors, or most probably both.  Or you have to
include an infinite number of tuning knobs, to let each user adjust
the analysis framework to suit his specific needs, which will most
likely still not please all of your users, and will carry with it a
non-trivial amount of investment in implementation complexity.

I'd rather see a move in the ANSI standard to expose a suitable
substrate for these kinds of analysis, like e.g. providing a
standardized code-walking facility, and/or providing hooks to get at
the compilers' analysis results, like e.g. call-site type information,
etc.  The details of such a solution could most likely be obtained by
looking at the functionality that is already available in most
implementations, and standardizing on a useful subset/sideset of
this.

This would IMHO be far more useful to a far wider audience, wouldn't
codify some particular programming style as correct practice, and
would have the advantage of being built on a more solid foundation.

Regs, Pierre.

--

  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]



Thu, 25 Oct 2001 03:00:00 GMT  
 