Quote:
(John Prentice) writes:
Quote:
john> (Piercarlo Grandi) writes:
pcg> ...one of the tragedies of Fortran, that it traps people who do
pcg> not understand computers into thinking they do. Anybody who is
pcg> good at maths thinks he/she is by God's gift a good numerical
pcg> analyst too.
pcg> Doesn't Fortran give you real numbers, mathematical formulas, and
pcg> familiar looking syntax? Fortran == maths!
pcg> Frankly, a lot of mathematical research with computers is entirely
pcg> meaningless because of this ridiculous delusion. Me, I am not a
pcg> numerical analyst, but I know enough about computers to understand that
pcg> numerical computing is a minefield of very difficult problems, and that
pcg> the least of them is having a familiar notation that is utterly
pcg> misleading (a+b in Fortran has semantics completely different from,
pcg> and only very remotely similar to, the semantics of a+b in maths).
john> Well, I AM a numerical analyst and while not minimizing the
john> difficulty of some numerical techniques, this paragraph is a wild
john> exaggeration. How about some supporting evidence instead of just
john> opinion?
It's not the difficulty, it's the profound difference, which is
pervasive. Just the fact that + has very different properties over the
reals and over the floating point numbers has profound and extensive
consequences, not just on how you code an algorithm, but also on how
you design or choose the algorithm.
The structure of algorithms built around the semantics of + on floating
point numbers very often does not in any way resemble the structure of
algorithms that assume + over the reals.
The (Dekker?) paper that I have mentioned takes a lot of pages to show
you that in numerical analysis the dumbest way to solve "ax^2+bx+c=0" is
to write
        -b ± sqrt(b^2 - 4ac)
    x = --------------------
                2a
and then code
      subroutine solve2(a, b, c, x1, x2)
      discr = sqrt(b**2 - 4*a*c)
      x1 = (-b + discr) / (2*a)
      x2 = (-b - discr) / (2*a)
      return
      end
and the reason is precisely because + and - have different semantics for
floating point numbers.
Surely you don't use the mathematical definition of the inverse of a
matrix to compute the inverse; and surely even Gauss-Jordan may not work
too well, and you may have to use power series expansions (which have
the remarkable quality that they invert by successive elevations to a
power, i.e. by multiplication, and thus are more stable and on certain
machines much faster than Gauss-Jordan).
These are only two examples; solving stiff ODEs is a tricky business,
and the coding techniques used have nothing to do with the mathematics
of ODEs. Even in symbolic computer algebra, algorithms are completely
different from those of "maths", even if the reasons for the difference
are not the same as in numerical analysis. Symbolic integration is
certainly not done by parts; the Risch-Norman symbolic integration
algorithm is about as alien from "maths" as the right way to solve the
2nd order polynomial equation numerically is.
An exaggeration it is not, IMNHO; when you cannot even rely on the
familiar properties of + and -, when it takes a long article to explain
how to solve 2nd order polynomial equations, everything becomes suspect,
and paranoia becomes sanity. Floating point is weird, weird. Computers
are weird, weird (for mathematicians, that is).
john> As far as Fortran's relationship to mathematical notation, I don't
john> think this has anything to do with why people fail to comprehend
john> numerical methods.
Well, again based on anecdotal evidence, it misleads the unwary (that
is, those who have not been coached hard in numerical analysis) into
believing that Fortran == maths. As you say, this is the smaller
problem; some people even fail to understand the properties of their
algorithms, not just of the computer arithmetic:
john> In fact, the problem people have in misusing numerical methods is
john> not their programming for god's sake, it is their MATHEMATICS. An
john> example: people in the flow in porous media community routinely
john> apply finite difference techniques to solve PDE's using extremely
john> long and skinny cells (in 2d). I have seen a lot of 2 dimensional
john> calculations with cells 1 cm tall and 1 km wide. Yet the
john> finite difference schemes are only accurate to some order of the
john> largest cell dimension (in that community, they are usually only
john> 1st order to boot - ugh!). That means the kilometer size
john> dominates the error, except in circumstances where the flow is
john> actually one dimensional. Now, people don't seem to appreciate
john> this fact.
And they probably also fail to appreciate that in all likelihood they
are mixing, in the same calculations, quantities with five or six orders
of magnitude between them, and this may well cause problems with their
floating point arithmetic too.
john> Are you telling me it is because Fortran misled them?
No, even if examples of [censored] [censored] like this are all too
common. But surely these [censored] also believe that Fortran == maths.
It is the least of their problems, but probably (OK, I admit to
handwaving here!) it contributed.
Certainly the major problem is poor mathematics, but the second major
one is a failure to appreciate that maths and numerical analysis are
quite different disciplines, just like theoretical and experimental
physics, even if they have a common ground. But yes, Fortran does help
to maintain the illusion that the common ground is much larger than it
really is.
Hey, however, would you believe that as a practical matter major
computer manufacturers have had to provide floating point compatibility
modes with the IBM/370 to sell their machines? What do you do when your
customer says "My programs print 5.3E15 on a 370 and 8.4E15 on your
machine; why does your machine get the result wrong by more than 50%"?
Do you explain to them that the mantissa (if not the order of
magnitude!) of the result printed by the program with which they have
published loads of papers is solely a function of the rounding
algorithms of the machine it runs on? Hey, no, you wanna keep the
customer, so out goes base 2 mantissa with rounding, and in comes base
16 mantissa with truncation like the IBM 370, which gives the "right"
results: IBM of course does it right.
I have seen worried comments by people heavily involved in computational
physics that since everybody in that field is running the same codes,
independent verification is no longer common, as wide circulation of
codes also means wide circulation of the bugs and poor numerical aspects
of some of those codes.
pcg> What are the good languages for numerical research then? Sadly,
pcg> Fortran is so preeminent, precisely because it deceives the unwary
pcg> about the immense chasm between maths and numerical computing, that
pcg> I cannot think of any other similar low level language. [ ... but
pcg> maybe Scheme or C++ ... ]
john> A bold claim
Which claim? That there are few alternatives to Fortran for
implementation work in numerical analysis? Hard to dispute that, IMNHO.
It is easier to dispute the rationale I give, because it looks so
attractive to the unwary. OK, let me at least claim that it is a
powerful, even if maybe not dominant, attraction, as demonstrated by a
number of posters in this thread, who argue whether C or Fortran is
better suited for numerical analysis by looking at whether one or the
other looks more like maths notation, something I regard as quite
irrelevant myself, and misleading as well.
john> from a nonnumerical analyst.
Well, I call myself a non-numerical analyst only because I am not
presumptuous. Surely your hydrologists above call themselves "numerical
analysts". Surely the guys above who said that their code gave the
correct results on an IBM 370 called themselves "numerical analysts".
Ah yes.
john> And I suppose if some hydrologist codes the same finite difference
john> scheme I just referred to in C (or Scheme or whatever) instead of
john> Fortran, divine enlightenment will happen and he will see that he
john> is making serious mathematical mistakes?
john> Come on, this argument is ridiculous.
But it is not my argument! You are reducing it to absurdity yourself.
The argument is that a misleadingly familiar notation can trap the
unwary, not that an unfamiliar notation will automagically make them
wary.
Given that a language should not be preferred (or avoided) because of
its familiar notation, things like Scheme can become attractive.
Scheme has excellent performance (a little more work on compilers can
yield MacLisp-style ones, I think), and it has built-in support for
things like arbitrary precision numbers, rationals, ... that ought to be
used far more in numerical analysis, but are not, because Fortran does
not have them. Scheme also has fairly powerful abstraction facilities,
so that, for example, adding interval arithmetic (another thing that is
not in Fortran, and thus ignored by many) is not that difficult, and it
also has excellent exception handling and library facilities, and so on.
It does not have a familiar-looking syntax, but nobody should care.
john> And if you aren't a numerical analyst, why the sense of outrage?
Because I can sympathize with my numerical analyst friends who would
send to the wall [censored] like your hydrologists, or like the
[censored] I met who complained about the results being wrong because
they differed from those of an IBM 370. Having shot them, one would put
them down in unmarked graves in unblessed ground with a stake through
their hearts, because they are surely agents of the Adversary :).
Even I (and I only did a couple of years of quack numerical analysis
fifteen years ago) can be outraged by the [censored] idea that computers
ought to give the illusion that they can do maths, and that any
mathematician can become an
...