Fortran vs. C for numerical work - expression notation 
 Fortran vs. C for numerical work - expression notation

Quote:

>      y
>     x
  [ ... ]
> All I was saying was that infix
> notation is what mathematicians use for exponentiation

No. Superscript notation is not infix notation. It's not even close.
Jim implied that Fortran is closer to ``standard mathematical notation''
than C is. I don't believe him. x**y and pow(x,y) are both quite far
from superscript notation. And x.gt.y is certainly not standard.

Quote:
> I have NEVER heard a room full of Fortran
> programmers complain about it however because it uses < instead of
> .lt. (which is what you said).

Well, I have.

Wasn't there a study showing that command names could be ridiculously
illogical and people would learn them just as quickly? I really do think
familiar notations are more important than ``standard mathematical
notation.''

---Dan



Sat, 29 May 1993 04:53:55 GMT  
 Fortran vs. C for numerical work - expression notation
A room full of Fortran programmers may indeed groan when someone
mentions that A > B will be the new form of relational expression.
They are _not_ groaning because they think it's a bad idea; they
are groaning because it is a good idea that has been discussed to
death in the Fortran community for nearly two decades.  They groan
simply because it is an old debate that they are tired of
hearing about.

Now, if you tell that same roomful that you plan to remove x**y and
replace it with pow(x,y), they'll throw vegetables at you.  This is
because such a change is _not_ a good idea.  One of the advantages of
x**y or x^y over pow(x,y) is conciseness (which isn't really a word -
the correct word is 'concision' :-).  Another advantage is that, while
different from standard mathematical notation, they are _MUCH_ closer
to it.  If anyone disagrees, fine - Fortran doesn't FORCE you to use
the '**' operator, you can write a pow() function and call it - you
can even make it a statement function so that it gets inlined.  C
can't offer the same privilege to converting Fortran users - there is
no way in C (within the standard anyway) of defining an exponentiation
operator.  People who find this form most natural (most of us) cannot
use it in C.
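
To make the statement-function point concrete, here is a minimal
FORTRAN 77 sketch (the names powdemo and pow are just illustrative):

c       a statement function; most compilers expand it inline at each use
        program powdemo
        real pow, x, y
        pow(x, y) = x**y
        print *, pow(2.0, 10.0), 2.0**10.0
        end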

Yes, C++ allows user defined operators (or rather, it allows users to
overload existing operators, but not to define new ones).  This is a
good idea and _allows_ most of Fortran users' complaints about C
expression notation to be fixed (I say 'allows': there is still a
problem of compatibility if two sets of users pick different operators
to overload, different functions to implement the overloaded
functionality, etc.).  But, if such overloaded operators
are actually implemented as _external_ function calls, this answer
is not satisfactory.  (By the way, Fortran Extended has overloadable
and user definable operators too.)
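
For the curious, here is a sketch of what a user-defined operator
might look like under the Fortran Extended (Fortran 90) draft; the
module name and the .pow. token are invented for illustration:

      module power_operator
      ! hypothetical module and operator names, for illustration only
         interface operator (.pow.)
            module procedure real_pow
         end interface
      contains
         function real_pow(x, y) result (r)
            real, intent(in) :: x, y
            real             :: r
            r = x**y          ! defined in terms of the intrinsic **
         end function real_pow
      end module power_operator

      ! in a program unit that says "use power_operator":
      !     z = x .pow. y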

J. Giles



Sat, 29 May 1993 04:51:28 GMT  
 Fortran vs. C for numerical work - expression notation

Quote:

>For numerical stuff, "design or develop in APL, recode in Fortran" makes
>a lot of sense.

As I said originally, I don't know anything about APL so I will accept
your opinion on APL versus Fortran.  However, I do doubt that developing in
one language and recoding into another is a good idea as a rule.  It might
work for small applications, but often our algorithm development codes
are themselves huge.

John



Sat, 29 May 1993 05:51:49 GMT  
 Fortran vs. C for numerical work - expression notation


Quote:
Poser) writes:

poser> Regarding Piercarlo Grandi's argument that programming notation
poser> should differ as much as possible from mathematical notation,
poser> I am not terribly sympathetic.

I am afraid that you, like Jim Giles, did not understand my argument -- it
was not "since maths and programming are so different the notations
should be different as well", but "since maths and programming are so
different, similarity of notation is irrelevant and possibly even a trap
for the unwary".

I am not advocating pursuing difference of notation as a benefit; I am
saying that it should not be seen as an advantage of Fortran, and if it
is seen as such, this may indicate little awareness of the immense
difference therein. Hey, Mathematica (as somebody remarked) has an even
more maths-like notation than Fortran, but this does not mean that the
semantics of those operations are the same (at least in the numerical
domain -- as to the symbolic one, Mathematica used to have several bugs
:->).

poser> I suspect that this will just make people spend their time
poser> learning the funny new notation, not make them think harder about
poser> how actual digital computation differs from symbolic or ideal
poser> continuous numerical computation.

Oh yes, I can accept that. After all one of my alternatives to Fortran
was C++, because of its abstraction capabilities; one can use them to
create an even more faithful reproduction of maths-like notation, even if
again I think it is pointless.

poser> So, no, I'm not advocating gratuitous differences, just
poser> suggesting: (a) that Fortran and C are not very different in this
poser> respect; (b) that these differences probably don't matter very
poser> much.

Notation is usually not that important, as long as it helps work instead
of hindering it; understanding the issues is. Maybe a notation that does
not resemble traditional notation helps one understand better that the
semantics do not resemble traditional semantics, maybe not.

--

Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg



Sat, 29 May 1993 04:26:45 GMT  
 Fortran vs. C for numerical work - expression notation


Quote:
(John Prentice) writes:


Quote:
john> (Piercarlo Grandi) writes:

pcg> ...one of the tragedies of Fortran, that it traps people that do
pcg> not understand about computers into thinking they do. Anybody who is
pcg> good at maths thinks he/she is by God's gift a good numerical
pcg> analyst too.

pcg> Doesn't Fortran give you real numbers, mathematical formulas, and
pcg> familiar looking syntax? Fortran == maths!

pcg> Frankly, a lot of mathematical research with computers is entirely
pcg> meaningless because of this ridiculous delusion. Me, I am not a
pcg> numerical analyst, but I know enough about computers to understand that
pcg> numerical computing is a minefield of very difficult problems, and that
pcg> the least of them is having a familiar notation that is utterly
pcg> misleading (a+b in Fortran has completely different semantics from, only
pcg> very remotely similar to, the semantics of a+b in maths).

john> Well, I AM a numerical analyst and while not minimizing the
john> difficulty of some numerical techniques, this paragraph is a wild
john> exaggeration.  How about some supporting evidence instead of just
john> opinion?

It's not the difficulty, it's the profound difference, which is
pervasive.  Just the fact that + has very different properties over the
reals and over the floating points has profound and extensive
consequences, not just on the way you code an algorithm, but also on how
you design or choose the algorithm.

The structure of algorithms built around the semantics of + on floating
points very often does not in any way resemble the structure of
algorithms that assume + over the reals.
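
A two-line illustration of how different they are (a minimal sketch,
assuming IEEE-style single precision; 1.0e8 is chosen large enough
that adding 1.0 cannot change it):

c       over the reals x + 1.0 > x for every x; in single precision
c       the small addend is simply absorbed by the large one
        program absorb
        real x
        x = 1.0e8
        if (x + 1.0 .eq. x) print *, 'x + 1.0 is exactly equal to x'
        end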

The (Dekker?) paper that I have mentioned takes a lot of pages to show
you that in numerical analysis the dumbest way to solve "ax^2+bx+c=0" is
to write

   x = (-b +/- sqrt(b**2 - 4*a*c)) / (2*a)

and then code

        subroutine solve2(a, b, c, x1, x2)
c           the naive textbook formula: when 4*a*c is small compared
c           with b**2, one of -b + discr and -b - discr cancels badly
            real a, b, c, x1, x2, discr
            discr = sqrt(b**2 - 4*a*c)
            x1 = (-b + discr)/(2*a)
            x2 = (-b - discr)/(2*a)
        return
        end

and the reason is precisely because + and - have different semantics for
floatings.
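
For contrast, here is a minimal sketch of one common remedy (not
necessarily the construction in the paper cited above; the name
solve2s is invented, and real roots with nonzero a and q are assumed):
compute the root that involves no cancellation first, then recover the
other one from the product of the roots, x1*x2 = c/a.

c       sign(discr, b) returns the magnitude of discr with the sign of
c       b, so b + sign(discr, b) never subtracts nearly equal values
        subroutine solve2s(a, b, c, x1, x2)
            real a, b, c, x1, x2, discr, q
            discr = sqrt(b**2 - 4*a*c)
            q = -0.5*(b + sign(discr, b))
            x1 = q/a
            x2 = c/q
        return
        end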

Surely you don't use the mathematical definition of the inverse of a matrix
to compute the inverse; and surely even Gauss-Jordan may not work too
well, and you have to use power series expansions (which have the
remarkable quality that they invert by successive elevations to a
power, i.e. by multiplication, and thus are stabler and for certain
machines much faster than Gauss-Jordan).
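
As an aside, the kind of multiplication-only inversion alluded to here
can be sketched roughly as follows (a hypothetical routine, assuming
Fortran 90 array syntax; this is the Newton-Schulz iteration
x <- x*(2I - a*x), and it converges only if the caller supplies a
close enough starting guess for x, e.g. transpose(a) suitably scaled):

      subroutine invert_by_mult(a, x, n, nsweeps)
         ! refine an approximate inverse x of a by repeated multiplication
         integer n, nsweeps, i, k
         real a(n,n), x(n,n), r(n,n)
         do k = 1, nsweeps
            r = -matmul(a, x)            ! r = -A*X
            do i = 1, n
               r(i,i) = r(i,i) + 2.0     ! r = 2I - A*X
            end do
            x = matmul(x, r)             ! X <- X*(2I - A*X)
         end do
      end subroutine invert_by_mult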

These are only two examples; solving stiff ODEs is a tricky business,
and the coding techniques used have nothing to do with ODE maths.  Even
in symbolic computer algebra, algorithms are completely different from
those of "maths", even if the reasons for the difference are not the
same as in numerical analysis.  Symbolic integration is certainly not done
by parts; the Risch-Norman symbolic integration algorithm is something
that is about as alien from "maths" as the right way to solve
numerically the 2nd order polynomial equation is.

An exaggeration it is IMNHO not; when you cannot even rely on the familiar
properties of + and -, when it takes a long article to explain how to
solve 2nd order polynomial equations, everything becomes suspect and
paranoia becomes sanity. Floating point is weird, weird. Computers are
weird, weird (for mathematicians, that is).

john> As far as Fortran's relationship to mathematical notation, I don't
john> think this has anything to do with why people fail to comprehend
john> numerical methods.

Well, again based on anecdotal evidence, it misleads the unwary (that is
those that are not coached hard in numerical analysis) into believing
that Fortran == maths. As you say, this is a small problem, some people
even fail to understand the properties of their algorithms, not just of
computer algebra:

john> In fact, the problem people have in misusing numerical methods is
john> not their programming for god's sake, it is their MATHEMATICS.  An
john> example, people in the flow in porous media community routinely
john> apply finite difference techniques to solve PDE's using extremely
john> long and skinny cells (in 2d).  I have seen a lot of 2-dimensional
john> calculations with cells 1 cm tall and 1 km wide.  Yet the
john> finite difference schemes are only accurate to some order of the
john> largest cell dimension (in that community, they are usually only
john> 1st order to boot - ugh!).  That means the kilometer size
john> dominates the error, except in circumstances where the flow is
john> actually one dimensional.  Now, people don't seem to appreciate
john> this fact.

And they probably will fail to appreciate the fact that in all
likelihood they are mixing in the same calculations quantities with five
or six orders of magnitude between them, and this may well cause
problems with their floating point as well. As well.

john> Are you telling me it is because Fortran misled them?

No, even if examples of [censored] [censored] like this are all too
common.  But surely these [censored] also believe that Fortran == maths.
The least of their problems, but probably (OK, I admit handwaving here!)
it contributed.

Certainly the major problem is poor mathematics, but the second major
one is a failure to appreciate that maths and numerical analysis are
quite different disciplines, just like theoretical and experimental
physics, even if they have a common ground. But Fortran, yes, it does help
to maintain the illusion that the common ground is much larger than it
really is.

Hey, however, would you believe that as a practical matter major
computer manufacturers have had to provide floating point compatibility
modes with the IBM/370 to sell their machines? What do you do when your
customer says "My programs print 5.3E-15 on a 370 and 8.4E-15 on your
machine, why does your machine get the result wrong by more than 50%"?

Do you explain to them that the mantissa (if not the order of
magnitude!) of the result printed by their program with which they have
published loads of paper is solely function of the rounding algorithms
of the machine it runs on? Hey, no, you wanna keep the customer, so out
goes base 2 mantissa with rounding, and in comes base 16 mantissa with
truncation like the IBM 370, which gives the right results: IBM of
course does it right.

I have seen worried comments by people heavy in computational physics
that since everybody in that field is running the same codes,
independent verification is no longer common, as wide circulation of
codes also means wide circulation of the bugs and poor numerical aspects
of some of those codes.

pcg> What are the good languages for numerical research then? Sadly,
pcg> Fortran is so preeminent, precisely because it deceives the unwary
pcg> about the immense chasm between maths and numerical computing, that
pcg> I cannot think of any other similar low level language. [ ... but
pcg> maybe scheme or C++ .. ]

john> A bold claim

Which claim? That there are few alternatives to Fortran for implementation
work in numerical analysis? Hard to dispute that, IMNHO.  It is easier
to dispute the rationale I give, because it looks so attractive to the
unwary. OK, let me at least claim that it is a powerful, even if maybe
not dominant, attraction, as demonstrated by a number of posters in
this thread, who argue whether C or Fortran is better suited for
numerical analysis by looking at whether one or the other looks more
like maths notation, something I regard as quite irrelevant myself, and
misleading as well.

john> from a non-numerical analyst.

Well, I call myself a non numerical analyst only because I am not
presumptuous. Surely your hydrologists as per above call themselves
"numerical analysts". Surely the guys above that said that their code
gave the correct results on an IBM 370 called themselves "numerical
analysts". Ah yes.

john> And I suppose if some hydrologist codes the same finite difference
john> scheme I just referred to in C (or Scheme or whatever) instead of
john> Fortran, divine enlightenment will happen and he will see that he
john> is making serious mathematical mistakes?

john> Come on, this argument is ridiculous.

But it is not my argument! You are reducing it to absurdity yourself.
The argument is that a misleadingly familiar notation can trap the
unwary, not that an unfamiliar notation will automagically make them
wary.

Given that a language should not be preferred (or avoided) because of
its familiar notation, things like Scheme can become attractive.

Scheme has excellent performance (a little more work on compilers can
yield MAClisp style ones I think), and it has inbuilt support for things
like infinite precision numbers, rationals, ... that ought to be used
far more in numerical analysis, but are not because Fortran does not
have them. Scheme has also fairly powerful abstraction facilities, so
that for example adding interval arithmetic (another thing that is not
in Fortran, and thus ignored by many) is not that difficult, and it has
also excellent exception handling and library facilities, and so on. It
does not have a familiar-looking syntax, but nobody should care.

john> And if you aren't a numerical analyst, why the sense of outrage?

Because I can sympathize with my numerical analyst friends that would
send to the wall [censored] like your hydrologists, or like the
[censored] I met that complained about the results being wrong because
different from those of an IBM 370. Having shot them, one would put them
down in unmarked graves in unblessed land with a stake in their heart,
because they are surely agents of the Adversary :-).

Even I (and I only did a couple of years of quack numerical analysis
fifteen years ago) can be outraged by the [censored] idea that computers
ought to give the illusion that they can do maths, and that any
mathematician can become an instant numerical analyst (Fortran) or
symbolic algebrist (Mathematica).




Sat, 29 May 1993 05:47:15 GMT  
 Fortran vs. C for numerical work - expression notation

Quote:

>Would someone care to enlighten me as to why he or she thinks that C has
>a difficult syntax and is difficult to learn?

It is not that C's syntax is hard to learn, it is that C is tricky to use.
To convince yourself of this, read Andrew Koenig's book "C Traps and
Pitfalls", and try to come up with an equal number of equally serious
traps and pitfalls for FORTRAN.  Your FORTRAN list will certainly not
be empty, but it will not be as long or serious as the one for C.

You can also read the FAQ list in this newsgroup, and compare it to
the one posted in comp.lang.fortran.  If there is no such posting in
comp.lang.fortran (as I suspect) then try to come up with a similar set
of questions yourself.
--


               Dept. of Computer Science / University of Manitoba
               Winnipeg, Manitoba, Canada  R3T 2N2 / (204) 275-6682



Sat, 29 May 1993 08:40:32 GMT  
 Fortran vs. C for numerical work - expression notation

Quote:

>>>>In C or Pascal you could allocate a structure as a global (static) variable,
>>>                                                    ^^^^^^^^^^^^^^
>>>To allow recursion, the compiler would have to allocate this
>>>as 'auto,' in the caller's stack frame.

><stuff deleted>

>It's very difficult for the compiler to determine
>that there's no recursion going on.  ...  

Ah... it should be noted that PL/I (an often denigrated but in many ways
superior language to both FORTRAN and C) solves this problem quite neatly...
Routines may be declared RECURSIVE in which case automatic variables are
allocated as part of a "frame" along with the appropriate register save
areas.  Any routine not so declared may not be used recursively and only
one copy of local variables is created.  This permits increased optimization
by the compiler and also reduces performance degradation due to reallocation
of local variables.

Jon Rosen



Sat, 29 May 1993 22:53:22 GMT  
 Fortran vs. C for numerical work - expression notation

Quote:

>[...] some of Fortran's (more or less)
>standard operators (exponentiation, normal arithmetic on Complex,
>etc.) require prefix (function call) notation in C.  This is an
>impediment.  You may choose to call it trivial, but it is still an
>impediment.

Yes, it is indeed.  It is sort of strange that a language with 40
operators should make exponentiation a function.  I always think of
this as one of the bad things in C.  It could have had 41 operators
just as easily.

... or 42, or 43, for that matter...  Look, it is fine to accommodate
the preferences of the math-minded people by making operators operators
and functions functions, but it is also hopeless.  There simply are
too many operators in mathematics.  Let's suppose exponentiation must
not be prefix; we'll write it "**".  Now what about factorial? It is
just as common an operation.  And it must be postfix.  You don't want to
create an impediment, do you?  (Don't forget what double factorials mean
in maths. 4!! is not (4!)! = 24!. Do you expect that users will be willing
to unlearn this?)  Next, how about...  And so on and so forth...
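
(For the record, the double-factorial convention referred to above is
n!! = n*(n-2)*(n-4)*..., so 4!! = 8 while (4!)! = 24!; a tiny
illustrative FORTRAN 77 function, with an invented name:)

c       double factorial: dfact(4) = 4*2 = 8, dfact(5) = 5*3*1 = 15
        integer function dfact(n)
            integer n, k
            dfact = 1
            do 10 k = n, 2, -2
                dfact = dfact*k
   10       continue
        return
        end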

If this were a serious issue, languages which allow you to define your
own operators (such as Algol-68) would have more success among number-
processing users.

Quote:
>[...] both languages (but C more than Fortran)
>use standard mathematical symbols for non-standard purposes.  The
>equal sign in both languages is used for assignment, for example.
>Again, this is an impediment.  Again, C suffers from it worse than
>Fortran.

Could you please say why?  I'm completely confused by this statement.
How can one language suffer more than the other by a deviation from
standard mathematics that they both share?

Quote:
>[...] both languages use non-standard symbols for
>standard mathematical concepts.  Fortran uses '.and.' for conjunction,
>while C uses '&', for example (the standard symbol for conjunction is,
>of course, '/\').  I think that C is still worse than Fortran here.

Again, why?  I have seen "&" in quite a few logic books.  (It is another
story, of course, that for some reason "&" is bit-wise conjunction in C,
while truth values are conjoined with "&&", which looks awful.)

Quote:
>To be sure, C uses '<' and '>' for comparison operators, but then
>commits the error of the last paragraph by using these same symbols as
>part of the shift operators.

By the very same logic, Fortran commits an error by using "*" for
multiplication, because it already is a part of the exponentiation operator.
You commit an error if you call a variable "I", because that's part of
the name of the sine function.

Quote:
>These last two points (and the fact that C uses the whole ASCII
>character set in peculiar ways)

1. Is it really the whole ASCII charset?  Remind me of the use of the dollar
sign.  And of the backquote.
2. What is wrong with using the whole ASCII charset?  What is it there for
anyway?
3. What are the unpeculiar ways of using it?  How would you use (put here
the name of your favourite peculiarly-used character) in an intuitive way?

Quote:
> tend to make many C programs look
>like communications line noise - at least to most non-users of C.

It doesn't take too much time for much of this noise to start to make
sense.  It depends on the good will of the programmer.  The language
has not yet been designed that wouldn't allow you to write unreadable code
should you choose to.

Quote:
>This can't be held in C's favor in a discussion about closeness
>to standard forms of notation.

No, since both languages are hopelessly far from standard notation.
You know how introductory books go: "Thou shalt not write `(-1)**K'
as thou dost in maths" etc. etc.

Again, speaking of standard notation (and given the fact that the
ASCII charset is a natural limitation), how about admitting that the
C "?:" operator makes C look much more like maths?  Conditional
expressions (written with a big "{") are an integral part of the usual
mathematical notation.  They are very naturally converted to "?:".
Fortran has nothing similar, which is a serious impediment.

--

MB 1766 / Brandeis University                but having an opinion is an art.
P.O.Box 9110 / Waltham, MA 02254-9110 / USA                    Charles McCabe



Sat, 29 May 1993 13:14:48 GMT  
 Fortran vs. C for numerical work - expression notation

<< reply to Jim Giles concerning the fidelity of mathematical notation in
   Fortran and C largely deleted >>

Quote:

>No, since both languages are hopelessly far from standard notation.
>You know how introductory books go: "Thou shalt not write `(-1)**K'
>as thou dost in maths" etc. etc.

I think it is fairly obvious that neither Fortran nor C maintains any
great fidelity to mathematical notation beyond a very simple one.  I also
don't know that the people who wrote these languages ever suggested
more than that.

John Prentice



Sun, 30 May 1993 00:06:07 GMT  
 Fortran vs. C for numerical work - expression notation

<< long discussion of the pitfalls and dangers of naive numerical
   methods deleted >>

Your points are extremely valid and well taken.  I have absolutely no
quarrel about the fact that scientists very often fail to comprehend
the difficulties and dangers inherent in sometimes even simple numerical
math.  I don't think one can lay the blame on any particular computer language
however.  I have to agree that saying c=a+b probably misleads some
people into thinking you get the same answer on the computer as you would
"analytically", but what is the alternative?  Beyond that, certainly
Fortran can't be held singularly responsible for that.   The syntax is
the same in almost all languages.

Quote:
>john> As far as Fortran's relationship to mathematical notation, I don't
>john> think this has anything to do with why people fail to comprehend
>john> numerical methods.

>Well, again based on anecdotal evidence, it misleads the unwary (that is
>those that are not coached hard in numerical analysis) into believing
>that Fortran == maths. As you say, this is a small problem, some people
>even fail to understand the properties of their algorithms, not just of
>computer algebra:

Okay, I can accept this up to a point.  It is certainly true that finite
precision arithmetic is quite different from infinite precision arithmetic.
This can lead to problems that even the best numerical analysts do not yet
comprehend (a good deal of non-linear dynamics research is tied up with
this precise point!).  However, why lay the blame on Fortran?  Just because the
originators designated the language Fortran, meaning "Formula Translation" ?
That seems a bit strong to me I guess.  How is something like C or Pascal
any better?  While not in any way denying any of your quite valid points
about numerical methods, I don't think language designers can be held
responsible for this situation.

<< after referring to an example I originally gave of poor application
   of numerical methods >>

Quote:

>And they probably will fail to appreciate the fact that in all
>likelihood they are mixing in the same calculations quantities with five
>or six orders of magnitude between them, and this may well cause
>problems with their floating point as well. As well.

>john> Are you telling me it is because Fortran misled them?

>No, even if examples of [censored] [censored] like this are all too
>common.  But surely these [censored] also believe that Fortran == maths.
>The least of their problems, but probably (OK, I admit handwaving here!)
>it contributed.

>Certainly the major problem is poor mathematics, but the second major
>one is a failure to appreciate that maths and numerical analysis are
>quite different disciplines, just like theoretical and experimental
>physics, even if they have a common ground. But Fortran, yes, it does help
>to maintain the illusion that the common ground is much larger than it
>really is.

Okay, maybe so.  It is a sad statement about the quality of scientists
using computers (not necessarily a wrong one, but a sad one).  Still,
why just Fortran?  This is a problem with people not understanding
numerical methods, not a problem with Fortran (or so it seems to me).
Do you REALLY think Fortran promotes this sort of thing or is it just
that it is the most popular language for scientists?  If scientists
start programming in language [..pick one..], I bet you would have
the same problem.


Quote:
>Hey, however, would you believe that as a practical matter major
>computer manufacturers have had to provide floating point compatibility
>modes with the IBM/370 to sell their machines? What do you do when your
>customer says "My programs print 5.3E-15 on a 370 and 8.4E-15 on your
>machine, why does your machine get the result wrong by more than 50%"?

>Do you explain to them that the mantissa (if not the order of
>magnitude!) of the result printed by their program with which they have
>published loads of papers is solely a function of the rounding algorithms
>of the machine it runs on? Hey, no, you wanna keep the customer, so out
>goes base 2 mantissa with rounding, and in comes base 16 mantissa with
>truncation like the IBM 370, which gives the right results: IBM of
>course does it right.

>I have seen worried comments by people heavy in computational physics
>that since everybody in that field is running the same codes,
>independent verification is no longer common, as wide circulation of
>codes also means wide circulation of the bugs and poor numerical aspects
>of some of those codes.

No quarrel at all here, in fact we violently agree!

Quote:
>pcg> What are the good languages for numerical research then? Sadly,
>pcg> Fortran is so preeminent, precisely because it deceives the unwary
>pcg> about the immense chasm between maths and numerical computing, that
>pcg> I cannot think of any other similar low level language. [ ... but
>pcg> maybe scheme or C++ .. ]

Well, we have hashed this out above.  However, I would again suggest
that Fortran is getting the rap just because it is the most common
language used for doing science, not because it is inherently bad.  As
I said earlier, I have no doubt that if all the C enthusiasts out there
managed to convert the scientific world to C tomorrow (or if not C,
you name it), then you would see the same problems.

Quote:

>john> And I suppose if some hydrologist codes the same finite difference
>john> scheme I just referred to in C (or Scheme or whatever) instead of
>john> Fortran, divine enlightenment will happen and he will see that he
>john> is making serious mathematical mistakes?

>john> Come on, this argument is ridiculous.

>But it is not my argument! You are reducing it to absurdity yourself.
>The argument is that a misleadingly familiar notation can trap the
>unwary, not that an unfamiliar notation will automagically make them
>wary.

My apologies if I misrepresented your position.  My choice of wording
was inappropriate, I am sorry.  And yes, I can accept that familiar
notation can trap the unwary, but what about all those people out there
who know what they are doing?  I would hate to give up the convenience of
saying c=a+b just to avoid some idiot failing to understand the
limitations of computer arithmetic.

Quote:

>Given that a language should not be preferred (or avoided) because of
>its familiar notation, things like Scheme can become attractive.

>Scheme has excellent performance (a little more work on compilers can
>yield MAClisp style ones I think), and it has inbuilt support for things
>like infinite precision numbers, rationals, ... that ought to be used
>far more in numerical analysis, but are not because Fortran does not
>have them. Scheme has also fairly powerful abstraction facilities, so
>that for example adding interval arithmetic (another thing that is not
>in Fortran, and thus ignored by many) is not that difficult, and it has
>also excellent exception handling and library facilities, and so on. It
>does not have a familiar-looking syntax, but nobody should care.

I am not familiar with Scheme.  Could you point me at a reference so
I can learn about it?  Thanks.

Quote:

>Even I ... can be outraged by the [censored] idea that computers
>ought to give the illusion that they can do maths, and that any
>mathematician can become an instant numerical analyst (Fortran) or
>symbolic algebrist (Mathematica).

I only wish you were in charge of my funding!  I couldn't agree with
you more and wish the powers that be understood this more.

Piercarlo, I think we are probably violently agreeing throughout this
argument.  I don't think Fortran is specifically to blame and you
apparently do, but so be it.  However, your points about people
underestimating the difficulty of numerical methods are ones that most
numerical analysts would agree with, I think.  In fact, one of the bigger
problems computational physicists (or computational scientists in
general, no matter what their principal discipline) face is the fact
that the field is highly interdisciplinary and requires understanding
computers, mathematics, and physics (or whatever).  That is a tall order
and most people end up compromising on one or more of these fields.
More important, however, is that computational physics is really a
separate discipline, not just physics which happens to be done on a
computer.  Makes you feel a bit like neither fish nor fowl.

Regards,

John Prentice



Sat, 29 May 1993 14:56:16 GMT  
 