> [Don O'Donnell]
> > ...
> > If by "floating-point-decimal" you mean internal representation of
> > both the mantissa and the exponent by some sort of binary-coded
> > decimal, let me point out that it is not necessary to go to full
> > decimal in order to achieve the `7.35` == "7.35" goal.

> > By letting the exponent represent powers of 10 rather than 2, the
> > base (or mantissa) and exponent can both be represented in binary
> > as an int or long. Thus, the internal calculations can use the
> > fast integer hardware instructions, rather than decimal arithmetic,
> > which would have to be done by software.
> > ...

> See
> <ftp://ftp.python.org/pub/python/contrib-09-Dec-1999/DataStructures/Fi...
> nt.py>
> for a fixed-point version of that. For "typical" commercial use, the
> problem is that converting between base-2 (internal) and base-10 (string) is
> very expensive relative to the one or two arithmetic operations typically
> performed on each input. For example, hook up to a database with a million
> sales records, and report on the sum. The database probably delivers the
> sale amounts as strings, like "25017.18". Even adding them into the total
> *as* strings would be cheaper than converting them to a binary format first.
>
> In addition, once you're outside the range of a native platform integer,
> you're doing multiprecision operations by hand anyway. Python's longs use
> "digits" in base 2**15 internally, but *could* use, say, base 10**4 instead.
> The code would be almost the same, except for using different low-level
> tricks to find twodigits/base and twodigits%base efficiently.
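To make the base-10**4 point concrete, here is a toy sketch (my own illustration, not CPython's actual long-integer code) of multiprecision addition with little-endian "digits" stored in base 10**4:

```python
# Toy multiprecision arithmetic with "digits" in base 10**4,
# least significant digit first.  Illustration only.
BASE = 10 ** 4

def to_digits(n):
    """Split a non-negative int into base-10**4 digits, least significant first."""
    digits = []
    while True:
        n, d = divmod(n, BASE)   # the twodigits/base and twodigits%base step
        digits.append(d)
        if n == 0:
            return digits

def add_digits(a, b):
    """Add two little-endian digit lists, propagating carries in base 10**4."""
    if len(a) < len(b):
        a, b = b, a
    out, carry = [], 0
    for i, d in enumerate(a):
        carry += d + (b[i] if i < len(b) else 0)
        carry, low = divmod(carry, BASE)
        out.append(low)
    if carry:
        out.append(carry)
    return out

def from_digits(digits):
    """Recombine little-endian base-10**4 digits into an int."""
    n = 0
    for d in reversed(digits):
        n = n * BASE + d
    return n

print(from_digits(add_digits(to_digits(99999999), to_digits(1))))  # 100000000
```

One nice property of a power-of-10 base: each digit maps directly to four decimal characters, so converting to a decimal string needs no division at all, which is exactly the conversion cost being discussed.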

Thanks for your comments, Tim. I agree that in most commercial
environments, the input, moving, formatting, and output of numbers
exceed the amount of actual calculation done with them.
Hence the early business-oriented computers did all their calculations
in some form of decimal format, to save the costly dec-bin-dec
conversion steps. The "revolutionary" IBM 360 of the '60s was the
first to have both floating-point hardware for scientific processing
and fixed-point "packed-decimal" hardware for business use.

With today's fast processors, however, the radix-conversion steps are
hardly noticeable. I've done a lot of COBOL (yuck) programming on
Univac/Unisys mainframes, which, lacking hardware decimal instructions,
did all their fixed-point processing in binary. I never encountered
any performance problems. Quite the contrary: they were incredibly
fast machines for commercial work.

I took a look at your FixedPoint.py module. Very nice work, thanks.
As it turns out, I had already downloaded ver 0.0.3 but had forgotten
about it. Thanks for the update. I notice that you are also using a
long integer internally to store the base number and an int to store
a power of 10, as I suggested in my original posting.

I was thinking more along the lines of a floating-point type rather
than your fixed-point. For example, with your FixedPoint class:

    5.12 * 4.22 == 21.61    (rounded to two decimal places)

With my dfloat class:

    5.12 * 4.22 == 21.6064  (result is exact)
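A minimal sketch of the idea (hypothetical names, not a finished implementation): pair an int mantissa with a base-10 exponent, so decimal products stay exact:

```python
# Sketch of a decimal-float: value = mantissa * 10**exp.
# Illustration only; real code would need the full operator set,
# normalization, comparison, rounding on request, etc.
class DFloat:
    def __init__(self, mantissa, exp):
        # e.g. DFloat(512, -2) represents 5.12 exactly
        self.mantissa = mantissa
        self.exp = exp

    def __mul__(self, other):
        # One fast integer multiply; the exponents simply add.
        return DFloat(self.mantissa * other.mantissa, self.exp + other.exp)

    def __str__(self):
        if self.exp >= 0:
            return str(self.mantissa * 10 ** self.exp)
        sign = "-" if self.mantissa < 0 else ""
        s = str(abs(self.mantissa)).rjust(-self.exp + 1, "0")
        return "%s%s.%s" % (sign, s[:self.exp], s[self.exp:])

print(DFloat(512, -2) * DFloat(422, -2))  # 21.6064, exactly
```

Multiplication is just an integer multiply plus an exponent add, so the fast integer hardware does all the work.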

I think there is a real need for both types of numbers,
especially in view of the fact that with Python's built-in types,
what we get today is:

    >>> 5.12 * 4.22
    21.606400000000001

Do you think it would be possible or desirable to extend/generalize
your FixedPoint class to handle the "floating decimal" as an option?
Or would it be better to make it a separate class or subclass?
Any opinions?

BTW, I also believe there is a place for a rational type for
representing numbers like 1/3 which can't be represented exactly
by a finite number of decimal or binary digits.
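For what it's worth, Python's standard library later grew exactly such a type: fractions.Fraction (added in 2.6) keeps an exact numerator/denominator pair:

```python
# fractions.Fraction stores an exact numerator/denominator pair,
# so values like 1/3 never lose precision.
from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third)              # 1  (exact, no rounding error)
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```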

still-learning-from-reading-your-code-ly y'rs

-Don