64-bit chips, 32-bit compatibility? 
 64-bit chips, 32-bit compatibility?

Quote:

>For some reason people always post things like this to comp.arch.  This
>is a C question, not a computer architecture question.  The types
>possessed by the underlying computer architecture need not correspond to
>the types a given C compiler will support (particularly for "char" on
>processors with no byte addressing).  Followups to comp.lang.c.

I disagree.  Compiler issues are informed by hardware issues.

Also, for whatever reason, the moderators in comp.compilers did not
choose to post my request for info.  Maybe they think it's an
architectural issue.  ;-)

Quote:
>>What I'm wondering, is how many mainstream CPU's out there are
>>"strictly 64-bit," and have no 32-bit emulation of any kind.
>You'd need to be more specific.  No 32-bit pointer types?  Or what?

As I said, I was talking about integer types.  "char8 int16 int32 int64."

Quote:
>Also, this is a compiler question, not an architecture question.
>Several processors have no byte-manipulation instructions but have
>C compilers that have an 8-bit char type.

But if the machine architecture is incapable of supporting a direct
8-bit access, then you know something about the possible efficiency of
any compiler.  If you know that the architecture will segfault on
accesses not aligned to 64-bit boundaries, then you also know something
about all possible compilers.
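
To make the cost concrete, here is a rough C sketch (my own illustration,
not any particular compiler's output) of what a single byte store has to
turn into when the hardware offers only aligned 64-bit loads and stores:
a read-modify-write with shifts and masks.

    #include <stddef.h>
    #include <stdint.h>

    /* Store one byte into a word-addressed memory image (little-endian
     * byte numbering assumed for the example).                          */
    void store_byte(uint64_t *mem, size_t byte_index, uint8_t value)
    {
        uint64_t *word  = mem + (byte_index / 8);    /* containing 64-bit word */
        unsigned  shift = (byte_index % 8) * 8;      /* byte position in word  */
        uint64_t  mask  = (uint64_t)0xff << shift;

        *word = (*word & ~mask) | ((uint64_t)value << shift);
    }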

Quote:
>>int     32 bits
>Whoops.  In 16-bit code for x86 processors, and for some 68k compilers
>for embedded processors, int is 16 bits.

As I said, I'm only interested in supporting 32-bit and 64-bit code.
My "laundry list" was for 32-bit compilers.

Cheers,
Brandon

--
Brandon J. Van Every            |    Computer Graphics     |  The sun attempts
                                |                          |  to be white,

http://www.*-*-*.com/ ~vanevery  |  HTML CGI   Perl TCL/Tk  |  daytime.



Mon, 09 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:

>This is an architecture and a language issue.

(I have more or less removed the architecture issue from this, so
probably should have removed comp.arch from the newsgroups, but...)

Quote:
>[T]he Alpha ... has instructions for 32-bit [integer operations].
>If it seems a bit ambiguous at the assembly level, look at the C interface.
>For integer types, `int' is usually the `natural word size' of the machine,
>and it's common that `short' is a half-word, and/or `long' is a double-word.
>If you look at the Alpha, this scheme fits perfectly: it's a 32-bit machine.

This is true in the same sense that a 68020 using 16-bit `int's is
a 16-bit machine.  That is, the internal busses in the CPU are all
twice as wide, so while the Alpha is perfectly capable of doing
both 32 and 64 bit operations, it is really a 64-bit architecture.
It just gives you an easy way to toss out the upper half of the result.

Quote:
>This can be changed with compiler switches that specify what size `int' is,
>but that won't solve the problem without creating more compatibility problems.
>For example, when you call "fscanf(fp, "%d", &value);", we need to be clear on
>what size `value' is meant to be, or you get memory corruption problems.
>There are lots of other, similar problems to worry about, like file formats;
>we need to agree on whether various integer values are 32-bit or 64-bit.

This is true.  But that does not mean there should necessarily be
a compiler option to make `int' 32 or 64 bits.  If the C compiler
flatly refuses to do anything but 32-bit `int's, this problem
vanishes.
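
A tiny illustration of the hazard (hypothetical LP64 sizes, and sscanf
rather than fscanf so it stands alone):

    #include <stdio.h>

    int main(void)
    {
        /* Imagine an LP64 compiler: int is 32 bits, long is 64 bits. */
        long value = -1;

        /* Wrong: "%d" stores an int.  Only 32 of value's 64 bits are
         * written, so the result depends on byte order and on whatever
         * was there before; with worse mismatches the damage spills
         * into neighbouring objects.                                   */
        sscanf("12345", "%d", (int *)&value);
        printf("after \"%%d\":  %ld\n", value);

        /* Right: the conversion matches the object's real type. */
        sscanf("12345", "%ld", &value);
        printf("after \"%%ld\": %ld\n", value);

        return 0;
    }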

Quote:
>In fact, the situation with Alpha is very similar to the current situation
>with Macintosh and 16-bit Windows programs, with the sizes doubled of course.

Not really, because no one ever promised that `int's were big enough
to hold pointers, and there are no 32-bit pointer formats on the Alpha.
I know nothing about 16-bit Mac applications, but 16-bit Windows
applications use native 16-bit pointers.

Quote:
>I think that we can see the problem here: we're straining C's type system,
>because the usual `char', `short', `int', and `long' just can't cover enough.
>I'd very much like to see some sort of de-facto standard way to specify the
>number of bits for an integral type, with both `exact' and `at least' sizes.
>This would make it easier to write portable programs, without a lot of crap,
>and it would return old types like `int' to what they're supposed to be.

The `SBEIR proposal', recently discussed to death in comp.std.c,
does exactly this.  I, however, believe you are overestimating the
usefulness of `at least' and `exact' bit-sizes, and the extent to
which they will improve the situation.  (Time may tell, since the
SBEIR extensions are, they say, being put into a version of GCC,
and will thus be available to anyone via the GNU Copyleft.)
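
For what it's worth, a sketch of the flavor of thing being asked for (my
own strawman typedefs, not the SBEIR syntax itself): names that say
"exactly N bits" or "at least N bits", mapped onto whatever the compiler's
char/short/int/long happen to be on a given machine.

    /* Mapping assumed here is LP64-style; another platform's header
     * would map the same names differently.                           */
    typedef signed char      int_exact8;    /* exactly  8 bits here */
    typedef short            int_exact16;   /* exactly 16 bits      */
    typedef int              int_exact32;   /* exactly 32 bits      */
    typedef long             int_least64;   /* at least 64 bits     */

    /* Portable code then says what it means: */
    int_exact32 file_offset_lo;   /* must be exactly 32 bits for a file format */
    int_least64 byte_count;       /* needs at least 64 bits of range           */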
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc

  `... if we wish to count lines of code, we should not regard them as
   ``lines produced'' but as ``lines spent.'' '  --Edsger Dijkstra


Fri, 13 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

] In fact, the situation with Alpha is very similar to the current situation
] with Macintosh and 16-bit Windows programs, with the sizes doubled of course.
] You will run into the same types of problems around 2 GB on the Alpha as we
] currently run into at 32 KB with Mac and Windows programs.  Of course these

Why should that be?
If you run WinNT or OpenVMS on an Alpha, then yes: those OSes are 32-bit OSes
that only use the lower 32 bits of pointers, but on OSF/1 I can't see why 2GB
should be any kind of limit (except of course when there has to be some
compatibility with other Unix versions, e.g. if you NFS-mount a 32-bit file
system).

        Stefan



Fri, 13 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:
>Then look at the Alpha instruction set.  It has instructions for 32-bit add,
>subtract, and multiply.  It also has instructions for 32-bit load and store,
>even though it *doesn't* have single instructions for byte load and store!
>That seems just a bit peculiar for a `pure' 64-bit, byte-addressed machine.

Then look at the PDP-11 instruction set.  It supports both byte and
word operands.  That seems just a bit peculiar for a `pure' 16-bit,
byte-addressed machine.

Quote:
>If it seems a bit ambiguous at the assembly level, look at the C interface.
>For integer types, `int' is usually the `natural word size' of the machine,

Hear, hear.  How many C compilers for the 8080 or Z80 using an 8-bit int
have you seen/used?

Quote:
>and it's common that `short' is a half-word, and/or `long' is a double-word.

As far as I know, the concept of "word" is meaningless on Alpha.  "half-word"
and "double-word" even more so.  The C compilers for Alpha simply assign a
_standard_ type to every integer size supported by the implementation.
This is precisely what the C compilers for other platforms should have done,
as well, instead of stupidly assigning int and long to the same size
and inventing the gratuitous long long type for 64-bit integers.
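
A quick way to see that mapping (the sizes in the comments are the ones
an Alpha compiler is described as using; other platforms will print
something different):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("char  : %u bits\n", (unsigned)(sizeof(char)  * CHAR_BIT)); /*  8 */
        printf("short : %u bits\n", (unsigned)(sizeof(short) * CHAR_BIT)); /* 16 */
        printf("int   : %u bits\n", (unsigned)(sizeof(int)   * CHAR_BIT)); /* 32 */
        printf("long  : %u bits\n", (unsigned)(sizeof(long)  * CHAR_BIT)); /* 64 */
        return 0;
    }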

Quote:
>If you look at the Alpha, this scheme fits perfectly: it's a 32-bit machine.

Except that it happens to have 64-bit general purpose registers and
supports a 64-bit virtual address space (and much more than 4 GB of real
memory), features which don't fit at all to a 32-bit machine (at least,
not to any that I know of).

Dan
--
Dan Pop
CERN, CN Division

Mail:  CERN - PPE, Bat. 31 R-004, CH-1211 Geneve 23, Switzerland



Fri, 13 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:
>] In fact, the situation with Alpha is very similar to the current situation
>] with Macintosh and 16-bit Windows programs, with the sizes doubled of course.
>] You will run into the same types of problems around 2 GB on the Alpha as we
>] currently run into at 32 KB with Mac and Windows programs.  Of course these

32KB? I see, 16 bit (int). However, 68K pointers have always been 32 bits.
Nowadays, (int) usually is too. Still, embedded 68K systems sometimes try to
get by with halfwords. And really small ones store halfword pointers, knowing
that they don't have more than, e.g., 24K of memory.

Quote:
>For example, when you call "fscanf(fp, "%d", &value);", we need to be clear on
>what size `value' is meant to be, or you get memory corruption problems.

Typically, compiler and runtime library have to agree on (sizeof(int)), etc.
Often you're given a choice of library. And that's why we have "%ld" and (long).
--
        Alex Colvin




Sat, 14 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

] Well, in Ada you can write
]
]     type int is range -2**31 .. 2**31 - 1;
]     type uint is mod 2**32;

Funny. In Lisp you just have integers and you don't need to care about any kind
of arbitrary size limit.

        Stefan



Sat, 14 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:

>>] In fact, the situation with Alpha is very similar to the current situation
>>] with Macintosh and 16-bit Windows programs, with the sizes doubled of course.
>>] You will run into the same types of problems around 2 GB on the Alpha as we
>>] currently run into at 32 KB with Mac and Windows programs.  Of course these
>32KB? I see, 16 bit (int). However, 68K pointers have always been 32 bits.
>Nowadays, (int) usually is too. Still, embedded 68K systems sometimes try to
>get by with halfwords. And really small ones store halfword pointers, knowing
>that they don't have more than, e.g., 24K of memory.

The register+displacement addressing mode in the original 68K only
offered a 16 bit signed offset.  This is the origin of the Mac 32K
data limitation (in what contexts I'm not sure; I've never done any
Mac programming).  That limit should be gone now that the 68020 and
later CPUs allow 32 bit offsets.

And as for 16 bit addresses... Well, the 68K has a 16 bit absolute
addressing mode.  With this you can address the first 32K and the last
32K of memory.  On the original 68000 it's much faster than the 32 bit
absolute mode.  And when you load a 16 bit word into an address
register it's automatically sign-extended to 32 bits.
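
In C terms (my illustration, not 68K assembler), that sign extension
looks like this:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t addr_low  = 0x1000;            /* within the first 32K       */
        int16_t addr_high = (int16_t)0x9000;   /* negative as a 16-bit value */

        /* Loading a 16-bit word into an address register sign-extends it;
         * the casts below model that on a 32-bit address space.            */
        uint32_t ea1 = (uint32_t)(int32_t)addr_low;    /* 0x00001000           */
        uint32_t ea2 = (uint32_t)(int32_t)addr_high;   /* 0xffff9000: last 32K */

        printf("%08lx %08lx\n", (unsigned long)ea1, (unsigned long)ea2);
        return 0;
    }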

--
Richard Krehbiel, Kastle Systems, Arlington VA USA



Sun, 15 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?
Quote:

>This is an architecture and a language issue.

>The Alpha people often point out that Alpha is a `pure' 64-bit architecture,
>instead of a 32-bit architecture extended to 64-bit, like say SPARC and MIPS.
>This sounds to me like a good story; certainly if i got to design a brand new
>architecture for the next 30 years, i'd make it clear that it's really 64-bit,

              ^^^^^^^^^^^^^^^^^^^^^^
Well, I would be careful about such statements.  It was MUCH LESS than 30
years ago that people thought that 16 bits was a lot and 32 bits was more than
we would need this century.

Quote:
>although it would also have good support for smaller and larger data types.

--
.---------------------------------------------------------------------.

|    Unix Software Development                  919-248-6133          |
|    Data General Corp., RTP NC                                       |
`---------------------------------------------------------------------'


Sun, 15 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:
>>This sounds to me like a good story; certainly if i got to design a brand new
>>architecture for the next 30 years, i'd make it clear that it's really 64-bit,
>              ^^^^^^^^^^^^^^^^^^^^^^
>Well, I would be careful about such statements.  It was MUCH LESS than 30
>years ago that people thought that 16 bits was a lot and 32 bits was more than
>we would need this century.

And they weren't too far off.  32 bits is sufficient for nearly all
applications and probably will remain so for a few more releases of
MicroSoft Windows. ;-)
--
===========================================================================
Jeffrey Glen Jackson  _|_Satan jeered, "You're dead meat Jesus, I'm gonna

x5483   Bungee till    | Jesus said, "Go ahead, make my day."
         you drop! ~~~~~~~~~ -- Carman, "The Champion"


Sun, 15 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

|> And they weren't too far off.  32-bits is sufficient for nearly all
|> applications and probably will remain so for a few more releases of
|> MicroSoft Windows. ;-)

Since many people also believe this, but with no humor ...

1) *Most* applications *are* perfectly happy in 32-bit, i.e.,
the statement above is true ... but easily misleading, because:

2) However, if you need 100 apps, and 99 of them are 32-happy, but
*1* of them needs 64-bit, then you
may want 64-bit, especially if that app is important to you.
In the recent 64-bit initiative, the ISVs who came to
participate in the press conference were {Oracle, Sybase, Informix} ...

3) As noted in an earlier posting, memory-mapped files, and files with
holes, burn virtual memory pretty fast.  Some friends of ours created a
single 377GB file ... but I think they just wrote & read it, rather than
mmapped it :-)  Some of this may seem crazy, but it is even applicable for
fairly modest-sized machines, i.e.:
        a) You have time-series data collection, which may well leave
           holes in the file.
        b) You have very large design data bases.
In either case, a system may want to map the whole file into memory, and
only page-in those pages actually referenced in response to an
interactive session.  On a large machine, you might get a large part of the
data into memory.  On a small desktop, you could run the same code, but
might only be able to get reasonable performance on a smaller part of the
data.  For instance, suppose your database is "Boeing 777": on *BIG*
machine, you get to see the whole thing at once ... on your desktop, maybe
you get one tray table :-)  The nice thing is that the programming can be
relatively straightforward and responds to the simple addition of more
physical memory.
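
A minimal sketch of that map-the-whole-thing approach (POSIX mmap, most
error handling omitted; the file name is just a command-line argument
here): map everything, and let the VM system page in only what the
session actually touches.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* Needs size_t (and the virtual address space) big enough to hold
         * the whole file -- the point of wanting 64-bit addressing.        */
        char *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch one byte in the middle: only that page gets read in. */
        printf("byte at middle: %d\n", base[st.st_size / 2]);

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }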

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>

DDD:    415-390-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311



Mon, 16 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

|> >architecture for the next 30 years, i'd make it clear that it's really 64-bit,
|>               ^^^^^^^^^^^^^^^^^^^^^^
|> Well, I would be careful about such statements.  It was MUCH LESS than 30
|> years ago that people thought that 16 bits was a lot and 32 bits was more than
|> we would need this century.

Let's try a more detailed analysis.  I've posted something like this before
but forgot to save it, so let me try again:
PHYSICAL ADDRESSING
1) For many years DRAM gets 4X larger every 3 years, or 2 bits/3 years.

2) Thus, a CPU family intended to address higher-end systems will typically
add 2 more bits of *physical address* every 3 years, and will typically
be sized to fit the *largest* machine you intend to be built.
Given the normal progress, and usual need to cover 2-3 generations of
DRAMs, depending on timing of products, you need at least a 4:1 range,
and maybe a 16:1 range for extreme cases.

For example, 36-bit physical addresses support 16GB memories ...
and there already have been shipped single-rack microprocessor boxes with
16GB using just 16Mb DRAMs; there are of course, more in the 4GB-8GB range.
Of course, a 32-bit physical-addressing machine can get around this with extra
external-mapping registers ... assuming one can ignore the moaning from
the kernel programmers :-)

Of course, some kinds of system designs burn physical memory addresses
faster than you'd expect.   In particular, suppose you build a system
with multiple memory systems.  A minimal/natural approach is to use the
high-order bits of an address to select the memory to be accessed.
The simplest design ends up leaving addressing space for the *largest*
individual memory, so that smaller memories leave addressing holes.
I.e., suppose each memory might range from 64MB to 1GB (30 bits).
With a 36-bit address, one can conveniently use 2**6 or 64 CPUs
together.   Of course, if individual memories might go to 4GB (factor of 4),
then you are now down to 16 CPUs.

Note: of the next crop of chips, the physical address sizes seem split
between 36 and 40 bits...
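
As a sketch of that "high bits select the memory" layout, with the
example numbers above (36-bit physical addresses, up to 1GB = 30 bits per
memory, so 6 bits of memory number):

    #include <stdint.h>
    #include <stdio.h>

    #define MEM_BITS  30u   /* largest individual memory: 1GB */

    int main(void)
    {
        uint64_t phys = 0x5C0012345ull;   /* some 36-bit physical address */

        unsigned memory_id = (unsigned)(phys >> MEM_BITS);      /* which memory  */
        uint64_t local     = phys & ((1ull << MEM_BITS) - 1);   /* offset inside */

        printf("memory %u, offset 0x%llx\n", memory_id, (unsigned long long)local);
        return 0;
    }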

VIRTUAL ADDRESSING
1) Is visible to user-level code, unlike physical addresses, which usually
are not.

2)  I've claimed that one rule of thumb says that there are practical
programs whose virtual memory use is 4X the physical memory size.  (I.e.,
having seen some like this ... and seeing that if they start paging much more,
they get slower than people can stand :-).  Hennessy claims this is
a drastic under-estimate, i.e., that as memory-mapped files get more use,
and files-with-holes, one can consume virtual memory much faster ...
and I agree, but it is hard to estimate this effect.

FORECASTS for 64->128-bit transition:
1) If memory density continues to increase at the same rate,
and virtual memory pressure retains the 4:1 ratio, and we think we've just
added 32 more bits, to be consumed 2 bits/3 years, we get:
        3*32/2 = 48 years
and I arbitrarily pick 1995 as a year when:
        a) There was noticeable pressure from some customers for 4GB+
        physical memories, and a few people buying more, in "vanilla"
        systems.
        b) One can expect 4 vendors to be shipping 64-bit chips,
        i.e., not a complete oddity.
Hence, one estimate would be 1995+48 = 2043 to be in leading edge of
64->128-bit transition, based on *physical memory* pressure.
That is: the pressure comes from the wish to conveniently address the
memory that one might actually buy.
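
Spelled out as code, the same straight-line arithmetic (same assumptions
as above, nothing more):

    #include <stdio.h>

    int main(void)
    {
        int bits_added       = 64 - 32;              /* what the 64-bit transition bought */
        int years_to_consume = bits_added * 3 / 2;   /* 2 bits per 3 years -> 48 years    */

        printf("64->128 pressure around %d\n", 1995 + years_to_consume);  /* 2043 */
        return 0;
    }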

Of course, the multiple-memory system issue above pulls that in a few
years ... however, one can deal with that in the time-hallowed way of adding
extra mapping information, without bothering user-level code with changes.

2) On the other hand, if files-with-holes and file-mapping of large
files get much heavier use, the *virtual memory* pressure grows much faster
than a constant factor above the physical size ... and my best guess yields
around 2020.  Note that "minor" implementation issues like die space,
routing, and gate delays, especially of 128-bit adders & shifters are
non-trivial, so people aren't going to rush out and build 128-bitters for
fun, just as people matched timing dates of their 64-bitters to their
expected markets.  Of course, if somebody does an operating system that
uses 128-bit addressing to address every byte in the world uniquely, *and* this
takes over the world, it might be an impetus for 128-bitters :-)

Of course, all sorts of surprises could occur to disrupt these scenarios.
Note, however, that the common assumption that because it took N years to
go from 32->64, it will take N years to go from 64->128 ... is
incompatible with the normal memory progress, i.e., 64->128 should take 2N.

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>

DDD:    415-390-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311



Mon, 16 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:

>There is an implicit assumption in the discussion so far that the integer
>register word size is the virtual address size.

I don't think so but you're right to point out the distinction.

Quote:
>If we're trying to look
>20 years ahead I don't think this is a necessary assumption.

It's a false assumption right now.

Quote:
>I don't
>think it is necessary now, but the popularity of low level languages and
>a lingering overreaction to Intel's 8086 series limit acceptance
>of other addressing models.

8086 is one of the classic cases where pointers and ints can be different
(i.e. 16 bit register/ints vs. segment+offset for 'far' pointers).
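
For the record, the far-pointer arithmetic that a 16-bit int cannot hold
(example values only): the effective address is built from two 16-bit
pieces, linear = segment * 16 + offset, a 20-bit result.

    #include <stdio.h>

    int main(void)
    {
        unsigned short segment = 0xB800;   /* example segment value         */
        unsigned short offset  = 0x0010;   /* example offset within segment */

        unsigned long linear = (unsigned long)segment * 16UL + offset;

        printf("linear address = 0x%05lX\n", linear);   /* 0xB8010 */
        return 0;
    }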




Tue, 17 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?


|> >1) For many years DRAM gets 4X larger every 3 years, or 2 bits/3 years.
|>
|> This is of course dangerous territory because, to date, everyone that has
|> predicted some slowing in the rate of growth has had criticism heaped upon
Yes, and sorry, I should have been a little clearer.  I've mostly used the same
analysis to deter people from thinking that we'd be fighting the 64->128-bit
conversion in the year 2000: I've seen people claiming exactly that this year,
and I also got beat up in 1991 because R4Ks were only 64-bit, and why didn't we
go straight to 128-bit? :-)  Anyway, to be clear: this was a straight-line
analysis, and indeed there are plenty of big IFs given the likely process
gyrations out a few years.

|> I think the 64-bit transition is one that brings little to no real value. It will
|> increase code and data size with very little return in terms of performance.
It doesn't increase code size much, and programs whose data is dominated by
floating-point arrays effectively don't increase by any noticeable amount.

|> I have been measuring program size for the last 6 years on every system
|> I encounter, looking for the largest executables. This includes a broad
|> range of systems from mainframes to PC's. What I have found is that the
|> largest executable on what seems to be a typical system does not have
|> more than a dozen megabytes of executable code. I have seen no
|> appreciable change in this size over the last few years.
This is interesting ... but irrelevant, as it is data size, not code size,
that is pushing on the edge.

...
|> there are some limited number of applications and users that have needs
|> that transcend the 32-bit addressing boundary, I just do not think this is the
|> general case.
I don't think we disagree, in some sense, i.e., after all, on this planet,
the "general case" is that person has 0MB of disk space, because the "average
person" doesn't own a computer, and the "most common" case for memory is
probably 640K or 1MB for DOS-based PC's.   On the other hand, we
disagree on the timing of the 64-bit need, but I suspect this is mostly
from talking to different kinds of customers...

Anyway, I agree: the people you talk to may not need 64-bit.
The people I talk to have been complaining for years about 2GB file and
8GB filesystem limits...

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>

DDD:    415-390-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311



Tue, 17 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?

Quote:

>|> >architecture for the next 30 years, i'd make it clear that it's really 64-bit,
>|>               ^^^^^^^^^^^^^^^^^^^^^^
>|> Well, I would be careful about such statements.  It was MUCH LESS than 30
>|> years ago that people thought that 16 bits was a lot and 32 bits was more than
>|> we would need this century.

>Let's try a more detailed analysis, I've posted something like this before,
>but forgot to save it, so let me try again:
>PHYSICAL ADDRESSING
>1) For many years DRAM gets 4X larger every 3 years, or 2 bits/3 years.

This is of course dangerous territory because, to date, everyone that has
predicted some slowing in the rate of growth has had criticism heaped upon
their head for being stupid and short-sighted, but I think at some point we
must admit that the kind of growth we have been seeing cannot continue
indefinitely. We will eventually run out of atoms. Physics will increasingly limit
our ability to get smaller and faster. For the last 30 years the impediments
to smaller and faster storage were technological; now they are starting to
become physical.

Quote:
>2) Thus, a CPU family intended to address higher-end systems will typically
>add 2 more bits of *physical address* every 3 years, and will typically
>be sized to fit the *largest* machine you intend to be built.
>Given the normal progress, and usual need to cover 2-3 generations of
>DRAMs, depending on timing of products, you need at least a 4:1 range,
>and maybe a 16:1 range for extreme cases.

It is unclear that our ability to realize very large systems is growing at the
same rate as smaller systems. So while average program size may be growing
at the stated rate, this does not mean that the programs that are actually
pushing on the address space limits are growing at the same rate.

Quote:
>VIRTUAL ADDRESSING
>1) Is visible to user-level code, unlike physical addresses, which usually
>are not.

>2)  I've claimed that one rule of thumb says that there are practical
>programs whose virtual memory use is 4X the physical memory size.  (I.e.,
>having seen some like this ... and seeing that if they start paging much more,
>they get slower than peopel can standa :-).  Hennessy claims this is
>a drastic under-estimate, i.e., that as memory-mapped files get more use,
>and files-with-holes, one can consume virtual memory much faster ...
>and I agree, but it is hard to estiamte this effect.

I used to be a BIG advocate of mapping file systems into virtual memory
systems (based on the unified access mechanism argument). This
no longer seems so attractive, particularly with so much remote file system
access. Virtual mapping of files, while sometimes interesting, is certainly not
necessary (the right API could easily provide what looks like random file
access without file mapping), and if supporting file mapping is going to push
word width past 32 bits, maybe we should delay it for a while.
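
For example, something like the following (a sketch using POSIX pread;
the helper name is made up) gives random access into a huge file without
mapping it, provided only that the file offset type is 64 bits.

    #define _FILE_OFFSET_BITS 64   /* so off_t is 64 bits even on a 32-bit system */

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Read `len' bytes at byte position `where' of file `fd'. */
    static ssize_t read_at(int fd, off_t where, void *buf, size_t len)
    {
        return pread(fd, buf, len, where);
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[64];
        /* Returns 0 or -1 if the file is smaller than 3GB; the point is
         * that the offset never has to fit in a pointer or an int.       */
        ssize_t n = read_at(fd, (off_t)3 * 1024 * 1024 * 1024, buf, sizeof buf);
        printf("read %ld bytes from the 3GB mark\n", (long)n);
        close(fd);
        return 0;
    }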

Quote:
>FORECASTS for 64->128-bit transition:

64-bit addresses allow approximately 4 GBytes of address range per human
inhabitant of the planet. It is unclear how many people the planet can
contain, but even allowing for a fourfold increase in population, it
seems questionable why a single machine's 128-bit address space would need to
be able to hold an average of 64,000,000,000,000,000,000,000,000,000 bytes
per planet inhabitant. This is more than enough space to keep track of
everything a person sees and hears with millisecond resolution for their
entire life.
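
The back-of-the-envelope numbers, recomputed (floating point, so only the
order of magnitude matters; 5.3e9 is a rough 1995 world population):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double people = 5.3e9;   /* rough 1995 world population */

        printf("64-bit  space per person: %.3g bytes\n", pow(2.0,  64) / people); /* ~3.5e9 (a few GB) */
        printf("128-bit space per person: %.3g bytes\n", pow(2.0, 128) / people); /* ~6.4e28           */
        return 0;
    }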

I think the 64-bit transition is one that brings little to no real value. It will
increase code and data size with very little return in terms of performance.
I have been measuring program size for the last 6 years on every system
I encounter, looking for the largest executables. This includes a broad
range of systems from mainframes to PC's. What I have found is that the
largest executable on what seems to be a typical system does not have
more than a dozen megabytes of executable code. I have seen no
appreciable change in this size over the last few years.

I have also measured maximum file sizes, ignoring the files generated by
databases that like to keep everything in just one file. While average file
size is relatively small, maximum file size has, in my experience, broken
into the 100 megabyte range. Discussions with some friends in the text
retrieval business have indicated that individual GB-size files are sometimes
useful, but that larger logical data stores are more, or just as, easily
handled by some arbitrary partitioning into sub GB files. I do not doubt that
there are some limited number of applications and users that have needs
that transcend the 32-bit addressing boundary, I just do not think this is the
general case.

Patterson and Hennessy indicate that when address bits become tight,
someone always comes up with some hare-brained segmentation solution
to extend a limited address range when a simple increase in address
size is what is really necessary. Not wanting to pass up an opportunity to
be hare-brained, I decided to invent a 32-bit segmentation scheme to solve
the address range problem. What I concluded was that 32 bits provides
significantly more in the way of segmentation possibilities than perhaps people
give it credit for, and I think a 32-bit segmentation scheme could satisfy
addressing needs for the next 15-20 years.
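
For illustration only, a generic sketch of segment-extended addressing
(one common shape of such schemes, not necessarily the one described
above): a 32-bit segment register supplies the upper address bits and a
32-bit offset the lower ones, so 32-bit registers cover a much larger
space at the cost of explicit segment handling.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: segment selects a 4GB region, offset addresses within it. */
    static uint64_t effective_address(uint32_t segment, uint32_t offset)
    {
        return ((uint64_t)segment << 32) | offset;
    }

    int main(void)
    {
        printf("%llx\n", (unsigned long long)effective_address(0x12, 0xDEADBEEF));
        return 0;
    }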

david mayhew, ibm, (919)-254-4351



Tue, 17 Mar 1998 03:00:00 GMT  
 64-bit chips, 32-bit compatibility?


Quote:
>Well, I would be careful about such statements.  It was MUCH LESS than 30
>years ago that people thought that 16 bits was a lot and 32 bits was more than
>we would need this century.

20 years ago people didn't imagine applications that could need megabytes
or gigabytes of memory (and larger amounts of backing store). The evidence
at the moment is that software expands to fill the hardware available (and
then demands more). We can predict much better now how things develop than
we could then because we know what to base that prediction on (i.e. the
improvements in hardware) and we have much more experience now about what to
expect in that area. That's not to say we can predict for certain what will
happen, just that we have much better grounds for prediction.




Wed, 18 Mar 1998 03:00:00 GMT  
 
