little-endian and big-endian in bit-order 
 little-endian and big-endian in bit-order

Hi...

I've read the explanation of little-endian and big-endian at whatis.com,
but I have a question about part of that explanation:

 "Note that within both big-endian and little-endian byte orders, the
  bits within each byte are big-endian. That is, there is no attempt
  to be big- or little-endian about the entire bit stream represented
  by a given number of stored bytes. For example, whether hexadecimal
  4F is put in storage first or last with other bytes in a given
  address range, the bit order within the byte will be:

    01001111

  It is possible to be big-endian or little-endian about the bit order,
  but CPUs and programs are almost always designed for a big-endian bit
  order."

If I use a bit-field with a 16-bit compiler on my computer, the MSB (most
significant bit) is at bit number 31, and the LSB (least significant
bit) is at bit number 0. Is this 'little-endian' in bit order?

Also, we could think of 'little-endian' byte order in two ways:

  1) the addresses increase left to right and the bytes are reversed
  2) the addresses increase right to left

If I store 1 in a 16-bit word, we generally picture it as below in the
first way:

   Address:    A+1          A+2
               00000001   00000000

Although the order in which I write the bits is big-endian, the bit
number where the 1 is stored is zero, and the number of the leftmost
bit is 7 in the above, isn't it?

So I think that the explanation I quoted is wrong.

Please tell me whether I am right or not.

Thanks for your time.



Mon, 30 Sep 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order


Quote:
> Hi...

> I've read the explanation of little-endian and big-endian at whatis.com,
> but I have a question about part of that explanation:

>  "Note that within both big-endian and little-endian byte orders, the
>   bits within each byte are big-endian. That is, there is no attempt
>   to be big- or little-endian about the entire bit stream represented
>   by a given number of stored bytes. For example, whether hexadecimal
>   4F is put in storage first or last with other bytes in a given
>   address range, the bit order within the byte will be:

>     01001111

>   It is possible to be big-endian or little-endian about the bit order,
>   but CPUs and programs are almost always designed for a big-endian bit
>   order."

> If I use a bit-field with a 16-bit compiler on my computer, the MSB (most
> significant bit) is at bit number 31, and the LSB (least significant
> bit) is at bit number 0. Is this 'little-endian' in bit order?

> Also, we could think of 'little-endian' byte order in two ways:

>   1) the addresses increase left to right and the bytes are reversed
>   2) the addresses increase right to left

> If I store 1 in a 16-bit word, we generally picture it as below in the
> first way:

>    Address:    A+1          A+2
>                00000001   00000000

> Although the order in which I write the bits is big-endian, the bit
> number where the 1 is stored is zero, and the number of the leftmost
> bit is 7 in the above, isn't it?

> So I think that the explanation I quoted is wrong.

> Please tell me whether I am right or not.

The bits are written out for us humans with the most-significant bit on
the left.  That makes the written form big-endian. However, a bit's number
is not its address, just its name. To the (byte-oriented)
computer all the bits in a byte are at the same address. Individual bits in
a byte are accessed by applying a mask to the byte. It doesn't really make
sense to say which end they are at, as they are processed in parallel. This
contrasts with longer constructs, which are read sequentially in the order
least-significant-byte, next-least-significant-byte,...most-significant byte
on a little-endian machine.
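
For example, the mask access just described, as a minimal sketch (the
choice of bit 3 is arbitrary):

        #include <stdio.h>
        int main(void) {
            unsigned char byte = 0x4F;   /* 01001111 */
            int bit3 = (byte >> 3) & 1;  /* the mask names a bit; it does not address one */
            printf("bit 3 of 0x4F is %d\n", bit3);
            return 0;
        }

The whole byte is fetched either way; the shift-and-mask happens
afterwards, in a register.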
So I think you are right in that, from a human's perspective, bits
are numbered in increasing order of their significance when considered as
part of a larger object, and written out in decreasing order on a diagram.
But to the computer it simply makes no sense to say which endianness they
have.
\/\/\/*= Martin


Mon, 30 Sep 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

>I've read the explanation of little-endian and big-endian at whatis.com,
>but I have a question about part of that explanation:

> "Note that within both big-endian and little-endian byte orders, the
>  bits within each byte are big-endian. That is, there is no attempt
>  to be big- or little-endian about the entire bit stream represented
>  by a given number of stored bytes. For example, whether hexadecimal
>  4F is put in storage first or last with other bytes in a given
>  address range, the bit order within the byte will be:

>    01001111

>  It is possible to be big-endian or little-endian about the bit order,
>  but CPUs and programs are almost always designed for a big-endian bit
>  order."

As Wolfgang Pauli supposedly once said, "This isn't right.  This isn't
even wrong."

The problem here is that endianness is an artifact of relationships.
Endianness arises when you slice up a whole -- as long as you deal
with the thing (whatever it is) as a whole, endianness does not
even occur.  CPUs and programs that deal with bytes as "atoms", as
it were, have *no* bit-within-byte endianness.  There *is* no
(visible) bit order because you cannot get just one bit.  You *have*
to get at least eight bits every time you look.  All eight bits
come at you at once; there is no "first" bit and no "last" bit;
they all arrive simultaneously.

Endianness gets in the way when you start to take things apart and
deal with them one piece at a time.  Once you do that, you introduce
some sort of ordering.  You now have "the first part" and "the last
part" and maybe one or more "middle parts", and you have to decide
which part (the high-order bits, the low-order bits, some middle
set of bits, or whatnot) you want to take first.

Quote:
>If I use a bit-field with a 16-bit compiler on my computer, the MSB (most
>significant bit) is at bit number 31, and the LSB (least significant
>bit) is at bit number 0. Is this 'little-endian' in bit order?

No, but it is not big-endian bit order either.  You said "I paste
the number 0 on the least significant bit and I paste the number
31 on the most significant bit", but you have not told us "I take
the bit I numbered 12 first and think about it" (which would be a
rather odd order, using a 1-bit group) or "I take bits 0 through
7 first, as a unit, and think about them" (which would be classic
little-endian order, using 8-bit bytes).

Suppose your CPU has a way to take a 16-bit quantity apart into two
8-bit quantities.  Suppose further that when that CPU takes the 16-bit
value 0x5678 apart into two 8-bit values (0x56 and 0x78 respectively),
it hands you 0x56 first.  This is big-endian order.  If it hands you
0x78 first, this is little-endian order.

"First" and "last" here is defined loosely as "increasing addresses",
but it applies to file I/O as well.  If you take the value 0x5678
and write it out to a "FILE *" that has been opened with "wb" or "w+b"
or similar, and you do it as:

        putc((val >> 8) & 0xff, thefile);   /* high-order byte first */
        putc(val & 0xff, thefile);          /* low-order byte second */

you have just imposed a big-endian ordering on your file.  You put
the more-significant 8-bit quantity first.  If you take a 32-bit
quantity (in an unsigned long) and write it out as:

        putc((val >> 8) & 0xff, thefile);
        putc(val & 0xff, thefile);
        putc((val >> 24) & 0xff, thefile);
        putc(val & 0xff, thefile);

you have just imposed a mixed-endian order (reverse PDP-endian, in
fact).

You can *always* make up your own bit and byte orders whenever you
split a big value into a series of smaller values, and you can
always read any bit and byte order whenever you assemble a series
of small values into a bigger value.  If your machine will do its
own assembling-and-breaking-up, it will have its own orders, and
you can choose to use that, or impose your own instead.

The important thing to keep in mind is that "endianness" arises
only from converting between "elements taken one at a time" and "a
whole".  If you only ever deal with "things as a whole", endianness
never arises.  It is quite difficult to do that, though, because
the world is full of other people who have taken big (16-or-more-bit)
things and broken them down into a series of smaller (8-or-so-bit)
things, and you have to find out how *they* did that, and un-do
and re-do it, when programming in C.
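
A minimal sketch of that un-doing (the name is mine, EOF handling omitted
for brevity): read back a 32-bit value that someone else wrote low-order
byte first, portably, on a machine of *any* native byte order:

        #include <stdio.h>

        /* The byte order here is imposed by the code, not by the
         * machine, so the result is the same everywhere. */
        unsigned long read_le32(FILE *fp) {
            unsigned long val;
            val  = (unsigned long)getc(fp);
            val |= (unsigned long)getc(fp) << 8;
            val |= (unsigned long)getc(fp) << 16;
            val |= (unsigned long)getc(fp) << 24;
            return val;
        }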
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc




Mon, 30 Sep 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

>    putc((val >> 8) & 0xff, thefile);
>    putc(val & 0xff, thefile);
>    putc((val >> 24) & 0xff, thefile);
>    putc(val & 0xff, thefile);

>you have just imposed a mixed-endian order (reverse PDP-endian, in
>fact).

Of course, that last putc() call should be:

        putc((val >> 16) & 0xff, thefile);
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc




Mon, 30 Sep 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

>Hi...

>I've read the explanation of little-endian and big-endian at whatis.com,
>but I have a question about part of that explanation:

> "Note that within both big-endian and little-endian byte orders, the
>  bits within each byte are big-endian.

This is nonsense.  The bits within a byte can't be described as
either big-endian or little-endian.  They're all within one byte
and thus share the same address in memory.

John
--
John Winters.  Wallingford, Oxon, England.

The Linux Emporium - the source for Linux CDs in the UK
See http://www.linuxemporium.co.uk/



Fri, 04 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:
> This is nonsense.  The bits within a byte can't be described as
> either big-endian or little-endian.  They're all within one byte
> and thus share the same address in memory.

Don't be silly. Take a byte value, say 148 (0x94), which in binary is
10010100. This, of course, is big-endian. In little-endian it would be
written in reverse: 00101001.

Think of memory as a 2D array. Vertically you have the addresses,
horizontally you have the bits.

Dean



Sat, 05 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

> > This is nonsense.  The bits within a byte can't be described as
> > either big-endian or little-endian.  They're all within one byte
> > and thus share the same address in memory.

> Don't be silly. Take a byte value, say 148 (0x94), which in binary is
> 10010100. This, of course, is big-endian. In little-endian it would be
> written in reverse: 00101001.

> Think of memory as a 2D array. Vertically you have the addresses,
> horizontally you have the bits.

> Dean

Stop, what you state is absolute nonsense.
Endianness concerns bytes only, not bits.
(There used to be some machines that had a reverse bit order,
but I don't remember which ones. Perhaps somebody else does?)

Endianness is a problem with types that are bigger than 1 byte.
E.g. the value 1 in an int, which is often 32 bits (4 bytes), can be stored as:

address value
0xe000  0x01
0xe001  0x00
0xe002  0x00
0xe003  0x00

which is little-endian, because the lowest-order byte is stored at the
lowest address. On a big-endian machine this looks like:

0xe000  0x00
0xe001  0x00
0xe002  0x00
0xe003  0x01

(the addresses are just an example!)
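
A minimal sketch of testing this at runtime (it assumes nothing beyond
sizeof(int) > 1; examining an object through an unsigned char * is legal C):

        #include <stdio.h>
        int main(void) {
            int one = 1;
            /* look at the byte stored at the lowest address of 'one' */
            if (*(unsigned char *)&one == 1)
                printf("little-endian: low-order byte at lowest address\n");
            else
                printf("not little-endian (big-endian, most likely)\n");
            return 0;
        }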

        Z



Sat, 05 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

>... Take a byte value, say 148 (0x94), which in binary is
>10010100. This, of course, is big-endian. In little-endian it would be
>written in reverse: 00101001.

But what if it is written simultaneously, all in the same spot?

Answer these questions for yourself:

 - Why did you write it down with a "1" at the front the first time
   (aka "big endian" bit order)?

 - Who took the value 0x94 and split it up into a sequence of bits?
   Did the *computer* do it, or was it just *you*?

 - Thus, who is responsible for creating the bit order?

Consider this as well: serial port data is transmitted little-endian.
Even if you plug a Mac into a PC, the serial port data comes across
the wire in little-endian order.  Does that mean the Mac uses
little-endian bit order?  How about the PC?
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc




Sat, 05 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

 > Stop, what you state is absolute nonsense.
 > Endianness concerns bytes only, not bits.

Wait a minute.  There are machines that have bit-addressable memory.
--
dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn  amsterdam, nederland; http://www.cwi.nl/~dik/



Sat, 05 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:
> > This is nonsense.  The bits within a byte can't be described as
> > either big-endian or little-endian.  They're all within one byte
> > and thus share the same address in memory.

There exist, and have existed, processors which have addressable bits.
While one could argue that nobody-who-counts has ever referred to the
endianness of bits, that would demote the question from the interesting
(of semantics) to the pointless (of dialect).

Quote:

> Don't be silly. Take a byte value, say 148 (0x94), which in binary is
> 10010100. This, of course, is big-endian. In little-endian it would be
> written in reverse: 00101001.

That's an artifact of transcription. If you wrote them vertically,
would they be middle-endian? In C, if you printf fields larger than
bytes, the endianness of your bytes disappears in the same transcription
process; nobody would propose that as evidence that all processors
exhibit the same endianness.

Review: endianness refers to the significance of bytes within larger
integral values. Big-endian means that the byte sharing the address of
the word containing it (or, for word-addressed machines, the byte with
the lowest offset of the set of bytes contained by the word) is the
most significant byte (as was intended by God).

To extend the concept to bits, one must identify what one means by
the "address of a bit". Only a few, likely all obsolete, instruction
sets have a reasonable meaning ascribable to the term (bit shifts are
defined by significance, so attempts to propose shifting-as-address
would be circular).

Marginally on topic:
C's bitfields might be said to have endianness, though it's somewhat
of a stretch to correlate the order of their declaration with the
concept of their "address". If one accepts the stretch, one can then
talk about the "bit field endianness" of a processor+compiler+release,
though I don't believe it's possible to observe it without invoking
the daemons of undefined behavior.

/* This example presumes sizeof(int)==4, CHAR_BIT==8 */
#include <stdio.h>
int main(int argc, char *argv[]) {
 struct bits {
  unsigned int byte0bit0: 1;
  /* ... etc: one 1-bit field per bit, in declaration order ... */
  unsigned int byte3bit7: 1;
 };

 union bitmap { struct bits thebits; unsigned int theint; } bitmap;

 bitmap.theint = 0xc8c4c2c1;
 printf("%d%d%d ... etc ...\n", bitmap.thebits.byte0bit0 /* ... etc ... */);

 return 0;
}

10000011010000110010001100010011

On this system, it would appear (to the extent that it's possible to
draw solid conclusions from undefined programs) that byte and bitfield
ordering are both little-endian.
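
For anyone who wants a version that compiles as-is, a minimal sketch needs
only two fields to probe the first byte (same caveats: this reveals the
compiler's choice, and reading a union member other than the one last
stored is not strictly portable):

        #include <stdio.h>
        int main(void) {
            union {
                struct { unsigned int first:1, rest:7; } p;
                unsigned char c;
            } u;
            u.c = 0x01;  /* set only the least significant bit of byte 0 */
            printf("first-declared field is the %s\n",
                   u.p.first ? "LSB (\"little-endian\" bitfields)"
                             : "MSB (\"big-endian\" bitfields)");
            return 0;
        }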

Off topic:
IIRC, for unfortunate historical reasons, the 68020 was bit dual-endian.
Bitfield instructions spanned words, so were necessarily big-endian to
match byte addressing. Bit numbers within words were little-endian,
possibly to match shift counts. I could be wrong, it's been a long time.

Martin


Always code as if the person who ends up maintaining your code will be a
violent psychopath who knows where you live.



Sat, 05 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:
>  - Why did you write it down with a "1" at the front the first time
>    (aka "big endian" bit order)?

I was taught English, which is read from left to right. The order in which
we write numbers is purely convention, due to historical reasons. I'm aware
there are languages which read from right to left or vertically.

Quote:

>  - Who took the value 0x94 and split it up into a sequence of bits?
>    Did the *computer* do it, or was it just *you*?

In this case, I did it; I've written too many base conversion routines to
write another just to work this out. I could have used the MS calculator
instead.

Quote:
>  - Thus, who is responsible for creating the bit order?

I did, first based on the mathematical convention we currently employ, which
was invented by some mathematician long ago, and second based on the point
being demonstrated.

Quote:
> Consider this as well: serial port data is transmitted little-endian.
> Even if you plug a Mac into a PC, the serial port data comes across
> the wire in little-endian order.  Does that mean the Mac uses
> little-endian bit order?  How about the PC?

The person or group of people who designed the specification for the serial
port decided that the bits should be transmitted starting with the least
significant bit through to the most significant bit. Comparing this with the
convention that we (English/Western cultures) write numbers starting with
the most significant digit through to the least significant, it would appear
that they got it wrong. They didn't get it wrong, they made an arbitrary
decision.
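
Roughly what that decision looks like in code (transmit_bit() here is a
made-up stand-in for the UART hardware):

        extern void transmit_bit(int bit);   /* hypothetical hardware hook */

        /* send one byte the RS-232 way: least significant bit first */
        void send_byte(unsigned char c) {
            int i;
            for (i = 0; i < 8; i++)
                transmit_bit((c >> i) & 1);  /* bit 0 goes on the wire first */
        }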

The point I was trying to make is that the bits do have an order. Whilst
most of the time you can assume a byte is a black box and not think about
it, there are times when blindly assuming is stupid.

I seem to remember some space project being shafted by programmers at mission
control using newtons and the spacecraft programmers using pounds per square
inch. This could easily happen with the serial port example: imagine if Apple
had designed the Mac serial port to follow the convention of written numbers
(most significant digit first, through to the least significant), i.e.
big-endian.

Dean



Sun, 06 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Those were supposed to be "thought questions" (not really "rhetorical" but
rather "exercises in viewing things from different standpoints"):

Quote:
>>  - Who took the value 0x94 and split it up into a sequence of bits?
>>    Did the *computer* do it, or was it just *you*?
>>  - Thus, who is responsible for creating the bit order?


Quote:

>In this case, I did ...
>first based on the mathematical convention we currently employ, which
>was invented by some mathematician long ago, and second based on the point
>being demonstrated.

Thus, the bits have an order because you assigned an order to them.

If the computer (or any other entity) assigns an order, the bits will
then have that order:

Quote:
>> Consider this as well: serial port data is transmitted little-endian.
>> Even if you plug a Mac into a PC, the serial port data comes across
>> the wire in little-endian order.  Does that mean the Mac uses
>> little-endian bit order?  How about the PC?
>The person or group of people who designed the specification for the serial
>port decided that the bits should be transmitted starting with the least
>significant bit through to the most significant bit.

(Incidentally, this convention is used on other serial media, including
Ethernet.)

Quote:
>... They didn't get it wrong, they made an arbitrary decision.

Exactly so.  My point -- and I do not think anyone is specifically
disagreeing with it, but I am not sure everyone out in usenet-land
is getting it, hence all these postings :-) -- is that these things
*are* arbitrary decisions, made by some person or group or computer
or IEEE standard or whatever.  The decision is made in order to
facilitate information exchange.  First I hand you a 1, then a 0
-- is that 10-base-2 (value 2), or is that 01-base-2 (1)?  We must
cooperate if we are both to construct the same meaning from these
things that have been split apart.  The original value and "true
meaning" of the sequence of separate pieces of information --
whether that is 1, or 2, or if we have enough bits and definitions
around, maybe a value like 59071513312 or "her sweater was a light
teal color" -- arises from a shared interpretation of those separate
pieces of information.
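
To make the 1-then-0 example concrete (a tiny sketch, nothing more):

        #include <stdio.h>
        int main(void) {
            int first = 1, second = 0;               /* the bits, in arrival order */
            int msb_first = (first << 1) | second;   /* "10" base 2: value 2 */
            int lsb_first = (second << 1) | first;   /* "01" base 2: value 1 */
            printf("%d %d\n", msb_first, lsb_first); /* prints: 2 1 */
            return 0;
        }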

Quote:
>The point I was trying to make is that the bits do have an order.

They may or may not have some "inherent" order imposed by the
computer, but what that order is, is irrelevant, if the computer
always hands the value to you as a pre-interpreted value.  In other
words, if I cannot tell whether "bit 0" was stored in "the leftmost
DRAM chip" or "the rightmost DRAM chip", I cannot tell what the
bit order *is*.  Fortunately, I do not *have* to care -- the order
of bits in a C byte is a black box.  Given an unsigned char "uc"
whose value is 0x94, if I ask for "uc & 1", the computer will fetch
*all* the bits from the appropriate DRAM chips.  I have no idea
whether the one that I will see after the "& 1" came from the left
or right side of the computer -- it does not matter.

Quote:
>Whilst most of the time you can assume a byte is a black box and
>not think about it, there are times when blindly assuming is stupid.

In C, at least, you have no choice.  The smallest entity available[%]
is "the byte" (which in C means "the unsigned char", even if that
has more than 8 bits in it).  Since this is the smallest piece you
can "break off", as it were, it is the smallest unit of order you
can discover about your system, using C code.  It is the indivisible
atom upon which you can impose your own order, or allow someone
else to impose an order.
-----
[%] This is not strictly true.  C does have bitfields as structure
    members.  As it happens, if you use a union of unsigned chars
    and bitfields, you can uncover a bits-within-bytes order.  Where
    people tend to go wrong is that, having done this, they think
    that this is the *machine*'s bit order.  In fact, it is often
    merely the *compiler*'s bit order.  Two different C compilers for
    the same machine will sometimes choose different bit-within-byte
    orders!  Suppose the C compiler turns:

        struct { int :1, bitfieldA:4, bitfieldB:9; } x;

        x.bitfieldA = <value>;

    and

        use(x.bitfieldB);

    into runtime code sequences that resemble:

        temp = *(int *)&x;
        temp &= ~mask_for_field_A;  /* e.g., 0x1e or 0x78000000 */
        temp |= value << shift_for_field_A; /* e.g., 1 or 27 */
        *(int *)&x = temp;

    and:

        temp = *(int *)&x;
        temp &= mask_for_field_B;   /* e.g., 0x3fe0 or 0x07fc0000 */
        temp >>= shift_for_field_B;       /* e.g., 5 or 18 */
        use(temp);
        /*
         * Or more simply:
         *      use((*(int *)&x >> shift) & (mask >> shift));
         * If bitfields are to be signed, the above needs more work
         * -- generally, a left shift, followed by a sign-extending
         * right shift, to "smear" the top bit of field B into all
         * the sign bits.
         */

    Thus it is the compiler, not the computer, that is imposing an
    order.  The values in the comments show two ways a 32-bit-"int"
    compiler could impose a bit order on that "int" -- which the
    compiler here is treating as "atomic", i.e., not breaking it
    up -- making up the structure "x".

    If the computer in question has some sort of native bitfield
    addressing, and if the compiler uses that, then -- and only
    then -- is the computer imposing an order on the bits.  In that
    case the compiler can simply use the computer's order, rather
    than inventing its own.

    All of this footnote applies to bytes within words as well.
    A C compiler could choose (probably foolishly) to ignore some
    computer's already-existing byte-at-a-time instructions, instead
    synthesizing everything with full 32- or 64-bit load and store
    instructions, and shift-and-mask operations to extract the data
    within those 32-or-more-bit objects.  In that case, the computer
    itself might have one "native" byte order, and the runtime C
    system might have another one entirely.

    The original DEC Alpha had only 32- and 64-bit load and store
    instructions, hence C compilers for the Alpha *had* to create
    their own byte order (or use 32-bit bytes, which they chose
    not to do).  Fortunately the same group who built the Alpha
    also built the compilers, and when they built later Alphas that
    had byte load/store instructions, they used the same
    bytes-within-words order.

    The original MIPS R2000 had 8, 16, and 32 bit load and store
    *instructions*, but the CPU always did full 32-bit wide bus
    transactions.  Computers built with these CPUs had to take
    8-bit-byte oriented I/O devices and "wire the address lines
    funny", so that each 8-bit-byte appeared at a separate 32-bit-word
    address.  This made writing device drivers "interesting" (not
    really *difficult*, but it overturns some assumptions people
    like to make).
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc




Sun, 06 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

Quote:

> > This is nonsense.  The bits within a byte can't be described as
> > either big-endian or little-endian.  They're all within one byte
> > and thus share the same address in memory.

> Don't be silly. Take a byte value, say 148 (0x94), which in binary is
> 10010100. This, of course, is big-endian. In little-endian it would be
> written in reverse: 00101001.

> Think of memory as a 2D array. Vertically you have the addresses,
> horizontally you have the bits.

> Dean

Assuming memory is byte-addressed, there is no way to know nor any
reason to care about the 'order' of the bits within a byte.  Consider a
simple memory device like an EPROM.  The chip might have outputs named
d0, d1, ...,d7.  The chip is connected to a data bus with names like
D0,D1,...,D7.  There is no requirement that d0 be connected to D0.  In
fact, there is no way to know whether it is or not.  As long as the
eight outputs are connected to eight bus lines everything works.
Joe


Wed, 09 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order


Quote:

> > > This is nonsense.  The bits within a byte can't be described as
> > > either big-endian or little-endian.  They're all within one byte
> > > and thus share the same address in memory.

> > Don't be silly. Take a byte value, say 148 (0x94), which in binary is
> > 10010100. This, of course, is big-endian. In little-endian it would be
> > written in reverse: 00101001.

Except that this is an artifact of how we typically write out the values.
How does the way we humans discuss things change how a machine works?  We
were talking about the endianness of a CPU, not the endianness of humans
or of human language...
Quote:

> > Think of memory as a 2D array. Vertically you have the addresses,
> > horizontally you have the bits.

> > Dean

> Assuming memory is byte-addressed, there is no way to know nor any
> reason to care about the 'order' of the bits within a byte.  Consider a
> simple memory device like an EPROM.  The chip might have outputs named
> d0, d1, ...,d7.  The chip is connected to a data bus with names like
> D0,D1,...,D7.  There is no requirement that d0 be connected to D0.  In
> fact, there is no way to know whether it is or not.  As long as the
> eight outputs are connected to eight bus lines everything works.

Actually, if it's an EPROM that you programmed previously, you care very
much.  However, if we assume a RAM, your point is quite valid...


Thu, 10 Oct 2002 03:00:00 GMT  
 little-endian and big-endian in bit-order

(snip)

Quote:
>[%] This is not strictly true.  C does have bitfields as structure
>    members.  As it happens, if you use a union of unsigned chars
>    and bitfields, you can uncover a bits-within-bytes order.  Where
>    people tend to go wrong is that, having done this, they think
>    that this is the *machine*'s bit order.  In fact, it is often
>    merely the *compiler*'s bit order.  Two different C compilers for
>    the same machine will sometimes choose different bit-within-byte
>    orders!  Suppose the C compiler turns:

If the processor has special instructions for addressing bits within
bytes (or words) then one might expect the compiler writer to follow
the specification for them.

Also, if an OS uses bit fields for its structures, a compiler written
by the same company will normally follow the OS conventions.  Once the
convention is chosen it is convenient (but not required) that other
compilers for the same OS choose the same convention.  In the days
of portable compilers and cross compilers, this does not always happen.

Now, my personal preference is for big-endian bytes, because it makes
it easier to read hex numbers written out.  Also, it helps prevent
bugs due to using the wrong datatype.  (If you use a pointer to a
datatype of the wrong size, you might get the right answer on a
little-endian machine, but not on a big-endian machine.  Did anyone
ever use the VAX/VMS dump command?  It writes characters left to
right, and hex right to left!)
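
That wrong-size-pointer bug, as a minimal sketch (it also violates C's
aliasing rules, which is part of why it only "works" by accident):

        #include <stdio.h>
        int main(void) {
            long x = 1;
            short *p = (short *)&x;   /* pointer to the wrong-size datatype */
            /* prints 1 on a little-endian machine, 0 on a big-endian one,
             * assuming sizeof(long) > sizeof(short) */
            printf("%d\n", (int)*p);
            return 0;
        }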

For consistency, I prefer big-endian bit numbering, but note that this
is only for documentation purposes in most computers, and is opposite
to popular use.

-- glen



Sat, 12 Oct 2002 03:00:00 GMT  
 