Eskandar Ensafi wrote in a message to All:
EE> 1. I remember hearing a while back that it's better to use
EE> an unsigned short over an unsigned char when implementing a
EE> small bitfield which needs to be 8 bits long or even
EE> smaller. Is there any truth to this? If so, what's the
EE> reason?
No reason at all... When you have bitfields, USE bitfields...
But......... (now we go compiler-dependent here ;-)) when you use
'unsigned char' instead of 'var:8' you might get the compiler to create less
code. (Some compilers and/or CPUs don't have opcodes which can fetch a few bits
from a word, so the 'var:8' code will create extra AND masks in your code.)
(size/speed optimization for some compilers only)
When you want bare speed, and don't mind losing a few bits, don't use unsigned
char, don't use unsigned short, but use unsigned int, as that is a 'native entity'
to the compiler: the CPU generally processes 'int' types fastest. (Well....
of course, I'm sitting near one which doesn't (MC68000): that one processes
'short' types fastest...)
So you see:
a) portable -- use bitfields when you talk bitfields (disadvantage according to
ANSI/K&R: you cannot take a pointer to a bitfield)
b) speed/non-portable -- disassemble the compiler output on various inputs (I do
that at work when coding for a Z80 (8-bit machine!!!) controller: it has to fit a
small EPROM, and portability is priority +Infinity there, i.e. dead last (== NOT
important)). See the sketch right below.
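A minimal sketch of (a) versus (b), with names of my own invention, just for
illustration:

  /* (a) portable: let the compiler do the bit plumbing */
  struct status {
      unsigned ready:1;
      unsigned error:1;
      unsigned mode:6;
  };

  /* (b) speed: one 'native' unsigned int, masks by hand */
  #define ST_READY 0x01u
  #define ST_ERROR 0x02u
  unsigned int status;    /* test with: if (status & ST_READY) ... */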
EE> 2. Also, when implementing bitfields in a linked list -- in
EE> which case you want to minimize the size of the struct -- is
EE> it best to use a char, short or int to represent the
EE> bitfield? The reason I ask is that on some systems, short
EE> and int are the same size so there would be no gain, but
EE> char is (as far as I know) always 8 bits.
'char' is best for size, except when the compiler cannot 'pack' multiple 'char'
variables together; instead those are word-aligned by some compilers, but all
of them have #pragmas (again! compiler-dependent!) to force 'packed' structs.
Generally, a char[] (array) is always packed, so stuff your goods into that one
if you have separate 8-bit wide bitfields...
BEST: use bitfields, as current-day compilers (most of them at least) will pack
bit-fields in a very good way. For you to achieve this same packing performance
by hand requires additional coding of #defines and AND and OR masks to separate
the bitfields...
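For example (macro names are mine, and the exact packing is of course
compiler-dependent), the hand-made equivalent of two packed bitfields already
looks like this:

  struct s { unsigned a:3; unsigned b:5; };   /* compiler packs these itself */

  /* by hand: one word plus masks, shifts, and get/set macros */
  #define A_MASK      0x07u
  #define B_MASK      0xF8u
  #define GET_A(w)    ((w) & A_MASK)
  #define GET_B(w)    (((w) & B_MASK) >> 3)
  #define SET_B(w,v)  ((w) = ((w) & ~B_MASK) | (((v) << 3) & B_MASK))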
EE> 3. I have also heard that it's "wrong" to implement a
EE> bitfield of the form bitfield:bits whose `bits' are the same
EE> length as a predefined C type such as char (bits = 8), short
EE> (bits = 16 or ...) or int (bits = 32 or ...). Is this true?
Nonsense...
But.... don't use bitfields larger than 16 bits when you want your stuff
portable to 16-bit MSDOS machines, as those C compilers say 'Yeck' and dump some
fine frustrating error messages if you do things like this:
unsigned biggy_bit:17; /* Oh Yeah! */
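If you really need those 17 bits and still want to build on a 16-bit compiler,
two workarounds I'd consider (a sketch only, untested on your compiler):

  struct wide {
      unsigned long biggy;      /* plain member: 'long' is >= 32 bits */
  };

  /* or split it so no single field exceeds 16 bits: */
  struct wide2 {
      unsigned biggy_lo:16;
      unsigned biggy_hi:1;      /* reassemble: ((long)hi << 16) | lo */
  };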
EE> Then what is the best way to handle the following situation:
EE> typedef enum
EE> { a = 0x0F,
EE> b = 0x10,
EE> c,
EE> d = 0x80,
EE> e,
EE> f
EE> } foo_t; /* foo_t does not need a storage type greater than
EE> 8 bits */
EE> struct bar
EE> { struct bar *next;
EE> struct bar *last;
EE> short key;
EE> foo_t flags;
EE> };
Use your foo_t enum type if you like.
I personally favor the more cruel approach using #defines and a bit-field or a
regular 'int' type to store those values...
NEVER store enum types in bit-fields (at least that's my practice)...
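To show what I mean by the 'cruel' approach (the FOO_* names are mine; the
values are taken straight from your enum):

  #define FOO_A 0x0Fu
  #define FOO_B 0x10u
  #define FOO_C 0x11u
  #define FOO_D 0x80u
  #define FOO_E 0x81u
  #define FOO_F 0x82u

  struct bar {
      struct bar *next;
      struct bar *last;
      short key;
      unsigned flags:8;   /* 8 bits hold any FOO_* value */
  };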
EE> OK, so sizeof(foo_t) == sizeof(int) and sizeof(struct bar *)
EE> == sizeof(int). As a result, the size of this struct is
Ah ah, not on this computer of mine:
sizeof(foo_t) == sizeof(short)
(when I set the best size-optimizing flags of my compiler, the darn thing will
even change this to:
sizeof(foo_t) == sizeof(char)
when both are used in arrays or structs...)
and, in 16-bit models:
sizeof(struct bar *) == 2 * sizeof(int)
(or in 32-bit models:)
sizeof(struct bar *) == sizeof(int)
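Don't take my word (or yours) for it; a tiny test program settles it per
compiler (assuming the foo_t and struct bar definitions quoted above):

  #include <stdio.h>

  int main(void)
  {
      printf("sizeof(foo_t)        = %u\n", (unsigned)sizeof(foo_t));
      printf("sizeof(struct bar *) = %u\n", (unsigned)sizeof(struct bar *));
      printf("sizeof(int)          = %u\n", (unsigned)sizeof(int));
      return 0;
  }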
EE> foo_t flags:8;
Ow! All compilers I use fail here!
unsigned flags:8; they accept...
signed flags:8; they accept too....
foo_t flags:8; they do NOT accept... as a bitfield may not be declared with a
specific type definition like that. Only 'int' with the 'signed' and 'unsigned'
modifiers is allowed with my compilers. (That is in fact all ANSI requires: a
bit-field shall have type int, signed int, or unsigned int, so anything beyond
that is a compiler extension, not a 'subset'...)
EE> What's the preferred way of dealing with this issue?
Bitfields to bitfields.... use var:num; wherever you DON'T use enum types.
Where you use 'enum' types, declare 'enum' typed variables to contain those data
(NOT ints, chars, or anything else).
Anything else will have system/compiler-dependent behaviour/speed/code size and
may very well not survive the severest 'lint' verifications...
Best regards,
Ger Hobbelt a.k.a. Insh_Allah