Bitfields: char, short or int? 
 Bitfields: char, short or int?

Hi,

I had 3 questions regarding bitfields:

1.  I remember hearing a while back that it's better to use an unsigned
short over an unsigned char when implementing a small bitfield which needs
to be 8 bits long or even smaller.  Is there any truth to this?  If so,
what's the reason?

2.  Also, when implementing bitfields in a linked list -- in which case you
want to minimize the size of the struct -- is it best to use a char, short
or int to represent the bitfield?  The reason I ask is that on some systems,
short and int are the same size so there would be no gain, but char is (as
far as I know) always 8 bits.

3.  I have also heard that it's "wrong" to implement a bitfield of the form
bitfield:bits whose `bits' are the same length as a predefined C type such
as char (bits = 8), short (bits = 16 or ...) or int (bits = 32 or ...).
Is this true?  Then what is the best way to handle the following situation:

typedef enum
{   a = 0x0F,
    b = 0x10,
    c,
    d = 0x80,
    e,
    f
} foo_t;  /* foo_t does not need a storage type greater than 8 bits */

struct bar
{   struct bar  *next;
    struct bar  *last;
    short       key;
    foo_t       flags;
};

OK, so sizeof(foo_t) == sizeof(int) and sizeof(struct bar *) == sizeof(int).
As a result, the size of this struct is sizeof(short) + 3 * sizeof(int),
which malloc() rounds off to 4 * sizeof(int) on my system.  I could reduce
this to 3 * sizeof(int) by either using a char in place of foo_t or by doing
something like:

        foo_t flags:8;

But is this wrong and/or treated differently from a char?  What's the
preferred way of dealing with this issue?

- Eskandar

--
KiNDa LiKe a DoG WiTH SeVeN PuPiLS iN iTS eYe                L E F T   H A N D
KiNDa LiKe a MaDNeSS THaT ReFuSeS To SuBSiDe                   B  L  A  C  K
KiNDa LiKe eVeRYTHiNG You WaNT JuST WiTHiN YouR GRaSP           - - - - - -
KiNDa LiKe HoW a BaNSHee-WaiL DaNCeS oN a LiViNG HeaRT...       D a n z i g



Wed, 23 Apr 1997 00:53:09 GMT  
 Bitfields: char, short or int?


>Hi,
>I had 3 questions regarding bitfields:
>1.  I remember hearing a while back that it's better to use an unsigned
>short over an unsigned char when implementing a small bitfield which needs
>to be 8 bits long or even smaller.  Is there any truth to this?  If so,
>what's the reason?

We are not talking about bitfields here, but about scalar types "used as"
bitfields.

Access to a word may or may not be faster than access to a char, but
once you start working with arrays of chars and arrays of words, the
trade-off is less clear-cut.

Please note that I did not use "unsigned short". You have no
reason to believe that an unsigned short will be a word.

>2.  Also, when implementing bitfields in a linked list -- in which case you
>want to minimize the size of the struct -- is it best to use a char, short
>or int to represent the bitfield?  The reason I ask is that on some systems,
>short and int are the same size so there would be no gain, but char is (as
>far as I know) always 8 bits.

Again, we have a considerable empirical basis that leads us to
believe that chars are 8 bits on some systems, but since the
standard does not state this as a fact, you should not assume
that this is so.

>typedef enum
>{   a = 0x0F,
>    b = 0x10,
>    c,
>    d = 0x80,
>    e,
>    f
>} foo_t;  /* foo_t does not need a storage type greater than 8 bits */
>struct bar
>{   struct bar  *next;
>    struct bar  *last;
>    short  key;
>    foo_t  flags;
>};
>OK, so sizeof(foo_t) == sizeof(int) and sizeof(struct bar *) == sizeof(int).
>As a result, the size of this struct is sizeof(short) + 3 * sizeof(int),
>which malloc() rounds off to 4 * sizeof(int) on my system.  I could reduce
>this to 3 * sizeof(int) by either using a char in place of foo_t or by doing
>something like:

Such computations usually don't make much sense, because alignments
(and even sizes of data types) may change with a simple change of a
compiler option or after a compiler upgrade. If you really have
problems when the size of a struct is 4 * sizeof(int), then do what
seems to be best for your compiler and write down why you did it.

(And write down that this point has to be checked again as soon as
you change compiler options, or upgrade your compiler.)
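
One way to write it down so that the compiler re-checks it for you is the old negative-array-size trick: the build breaks the moment the documented assumption stops holding. A sketch (the bound chosen here is an assumption; pick whatever your code actually relies on):

```c
/* breaks the build (array of size -1) if the condition is ever false */
#define COMPILE_TIME_CHECK(cond, name) typedef char name[(cond) ? 1 : -1]

struct bar
{   struct bar   *next;
    struct bar   *last;
    short         key;
    unsigned char flags;
};

/* documented assumption: struct bar fits in four pointers' worth of space.
   The compiler now re-checks this for us after every option change or
   compiler upgrade. */
COMPILE_TIME_CHECK(sizeof(struct bar) <= 4 * sizeof(struct bar *),
                   bar_size_check);
```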

Kurt
--
| Kurt Watzka                             Phone : +49-89-2180-2158




Thu, 24 Apr 1997 05:03:20 GMT  
 Bitfields: char, short or int?
Eskandar Ensafi wrote in a message to All:

 EE> 1.  I remember hearing a while back that it's better to use
 EE> an unsigned short over an unsigned char when implementing a
 EE> small bitfield which needs to be 8 bits long or even
 EE> smaller.  Is there any truth to this?  If so, what's the
 EE> reason?

No reason at all... When you have bitfields, USE bitfields...

But......... (now we go compiler-dependent here ;-)) when you use
'unsigned char' instead of 'var:8' you might get the compiler to generate less
code. (Some compilers and/or CPUs don't have opcodes that can fetch a few bits
from a word, so the 'var:8' code will add extra AND masks to your code.
A size/speed optimization for some compilers only.)

When you want bare speed, and don't mind losing a few bits, don't use unsigned
char, don't use unsigned short, but use unsigned int, as that is a 'native entity'
to the compiler: the CPU generally processes 'int' types fastest. (Well....
of course, I'm sitting near one which doesn't (MC68000): that one processes
'short' types fastest...)

So you see:

a) portable -- use bitfields when you talk bitfields (disadvantage according to
ANSI/K&R: you cannot take a pointer to a bitfield)

b) speed/non-portable -- disassemble compiler output on various input (I do that
at the job when coding for a Z80 (8-bit machine!!!) controller: it has to fit a
small EPROM, and portability is priority +Infinity there (== NOT important))

 EE> 2.  Also, when implementing bitfields in a linked list -- in
 EE> which case you want to minimize the size of the struct -- is
 EE> it best to use a char, short or int to represent the
 EE> bitfield?  The reason I ask is that on some systems, short
 EE> and int are the same size so there would be no gain, but
 EE> char is (as far as I know) always 8 bits.

'char' is best for size, except when the compiler cannot 'pack' multiple 'char'
variables together; instead those are word-aligned by some compilers, but all
of them have #pragmas (again! compiler-dependent!) to force 'packed' structs.

Generally, a char[] (array) is always packed, so stuff your goods into that one
if you have separate 8-bit wide bitfields...

BEST: use bitfields, as current-day compilers (most of them at least) will pack
bit-fields in a very good way. For you to achieve the same packing performance
by hand requires additional coding of #defines and AND and OR masks to separate
the bitfields...
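
To make that concrete, here is roughly what the hand-rolled variant looks like next to the bit-field one (a sketch with made-up field names; this is exactly the mask bookkeeping a bit-field declaration hides from you):

```c
/* hand-rolled: two fields packed into one unsigned char by shift and mask */
#define DIRTY_MASK   0x01u                    /* bit 0        */
#define COLOR_SHIFT  1
#define COLOR_MASK   (0x03u << COLOR_SHIFT)   /* bits 1 and 2 */

/* replace the 2-bit color field inside a packed flags byte */
static unsigned char set_color(unsigned char fl, unsigned v)
{   return (unsigned char)((fl & ~COLOR_MASK) |
                           ((v << COLOR_SHIFT) & COLOR_MASK));
}

/* extract the 2-bit color field */
static unsigned get_color(unsigned char fl)
{   return (fl & COLOR_MASK) >> COLOR_SHIFT;
}

/* the bit-field equivalent: the compiler emits those masks for you */
struct flags
{   unsigned dirty : 1;
    unsigned color : 2;
};
```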

 EE> 3.  I have also heard that it's "wrong" to implement a
 EE> bitfield of the form bitfield:bits whose `bits' are the same
 EE> length as a predefined C type such as char (bits = 8), short
 EE> (bits = 16 or ...) or int (bits = 32 or ...). Is this true?

Nonsense...

But.... don't use bitfields larger than 16 bits when you want your stuff
portable to 16-bit MSDOS machines, as those C compilers say 'Yeck' and dump some
fine frustrating error messages if you do things like this:

unsigned biggy_bit:17;  /* Oh Yeah! */

 EE> Then what is the best way to handle the following situation:

 EE> typedef enum
 EE> {   a = 0x0F,
 EE>     b = 0x10,
 EE>    c,
 EE>    d = 0x80,
 EE>    e,
 EE>    f
 EE> } foo_t;  /* foo_t does not need a storage type greater than
 EE> 8 bits */

 EE> struct bar
 EE> {   struct bar  *next;
 EE>    struct bar  *last;
 EE>    short  key;
 EE>    foo_t  flags;
 EE> };

Use your foo_t enum type if you like.

I personally favor the more cruel approach using #defines and a bit-field or a
regular 'int' type to store those values...

NEVER store enum types in bit-fields (at least that's my practice)...

 EE> OK, so sizeof(foo_t) == sizeof(int) and sizeof(struct bar *)
 EE> == sizeof(int). As a result, the size of this struct is

Ah ah, not on this computer of mine:

sizeof(foo_t) == sizeof(short)

(when I set the best size-optimizing flags of my compiler the darn thing even
 will change this to:

   sizeof(foo_t) == sizeof(char)

 when both are used in arrays or structs...)

and:

  sizeof(struct bar *) == 2 * sizeof(int)

(or in 32-bit models)

  sizeof(struct bar *) == sizeof(int)

 EE>    foo_t flags:8;

Ow! All compilers I use fail here!

    unsigned flags:8; they accept...
    signed flags:8; they accept too...
    foo_t flags:8; they do NOT accept... as bitfields may not be preceded by a
specific type definition. Only 'signed' and 'unsigned' modifiers are allowed
with my compilers. (I don't know if this is ANSI, though, but I sure know ANSI
allows those modifiers... so it might be a 'subset' my compilers support...)
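
If you still want the enum's values in 8 bits without fighting the compiler, one workaround is to declare the field with a type every compiler accepts ('unsigned') and cast at the boundaries -- a sketch using the foo_t from the original post (the accessor names are made up):

```c
typedef enum { a = 0x0F, b = 0x10, c, d = 0x80, e, f } foo_t;

struct bar
{   struct bar *next;
    struct bar *last;
    short       key;
    unsigned    flags : 8;   /* plain 'unsigned' field type, accepted
                                everywhere; all foo_t values here
                                (largest is f == 0x82) fit in 8 bits */
};

/* cast at the boundaries so callers still see foo_t */
static void  set_flags(struct bar *p, foo_t v) { p->flags = (unsigned) v; }
static foo_t get_flags(const struct bar *p)    { return (foo_t) p->flags; }
```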

 EE> What's the preferred way of dealing with this issue?

Bitfields to bitfields.... use var:num; wherever you DON'T use enum types.
Where you use 'enum' types, declare 'enum' typed variables to contain those data
(NOT int, char, or anything else).

Anything else will have system/compiler-dependent behaviour/speed/code-size and
may well not survive the severest 'lint' verifications...

Best regards,

Ger Hobbelt a.k.a. Insh_Allah



Wed, 30 Apr 1997 05:01:24 GMT  
 