Speed vs. variable size 
 Speed vs. variable size

Hi

I was wondering: why would you use ints when you are sure a variable
won't get higher than about 50 (e.g. in a for loop)? If you had a lot of
these variables, would it pay to use chars instead of ints, for
memory savings and faster access?
 In other words, which will be faster in a program: 10,000 chars or
10,000 ints, or doesn't it make a difference (because of CPU
architecture and such)?

Just wondering, fk
--



Wed, 19 Mar 2003 03:00:00 GMT  
 Speed vs. variable size


Quote:
>Hi

>I was wondering, why would you use int's when you are sure a variable
>won't get higher than about 50 (eg in a for-loop). If you had a lot of
>these variables, would it pay to use char's instead of int's, as for
>memory saving purposes and faster access?
> In other words, which will be fastest in a program: 10000 char's or
>10000 int's, or doesn't it make a difference (because of CPU
>architecture and such)?

int will almost always be faster than char, and I do not know of a
modern system (other than some embedded ones) where it would be slower.
(If you were using one of the old 8-bit chips such as the 6502, char
might be worth it.)

Quote:

>Just wondering, fk

Francis Glassborow      Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA          +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
--



Thu, 20 Mar 2003 08:33:25 GMT  
 Speed vs. variable size

Quote:

> Hi

> I was wondering, why would you use int's when you are sure a variable
> won't get higher than about 50 (eg in a for-loop). If you had a lot of
> these variables, would it pay to use char's instead of int's, as for
> memory saving purposes and faster access?
>  In other words, which will be fastest in a program: 10000 char's or
> 10000 int's, or doesn't it make a difference (because of CPU
> architecture and such)?

> Just wondering, fk

I tried a simple test that does 1,000 iterations on a
100,000-element char and long array, filling and doing
a simple calculation.  The results for both sizes were
consistently the same; however, I paid no attention to
any optimizations, alignment, etc.

The results were (Pentium Pro 200):

CharFill: Time: Sat Sep 30 17:11:50 2000
CharFill: Time: Sat Sep 30 17:12:00 2000 10 secs
CharCalc: Time: Sat Sep 30 17:12:00 2000
CharCalc: Time: Sat Sep 30 17:12:20 2000 20 secs

LongFill: Time: Sat Sep 30 17:12:20 2000
LongFill: Time: Sat Sep 30 17:12:30 2000 10 secs
LongCalc: Time: Sat Sep 30 17:12:30 2000
LongCalc: Time: Sat Sep 30 17:12:50 2000 20 secs

#include        <stdlib.h>
#include        <stdio.h>
#include        <time.h>

#define         ITERS   1000L
#define         MAXNUM  100000L

unsigned char   Carray[MAXNUM];
unsigned long   Larray[MAXNUM];

int     main ( void )
{
  long          i, j;
  time_t        tTime;

/* fill char array */

  time( &tTime );
  printf( "CharFill: Time: %s",ctime(&tTime) );
  srand( 1331 );
  for( j=0; j<ITERS; j++ )
    for( i=0; i<MAXNUM; i++ )
      Carray[i] = rand() % 256;   /* % 256 keeps the value in unsigned char range */
  time( &tTime );
  printf( "CharFill: Time: %s",ctime(&tTime) );

/* calculate char array */

  time( &tTime );
  printf( "CharCalc: Time: %s",ctime(&tTime) );
  for( j=0; j<ITERS; j++ )
    for( i=0; i<MAXNUM; i++ )
      Carray[i] = Carray[i] / (i + Carray[i] + 1);   /* +1 avoids division by zero at i == 0 */
  time( &tTime );
  printf( "CharCalc: Time: %s",ctime(&tTime) );

/* fill long array */

  time( &tTime );
  printf( "LongFill: Time: %s",ctime(&tTime) );
  srand( 1331 );
  for( j=0; j<ITERS; j++ )
    for( i=0; i<MAXNUM; i++ )
      Larray[i] = rand() % 256;   /* same range as the char test, for a fair comparison */
  time( &tTime );
  printf( "LongFill: Time: %s",ctime(&tTime) );

/* calculate long array */

  time( &tTime );
  printf( "LongCalc: Time: %s",ctime(&tTime) );
  for( j=0; j<ITERS; j++ )
    for( i=0; i<MAXNUM; i++ )
      Larray[i] = Larray[i] / (i + Larray[i] + 1);
  time( &tTime );
  printf( "LongCalc: Time: %s",ctime(&tTime) );

  return 0;
}

Yours,

Geoff Houck
systems hk

http://www.teleport.com/~hksys
--



Thu, 20 Mar 2003 08:34:06 GMT  
 Speed vs. variable size

Quote:
> I was wondering, why would you use int's when you are sure a
> variable won't get higher than about 50 (eg in a for-loop). If you
> had a lot of these variables, would it pay to use char's instead of
> int's, as for memory saving purposes and faster access?

It probably would not be noticeably faster, since (for example) most
32-bit processors prefer to transfer 32 bits at a time.

Early on in my C programming career, I had a program that never had to
deal with a number above 122, so I made every variable a char (or a
char*). However, the program kept getting warnings from the compiler
for using a char as an array index. Other warnings were issued because
of expressions that would be automatically promoted to an int, the
results of which were naturally being assigned to a char. I rewrote
the program to use ints everywhere (except when dealing with actual
character data) and the compiler was happy.

b
--



Sat, 22 Mar 2003 13:21:08 GMT  
 Speed vs. variable size

Quote:
>Hi

>I was wondering, why would you use int's when you are sure a variable
>won't get higher than about 50 (eg in a for-loop). If you had a lot of
>these variables, would it pay to use char's instead of int's, as for
>memory saving purposes and faster access?
> In other words, which will be fastest in a program: 10000 char's or
>10000 int's, or doesn't it make a difference (because of CPU
>architecture and such)?

>Just wondering, fk

While char is usually smaller than int, it may not be faster.  int is
supposed to be the "natural" size on the processor and it would be
entirely reasonable for the processor to be optimized for that size.

<<Remove the del for email>>
--



Sat, 22 Mar 2003 13:21:40 GMT  
 Speed vs. variable size

Quote:

> Hi
> I was wondering, why would you use int's when you are sure a variable
> won't get higher than about 50 (eg in a for-loop). If you had a lot of
> these variables, would it pay to use char's instead of int's, as for
> memory saving purposes and faster access?

Typically, the *only* real effect will be memory saving. If the code
runs faster with chars than with ints, this usually happens because a
char array fitted into a fast cache, but the int array (4 times as
large, typically) didn't. It may even make the difference between
'runs' and 'does not run', if the total amount of available memory is
tight.

OTOH, accessing chars can very well be *slower*, depending on what
type of CPU you have. Ints are (supposed to be) the 'native language'
of the hardware, whereas chars often have to be extracted by extra
hard- or software steps, making them slower to access.

So it usually pays off to make all standalone scalar variables 'int' (or
unsigned), and to reserve the smaller types for the elements of
composite data structures (arrays in particular).
--

Even if all the snow were burnt, ashes would remain.
--



Sat, 22 Mar 2003 13:24:17 GMT  
 Speed vs. variable size
thanks for the info
i should have come up with that myself though... stupid me :-|
anyway, I tested it myself, and found that the speed is "undefined
behavior": it turns out completely different on different compilers.
For example, one fills up the arrays faster than the other, while with
another compiler the calculations go faster...

thanks to all, fk

Quote:
> I tried a simple test that does 1,000 iterations on a
> 100,000-element char and long array, filling and doing
> a simple calculation.  The results for both sizes were
> consistently the same; however, I paid no attention to
> any optimizations, alignment, etc.

> The results were (Pentium Pro 200):

> CharFill: Time: Sat Sep 30 17:11:50 2000
> CharFill: Time: Sat Sep 30 17:12:00 2000 10 secs
> CharCalc: Time: Sat Sep 30 17:12:00 2000
> CharCalc: Time: Sat Sep 30 17:12:20 2000 20 secs

> LongFill: Time: Sat Sep 30 17:12:20 2000
> LongFill: Time: Sat Sep 30 17:12:30 2000 10 secs
> LongCalc: Time: Sat Sep 30 17:12:30 2000
> LongCalc: Time: Sat Sep 30 17:12:50 2000 20 secs

> Yours,

> Geoff Houck

--



Sat, 22 Mar 2003 13:24:43 GMT  
 Speed vs. variable size

Quote:

> Hi

> I was wondering, why would you use int's when you are sure a variable
> won't get higher than about 50 (eg in a for-loop). If you had a lot of
> these variables, would it pay to use char's instead of int's, as for
> memory saving purposes and faster access?
>  In other words, which will be fastest in a program: 10000 char's or
> 10000 int's, or doesn't it make a difference (because of CPU
> architecture and such)?

There's no definite answer to this question.  The answer depends
on many things, such as compiler, hardware, and operating system.

Some architectures (hardware and operating system) are more efficient
at handling operations on 16-, 32-, or even 64-bit integers than
on a plain char (typically 8 bits).  For example, an increment
of a 32-bit integer implemented in hardware is almost certain to
be faster than an increment of an 8-bit char that must be emulated
in software.

Some architectures require "word alignment", so declaration of a
single char may still be padded for various efficiency reasons.
So, even though you declare a char, you may still lose 4 or 8
bytes of memory at run time.  [The same applies to other types
such as short.  However a smart compiler writer will make
sure that the int type supported by a compiler happens to
give great efficiency.]

Also, in C (at least C89; I'm unsure what C99 has to say), a char
will be promoted to an int when passed as an argument, which will
quickly negate any savings from using a char counter.  Also, many
standard I/O functions return an int, which really should be tested
before being stored in a char variable.
--



Sat, 22 Mar 2003 03:00:00 GMT  
 Speed vs. variable size

Quote:

>>... which will be fastest in a program: 10000 char's or 10000 int's ...


Quote:

>There's no definite answer to this question.  The answer depends
>on many things, such as compiler, hardware, and operating system.

Indeed.

This followup is just to expound on one point:

Quote:
>Also in C (at least C89;  I'm unsure what C99 has to say), a char
>will be promoted to an int when passed as an argument. ...

The rules are essentially unchanged from C89 to C99.  "Narrow"
types -- specifically, float, and char and short in their signed
and unsigned flavors -- are widened when being passed as an
"unprototyped" argument.  This includes all but the first parameter
to printf, for instance; this is why "%f" prints a "double", not
a "float".  The prototype for printf is:

        int printf(const char *, ...);

Any float -- or char, or unsigned short, etc. -- in the "..." part
does not have a specific argument type it must match, so it undergoes
promotion.

The same applies to any "old-style" functions (already deprecated in
C89, but still usable):

        int weirdcompare(x, y)
                short x;
                float y;
        {
                if (x < y - 1.0)
                        return -1;
                if (x > y + 1.0)
                        return 1;
                return 0;
        }

        void f(void) {
                char a;
                float b;
                int result;
                ...
                result = weirdcompare(a, b);
                ...
        }

The definition above does not provide a prototype, and uses the
(deprecated) "old" K&R-1st-edition form.  The correct prototype
is therefore:

        int weirdcompare(int, double);

and the arguments must be widened: char becomes int[%], and float
becomes double.
-----
 [% In an extremely unlikely situation, char might become unsigned int.
  As far as I know no one has ever built such a C compiler.  The
  standard I/O functions would be difficult to use correctly, in this
  case.]
-----

On the other hand, if you use prototype syntax, and the argument
has a prototype, parameter-passing is effectively identical to
ordinary assignment:

        int weirdcompare2(short x, float y) {
                ... same as before ...
        }
        ...
                result = weirdcompare2(a, b);

Now, having the prototype in hand, a compiler is free *not* to
widen "a" to "int", and "b" to "double".  Thus, if you are going
to use narrow types, you *must* use prototype syntax in the function
definition, and you *must* have a prototype in scope when you call
that function.  Otherwise the compiler is obliged to widen the
arguments, and the widened arguments may not match up with the
narrow ones, and havoc may ensue.  (Many compilers widen anyway,
perhaps on the theory that bad programmers would think the compiler
was broken if it kept the types narrow and produced faster object
code that failed when the function was not prototyped and defined
properly.)
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc


--



Sat, 22 Mar 2003 03:00:00 GMT  
 Speed vs. variable size

Quote:

> Also in C (at least C89;  I'm unsure what C99 has to say), a char
> will be promoted to an int when passed as an argument.

That's only true when an ANSI prototype is missing (or, as explained
above, when the argument falls in the "..." part of one).

--

 __ San Jose, CA, US / 37 20 N 121 53 W / ICQ16063900 / &tSftDotIotE
/  \ You cannot step into the same river once.
\__/ Cratylus
    Official Omega page / http://www.alcyone.com/max/projects/omega/
 The official distribution page for the popular Roguelike, Omega.
--



Sun, 23 Mar 2003 11:23:18 GMT  
 Speed vs. variable size

Quote:

> Hi

> I was wondering, why would you use int's when you are sure a variable
> won't get higher than about 50 (eg in a for-loop). If you had a lot of
> these variables, would it pay to use char's instead of int's, as for
> memory saving purposes and faster access?
>  In other words, which will be fastest in a program: 10000 char's or
> 10000 int's, or doesn't it make a difference (because of CPU
> architecture and such)?

Ints are larger, but they are typically the word size of the processor
or another "natural" width, making them faster to process in practice.
If your program were *really* memory-critical, you could certainly try
chars, but that would mean trading time for space, since on today's
processors it's usually not faster to extract a char value, convert it
to int, and then process it.
--



Fri, 04 Apr 2003 03:00:00 GMT  
 
 [ 11 posts ]
