use 1d array + macro or 2d array?

I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
will be traversed repeatedly in somewhat random (i,j) order. The usual
approach is:
        float **x=(float **)calloc(nrow,sizeof(float *));
        for(irow=0;irow<nrow;irow++){
          x[irow]=(float *)calloc(ncol,sizeof(float));
        }

        with subsequent references to x[irow][jcol]

If nrow was 2000, that would require 2000 calls to calloc, which seems a bit
excessive and perhaps unnecessary. Also it might improve performance on
repeated (i,j) references if the array was a contiguous block (although this
may depend a lot on the system's paging algorithms too).  Anyway I was
toying with the idea of allocating a 1d array and then using a macro to map
(i,j) into that array (as in 2.13 of the FAQ), i.e.:
        float *x=calloc(nrow*ncol,sizeof(float));

        #define X(irow,jcol)    x(irow*ncol+jcol)

In the first case, the compiled code presumably looks up an address in the
array and adds an offset to it. In the second case, it computes a single
offset. It looks like the second case will be slower because it requires
more multiplication than for the 'real' 2d array.  But I'm not really sure.

I know the actual generated code depends on the compiler & system architecture,
but are there any obvious advantages or disadvantages (performance-wise) to
either method that might be common to all compiler-generated code?
--

Airborne Geophysics, Geological Survey of Canada, Ottawa
If you followup, please do NOT e-mail me a copy: I will read it here.



Sat, 23 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?

|> I need to allocate a large 2d array (perhaps 2000 X 2000 float).

[two possible implementations snipped]

|> In the first case, the compiled code presumably looks up an address in the
|> array and adds an offset to it. In the second case, it computes a single
|> offset. It looks like the second case will be slower because it requires
|> more multiplication than for the 'real' 2d array.  But I'm not really sure.
|>
|> I know the actual generated code depends on the compiler & system architecture,
|> but are there any obvious advantages or disadvantages (performance-wise) to
|> either method that might be common to all compiler-generated code?

I doubt it.  The 2d array requires an extra load, compared to an extra
multiplication for the 1d array.  Which is faster is machine-dependent.

(The following assumes the array dimensions are known at compile time.)
Most compilers these days will break multiplication by a constant known at
compile time into a sequence of shifts, adds, etc., which goes pretty fast.
If your compiler is deficient in this area you can also do it by hand
in your macro.  You may also be able to increase the speed of the
multiplication by increasing the dimension of the array to some nice power
of 2, so multiplication translates into a single shift instruction.  But with
the amount of data you're talking about, that may cause you to lose more in
page faults than you gain in arithmetic instructions.  In summary, it depends.
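
For instance, with ncol == 2000 the multiply can be decomposed by hand,
since 2000 == 2048 - 32 - 16 (a sketch, not from the original post, and
only worth trying if your compiler doesn't already do it):

        #define X(irow,jcol) \
                x[(((irow) << 11) - ((irow) << 5) - ((irow) << 4)) + (jcol)]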



Sat, 23 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


>I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
>will be traversed repeatedly in somewhat random (i,j) order. The usual
>approach is:
>    float **x=(float **)calloc(nrow,sizeof(float *));
>    for(irow=0;irow<nrow;irow++){
>      x[irow]=(float *)calloc(ncol,sizeof(float));
>    }

>    with subsequent references to x[irow][jcol]

Bad idea.  Don't use calloc().  calloc() initializes its block with
all-bits-zero, which is not guaranteed to be the same as
floating-point-zero.  Use malloc(), then initialize manually.


>If nrow was 2000, that would require 2000 calls to calloc, which seems a bit
>excessive and perhaps unnecessary. Also it might improve performance on
>repeated (i,j) references if the array was a contiguous block (although this
>may depend a lot on the system's paging algorithms too).  Anyway I was
>toying with the idea of allocating a 1d array and then using a macro to map
>(i,j) into that array (as in 2.13 of the FAQ), i.e.:
>    float *x=calloc(nrow*ncol,sizeof(float));

>    #define X(irow,jcol)    x(irow*ncol+jcol)

You probably know this, but it's a good idea to parenthesize macro arguments,
plus you used parentheses instead of brackets on the array reference:
  #define X(irow,jcol) x[(irow)*ncol+(jcol)]


>In the first case, the compiled code presumably looks up an address in the
>array and adds an offset to it. In the second case, it computes a single
>offset. It looks like the second case will be slower because it requires
>more multiplication than for the 'real' 2d array.  But I'm not really sure.

>I know the actual generated code depends on the compiler & system architecture,
>but are there any obvious advantages or disadvantages (performance-wise) to
>either method that might be common to all compiler-generated code?

Your array is rather large.  Assuming 32-bit `float's, it is
2000*2000*4 == 16000000 bytes in size.  Does your target computer have
at least 20MB of memory (16MB for the array plus 4MB for OS and
program)?  If not, then your choice of structure is a moot point: it's
gonna be slow no matter what.

Modern personal computer architectures are generally memory
bottlenecked, that is, the memory isn't fast enough to keep up with
the CPU.  They also tend to have a fairly small L1 cache (8K or 16K
typical depending on processor) and an L2 cache an order of magnitude
or two larger (256K on my 32MB RAM system, 512K or 1MB on nicer
machines).  This means that if you are working with a dataset that
fails to fit in the cache(s), every access to memory is essentially a
speed penalty.  Look at the two techniques:

        * With double dereferencing (float **), you make two references to
          memory to pick out any one element.
        * With single dereferencing (float *), you make one reference to
          memory to pick out any one element.

Based on that evidence, I'd say that single dereferencing is a better
choice, especially since you may not have enough L1 cache to hold even
the table of 2000 pointers (8000 bytes given 32-bit pointer size)
along with whatever code the program uses.  If the table were, say,
3000x3000, the problem grows even worse, because then you'd have 12000
bytes in pointers, impossible to cache with 8K cache, tight with 16K
(and many L1 caches segregate code and data.  You have an 8K data
cache + an 8K code cache on a Pentium.)

Besides, look at the cost of an integer multiply: it's quick on an
i586.  How quick?  I can't find my book of instruction timings, and
Intel's ftp site has lousy response time, but it's fast.  On the other
hand, it might be slow on a RISC machine or an i486 or earlier
machine.

Can you give the array exactly 2048 columns?  Then the multiply turns into
a shift left by log(2048)/log(2) == 11 bits.
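
For example, padding each row out to 2048 floats (a sketch, not from the
original post; it wastes 48 floats per row):

        float *x = malloc(nrow * 2048 * sizeof(float));
        #define X(irow,jcol)    x[((irow) << 11) + (jcol)]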

Is the array sparse?  That is, are many cells empty?  You can use
techniques to avoid using all that memory.  (This only matters if
memory is tight.)

--



Sat, 23 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


> I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
> will be traversed repeatedly in somewhat random (i,j) order. The usual
> approach is:
>    float **x=(float **)calloc(nrow,sizeof(float *));
>    for(irow=0;irow<nrow;irow++){
>      x[irow]=(float *)calloc(ncol,sizeof(float));
>    }

>    with subsequent references to x[irow][jcol]

[snip]

> Anyway I was
> toying with the idea of allocating a 1d array and then using a macro to map
> (i,j) into that array (as in 2.13 of the FAQ), i.e.:
>    float *x=calloc(nrow*ncol,sizeof(float));

>    #define X(irow,jcol)    x(irow*ncol+jcol)

In the earlier replies to your article there was some discussion of the
two approaches.  I think you can also consider a third one.

        float **x = (float **) malloc (nrow * sizeof (float *));
        float *base_x = (float*) malloc (nrow * ncol * sizeof(float));
        for (irow = 0; irow < nrow; irow++)
                x [irow] = &base_x [irow * ncol];

>    with subsequent references to x[irow][jcol]

This solution combines the two approaches above.  I don't know how
efficient it is (that depends a lot on the machine architecture), but I
hope it helps.
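
One bonus worth noting (a sketch, using the declarations above): cleanup
takes just two calls, no matter how large nrow is:

        free(x);        /* the row-pointer table */
        free(base_x);   /* the single block of floats */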
--
                                                 Grzegorz Nowakowski

-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GC dpu s:- a- C++(++++) ULSH P+>+++ L+>++++ E+(+++) W N++ o? K? w---() O?

------END GEEK CODE BLOCK------



Sun, 24 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


> I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
> will be traversed repeatedly in somewhat random (i,j) order.

[two possible methods' descriptions snipped]

There's a third method, that sort of falls in the middle of the other
two: allocate one big array of floats, and an array of float pointers
that each point to the beginning of one row of data:

        float *realarray = malloc(2000*2000*sizeof(float));
        float **twodimarray = malloc(2000*sizeof (float *));
        int i;
        for (i=0; i<2000; i++)
          twodimarray[i] = realarray + 2000 * i;

Using this technique, you can use both methods of access (either
realarray[i*2000+j] or twodimarray[i][j]), whichever fits the case
best, and you won't call malloc more often than necessary.

(BTW, I think this method is also described in the FAQ, isn't it?)
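
To make the equivalence of the two access forms concrete (a one-line
sketch, assuming <assert.h> and indices in range):

        assert(&twodimarray[i][j] == &realarray[i*2000 + j]);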

Hans-Bernhard Broeker (Aachen, Germany)



Sun, 24 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


> I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
> will be traversed repeatedly in somewhat random (i,j) order. The usual
> approach is:
>    float **x=(float **)calloc(nrow,sizeof(float *));
>    for(irow=0;irow<nrow;irow++){
>      x[irow]=(float *)calloc(ncol,sizeof(float));
>    }

>    with subsequent references to x[irow][jcol]

> If nrow was 2000, that would require 2000 calls to calloc, which seems a bit
> excessive and perhaps unnecessary. Also it might improve performance on
> repeated (i,j) references if the array was a contiguous block (although this
> may depend a lot on the system's paging algorithms too).

If the second dimension is a constant, you can always use:

                float   (*array)[ dim2 ] = malloc( dim1 * dim2 * sizeof( float ) ) ;
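
A fuller sketch of that idiom (NCOL here is an illustrative stand-in for
the constant dim2):

                #define NCOL 2000
                float   (*array)[ NCOL ] = malloc( dim1 * NCOL * sizeof( float ) ) ;
                if ( array != NULL )
                    array[ 3 ][ 7 ] = 1.0f ;   /* ordinary 2d indexing, one allocation */
                free( array ) ;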

(By the way, what's the point of using calloc instead of malloc?  It
isn't guaranteed to initialize an array of floats correctly anyway.  Or
do you know that the code will never be ported, and that it works on the
machine in question?)

Other than that, I would still only use two allocations:

                float   **array = malloc( dim1 * sizeof( float* ) ) ;
                float   *tmp = malloc( dim1 * dim2 * sizeof( float ) ) ;
                for ( i = 0 ; i < dim1 ; i ++ )
                    array[ i ] = &tmp[ i * dim2 ] ;

--

GABI Software, Sarl., 8 rue des Francs Bourgeois, 67000 Strasbourg, France
Conseils en informatique industrielle --
                            -- Beratung in industrieller Datenverarbeitung



Sun, 24 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


 >
 >> I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
 >> will be traversed repeatedly in somewhat random (i,j) order. The usual
 >> approach is:
 >>       float **x=(float **)calloc(nrow,sizeof(float *));
 >>       for(irow=0;irow<nrow;irow++){
 >>         x[irow]=(float *)calloc(ncol,sizeof(float));
 >>       }
 >>
 >>       with subsequent references to x[irow][jcol]
 >>
 >> If nrow was 2000, that would require 2000 calls to calloc, which seems a bit
 >> excessive and perhaps unnecessary. Also it might improve performance on
 >> repeated (i,j) references if the array was a contiguous block (although this
 >> may depend a lot on the system's paging algorithms too).
 >
 >If the second dimension is a constant, you can always use:
 >
 >           float   (*array)[ dim2 ] = malloc( dim1 * dim2 * sizeof( float ) ) ;
 >
 >(By the way, what's the point of using calloc instead of malloc?  It
 >isn't guaranteed to initialize an array of floats correctly anyway.  Or
 >do you know that the code will never be ported, and that it works on the
 >machine in question?)
 >
 >Other than that, I would still only use two allocations:
 >
 >           float   **array = malloc( dim1 * sizeof( float* ) ) ;
 >           float   *tmp = malloc( dim1 * dim2 * sizeof( float ) ) ;
 >           for ( i = 0 ; i < dim1 ; i ++ )
 >               array[ i ] = &tmp[ i * dim2 ] ;

        Ok, thanks to all who answered. I've certainly got a handle on this
        now. There have been at least 6 posts suggesting this third
        method, which *is* in the FAQ as several have pointed out. It has
        the advantage of only 2 calls to malloc() instead of 2001 and
        still allows [i][j].

        A couple of people have also pointed out that calloc() clears
        memory with zero-bits, but that isn't the same as:
                x[i][j]=0;
        because all-bits-zero might not be floating-point zero on some
        machines. That was definitely something I hadn't considered.

        So, unless there are any new viewpoints on this, I'm satisfied
        with the discussion. Thanks to all.
--

Airborne Geophysics, Geological Survey of Canada, Ottawa
If you followup, please do NOT e-mail me a copy: I will read it here.



Mon, 25 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


 >|> I need to allocate a large 2d array (perhaps 2000 X 2000 float).
 >
 >[two possible implementations snipped]
 >
 >|> In the first case, the compiled code presumably looks up an address in the
 >|> array and adds an offset to it. In the second case, it computes a single
 >|> offset. It looks like the second case will be slower because it requires
 >|> more multiplication than for the 'real' 2d array.  But I'm not really sure.
 >|>
 >|> I know the actual generated code depends on the compiler & system architecture,
 >|> but are there any obvious advantages or disadvantages (performance-wise) to
 >|> either method that might be common to all compiler-generated code?
 >
 >I doubt it.  The 2d array requires an extra load, compared to an extra
 >multiplication for the 1d array.  Which is faster is machine-dependent.
 >
 >(The following assumes the array dimensions are known at compile time.)
 >Most compilers these days will break multiplication by a constant known at
 >compile time into a sequence of shifts, adds, etc., which goes pretty fast.
 >If your compiler is deficient in this area you can also do it by hand
 >in your macro.  You may also be able to increase the speed of the
 >multiplication by increasing the dimension of the array to some nice power
 >of 2, so multiplication translates into a single shift instruction.  But with
 >the amount of data you're talking about, that may cause you to lose more in
 >page faults than you gain in arithmetic instructions.  In summary, it depends.
        I think you're right about the page faults - it may be my biggest
        problem.  Can't restrict the size to 2^n though.

        So far, I have heard several comments on the problems with using
        2D arrays (extra array of pointers etc). I've also read of several
        awkward things in the FAQ (declared arrays are not the same as
        alloc'ed arrays). What is the general consensus on the way that C
        implements 2D arrays? Is the implementation considered to be
        better or worse than in other languages?
--

Airborne Geophysics, Geological Survey of Canada, Ottawa
If you followup, please do NOT e-mail me a copy: I will read it here.



Mon, 25 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


 >>I need to allocate a large 2d array (perhaps 2000 X 2000 float). The array
 >>will be traversed repeatedly in somewhat random (i,j) order. The usual
 >>approach is:
 >>       float **x=(float **)calloc(nrow,sizeof(float *));
 >>       for(irow=0;irow<nrow;irow++){
 >>         x[irow]=(float *)calloc(ncol,sizeof(float));
 >>       }
 >>
 >>       with subsequent references to x[irow][jcol]
 >
 >Bad idea.  Don't use calloc().  calloc() initializes its block with
 >all-bits-zero, which is not guaranteed to be the same as
 >floating-point-zero.  Use malloc(), then initialize manually.
        Yes, I specifically need to clear the memory first. So you're
        saying that even memset() is a bad idea and that I should
        loop through like this:
                for(i=0;i<2000;i++){
                  for(j=0;j<2000;j++){
                    x[i][j]=0;
                  }
                }
        That seems unnecessarily slow, but I guess if all-bits-zero isn't
        floating-point zero on some systems, it would be the only way?

 >>If nrow was 2000, that would require 2000 calls to calloc, which seems a bit
 >>excessive and perhaps unnecessary. Also it might improve performance on
 >>repeated (i,j) references if the array was a contiguous block (although this
 >>may depend a lot on the system's paging algorithms too).  Anyway I was
 >>toying with the idea of allocating a 1d array and then using a macro to map
 >>(i,j) into that array (as in 2.13 of the FAQ), i.e.:
 >>       float *x=calloc(nrow*ncol,sizeof(float));
 >>
 >>       #define X(irow,jcol)    x(irow*ncol+jcol)
 >
 >You probably know this, but it's a good idea to parenthesize macro arguments,
 >plus you used parentheses instead of brackets on the array reference:
 >  #define X(irow,jcol) x[(irow)*ncol+(jcol)]
        Agreed.

 >>In the first case, the compiled code presumably looks up an address in the
 >>array and adds an offset to it. In the second case, it computes a single
 >>offset. It looks like the second case will be slower because it requires
 >>more multiplication than for the 'real' 2d array.  But I'm not really sure.
 >>
 >>I know the actual generated code depends on the compiler & system architecture,
 >>but are there any obvious advantages or disadvantages (performance-wise) to
 >>either method that might be common to all compiler-generated code?
 >
 >Your array is rather large.  Assuming 32-bit `float's, it is
 >2000*2000*4 == 16000000 bytes in size.  Does your target computer have
 >at least 20MB of memory (16MB for the array plus 4MB for OS and
 >program)?  If not, then your choice of structure is a moot point: it's
 >gonna be slow no matter what.
        That's true, but memory isn't a problem (Windows NT/64+ Mb is
        the target).

 >Modern personal computer architectures are generally memory
 >bottlenecked, that is, the memory isn't fast enough to keep up with
 >the CPU.  They also tend to have a fairly small L1 cache (8K or 16K
 >typical depending on processor) and an L2 cache an order of magnitude
 >or two larger (256K on my 32MB RAM system, 512K or 1MB on nicer
 >machines).  This means that if you are working with a dataset that
 >fails to fit in the cache(s), every access to memory is essentially a
 >speed penalty.  Look at the two techniques:
 >
 >   * With double dereferencing (float **), you make two references to
 >          memory to pick out any one element.
 >   * With single dereferencing (float *), you make one reference to
 >          memory to pick out any one element.
 >
 >Based on that evidence, I'd say that single dereferencing is a better
 >choice, especially since you may not have enough L1 cache to hold even
 >the table of 2000 pointers (8000 bytes given 32-bit pointer size)
 >along with whatever code the program uses.  If the table were, say,
 >3000x3000, the problem grows even worse, because then you'd have 12000
 >bytes in pointers, impossible to cache with 8K cache, tight with 16K
 >(and many L1 caches segregate code and data.  You have an 8K data
 >cache + an 8K code cache on a Pentium.)
        So C has a penalty in the extra array of pointers required to
        manage the 2D array, compared to, say, Fortran, which presumably
        just treats the array as a single linear block?

        Of course, this caching may be a moot point anyway, because
        I will be applying a 2D filter array (perhaps 10 X 10) to each
        data point which will be located at any (i,j). So for each
        point, I will be running loops to access [i +/- 5][j +/- 5]
        (~100 elements). I don't think a linear array or a 2D "C" array
        is going to make much difference, because the caching is going
        to be a relative waste of time, as I hop around all over the
        array.

 >Besides, look at the cost of an integer multiply: it's quick on an
 >i586.  How quick?  I can't find my book of instruction timings, and
 >Intel's ftp site has lousy response time, but it's fast.  On the other
 >hand, it might be slow on a RISC machine or an i486 or earlier
 >machine.
 >
 >Can you give the array exactly 2048 columns?  Then the multiply turns into
 >a shift left by log(2048)/log(2) == 11 bits.
        The array could be anything from 100 X 100 to 2000 X 2000, not
        necessarily square and not necessarily limited to 2000, but that's
        typical.

 >Is the array sparse?  That is, are many cells empty?  You can use
 >techniques to avoid using all that memory.  (This only matters if
 >memory is tight.)
        No, the array isn't sparse. This is a gridding program, designed to
        'smear' ~250k data points into the grid cells, using a weighted
        smoothing algorithm (as opposed to splining).

        I think I'll just use a single linear array with a macro to compute
        the linear subscript (I think that's essentially what FORTRAN does
        anyway). If performance is a problem, I'll re-think it.
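
        Roughly like this (a sketch; the bounds checks at the grid edges
        and the declarations of w, ic, jc and sum are omitted):

                #define X(i,j)  x[(i)*ncol + (j)]
                /* weighted sum of the window around grid cell (ic,jc) */
                for(i = ic - 5; i <= ic + 5; i++){
                  for(j = jc - 5; j <= jc + 5; j++){
                    sum += w[i - ic + 5][j - jc + 5] * X(i,j);
                  }
                }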

        Thanks for the input.
--

Airborne Geophysics, Geological Survey of Canada, Ottawa
If you followup, please do NOT e-mail me a copy: I will read it here.



Mon, 25 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?




> >Bad idea.  Don't use calloc().  calloc() initializes its block with
> >all-bits-zero, which is not guaranteed to be the same as
> >floating-point-zero.  Use malloc(), then initialize manually.
>    Yes, I specifically need to clear the memory first. So you're
>    saying that even memset() is a bad idea and that I should
>    loop through like this:
>            for(i=0;i<2000;i++){
>              for(j=0;j<2000;j++){
>                x[i][j]=0;
>              }
>            }
>    That seems unnecessarily slow, but I guess if all-bits-zero isn't
>    floating-point zero on some systems, it would be the only way?

   There's another way, which might give an improvement in some cases.  We
run a development model without optimization and with debugging code
active, and large re-initialization loops like this can exact a minor
time penalty.  In some places where I had to zero a large vector of
floats, or compare it against its initialized values to see whether it
had changed (to make efficient use of the data disk cache), the solution
I used was to count on the compiler initializing variables of static
duration to zero.  I create a medium-sized static vector of floats, and
then use memcpy() and memcmp() against blocks of the larger vector.  As
those functions are hand-tuned assembly code for our platform, we
potentially win over the unoptimized loop.
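
In outline, the comparison half might look like this (a sketch, assuming
a contiguous float block x, dimensions nrow and ncol, and <string.h>):

        static float zeros[1024];       /* static duration: genuine 0.0f */
        size_t k, n = (size_t)nrow * ncol;
        int changed = 0;
        for (k = 0; k + 1024 <= n; k += 1024)
                if (memcmp(&x[k], zeros, sizeof zeros) != 0) {
                        changed = 1;
                        break;
                }
        /* a real version must also compare the tail when n % 1024 != 0 */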

--

 Home page:  http://caliban.physics.utoronto.ca/neufeld/Intro.html
 "Don't edit reality for the sake of simplicity"



Tue, 26 Jan 1999 03:00:00 GMT  
 use 1d array + macro or 2d array?


:  >
:  >Bad idea.  Don't use calloc().  calloc() initializes its block with
:  >all-bits-zero, which is not guaranteed to be the same as
:  >floating-point-zero.  Use malloc(), then initialize manually.
:       Yes, I specifically need to clear the memory first. So you're
:       saying that even memset() is a bad idea and that I should
:       loop through like this:
:               for(i=0;i<2000;i++){
:                 for(j=0;j<2000;j++){
:                   x[i][j]=0;
:                 }
:               }
:       That seems unnecessarily slow, but I guess if all-bits-zero isn't
:       floating-point zero on some systems, it would be the only way?

Not really.

static double zeros[100];

void foo() {
   ...
   for (i = 0; i < 2000; i++) {
      for (j = 0; j < 20; j++) {
         memcpy(&x[i][j*100], zeros, 100 * sizeof(double));
      }
   }
}

Should be as fast as memset(), or nearly so, and has the advantage of
working no matter what (double)0 is, since static memory is guaranteed
to be initialized to the correct form of zero.

:  >>In the first case, the compiled code presumably looks up an address in the
:  >>array and adds an offset to it. In the second case, it computes a single
:  >>offset. It looks like the second case will be slower because it requires
:  >>more multiplication than for the 'real' 2d array.  But I'm not really sure.

I've timed it before; it's highly machine-dependent.  Basically it
depends on whether a memory access or a multiplication is more expensive.
It also tends to be algorithm dependent, since the cost of memory
access is highly dependent on how often you hit the cache on modern
machines.  If, for example, you iterate over the matrix in the correct
order: x[i][j], x[i][j+1], x[i][j+2] ...  x[i] is hopefully lifted out
of the loop, and even if not, it is likely to be in the cache most of
the time, so the memory-access time is negligible.  On the other hand,
with random accesses to a large array, the two memory accesses vs one
memory access difference can be much more costly than multiplication
(especially if it is multiplication by a constant).
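
For example, the hoisting in the friendly case can be written out by hand
(a sketch, with x a float ** as in the original post):

        for (i = 0; i < nrow; i++) {
                float *row = x[i];      /* one pointer load per row */
                for (j = 0; j < ncol; j++)
                        row[j] *= 2.0f; /* no second dereference per element */
        }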

:  >>
:  >>I know the actual generated code depends on the compiler & system architecture,
:  >>but are there any obvious advantages or disadvantages (performance-wise) to
:  >>either method that might be common to all compiler-generated code?
:  >
:  >Your array is rather large.  Assuming 32-bit `float's, it is
:  >2000*2000*4 == 16000000 bytes in size.  Does your target computer have
:  >at least 20MB of memory (16MB for the array plus 4MB for OS and
:  >program)?  If not, then your choice of structure is a moot point: it's
:  >gonna be slow no matter what.

This isn't necessarily true; I've seen programs with 128 MB of memory
allocated running quite quickly on a machine with 32 MB memory.  The
trick: good locality of reference so swapping doesn't kill you.

[ good discussion of cache issues omitted ]

:       I think I'll just use a single linear array with a macro to compute
:       the linear subscript (I think that's essentially what FORTRAN does
:       anyway). If performance is a problem, I'll re-think it.

Yes; this is actually what I do with many of my large (n>200)
matrices.  Though I've stopped worrying about this issue that much
unless the memory fetches show up as a bottleneck in profiling, since
most of the test cases I ran show a difference of +-20% at most.  A
significant difference, but one I don't want to worry about most of
the time.

---------------------------------------------------------------------------
Tim Hollebeek         | Disclaimer :=> Everything above is a true statement,
Electron Psychologist |                for sufficiently false values of true.

----------------------| http://wfn-shop.princeton.edu/~tim (NEW! IMPROVED!)



Tue, 26 Jan 1999 03:00:00 GMT  
 