
(*)alloc for large arrays?
Actually it is a little broader question than the subject suggests.
I have an array of floats in a file that I want to read into a C program.
I would like to read the array into a one-dimensional, dynamically
allocated array, and I can do all of that. The problem is that the array
is 16000 floats. A little quick math (16000 * sizeof(float), which by my
reckoning is more than the maximum value of a size_t) says that the
standard memory management routines (malloc, realloc, memcpy) just aren't
going to cut it: I can't tell them to give me enough space.
Does anyone have any thoughts? I'm using gcc-2.3.3 on SunOS 4.1.1B
for those who think it matters.
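
For concreteness, here is roughly what I'm trying to do. This is just a
sketch: the filename "floats.dat" is made up, and I'm assuming the file
holds the floats in native binary format.

#include <stdio.h>
#include <stdlib.h>

#define NFLOATS 16000           /* number of floats in the file */

int main(void)
{
    FILE *fp;
    float *data;

    fp = fopen("floats.dat", "rb");     /* made-up filename */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* This is the allocation I'm worried about. */
    data = (float *) malloc(NFLOATS * sizeof(float));
    if (data == NULL) {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }

    if (fread(data, sizeof(float), NFLOATS, fp) != NFLOATS) {
        fprintf(stderr, "short read\n");
        return 1;
    }

    /* ... work with data[0] through data[NFLOATS - 1] ... */

    free(data);
    fclose(fp);
    return 0;
}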
The way I see it, I have the following options:
a) I'm a moron. I'm missing something obvious. Please illuminate.
b) Use 'calloc' and pray you don't need to increase the size. Will this
work?
c) Don't read the whole thing into one array, dummy. Reconsider the design
philosophy and get smart about working in smaller chunks. This is probably
doable, but a pain in the rear end (a rough sketch of what I mean follows
this list).
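
To make option (c) concrete, here is the kind of chunked processing I have
in mind. Again, this is only a sketch; the chunk size and process_chunk()
are placeholders for whatever the real code would do.

#include <stdio.h>

#define CHUNK 1000      /* made-up chunk size: 1000 floats at a time */

/* Placeholder for whatever would actually be done with the data. */
static void process_chunk(const float *buf, size_t n)
{
    (void) buf;
    (void) n;
}

int main(void)
{
    FILE *fp;
    float buf[CHUNK];
    size_t n;

    fp = fopen("floats.dat", "rb");     /* same made-up filename as above */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* Read and process CHUNK floats at a time instead of holding
       all 16000 in memory at once. */
    while ((n = fread(buf, sizeof(float), CHUNK, fp)) > 0)
        process_chunk(buf, n);

    fclose(fp);
    return 0;
}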
I'll post a summary if anyone expresses an interest.
Thanks, Jeff
--
###################### Opinions expressed are my own. ##########################