
Multiple processes sharing data space. --- HELP
Here is a site that covers IPC under Unix:
http://www.ecst.csuchico.edu/~beej/guide/ipc/
I have used many of these techniques. If you use the shared memory
interface you are limited to SHMMAX bytes in each shared segment. SHMMAX
is defined in /usr/include/asm/shmparam.h.
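For example, here is a minimal sketch of the System V shared memory calls
(the 64 KB segment size is just an arbitrary example; it must be <= SHMMAX):

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>

    int main(void)
    {
        /* Request a 64 KB segment -- must be <= SHMMAX. */
        int shmid = shmget(IPC_PRIVATE, 65536, IPC_CREAT | 0600);
        if (shmid == -1) {
            perror("shmget");
            return 1;
        }

        /* Attach it. Children created with fork() after this point
           inherit the attachment and see the same memory. */
        char *data = shmat(shmid, NULL, 0);
        if (data == (char *) -1) {
            perror("shmat");
            return 1;
        }

        data[0] = 'x';                  /* visible to every attached process */

        shmdt(data);                    /* detach */
        shmctl(shmid, IPC_RMID, NULL);  /* mark segment for removal */
        return 0;
    }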
If you want to increase SHMMAX on a more recent version of Linux, you can
write the new value to /proc/sys/kernel/shmmax (I know it sounds odd, but
open it with vi and give it a try).
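If you would rather set it from a program, something along these lines
should work (you must be root; the 128 MB value is just an example):

    #include <stdio.h>

    int main(void)
    {
        /* Must run as root. 134217728 (128 MB) is an example value. */
        FILE *f = fopen("/proc/sys/kernel/shmmax", "w");
        if (f == NULL) {
            perror("fopen /proc/sys/kernel/shmmax");
            return 1;
        }
        fprintf(f, "134217728\n");
        return fclose(f) == 0 ? 0 : 1;
    }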
FYI, here is another place I often start (The Linux Programmer's
BouncePoint):
http://www.ee.mu.oz.au/linux/programming/
Cheers,
Eric
Quote:
> I am really getting frustrated. Your help is greatly appreciated.
> The code I am writing, sitting on a Linux box, must spawn several
> different processes which all share memory. I need to be as efficient as
> possible, as I am handling quite large amounts of data; therefore, I
> don't believe using a pipe would be appropriate.
> From what I can see, I have two options.
> Option 1, which I can't get to work:
> The function clone, a modified version of fork, has several different
> flags; as I understand it, this function should duplicate the parent
> process and share its data space. If I use fork, I am certain that my
> global variables are copies of each other, and modifying one will not
> affect the other. Clone is sharing memory, but the problem I am having
> is that my parent process appears to hang and does not continue until
> the cloned process exits.
> What could I be doing wrong? Has anyone had success with this?
> I believe my other option is to use the shmat() call.
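On the clone() question: two common causes of that hang come to mind. If
the CLONE_VFORK flag is set, the parent is suspended until the child exits
by design; and if you don't give the child its own stack, parent and child
run on the same stack with undefined results. Here is a rough sketch of
how I would call it (the stack size and flag set are just an example, not
a drop-in fix for your code):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int shared_counter = 0;     /* shared because of CLONE_VM */

    static int child_fn(void *arg)
    {
        shared_counter++;              /* the parent sees this change */
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 64 * 1024;   /* example size */
        char *stack = malloc(stack_size);
        if (stack == NULL) {
            perror("malloc");
            return 1;
        }

        /* The stack grows down on x86, so pass the top of the block.
           Note: no CLONE_VFORK here, so the parent keeps running. */
        int pid = clone(child_fn, stack + stack_size,
                        CLONE_VM | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }

        waitpid(pid, NULL, 0);         /* reap the child */
        printf("counter = %d\n", shared_counter);
        free(stack);
        return 0;
    }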
--
Eric Hegstrom .~.
Senior Software Engineer /V\
Sonoran Scanners, Inc. // \\ L I N U X
520-617-0072 x402 ^^-^^