Semaphores, one writer, multiple readers? (Linux)

Post by t_i_m_0_n.. » Thu, 21 Jun 2001 16:27:03

I have read some basic concepts about semaphores
but haven't found a solution to my problem. I assume there
is an effective way to create some kind of locking mechanism for
"single writer, multiple readers" type of system?

I have a shared memory area in my system, which has to be read
very frequently by many processes, and updated infrequently by
exactly one process. If I use a simple semaphore solution, with the
semaphore's initial count set to one, then only one process can read
that memory area at a time, which, I assume, is not the most effective
solution. I need to protect that shared memory area, because one
process sometimes updates it (with memcpy, which I assume is not
atomic), so the memory area might contain partially old, partially
new data, which I don't want. However, there are no problems if
several processes read the area simultaneously; my only concern
is to prevent simultaneous write & read operations.

How can I implement this kind of "one writer, multiple readers"
solution in the most effective way?




1. Fork multiple readers on one file, multiple writers on another?

Hello all,

I have a pretty basic question which I hope you can help me with.
I've read a number of man pages & searched Google for links, but
I haven't been able to come up with a solution.

I want several processes (created by fork) to read one file and
write another file. Each process is assigned one block in the
input file and one block in the output file -- the blocks don't overlap.

The problem is that with more than 1 process, the processes step
on each other's toes. I guess this must be because all i/o must
eventually go from/to the same physical file. I believe what's
directly causing trouble is a shared offset -- I want each process
to seek to its block in the input or output, and then read and
write from there; it doesn't seem to work that way, though.

What is the appropriate way to coordinate multiple readers &
writers which don't share memory? If I were using pthreads, I
could use a pthreads mutex to coordinate. However, I need to
have a non-shared heap, so pthreads is not feasible, I think.

The relevant snippet of code is shown below. I've tried moving
the fopen's before the fork loop, and I've tried putting flock
(on both files) around the reading & writing, but that doesn't help.

Any suggestions? Thanks in advance. I appreciate it.

Robert Dodier

PS. I'm working on Linux, but I need something to run on other Unices.

---------------- begin relevant code -----------------
    for ( j = 0; j < nprocesses; j++ )
    {
        if ( (child_pid = fork()) == 0 )
        {
            /* Each child opens its own streams, so the file offsets
               used below are private to this process. */
            (in  = fopen( infile,  "r"  )) != 0 || die( infile );
            /* NB: "a" (append) mode forces every write to the end of
               the file and ignores fseek; "r+" honours the seek. */
            (out = fopen( outfile, "r+" )) != 0 || die( outfile );

            fseek( in,  (long) j*bs, SEEK_SET );
            fseek( out, (long) j*bs, SEEK_SET );

            for ( i = 0; i < bs; i++ )
                fputc( fgetc(in), out );

            /* The last child also copies whatever is left over past
               the final full block. */
            if ( j == nprocesses-1 )
            {
                off_t rem = size - (off_t)(j+1)*bs;

                for ( i = 0; i < rem; i++ )
                    fputc( fgetc(in), out );
            }

            return 0;
        }
    }
