Disk based buffer, one writer, many readers problem

Post by John Antyp » Mon, 10 Mar 1997 04:00:00



Thought I'd throw this out as I'm sure I'm not the only one to have
solved this...

I have a stream of data arriving on a pipe from process A -- about 50M/day.
I want to buffer this data in a large disk buffer.
Multiple readers want to get at that circular buffer and read the data.

In memory this would be easy: with a large number of threads and their
condition variables it's straightforward.

I first thought I'd divide the buffer into, say, 1000000 "pages".  The writer
could lock any page as it was accessing/updating it.  Readers would have to
acquire a lock to read the page.

Still, readers need to know where the writer is, so I need an index of some
sort.  This needs to be on disk as well, but I thought I'd mmap that into core.
After the writer started work on a page, it would msync() the shared map.

Still, locking is an issue.  How do I keep the index pages consistent without
hitting the disk continuously and using lock()-style code?  Again, condition
variables would work, but I'd need one million of them.

Any ideas?

 
 
 

Disk based buffer, one writer, many readers problem

Post by Andrew Giert » Tue, 11 Mar 1997 04:00:00


 John> I have a stream of data arriving on a pipe from process A --
 John> about 50M/day.  I want to buffer this data in a large disk
 John> buffer.  Multiple readers want to get at that circular buffer
 John> and read the data.

 John> In memory, this would be easy, with a large number of threads
 John> and their condition variables this would be easy.

 John> I first thought I'd divide the buffer into, say, 1000000 "pages".
 John> The writer could lock any page as it was accessing/updating it.
 John> Readers would have to acquire a lock to read the page.

 John> Still, readers need to know where the writer is, so I need an
 John> index of some sort.  This needs to be on disk as well, but I
 John> thought I'd mmap that into core.  After the writer started
 John> work on a page, it would msync() the shared map.

 John> Still, locking is an issue.  How do I keep the index pages
 John> consistent without hitting the disk continuously and using
 John> lock()-style code?  Again, condition variables would work,
 John> but I'd need one million of them.

This looks perfectly solvable to me using nothing more than the usual
fcntl() locking functions.

How about this:

  writer always holds an exclusive lock on the element which it will
  write next

  readers can get the writer's position using F_GETLK

  readers hold shared locks for at least the first unread element;
  this automatically handles waking up readers when data arrives, and
  also blocks the writer when the queue becomes full

--
Andrew.

 
 
 

Disk based buffer, one writer, many readers problem

Post by Curt Smith » Tue, 11 Mar 1997 04:00:00


: Thought I'd throw this out as I'm sure I'm not the only one to have
: solved this...
:
:
: I have a stream of data arriving on a pipe from process A -- about 50M/day.
: I want to buffer this data to a large disk buffer.
: Multiple readers want to get at that circular buffer and read data.
:

If you are interested in a commercial product that I wrote, and I am open to
extending its features to include multiple-reader circular queue functionality,
check out:
check out:

http://www.compgen.com/widgets

The end of this page describes the SMQ FIFO library.

For what OS is this?

Curt Smith

 
 
 

1. pipe() problem - one reader, multiple writers?

Hi
I want to have a process (a daemon) fork off children whenever a certain
event happens and then to have the children write back some data when
they have finished.

I tried to implement this with the parent creating a pipe when it
initialised and then forking children when necessary.  I pictured the
parent reading from the pipe and each child being able to write to it,
but I can't get this to work.  I keep getting EBADF for the child pipe
write.

This is how my program looks:

 create_pipe
 while (1) {
         while (nothing_to_do)
                 ; /* parent waits */
         fork_off_child          /* something arrives */
                 child_processes_event
                 child_writes_to_parent
                 child_exits
         parent_continues
 }

Is it not possible to have one pipe with multiple children writing and
the parent reading?
Thanks
Swati
