Is this a reader writer problem?

Post by cha » Fri, 16 May 2003 13:41:30

I am working on my Sun workstation with Solaris 9.

In a project of mine, I have a CRL server, which writes a disk file
called thecrl.crl every minute.  This file contains the next-update
time.

For the same project, I also have a CRL client, which simply reads the
thecrl.crl file across the network, checks its next update time,
sleeps for that amount of time, and then reads that file again.

I am having problems with this.

It seems the CRL server runs fine; it does not get killed by the
operating system.

But the CRL client gets killed by the operating system a few minutes
(usually about three) after it is launched.  I can see this with
ps -ef | grep CRL.

I launched the client using nohup, but the nohup.out file is empty.
My application is written in Java, and I can find no exception trace
in nohup.out either.

I am wondering if there is a reader-writer problem in this case, since
both the client and server will be accessing the thecrl.crl file.

In order to avoid a reader-writer conflict, I made the client sleep
until 20 seconds after the server has written thecrl.crl.  But this
does not seem to help.

What do you guys think?

If you guys confirm that there is a reader-writer problem involved
here, I may have to use a semaphore to coordinate the two, right?
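
One common alternative to a semaphore, for what it's worth: have the
writer create each new CRL under a temporary name and then rename(2)
it over thecrl.crl.  rename() is atomic on POSIX filesystems, so a
reader sees either the old file or the complete new one, never a torn
write.  A minimal sketch of the writer side in C (the function and
temp-file names are invented for illustration):

    #include <stdio.h>

    /* Write the new CRL under a temporary name, then atomically
     * rename it into place. */
    int publish_crl(const char *data, size_t len)
    {
        const char *tmp = "thecrl.crl.tmp";
        FILE *fp = fopen(tmp, "wb");

        if (fp == NULL)
            return -1;
        if (fwrite(data, 1, len, fp) != len) {
            fclose(fp);
            remove(tmp);
            return -1;
        }
        if (fclose(fp) != 0) {
            remove(tmp);
            return -1;
        }
        return rename(tmp, "thecrl.crl");   /* old or new, never half */
    }

If the server publishes this way, the 20-second timing offset becomes
unnecessary; the client can read thecrl.crl whenever it likes.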

Thanks.

Is this a reader writer problem?

Post by Barry Margolin » Fri, 16 May 2003 23:57:46

>I am working on my Sun workstation with Solaris 9.

>In a project of mine, I have a CRL server, which writes a disk file
>called thecrl.crl every minute.  This file contains the next-update
>time.

Are we supposed to understand what CRL is in this context?  Does it matter?

>For the same project, I also have a CRL client, which simply reads the
>thecrl.crl file across the network, checks its next update time,
>sleeps for that amount of time, and then reads that file again.

>I am having problems with this.

>It seems the CRL server runs fine; it does not get killed by the
>operating system.

>But the CRL client gets killed by the operating system a few minutes
>(usually about three) after it is launched.  I can see this with
>ps -ef | grep CRL.

>I launched the client using nohup, but the nohup.out file is empty.
>My application is written in Java, and I can find no exception trace
>in nohup.out either.

Run the client using "truss" to see what signal is killing it.
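
For example (the client's main class name here is hypothetical):

    truss -f -o /tmp/client.truss java CRLClient

The -f flag follows child processes and -o writes the trace to a
file; the last few lines of that file should show the signal that
killed the process.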

>I am wondering if there is a reader-writer problem in this case, since
>both the client and server will be accessing the thecrl.crl file.

I've no idea what "the reader-writer problem" is.  My guess is it might be
something that could come up with pipes or sockets, but you're just reading
an ordinary file, right?

--

Genuity Managed Services, a Level(3) Company, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

1. "tee", but with fast writer, 1 slow reader and 1 fast reader

I'm extending a data acquisition app, written in C.  It will acquire
a large amount of data (e.g. from a PCI card on a Sparc Ultra 5
running Solaris 9, I think), perhaps 10 GiB or more.  So we're into
largefiles.  The acquiring process as it stands is a 32-bit app,
although I _suppose_ that could change if the solution requires it.
The acquired data need to go to two places, sort of like "tee":

1. The data need to be written to a local disk.

2. The data also need to be sent to another process, running on
another machine.  The data could be sent to the other process as its
standard input (popen("rsh othermachine otherprocess") or some such,
I guess; see the sketch after this list).  Or it could be written to
a named pipe, I suppose.  (Do named pipes work if either the reader
or the writer is looking at an NFS filesystem?)  Or it could be some
other way of feeding a remote process that you thought of and I
didn't.
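
For what it's worth, the popen() route in item 2 would look roughly
like this (host and program names are the placeholders from above):

    #include <stdio.h>

    /* Open a pipe to the remote consumer via rsh; data written to
     * the returned stream becomes that process's standard input.
     * Call pclose() when the transfer is finished. */
    FILE *open_remote_sink(void)
    {
        return popen("rsh othermachine otherprocess", "w");
    }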

The problem is that the data must be collected in a timely fashion,
but the remote consumer might be slow.  (On the other hand, the remote
consumer might be fast enough to keep up and indeed be mostly
waiting.)  And the amount of data is large, in excess of the RAM in
the machine, but not more than the local disk.  So we don't want the
acquiring thread/process to block because it has filled up all
buffers.  And perhaps the buffers can't simply be made larger (or can
they?).

Is a reasonable solution to have multiple pthreads within the "tee"
process, one writing the data to disk, and the other reading the data
back from the disk and sending it to the remote consumer?  These two
threads would communicate (synchronize) so that the one reading from
disk never tried to get ahead of the one writing to disk.  I don't
want the slow one looping waiting for more data (maybe it's not that
slow); I'd rather it somehow block and be alerted when more data are
available.
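
That scheme maps naturally onto a mutex plus a condition variable:
the disk-writer thread publishes how many bytes are safely on disk,
and the sender thread sleeps in pthread_cond_wait() until that count
passes what it has already sent.  A rough sketch (all names invented
for illustration; a 32-bit app would want -D_FILE_OFFSET_BITS=64 so
off_t covers largefiles):

    #include <sys/types.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  more = PTHREAD_COND_INITIALIZER;
    static off_t bytes_on_disk = 0;   /* high-water mark on disk */
    static int   writer_done   = 0;

    /* Disk-writer thread: call after each successful write() to the
     * local file to publish the new mark and wake the sender. */
    void note_bytes_written(off_t total)
    {
        pthread_mutex_lock(&lock);
        bytes_on_disk = total;
        pthread_cond_signal(&more);
        pthread_mutex_unlock(&lock);
    }

    /* Disk-writer thread: call once, after the final write. */
    void note_writer_done(void)
    {
        pthread_mutex_lock(&lock);
        writer_done = 1;
        pthread_cond_broadcast(&more);
        pthread_mutex_unlock(&lock);
    }

    /* Sender thread: block (no spinning) until there is data beyond
     * what it has already sent.  Returns the current high-water
     * mark, or -1 when the writer has finished and all data are
     * consumed. */
    off_t wait_for_data(off_t bytes_sent)
    {
        off_t avail;

        pthread_mutex_lock(&lock);
        while (bytes_on_disk <= bytes_sent && !writer_done)
            pthread_cond_wait(&more, &lock);
        avail = (bytes_on_disk > bytes_sent) ? bytes_on_disk : -1;
        pthread_mutex_unlock(&lock);
        return avail;
    }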

If I instead went with the more obvious solution of NFS-mounting the
local filesystem onto the remote machine, and having the slow reader
run on the remote machine, open the file itself, etc., is there any
way for that remote process to know how to wait for more data to be
written to the file?  (It will get EOF even when more data might
still be coming, right?)  And is there any way for it to know when
the writer of that file has called close()?
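
(Over NFS, read() returning 0 really does just mean "no more data
yet", and the writer's close() is not visible to a remote reader, so
the usual workaround is a tail -f style poll plus some out-of-band
end marker -- a sentinel file, say, or a known final size.  NFS
attribute caching can also delay how quickly new data become
visible.)  A sketch of the polling read, with the stop condition
left as an assumption:

    #include <sys/types.h>
    #include <unistd.h>

    /* Read like tail -f: at the current end of file, sleep and
     * retry, because more data may still arrive.  Something
     * external must say when to stop for good. */
    ssize_t read_or_wait(int fd, char *buf, size_t len)
    {
        for (;;) {
            ssize_t n = read(fd, buf, len);
            if (n != 0)
                return n;      /* got data, or -1 on error */
            sleep(1);          /* nothing yet; poll again */
        }
    }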

- Ben Chase
(Any email address above is junk, for spam avoidance.)
