STREAMS driver per-adaptor read service routine

Post by Joerg Miche » Sat, 24 Jul 1993 19:12:44

I find myself on the horns of a dilemma: what is the right way to do this?

I have a cloneable STREAMS driver for a network adaptor, which demultiplexes
incoming data onto several streams based on a channel id. Doing the
demultiplexing in the interrupt routine seems costly: the thread has to do a
search operation and acquire locks. I'd like to let the interrupt routine run
through without blocking (if possible). One way is simply to enqueue the data,
tagged with the channel id, onto a queue attached to the physical
instance/unit, and have a separate service routine do the demultiplexing and
further processing. The classical way to do this is to have your own queue
structure and to fire a software interrupt procedure. But isn't STREAMS
supposed to do this?

Yes and no. It is possible to have the read-side queue and service routine
running, but this is per software instance (stream), not per hardware
instance (adaptor), so I'd still have the demultiplexing in the interrupt
routine. Is it possible to use the muxrinit structure for such a purpose?
Would this still be DDI/DKI compliant? Any other elegant solutions?

Yet another question: does anybody have a pointer to a good paper on locking
strategies? A good locking scheme seems to be the key to good performance in
a Solaris driver.

Thanks for any help.

GMD     German National Research Corporation for Mathematics and
FOKUS   Data Processing - Research Institute for Open Communication Systems
                                                        Berlin, Germany


STREAMS queue returned to stream head during service routine execution

We are seeing something rather strange when running our STREAMS multiplexor
drivers on a Solaris 2.3 dual-CPU SPARC 20. What is happening is that
occasionally, I believe after an I_UNLINK, the queue associated with a
running lower write service routine will revert to the stream head partway
through execution of the service routine. Needless to say, this has
devastating consequences. What seems to be happening is that there is a
lack of synchronization between the running of the lower service routines
and the return of a "borrowed" queue pair. We have never seen this occur on
single-CPU SPARC 10s, or on a dual-CPU SPARC 20 with one CPU disabled.

We do not use perimeters (should we?), but instead do mutex locking on a
per-driver basis. That is, each driver has a single mutex which is acquired
first thing in each put and service routine. The mutex is released prior to
putnext() and family, and qreply(), and it is reacquired afterwards if
necessary. Just prior to return from the service or put routine, the mutex
is released. You can't get much coarser granularity than that.

Is this a Solaris bug? If so, is there a work-around? If not, what are we
doing incorrectly? Is there a lock of some sort that we should grab prior
to performing processing in the service routine?

Any help gratefully accepted.

Jim Robinson
