>Hi.
>I am having a problem writing a "meta-driver", ie one that is sandwiched
>between the kernel and a real device driver. The platform I am using
>consists of a Sun IPC, running SunOS 4.1.1.
>My problem is that in attempting to use the OPEN(2V) system call from within
>the "xxopen" routine of my metadriver (to open another device-driver (which
>then opens the device)); OPEN(2V) "fails".
>It appears that calling OPEN(2V) from within the kernel requires a different
>calling sequence and argument list. Vis-à-vis: instead of a file descriptor
>being returned by OPEN(2V), it seems a pointer is returned (which itself is
>a pointer to something else).
Not quite right. The Sun semantics for system calls are not very
well documented. The parameters are context information, which is a
change from the "expected" behaviour of no parameters. File system
services, per se, are not available from within the Sun kernel the way
they are in systems which share UNIX syntax but not internals (like
AIX). The return value seems to be ignored (in my experience).
>Is this true? Is the OPEN(2V) syscall different for a process executing in
>user address space, compared to the OPEN call executed by a process running
>in kernel mode/address space?
Yes, you will have to use one of two mechanisms to cause an open
in kernel space to succeed:
1) Duplicate the calls made by open, except pass your own parameters
instead of open's. The most obvious of differences is the fact
that your calls to the VFS layer must specify that the buffers
passed are in kernel rather than user address space. If you
specify user address space, a copyin() or copyout() is performed,
based on which direction the data is going. In this case, the
buffer must be in the address space of the currently executing
process, or the page flipping for the copy operations, which is
based on the u struct, will flip in the wrong pages and blow up
some poor unsuspecting process that happens to be on your machine.
This is why many drivers save context information from the u struct
to be used at asynchronous completion.
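	A sketch of mechanism 1, for what it's worth. The vn_open()
argument list below is from memory of the BSD-derived VFS and should
be checked against your <sys/vnode.h>; the point to take away is the
UIO_SYSSPACE segment flag, which marks the pathname as living in
kernel rather than user address space:

/*
 * Open a device by absolute path from kernel context.  The
 * vn_open() argument list here is an assumption; verify it
 * against your <sys/vnode.h>.  UIO_SYSSPACE tells the lookup
 * code the path is a kernel buffer, so no copyin() against
 * the current process is performed.
 */
xx_open_lower(path, vpp)
	char *path;
	struct vnode **vpp;
{
	return (vn_open(path, UIO_SYSSPACE, FREAD|FWRITE, 0, vpp));
}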
2) Call the open from a loaded system call, such that it takes the
expected parameters plus your own. In this fashion, the open
parameters will be in the "correct" order and location, and
you can ignore the fact there's a problem:
my_open(i, j, k)
	int i, j, k;
{
	/*
	 * We were passed four parameters, not three, so we do
	 * processing based on the fourth parameter, which we
	 * dereference out of the u struct.  Because of the way
	 * our system call is declared in its sysent entry, a
	 * copyin has been done on our behalf in trap.c.
	 */
	...
	...
	/*
	 * If we get to here, call the real open.
	 */
	return (open(i, j, k));
}
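	Registering my_open() amounts to pointing an unused sysent slot
at it; the field names (sy_narg, sy_call) below are the traditional
ones, and the slot number is made up for illustration:

/*
 * Claim an unused system call slot for my_open().  SYS_MY_OPEN
 * is a made-up slot number; sy_narg and sy_call are the
 * traditional sysent field names -- check your headers.
 * Declaring four arguments is what causes trap.c to copy all
 * four in before my_open() is called.
 */
extern struct sysent sysent[];

install_my_open()
{
	sysent[SYS_MY_OPEN].sy_narg = 4;
	sysent[SYS_MY_OPEN].sy_call = my_open;
}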
This, unfortunately, has limited utility in the case you describe.
Since the system call open will place the index of the element in
the list of file descriptors (in your u struct if fewer than 65 files
have been opened, or in kernel address space if more than 64 files are
open) in the return value in the u struct, it's questionable whether
file I/O in the kernel using the standard system calls is useful to
you at all. You would probably want to go directly to the VFS.
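	Going to the VFS for the I/O itself means calling vn_rdwr() on
the vnode you got back from the open (or building a uio yourself);
again, the exact argument list is something to verify against your
headers, but the shape is roughly:

/*
 * Read from an already-opened vnode into a kernel buffer.  The
 * vn_rdwr() parameter list is approximate (BSD-derived); as with
 * the open, UIO_SYSSPACE marks the buffer as kernel-resident so
 * no user-space copy is attempted.
 */
xx_read_lower(vp, buf, len, off, residp)
	struct vnode *vp;
	char *buf;
	int len;
	off_t off;
	int *residp;
{
	return (vn_rdwr(UIO_READ, vp, buf, len, off,
	    UIO_SYSSPACE, 0, residp));
}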
An alternative would be a single-entry/exit streams MUX as your
driver. Anything on top of the MUX would not know that it was not
talking to the device (when in reality it was talking to your meta
device). Below the MUX, you will have linked the real devices. In
this fashion, the choice of which lower queue to put to easily
selects which of the devices under the auspices of the meta-device
the user opening the meta-device is communicating with.
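	The selection happens in the MUX's upper write-side put routine;
a minimal sketch, with the queue bookkeeping (mux_lower[], mux_pick())
invented for illustration:

/*
 * Upper write-side put routine for the meta-device MUX.  The
 * mux_lower[] table and the mux_pick() policy routine are
 * invented names; the real selection is whatever policy your
 * meta-device needs.
 */
metawput(q, mp)
	queue_t *q;
	mblk_t *mp;
{
	queue_t *lq;

	lq = mux_lower[mux_pick(q, mp)];	/* pick a linked device */
	if (lq != NULL)
		putnext(lq, mp);		/* hand off downstream */
	else
		freemsg(mp);			/* nowhere to send it */
}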
This would require that you write a "daemon" that holds the "control"
connection for your meta-device streams multiplexer open, after
performing the initial open and link for the devices controlled by the
meta-device. Most streams drivers (the real devices multiplexed under
your meta-device) require writing to the DLPI layer so that they may
be utilized by streams.
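	The daemon itself is small; /dev/meta and /dev/realdev0 below
are placeholder names, but I_LINK is the standard streams ioctl from
<sys/stropts.h>, and it returns a mux id you must hang onto if you
ever intend to I_UNLINK:

#include <fcntl.h>
#include <unistd.h>
#include <sys/stropts.h>

main()
{
	int muxfd, devfd, muxid;

	muxfd = open("/dev/meta", O_RDWR);	/* control stream */
	devfd = open("/dev/realdev0", O_RDWR);	/* a real device */
	if (muxfd < 0 || devfd < 0)
		exit(1);
	muxid = ioctl(muxfd, I_LINK, devfd);	/* link below the MUX */
	if (muxid < 0)
		exit(1);
	pause();	/* hold the control stream open */
}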
Many people confuse DLPI with TPI, which is used for the
implementation of layer 4/3 protocols, and not for the provision of device
drivers. You are not writing "TP5" or some other new transport stack; you
are writing a device. If you don't use DLPI, you are confusing streams
with streams-stack based networking.
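	For reference, a stream speaking DLPI is driven from above with
message primitives; the attach request looks like this (structure and
primitive names come from the DLPI specification's <sys/dlpi.h>, and
the fd is assumed to be already open on the driver):

#include <sys/stropts.h>
#include <sys/dlpi.h>

/*
 * Bind a DLPI stream to physical point of attachment `ppa'.
 * dl_attach_req_t and DL_ATTACH_REQ are from <sys/dlpi.h>.
 */
dlpi_attach(fd, ppa)
	int fd, ppa;
{
	dl_attach_req_t req;
	struct strbuf ctl;

	req.dl_primitive = DL_ATTACH_REQ;
	req.dl_ppa = ppa;
	ctl.buf = (char *) &req;
	ctl.len = sizeof (req);
	return (putmsg(fd, &ctl, (struct strbuf *) 0, 0));
}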
>If this is true for OPEN, what about CLOSE, READ, WRITE, etc? Unfortunately,
>I don't have (easy) access to any Unix (source) code, to verify this.
As I said, SunOS does not adhere to either the Berkeley or System V
system call paradigms; it has its own, and there's no readily available
documentation, short of a SunOS source license. My personal preference,
having written something similar to this in the past, would be a streams
implementation. Barring this, given an explicit rather than relative path
to the device (since relative paths require reference to the u struct's
locked vnode for the "current directory"), you could probably use the VFS
interface directly. This is not very well documented, but you could
probably look to either the NFS server or MFS (memory file system) for
examples of writing to use this interface. This code is part of the Net-2
distribution, and can be FTP'ed from various locations, including ftp.uu.net.
Terry Lambert
---
Disclaimer: Any opinions in this posting are my own and not those of
my present or previous employers.