Hi,
I am looking for a simple mutual exclusion driver
for HPUX 10.X that offers better performance than
standard UNIX semaphores.
Does anyone know if one exists?
thanks,
Michael Mc Mahon
> [cc'd to poster]
> + I am looking for a simple mutual exclusion driver
> + for HPUX 10.X that offers better performance than
> + standard UNIX semaphores.
> I'm not an HPUX user, but I find it hard to believe that you'd ever find a
> faster method of doing mutual exclusion than the one supplied with the
> operating system, in the kernel itself. Whatever third-party code you use
> for mutual exclusion is going to have to rely on the native semaphore
> operations eventually -- which means that the third-party code is nothing
> but wrappers, and that's hardly going to offer better performance.
> (Readability, perhaps. Performance, no.)
> Just my two timeslices.
> /dev/phil
One of the problems with semaphores is that in situations where there is
a very low probability of contention (the situation I have)
but a very high frequency of access, the cost of locking and unlocking
a semaphore is very high.
There are a number of ways to improve this. One way is to implement
semaphores based on shared memory. Locking the semaphore then involves
only a "test and set" operation (one assembly language instruction)
when there is no contention. In the case of contention you pay
the price of an inefficient back-off (by sleeping, maybe).
But a better solution is to have explicit support from a driver
in the kernel. In either case though, traditional semaphores have
nothing to do with it.
: It's dangerous to assume that because something is in the kernel
: it must be optimal (or even efficient). There is a lot of legacy
: code in UNIX that no one has looked at for years. I would include
: semaphores in that list (along with message queues and other such
: relics).
I hate to see someone refer to message queues as relics. I've found them
very useful over the years. Of course, I've been at this long enough to
be considered a relic myself :-).
Might I ask what replacement you would suggest that allows single/multiple
senders/receivers/message types, prioritization by message type, sending
messages to myself, blocking or non-blocking, tuning of queue/message
sizes, etc.? All at a fairly high speed.
--
Larry Blanchard
Old roses, old motorcycles, old trains, and just plain old:)
The only useful paragraph here is the last one. :-)
+ It's dangerous to assume that because something is in the kernel
+ it must be optimal (or even efficient). There is a lot of legacy
+ code in UNIX that no one has looked at for years. I would include
+ semaphores in that list (along with message queues and other such
+ relics).
Interesting that test-and-set assembly instructions -- the implementation
basis for semaphores -- qualify as relics.
+ One of the problems with semaphores is in situations where there is
+ a very low probability of contention (the situation I have)
+ but very high frequency of access, the cost of locking and unlocking
+ a semaphore is very high.
Perhaps on HPUX platforms. Please don't extrapolate that to other platforms.
+ There are a number of ways to improve this. One way is to implement
+ semaphores based on shared memory. Locking the semaphore only involves
+ a "test and set" operation (one assembly language instruction)
+ where there is no contention. In the case of contention you can pay
+ the price of an inefficient back-off (by sleeping maybe).
If the "semaphore" is going to reside in shared memory, and there's no
exclusion protection for that "semaphore" then, yes, there had better not
be any contention.
+ But a better solution is to have explicit support from a driver
+ in the kernel. In either case though, traditional semaphores have
+ nothing to do with it.
If the HP kernel driver writers have implemented a better mutual exclusion
than the routines used for their semaphores, then one would expect that
they would then replace the "traditional" semaphore code with their new
drivers, thus making the native semaphore code the best possible.
Hey, call me an idiot and keep looking if you want, but I don't think
you're going to find a third-party /kernel-driver/ replacement for
something as low-level as semaphores which is inherently better than
what's already sitting there. (In general, third-party replacements for
/anything/ only beat out the native implementation in the case of
replacing Microsoft programs -- but I digress.)
I think what you want is just a mid-level wrapper for semaphores that's
keyed to your particular situation of "almost no contention". Since
semaphores were designed to help the programmer solve contention problems,
then yes, of course semaphores are going to appear inefficient -- but that
isn't because semaphores are old relics of programming past, it's because
semaphores aren't the right tool for that job. If you think that a "lock"
variable in shared memory is what you want, then code a lock variable in
shared memory; don't try to replace the kernel code to suit your application.
/dev/phil
<snip>
> One of the problems with semaphores is in situations where there is
> a very low probability of contention (the situation I have)
> but very high frequency of access, the cost of locking and unlocking
> a semaphore is very high.
What is this cost? Are you concerned about having to switch to
kernel mode (because semop(2) is a system call), and possibly lose
the processor, to test/set a lock?
What evidence do you have that they're performing poorly? I'm curious
what you could have compared it to (file locking? no locking?)
since System V IPC is the only game in town.
We've used System V IPC semaphores for 5 years now on HP-UX, AIX,
Solaris, and Digital Unix, as a mutex governing access to a lookup
table that must be scanned for *every* message send, and have had
no problems. In some cases this is hundreds of processes constantly
"fighting" over that lock. (This differs from your "low probability
of contention".)
Also, transaction-based relational database systems such as Oracle
RDBMS use System V IPC semaphores in their Unix implementations.
My point being, there's a lot of anecdotal evidence out there
that people get good performance out of systems using System V IPC
semaphores.
> There are a number of ways to improve this. One way is to implement
> semaphores based on shared memory. Locking the semaphore only involves
> a "test and set" operation (one assembly language instruction)
> where there is no contention.
Implementing the locking using only shared memory also involves
knowing the OS implementation and underlying hardware intimately.
To do a "test and set" op, an atomic instruction, which may not even
exist in a given machine language, is needed. (ie. Are you writing in
PA-RISC assembler?) IMHO, it would be a huge mess to implement this
yourself because of subtleties like SEM_UNDO and FIFO acquisition of
the lock when multiple processes were waiting.
It strikes me that this is exactly the reason why System V introduced
the sem*(2) API to give us a locking mechanism with such guarantees
of atomicity.
> In the case of contention you can pay
> the price of an inefficient back-off (by sleeping maybe).
Are you concerned about blocking when the lock is unavailable?
Dave
--
Some versions of HP-UX have a memory-based semaphore facility; look for
the msem_* functions.
The other possibility to investigate is pthread mutexes (which can be
shared between processes if you set their attributes accordingly).
--
Andrew.
comp.unix.programmer FAQ: see <URL: http://www.erlenstar.demon.co.uk/unix/>
> The only useful paragraph here is the last one. :-)
> + It's dangerous to assume that because something is in the kernel
> + it must be optimal (or even efficient). There is a lot of legacy
> + code in UNIX that no-one has looked at for years. I would include
> + semaphores in that list (along with message queues and other such
> + relics)
> Interesting that test-and-set assembly instructions -- the implementation
> basis for semaphores -- qualify as relics.
> + One of the problems with semaphores is in situations where there is
> + a very low probability of contention (the situation I have)
> + but very high frequency of access, the cost of locking and unlocking
> + a semaphore is very high.
> Perhaps on HPUX platforms. Please don't extrapolate that to other platforms.
> + There are a number of ways to improve this. One way is to implement
> + semaphores based on shared memory. Locking the semaphore only involves
> + a "test and set" operation (one assembly language instruction)
> + where there is no contention. In the case of contention you can pay
> + the price of an inefficient back-off (by sleeping maybe).
> If the "semaphore" is going to reside in shared memory, and there's no
> exclusion protection for that "semaphore" then, yes, there had better not
> be any contention.
Not true -- how do you think semaphores are implemented?
> + But a better solution is to have explicit support from a driver
> + in the kernel. In either case though, traditional semaphores have
> + nothing to do with it.
> If the HP kernel driver writers have implemented a better mutual exclusion
> than the routines used for their semaphores, then one would expect that
> they would then replace the "traditional" semaphore code with their new
> drivers, thus making the native semaphore code the best possible.
> Hey, call me an idiot and keep looking if you want, but I don't think
> you're going to find a third-party /kernel-driver/ replacement for
> something as low-level as semaphores which is inherently better than
> what's already sitting there.
It seems they have, as another poster has pointed out. I have done some
tests.
> I think what you want is just a mid-level wrapper for semaphores that's
> keyed to your particular situation of "almost no contention". Since
> semaphores were designed to help the programmer solve contention problems,
> then yes, of course semaphores are going to appear inefficient -- but that
> isn't because semaphores are old relics of programming past, it's because
> semaphores aren't the right tool for that job. If you think that a "lock"
> variable in shared memory is what you want, then code a lock variable in
> shared memory; don't try to replace the kernel code to suit your application.
> /dev/phil
I want a large number of finely granular semaphores where there is
little contention. SysV semaphores are not ideal for this purpose.
Ergo, my original query.
Michael.
> Some versions of HP-UX have a memory-based semaphore facility; look for
> the msem_* functions.
> The other possibility to investigate is pthread mutexes (which can be
> shared between processes if you set their attributes accordingly).
thanks,
Michael.
> <snip>
> > > + I am looking for a simple mutual exclusion driver
> > > + for HPUX 10.X that offers better performance than
> > > + standard UNIX semaphores.
> <snip>
> > One of the problems with semaphores is in situations where there is
> > a very low probability of contention (the situation I have)
> > but very high frequency of access, the cost of locking and unlocking
> > a semaphore is very high.
> What is this cost? Are you concerned about having to switch to
> kernel mode (because semop(2) is a system call), and possibly lose
> the processor, to test/set a lock?
> What evidence do you have that they're performing poorly? I'm curious
> what you could have compared it to (file locking? no locking?)
> since System V IPC is the only game in town.
See below.
> We've used System V IPC semaphores for 5 years now on HP-UX, AIX,
> Solaris, and Digital Unix, as a mutex governing access to a lookup
> table that must be scanned for *every* message send, and have had
> no problems. In some cases this is hundreds of processes constantly
> "fighting" over that lock. (This differs from your "low probability
> of contention".)
We are looking at replacing an RDBMS with a custom memory mapped
> Also, transaction-based relational database systems such as Oracle
> RDBMS use System V IPC semaphores in their Unix implementations.
> My point being, there's a lot of anecdotal evidence out there
> that people get good performance out of systems using System V IPC
> semaphores.
Oracle recommend the use of alternative drivers if they are available.
It seems that HPUX and other OSes provide these (msem_* in HPUX,
pthread mutexes elsewhere),
and they do provide the kind of performance that I want. I did a test
and found approx. a 100 times performance improvement.
> > There are a number of ways to improve this. One way is to implement
> > semaphores based on shared memory. Locking the semaphore only involves
> > a "test and set" operation (one assembly language instruction)
> > where there is no contention.
> Implementing the locking using only shared memory also involves
> knowing the OS implementation and underlying hardware intimately.
> To do a "test and set" op, an atomic instruction, which may not even
> exist in a given machine language, is needed. (ie. Are you writing in
> PA-RISC assembler?) IMHO, it would be a huge mess to implement this
> yourself because of subtleties like SEM_UNDO and FIFO acquisition of
> the lock when multiple processes were waiting.
I certainly didn't relish the thought of writing it myself, but I think
Michael.