>My colleague uses SIGALRM handler to do such things as reading
>commands from sockets and IPC queues, controlling hardware, etc...
>But I feel that it is error-prone and not even readable. Is there any
>paper or article (or even your own considerations) arguing why it is
>not the best practice?
Well, at the extreme position, all one should be allowed to do is
set a variable of type sig_atomic_t (usually an int).
However, in practice, you can usually make function calls as long as what
you're calling is reentrant. So, e.g., using i/o calls, especially stdio,
is right out (you could be calling printf while you're in the middle of an
interrupted printf call, corrupting its internal state).
My own practice is to just set a flag (of type sig_atomic_t) and then
return. The program checks for the flag in its main loop and then safely
handles the signal at that point.
Obviously, there are exceptions to this: e.g., a handler for SIGSEGV,
which would have to diagnose and possibly remedy the problem before
returning (I believe this is how memory debuggers, like ElectricFence
by Bruce Perens, work).
Another issue to consider is using longjmp to get out of a signal handler.
I don't know much about this, but I suspect that it's not terribly portable
at best, and even if it works, it's probably so confusing to the reader of
the program that it should be avoided anyway.
In summary, I agree with your view and possibly am even more conservative
about signal handling than you are. Your colleague is playing with fire,
IMHO, and when (s)he gets burned, is going to have one hell of a time
tracking down what happened.