> > It will not work if you pipe the call of your function, as in
> > 'bob | grep "blabla"', because the return code is lost: you get
> > grep's status, not bob's.
> > Solutions exist for this (ugly: saving the intermediate status in a
> > temporary file; nicer but heavier: using a coprocess), but the code
> > becomes more complex.
> > Moreover, when you use several nested functions, the code gets
> > really heavy, with enormous "if" blocks to avoid executing all the
> > code that follows a function call.
> > Any other idea?
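The situation described above can be sketched as follows (the function name and strings are just illustrative):

```shell
#!/bin/sh
# bob stands in for a library function that hits a fatal error.
bob() {
    echo "some output"
    exit 3    # bob runs in a subshell here, so only the subshell exits
}

bob | grep "output"     # $? is grep's status (0); bob's 3 is lost
echo "script still running, pipeline status: $?"
```

Running this prints the grep match and then "script still running, pipeline status: 0": the `exit 3` neither aborts the script nor shows up in `$?`.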
> Only the obvious - redesigning your script. The whole point of
> functions is to run them in the current shell instance. Running
> them as the head of a pipeline, forcing them into a subshell,
> destroys the whole rationale. Why do you want to pipe a function
> to another process? What do you hope to gain by that? It's like
> that old joke about the guy who goes to the doctor and says it hurts
> when he raises his arm above his head, and the doctor says "Don't do
> that." What may be humorously bad advice from a doctor is usually
> the right advice in programming. I cannot tell you how many times
> I've seen posts from people trying to do the impossible when a
> modest re-design would cure their problems. But they stubbornly
> prefer their Rube Goldberg designs to a proper design. Go figure!
Yes, indeed :-) . For the moment, the only way I have found to work
around the script not exiting on the exit command is to avoid piping
functions altogether. But I don't find it so far from standard usage
to have functions produce output that can simply be filtered by grep
or another command.
For instance, I am currently working on sending email from a shell
script. I like the idea of preparing the mail in a generic function
(in a shell library included by my main script) and then piping its
output to sendmail. I find it nice to be able to exit the script right
away when I encounter a fatal error, without executing the code after
"myfunction | sendmail". Today this "no exit inside a pipe" problem is
my only problem, and I find generic function libraries very practical.
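One sketch of a workaround that keeps the generic function: capture the function's output with command substitution first, test its status, and only then start the pipe. Here prepare_mail and the address are hypothetical stand-ins for the library function, and grep stands in for `sendmail -t` so the example is self-contained:

```shell
#!/bin/sh
# prepare_mail stands in for the generic library function; a fatal
# error inside it (non-zero return or exit) must abort the script.
prepare_mail() {
    printf 'To: someone@example.com\n'
    printf 'Subject: test\n\nhello\n'
}

# Command substitution preserves the function's exit status, so the
# script can stop before anything is piped anywhere.
msg=$(prepare_mail) || exit 1

# Only a successfully prepared message reaches the next stage.
printf '%s\n' "$msg" | grep '^Subject:'   # i.e. ... | sendmail -t
```

Because the function no longer sits at the head of a pipeline, an `exit` inside it propagates through the command substitution and the `|| exit 1` aborts the script before sendmail ever runs.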
As you said, maybe I should redesign my scripts' architecture (which
today is generic functions in libraries, included in main scripts). One
solution could be to create external scripts instead of libraries, to
make the creation of subshells explicit and thus manage the return
status. My aim is (of course) to avoid overly heavy code and the
creation of too many temporary files. For the moment piping meets that
aim well, but I remain stuck on the exit problem.
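If bash (rather than plain sh) is acceptable, its PIPESTATUS array offers a middle ground; a sketch, with an illustrative function:

```shell
#!/bin/bash
# In bash, PIPESTATUS records the exit status of every member of the
# last pipeline, so the function's status survives being piped.
bob() { echo "blabla"; return 3; }

bob | grep "blabla"
saved=("${PIPESTATUS[@]}")   # copy it at once: any command overwrites it
echo "bob: ${saved[0]}, grep: ${saved[1]}"
# 'exit "${saved[0]}"' here would abort the script with bob's status
```

This still needs an explicit test after each pipeline, but it avoids both the temporary status file and the coprocess.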
I am not so stubborn :-) . I am open to any suggestion from your
experience and the choices you have made, above all if they match the
principle "small and simple is beautiful". You know my aims; what
would you suggest, and what is your design?
web site: http://cbrugne.free.fr
Sent via Deja.com