/bin/sh, /bin/ksh crash with SIGSEGV with a huge script

Post by Juergen Kei » Sat, 21 Aug 2004 01:26:45

(Here's something for the bug database, unless it's a known problem)

On Solaris 9 & 10, /bin/sh and /bin/ksh crash with a SIGSEGV when a
huge script is sourced with the . (dot) command:

    % cat huge.pl
    print "echo start;\n";
    for (1..1000000) {
        print ": 'INSERT INTO ...;'\n";
    }
    print "echo end;\n";

    % perl huge.pl > huge.sh

The above Perl script produces a huge shell script (21 MB).  It's a
simple sequence of 1,000,002 commands: an echo at the start, an echo
at the end, and 1,000,000 no-ops in between.  Running the script
through /bin/sh works as expected:

    % sh huge.sh      
    start
    end

But when the script is sourced with the shell's . (dot) command, the
shell crashes silently, apparently with a SIGSEGV, and without
dumping core:

    % sh -c ". huge.sh"

    % echo $status
    1

    % truss -t\!all sh -c ". huge.sh"
        Incurred fault #6, FLTBOUNDS  %pc = 0xFF31EC38
          siginfo: SIGSEGV SEGV_MAPERR addr=0xFF3FFFC8
        Received signal #11, SIGSEGV [caught]
          siginfo: SIGSEGV SEGV_MAPERR addr=0xFF3FFFC8

The Korn shell has the same problem, but it does write a core dump
(as does /usr/xpg4/bin/sh, which is ksh-based on Solaris):

    % ksh -c ". huge.sh"
    Segmentation Fault (core dumped)

    % /usr/xpg4/bin/sh -c ". huge.sh"
    Segmentation Fault (core dumped)

Bash has no problem with this script:

    % bash -c ". huge.sh"
    start
    end

    %

It seems that /bin/sh and /bin/ksh run out of stack space while
parsing the sourced script; the FLTBOUNDS fault in the truss output
above is consistent with the stack growing past its limit.  Raising
the stack size limit to 128 MB works around the problem.
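
For reference, a sketch of that workaround, assuming a csh-style
interactive shell to match the % prompts above (in a Bourne-style
shell, something like "ulimit -s 131072", i.e. 128 MB expressed in
kilobytes, should set the same limit):

    % limit stacksize 128m
    % sh -c ". huge.sh"
    start
    end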

Is it OK that the shell wastes so much stack space for a simple
sequential list of commands?
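
For what it's worth, one way a parser can burn stack in proportion to
the number of commands is by treating the command list as
right-recursive, so that every command adds a call frame.  Here is a
minimal Perl sketch of that pattern (purely illustrative; parse_list
is a made-up name, and I have not checked the actual sh/ksh source):

    #!/usr/bin/perl
    use strict;
    use warnings;
    no warnings 'recursion';    # we recurse deeply on purpose

    # Toy right-recursive parse of a "cmd; cmd; ..." list:
    #     list := command [';' list]
    # Each command costs one call frame, so the recursion depth
    # equals the number of commands.  A C parser structured this way
    # would blow a default-size stack long before 1000000 frames;
    # Perl keeps its call frames on the heap, so this merely eats RAM.
    sub parse_list {
        my ($cmds, $i) = @_;
        return 0 if $i > $#$cmds;                # end of list
        my $cmd = $cmds->[$i];                   # "parse" one command
        return 1 + parse_list($cmds, $i + 1);    # recurse for the rest
    }

    my @cmds = (": 'INSERT INTO ...;'") x 100_000;
    print parse_list(\@cmds, 0), " commands parsed\n";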