> l> Because a program running in an SMP environment could run on any
> l> CPU, 50% of the time, one would run on a different CPU than the
> l> last time the program was on the queue.
> A process will generally stay on a particular CPU for long enough to
> bring the cache hit rate up to something useful, so it turns out that
> single-threaded programs do not, in general, run more slowly on
I did various single-threaded tests that allocate anywhere from a
few KB up to about what would fit in a TMS390Z55 1 MB cache, and
perform various operations on it.
I ran them on an SS10/40, 50, 51, 512, and 61.
It turns out the results on the 512 are the least consistent.
On the 51 and 61, peak speed is sustained for allocations of
up to about 1 MB, which works out to ~50-60 Mflops.
On the 512, I sometimes hit full throughput; other times it is
not much faster than an SS10/50, which averages around
12 Mflops at the larger sizes.
Because the penalty for a cache miss is so much greater than the
cost of a hit, raising the miss rate by even a few percent can
increase execution time by a factor of two.
Run ps and you can see the process jumping back and forth between
CPUs; run mpstat and you see ~50% utilization on each CPU,
verifying that the process is being distributed between them at random.
I don't think SunOS or Solaris has a smart process queue, where a
process would give up its place at the front of the queue when the
free CPU is not the one it last ran on.
As far as I know, the scheduler is a plain round robin.
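The affinity-aware pick described above could look roughly like this (a toy sketch of the idea only, not how the SunOS/Solaris dispatcher is actually written):

```c
#include <stddef.h>

/* Toy run queue: prefer a runnable process whose last CPU is the one
 * that just freed up (warm cache); otherwise fall back to plain
 * round robin and take whoever is at the front. */
struct proc {
    int pid;
    int last_cpu;   /* CPU this process last ran on */
};

static int pick_next(const struct proc *q, int n, int free_cpu)
{
    for (int i = 0; i < n; i++)
        if (q[i].last_cpu == free_cpu)
            return i;          /* affinity match: cache is still warm */
    return n > 0 ? 0 : -1;     /* no match: head of queue runs anyway */
}
```

The point is that a process near the front would "give up" its slot to a later process that last ran on the newly free CPU, instead of migrating and paying the cold-cache penalty.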