Hi,
I have an Ultra 5 (400 MHz) running Solaris 9 that I'm using to work up a
disk array. I have a pretty basic mirrored configuration going on the
array, but when I do something disk-intensive, like a big ufsrestore to
test it, the foreground tasks become quite unresponsive. I have FSS
enabled with my user's project at a 100:1 share relative to the rest of
the system (the configuration performs as expected per prstat), so it's
not as if the root ufsrestore is using up the CPU from a scheduler
standpoint. My guess is that the heavy PCI traffic from the disk I/O is
slowing down the video and network I/O, etc.
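For reference, the share setup was along these lines (the project name
"user.greg" below is just a stand-in for my actual project):

```shell
# Make FSS the default scheduling class (effective at next boot;
# priocntl can move already-running processes):
dispadmin -d FSS

# Give my project 100 CPU shares as a privileged rctl value:
projmod -sK "project.cpu-shares=(privileged,100,none)" user.greg

# Verify the share assignment on the running project:
prctl -n project.cpu-shares -i project user.greg

# Watch per-project CPU usage to confirm the ratio holds:
prstat -J
```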
It seems like FSS does have some indirect effect on the problem. If I
run Mozilla or do something else in my project that involves a lot of
CPU time, responsiveness improves. Presumably this is FSS giving me my
100:1 CPU share: the big disk job gets throttled down, which eases the
PCI utilization. However, if all I'm doing is little fiddly stuff, my
project starts to bog down. iostat shows the varying disk throughput I
would expect in this situation.
So I was wondering whether there are tools to limit bus bandwidth on a
per-project (or failing that, per-process) basis, beyond tweaking
process priorities. I mean something like DiffServ-style statistical
limits that reduce how often the task gets scheduled. If Solaris 10 has
features to do something like this, it wouldn't bother me to upgrade
this machine.
Thanks,
Greg