msgget() program uses more "System" time on one system than another

Post by Owe » Thu, 03 Jul 2003 17:43:48

I have an executable, provided to me, that makes the System V IPC
system call msgget() to create an IPC message queue (in 0666 mode).
I run it on three two-node Linux Red Hat AS 2.1 clusters. On just one
of the clusters, the job uses significantly more CPU time and takes
longer to complete. In particular, it uses a lot of System time
(30-40%), whereas on the other two clusters the same executable uses
hardly any System time - mostly User time. This can be seen with the
"top" utility.

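I don't have the source, but the creation presumably boils down to a
plain msgget() call, something like this minimal sketch (the key
0x00000003 matches the ipcs output below; everything else is guesswork
on my part):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void)
{
    /* Key 0x00000003 matches the ipcs output below; IPC_CREAT | 0666
       creates the queue world-readable/writable if it does not exist. */
    int msqid = msgget((key_t)0x00000003, IPC_CREAT | 0666);
    if (msqid == -1) {
        perror("msgget");
        exit(EXIT_FAILURE);
    }
    printf("message queue id: %d\n", msqid);
    return 0;
}
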
The system configurations are identical in the following respects:

Kernel revision: 2.4.9-e.10.7enterprise #1 SMP
Kernel parameters: /etc/sysctl.conf
RPM package installs: rpm -qa
Hardware: Dell 2650, 2 CPUs per node, with 6 GB RAM

The message queue is created:

ipcs:
------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    
0x00000003 98304      oracle     666        252          3          

and the program runs, but on the troubled cluster it spends most of
its time in the msgrcv wait state (the WCHAN column in the ps output
below):

ps -elf:
000 S oracle   20986     1  8  80   5    -   355 msgrcv 01:36 pts/2   00:00:49
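
Given the WCHAN of msgrcv, I assume the receiving side is a blocking
msgrcv() loop, roughly like this sketch (again a guess at the shape of
the code, not the actual source; the message structure and buffer size
are made up):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Hypothetical message layout; the real program's is unknown to me. */
struct msg_example {
    long mtype;
    char mtext[128];
};

int main(void)
{
    struct msg_example buf;
    long n;

    /* Attach to the existing queue (no IPC_CREAT: it must already exist). */
    int msqid = msgget((key_t)0x00000003, 0666);
    if (msqid == -1) {
        perror("msgget");
        exit(EXIT_FAILURE);
    }

    for (;;) {
        /* msgtyp 0 = take the first message of any type; with no
           IPC_NOWAIT flag the call blocks in the kernel, which is
           exactly where ps shows WCHAN "msgrcv". */
        n = msgrcv(msqid, &buf, sizeof(buf.mtext), 0, 0);
        if (n == -1) {
            perror("msgrcv");
            exit(EXIT_FAILURE);
        }
        printf("received %ld bytes of type %ld\n", n, buf.mtype);
    }
}

If the process really were blocked in the kernel like this the whole
time, it should not be burning CPU at all, let alone 30-40% System
time, which is part of what puzzles me.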

Any ideas? This is driving me nuts!

I am fairly new to Unix, so any pointers in the right direction would
be much appreciated!