performance problem

Post by qmang » Thu, 15 Jun 2000 04:00:00



hi everybody,

   i need help on one of my systems that seems to be slowing
down. it is a k-class with hpux 10.20.

  the first noticeable figure that goes up is the load from
/usr/bin/top. however, the cpu idle is at a very high 60%, so it
seems there is no cpu bottleneck.

  i used glance for further investigation and found that even
though the page fault rate is around 612, there are no page-ins or
page-outs and no reactivated/deactivated processes. how do the page
faults get satisfied?

  i do get a disk bottleneck, but i don't know what else to do; it
seems that's just how the program was written, i.e. read/write
intensive.

  the network guys say there is network congestion, but i can
see my network queue at 0 and a collision rate a little under 1%.

  all these data don't seem to match up. can anyone out there
please help me understand what is really going on?

thanks in advance
q

* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
The fastest and easiest way to search and participate in Usenet - Free!

 
 
 

performance problem

Post by nos.. » Thu, 22 Jun 2000 04:00:00


On Wed, 14 Jun 2000 01:34:35 -0700, qmangi wrote:


>hi everybody,

>   i need help on one of my systems that seems to be slowing
>down. it is a k-class with hpux 10.20.

>  the first noticeable figure that goes up is the load from
>/usr/bin/top. however, the cpu idle is at a very high 60%, so it
>seems there is no cpu bottleneck.

>  i used glance for further investigation and found that even
>though the page fault rate is around 612, there are no page-ins or
>page-outs and no reactivated/deactivated processes. how do the page
>faults get satisfied?

It sounds to me like the page faults are being satisfied from the
memory buffer cache.  If I recall correctly, 10% of the machine's
total memory is allocated to the cache by default.
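
A quick way to confirm that is to watch the cache hit rates and see
how the cache is bounded.  Something like this should do it on 10.20
(the interval/count are just examples, and I'm assuming kmtune lives
in /usr/sbin on your box):

    # %rcache / %wcache near 100 means logical reads and writes are
    # being satisfied from the buffer cache rather than the disks
    sar -b 5 12

    # dynamic buffer cache limits; dbc_min_pct/dbc_max_pct only
    # apply when nbuf and bufpages are both 0
    /usr/sbin/kmtune | egrep 'dbc_|nbuf|bufpages'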

>  i do get a disk bottleneck, but i don't know what else to do; it
>seems that's just how the program was written, i.e. read/write
>intensive.

I have built a variety of machines that live in this type of
environment.  It seems the key to making them work well is having
plenty of memory and the best disk technology you can afford.
Although HP does not recommend it, I allocate almost all of the
unused memory on a machine to buffer cache (why waste it?).  The key
here is to know what your peak usage is otherwise.  HP suggests that
there are some issues with a large buffer cache; I suggest the
additional overhead of the OS managing a large buffer cache is still
far less than the cost of processes waiting for disk.
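
For reference, the ceiling on the dynamic buffer cache is the
dbc_max_pct kernel tunable (dbc_min_pct is the floor), and changing
it on 10.20 means a kernel rebuild, either through SAM or by editing
/stand/system and running mk_kernel, then rebooting.  The entries
below are only an illustration -- size them to whatever your real
peak non-cache usage leaves free:

    dbc_min_pct     10
    dbc_max_pct     50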

Another item I make sure I have plenty of is ninode.  Watch the
usage during peak periods and expand accordingly.  Don't necessarily
go by the high-water mark in Glance, though, since performing a
backup will generally push it to 100% usage while the backup runs.
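
Outside of Glance, sar will show the inode table usage over time, and
kmtune shows what ninode is currently set to (the interval below is
arbitrary):

    # inod-sz reports used/configured entries in the inode table;
    # a nonzero ov column means the table overflowed
    sar -v 5 12

    # current setting (path assumes the usual /usr/sbin location)
    /usr/sbin/kmtune | grep ninode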

Another thing to watch for is carefully allocating/matching disk
block/fragment size to the needs of the environment and the database
engine in particular.  Knowing the ins and outs of how the db engine
works is the key here.  I have found the default 1K fragment size of
VxFS file systems is generally too small.  Sequential files are best
handled with the fragment size the same as the block size.  A good
example is a spool output file.  There is certainly no benefit to
having multiple memory page faults when one will do the job.
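
If you want to check what an existing VxFS file system was built with,
or rebuild one with a bigger block size, something along these lines
should work -- the device path is a placeholder, and recreating the
file system of course destroys whatever is on it:

    # show the parameters the file system was originally created with
    mkfs -F vxfs -m /dev/vg01/rlvol_data

    # recreate it with an 8K block size instead of the 1K default
    mkfs -F vxfs -o bsize=8192 /dev/vg01/rlvol_data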

>  the network guys say there is network congestion, but i can
>see my network queue at 0 and a collision rate a little under 1%.

>  all these data don't seem to match up. can anyone out there
>please help me understand what is really going on?

>thanks in advance
>q



 
 
 

1. Linux NFS Server and HP-UX Client performance problem

Hello,

I have a performance problem with a Linux (SuSE 8.0) NFS server and an HP-UX
10.20 client. If I try to copy 100 small files (200 bytes each) from the
HP-UX client's disk to the Linux NFS server, it takes 25 seconds for the
copy and 23 seconds for an rm command. With an HP-UX NFS server and HP-UX
client it takes 3.5 seconds. I know about the NFS problem with small files,
but is there a way to tune the HP-UX/Linux connection up to HP-UX/HP-UX
speed, or faster? (see the rough tuning sketch after the thread list below)

thanks and regards
Rolf

2. Using an external physical keyboard

3. Samba Performance Problem with HPUX11 and pwrite

4. PGP Amiga

5. Java: Performance problem using Thread.sleep()

6. PC Security Software

7. Performance Problems on an L3000

8. How Companies Can Make It on the Internet

9. DB Performance problems under 10.20 - shmmax?

10. performance problem

11. Weir network performance problem

12. Performance problems with parallel compiler directives

13. Performance problem with HPUX11.00
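
A rough tuning sketch for item 1 above: with lots of small files the
time is usually dominated by per-file commits on the server rather
than by raw throughput, so the Linux export options and the HP-UX
mount options are the first things I would look at.  Hostnames and
paths below are placeholders, and "async" trades crash safety for
speed, so treat this as a starting point rather than a recommendation:

    # Linux (SuSE 8.0) server, /etc/exports
    /export/data    hpclient(rw,async,no_subtree_check)
    # re-export after editing
    exportfs -ra

    # HP-UX 10.20 client, example mount with larger transfer sizes
    mount -F nfs -o rsize=8192,wsize=8192 linuxserver:/export/data /mnt/data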