performance problems

Post by Martin Hargreav » Sun, 18 Feb 1996 04:00:00

>I have a performance question.  We were running some volume testing on an
>application we are going to roll out in the near future.  This is our first
>experience with client/server and HP-UX boxes.
>We have an HP 9000 Series 800 G50 with three disks totaling 5 GB:
>two 2 GB disks and one 1 GB disk.  We have 128 MB of RAM, with three times
>that set up for swap.
>During the testing, we monitored system performance and resources using
>GlancePlus.  We do have other tools that we have not had the opportunity to
>install.  We are looking for help interpreting some of the results and would
>appreciate any insight or direction.
>In general, our network and memory usage was normal.  There was little
>swapping activity.  However, there were significant problems with CPU usage
>and disk utilization.  We have 3 disks (each mirrored).  All disks frequently
>had utilizations between 50 and 75%, with one often hitting 91%.

Look for hot spots on the disks. Are all the disks on the same I/O
controller? If so, adding more I/O controllers may help; you'd need
to look at which disks were getting hit. As for more disks, look at
striping or striping plus mirroring. Mirroring reduces performance. If it's
Oracle raw volumes that are being hit: at my previous client's site we
had a Sequent SE70 with a 7-way stripe (making sure to offset each
stripe as the manual describes), mirrored. That requires 14 disks, but you
get the idea. Oracle seems to like striped raw volumes.

Quote:>As for the CPU, we had utilization of about 100% throughout the length
>of the test, with 8-11 processes ready to run (swapped in and waiting on the
>CPU).  Response on the plant floor was poor for any operations hitting the
>database.  Also, Oracle was taking the largest portion of the cycles, as
>would be expected.  System usage was between 8 and 12%.

Looks like a healthily loaded CPU. It's not badly tuned; it's just too
small.
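
One cheap way to confirm that, rather than assume it, is to watch the run
queue with the stock tools; a sketch:

# runq-sz consistently above the number of CPUs means CPU-bound
sar -q 5 12

# cross-check: the 'r' column in vmstat is processes ready to run
vmstat 5 12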

Quote:>Now the easy interpretation is that we need to get more disks so that
>we can spread out access, and in addition upgrade the processor to get
>greater throughput.  However, I have learned that the easy interpretation
>is not always the best.
>I would like to know what other things I might look at to either validate
>my interpretation or shed more light on it.

Mike Loukides' book "System Performance Tuning" from O'Reilly, and
"Tuning Oracle" (or something similar) from Oracle Press.

Quote:>Thanks,

No problem.

M.

##################################################################
# Director, Datamodel Ltd                                Chemist #
# Contract Unix system admin/Unix security              Sysadmin #
##################################################################

performance problems

Post by Larry Forsyt » Sun, 18 Feb 1996 04:00:00


You said you had two 2 GB disks and one 1 GB disk, then later said you have
3 disks (each mirrored).  I suspect you've created logical volumes with
mirrors, some or all of which reside on the same physical drive.  This got
me too.  If you mirror on the same spindle and the spindle dies, you're
screwed.  Mirror to different physical disks - on different controllers -
for best results and optimal protection.  Throughput should be better as
well.
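
On HP-UX LVM you can check where each mirror copy actually lives, and
re-home it if both copies share a spindle; a sketch with placeholder volume
and disk names:

# show the physical volume(s) backing each extent of the volume
lvdisplay -v /dev/vg00/lvol5

# drop the co-located mirror copy, then re-mirror onto a different disk
lvreduce -m 0 /dev/vg00/lvol5
lvextend -m 1 /dev/vg00/lvol5 /dev/dsk/c1t2d0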

Where did you place the Oracle database and redo log files?  Having them on
the same spindle can cause a lot of contention and send response times
soaring for larger transactions.
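
You can see the current placement from SQL*Plus as a DBA; these are the
standard data dictionary views:

SQL> SELECT member FROM v$logfile;
SQL> SELECT file_name FROM dba_data_files;
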
--
Larry Forsyth                   |Opinions and/or statements contained in this
UNIX Systems Specialist         |posting are the sole responsibility of the
Newbridge Networks, Inc.        |author.

performance problems

Post by k.. » Tue, 20 Feb 1996 04:00:00



> Look for hot spots on the disks. Are all the disks on the same I/O
> controller? If so, adding more I/O controllers may help; you'd need
> to look at which disks were getting hit. As for more disks, look at
> striping or striping plus mirroring. Mirroring reduces performance. If it's

                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although mirroring reduces write performance, it increases the read
performance of the system, since each read request is directed to the
mirror copy with the shortest request queue.

Since most systems do more reading than writing (why else would you write
the data? :-), the overall performance of mirrored systems tends to exceed
that of non-mirrored systems.
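
A back-of-envelope example (numbers purely illustrative): take 100 I/Os
that are 70% reads and 30% writes.  On a single spindle that's 100
operations on one disk.  With a two-way mirror, every write lands on both
spindles but the reads split between them, so the busier spindle sees about
70/2 + 30 = 65 operations - less load per disk despite the doubled writes.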

--

Ken Green Computer Consultancy
                  22 Matthews Chase, Binfield, Berkshire, RG42 4UR.  U.K.

IDE performance (was: RAID0 performance problems)

Hi,
I've posted before about performance problems with my RAID0 setup.
The RAID works fine, but it's too slow.
It now seems not to be a problem with the md code but an IDE problem.
There are two hard disks in my PC: /dev/hda and /dev/hdc.  No other devices
are attached to the IDE bus.
The PC is an SMP system: two Celeron 533s on a Gigabyte 6BXDS board with
the Intel BX chipset, 66 MHz FSB.
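
For reference, the kernel's view of the two channels (driver and negotiated
DMA mode) can be cross-checked with something like the following, assuming
the standard IDE driver, which exports /proc/ide:

bash-2.04# dmesg | grep -i 'hd[ac]'
bash-2.04# cat /proc/ide/hda/model /proc/ide/hdc/model
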
The hdparm settings:

bash-2.04# hdparm /dev/hda /dev/hdc

/dev/hda:
 multcount    =  0 (off)
 I/O support  =  3 (32-bit w/sync)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  0 (off)
 geometry     = 59556/16/63, sectors = 60032448, start = 0

/dev/hdc:
 multcount    =  0 (off)
 I/O support  =  3 (32-bit w/sync)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  0 (off)
 geometry     = 59556/16/63, sectors = 60032448, start = 0
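
For reference, those knobs map onto hdparm flags, so a typical experiment
looks like this (whether -m16 is safe depends on the drive; -k1 just keeps
the settings across an IDE reset):

bash-2.04# hdparm -m16 -a8 -k1 /dev/hda
bash-2.04# hdparm -m16 -a8 -k1 /dev/hdc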

I've tested a lot of variations of these settings (multcount=16,
unmaskirq=0, ...) without success.
The performance of the RAID doesn't increase :-(
hdparm -tT on a single disk (/dev/hda3 is the RAID partition) reports very
good performance:

bash-2.04# hdparm -tT /dev/hda3

/dev/hda3:
 Timing buffer-cache reads:   128 MB in  1.42 seconds = 90.14 MB/sec
 Timing buffered disk reads:  64 MB in  2.27 seconds = 28.19 MB/sec

But if I start two hdparm runs simultaneously (one on /dev/hda3, the other
on /dev/hdc3), the throughput of each disk drops to half the original speed:

bash-2.04# hdparm -tT /dev/hda3

/dev/hda3:
 Timing buffer-cache reads:   128 MB in  2.27 seconds = 56.39 MB/sec
 Timing buffered disk reads:  64 MB in  4.56 seconds = 14.04 MB/sec

bash-2.04# hdparm -tT /dev/hdc3

/dev/hdc3:
 Timing buffer-cache reads:   128 MB in  2.25 seconds = 56.89 MB/sec
 Timing buffered disk reads:  64 MB in  4.49 seconds = 14.25 MB/sec
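
The contention also reproduces without hdparm: time a raw read from both
disks in parallel and compare, e.g.:

bash-2.04# time dd if=/dev/hda3 of=/dev/null bs=1024k count=256 &
bash-2.04# time dd if=/dev/hdc3 of=/dev/null bs=1024k count=256 &
bash-2.04# wait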

The performance of the RAID0:

bash-2.04# hdparm -tT /dev/md0

/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.35 seconds = 94.81 MB/sec
 Timing buffered disk reads:  64 MB in  3.11 seconds = 20.58 MB/sec

Tests with bonnie and iozone show the same result: the RAID is slower than
a single disk :-(

Does anybody have an idea what's wrong with my setup?

Thx,
Andreas