performance issues with raid5 and I2

Post by Enriq » Fri, 29 Jun 2001 03:20:54



Hi,

  We are setting up an M80 with 6 GB of RAM and 65+ GB of disk. The
issue is that initially we need to load history information into the I2
proprietary DB (indexed files), which might grow up to 2.1 GB.

 An F80 with less memory is kicking the M80's butt; the load there still
runs just under 7 hours, so loading and reloading is not the issue, and
neither is the indexed file size (about 1.5 to 2.1 GB). After 5 hours of
guessing I finally asked about the disk layout: the M80 has a RAID 5
array (SSA) while the F80 has RAID 0, which is a good explanation, since
processor speed is not that much better on the M80. The customer,
however, doesn't care about that and wants it to run as fast as the F80.
I suggested replacing the RAID 5 with a RAID 0 configuration to set up a
performance baseline, but they are reluctant, since it will be running
RAID 5 in production. I also suggested RAID 10 or mirroring, but in any
case I would like to explore the possibility of improving RAID 5
performance.

 I found a couple of hits suggesting increasing queue_depth and
max_coalesce. I changed those two values, but it had a negative impact
on performance.

  any ideas appreciated.

  The RAID 5 is built out of seven 18 GB SSA disks on the 6230 SSA
adapter. Fast-write cache is enabled, but I believe the write operations
are washing it out.

regards,
esv.
ps. an apology for spelling errors.

 
 
 

performance issues with raid5 and I2

Post by Rizwan Abbas » Fri, 29 Jun 2001 05:40:03


Hmm, a performance problem. Well, RAID 0+1 always gives better performance
than RAID 5, but one moment: can you please check whether the SSA on the F80
is using blue SSA cables or black ones? I suppose it will be blue cables on
the F80 and black on the M80, so please replace the cables on the M80 with
blue ones. There will also be an SDRAM card on the F80's 6230 adapter in
addition to the fast-write cache. Also, with 18.2 GB disks performance will
be slow, so I suggest breaking your loop into two, i.e. a 3-disk and a
4-disk loop, so that you get the performance of two loops instead of one.
You can use port B1/B2 on the 6230 card for that purpose. Try these out and
then let us know the results.

> Hi,

>   We are setting up a M80 with 6GB RAM and 65+ GB hard drive, the
> issue here is initially we need to load history information on the I2
> propietary DB (indexed files)which might grow upto 2.1 GB.

>  A F80 with less memory is kicking a M80's butt, still runs just about
> under 7 hours, so loading and reloading is not the issue neither are
> indexed files size (about 1.5 to 2.1 GB),after 5 hours of guessing I
> finally asked about disk layout and on the M80 they have a raid 5
> array (SSA) and raid0 on the F80, so it is a good reason, since
> processor speed is not that superior on the M80, however the customer
> gives a sh... about it and wants it to run it as fast as the F80, I
> suggested replacing the raid5 for raid0 config and setup a performance
> baseline, they are reluctant about it, since it will be running with
> raid5 in production, also suggested using raid10 or mirroring but
> anyway I would like to explore the posibility to improve raid5 perf.

>  I found a couple of hits suggesting to increase the number of
> queue_depth and max_coalesce, I changed those two values but it had a
> negative impact on the performance

>   any ideas appreciated.

>   the raid5 is built out of 7 18GB SSA disks using the 6230 SSA
> adapter, fast-write cache is enabled but I belive write op's are
> washing that out.

> regards,
> esv.
> ps. an apology for spelling errors.


 
 
 

performance issues with raid5 and I2

Post by John Smit » Fri, 29 Jun 2001 10:19:38


Can you do a single RAID-5 array spanning two loops with that adapter?


> Hum! performance problem. Well RAID 0+1 always give better performance
> them RAID5 but one moment
> Can you pls check on F80 with SSA you are using blue SSA cable or black
> cable? I suppose it will be blue cable on F80 and black on M80s. So pls
> replace the cable with blue ones on M80s. Also there will be SDRAM card on
> F80 6230 adapter in addition to FAST write cache. Also with 18.2GB disks
> performance will be slow I suggest to break your loop in to two i.e. 3+4
> disks loop so that you get the performance out of that two loops instead
> of one. You can use port B1B2 on the 6230 card for that purpose. so try
> out these then let us know the results.


> > Hi,

> >   We are setting up a M80 with 6GB RAM and 65+ GB hard drive, the
> > issue here is initially we need to load history information on the I2
> > propietary DB (indexed files)which might grow upto 2.1 GB.

> >  A F80 with less memory is kicking a M80's butt, still runs just about
> > under 7 hours, so loading and reloading is not the issue neither are
> > indexed files size (about 1.5 to 2.1 GB),after 5 hours of guessing I
> > finally asked about disk layout and on the M80 they have a raid 5
> > array (SSA) and raid0 on the F80, so it is a good reason, since
> > processor speed is not that superior on the M80, however the customer
> > gives a sh... about it and wants it to run it as fast as the F80, I
> > suggested replacing the raid5 for raid0 config and setup a performance
> > baseline, they are reluctant about it, since it will be running with
> > raid5 in production, also suggested using raid10 or mirroring but
> > anyway I would like to explore the posibility to improve raid5 perf.

> >  I found a couple of hits suggesting to increase the number of
> > queue_depth and max_coalesce, I changed those two values but it had a
> > negative impact on the performance

> >   any ideas appreciated.

> >   the raid5 is built out of 7 18GB SSA disks using the 6230 SSA
> > adapter, fast-write cache is enabled but I belive write op's are
> > washing that out.

> > regards,
> > esv.
> > ps. an apology for spelling errors.

 
 
 

performance issues with raid5 and I2

Post by Christer Pal » Fri, 29 Jun 2001 14:30:09



> Can you do a single RAID-5 array spanning two loops with that adapter?

No, and splitting into two smaller RAID-5 arrays hurts the performance
benefit of RAID-5 over RAID-3. I think the fast-write cache will give
you a tremendous performance improvement if you don't already have one.

--
Christer Palm

 
 
 

performance issues with raid5 and I2

Post by esanche » Sat, 30 Jun 2001 07:43:27


SDRAM? Sounds interesting, but to make things worse the F80's Array0 is
running on SCSI disks, not SSA. We changed to an SSA RAID 0 on the M80; it
helped, but it is still running behind the F80's mark.

 We are thinking of splitting the SSA loop and trying RAID 10 next.

 Right now we are fiddling with the "scat_gat_pages" and "dma_mem"
 parameters, and will also test with only 4 SSA disks in the RAID 0 array.

 The fast-write cache memory did not help, since the file is so big: 2.3 GB.

  > Hum! performance problem. Well RAID 0+1 always give better performance
  > them RAID5 but one moment Can you pls check on F80 with SSA you are
  > using blue SSA cable or black cable? I suppose it will be blue cable on
  > F80 and black on M80s. So pls replace the cable with blue ones on M80s.
  > Also there will be SDRAM card on F80 6230 adapter in addition to FAST
  > write cache. Also with 18.2GB disks performance will be slow I suggest
  > to break your loop in to two i.e. 3+4 disks loop so that you get the
  > performance out of that two loops instead of one. You can use port B1B2
  > on the 6230 card for that purpose. so try out these then let us know
  > the results.

--
Enrique Sanchez Vela
Unix Consultant      
52+ (8) 280-3689

Posted via dBforums, http://dbforums.com

 
 
 

performance issues with raid5 and I2

Post by esanche » Sat, 30 Jun 2001 23:17:21


problem fixed.....

  We are not kicking the F80's butt yet, but at least we got closer while
  still using RAID 5. The changes were all made at the same time:

  We are still using RAID 5, but with one big array instead of two in the
  same loop (thanks).

  The PP size of the VG is 1024 MB (hehehe, remember large serial I/O
  operations).
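
  (For anyone reproducing the layout, a rough sketch of how a VG with a
  1024 MB PP size can be created on AIX; the VG/LV/hdisk names and the LV
  size are made up, and a PP size this large needs the big-VG format:)

      # big-format VG with 1024 MB physical partitions (names illustrative)
      mkvg -B -s 1024 -y i2vg hdisk2
      # one large LV for the indexed files: 60 PPs = 60 GB here
      mklv -y i2lv -t jfs i2vg 60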

  Disk parameters for the 14 18.2 GB SSA disks, from
  lsattr -El hdisk2 | grep True:

queue_depth     112       Queue depth                  True
write_queue_mod 3         Write queue depth modifier   True
primary_adapter adapter_a Primary adapter              True
reserve_lock    yes       RESERVE device on open       True
### max_coalesce 0x1e0000 Maximum coalesced operation  True

 The default value for max_coalesce was 0x1a0000; for a 7-disk SSA array
 the default was around 0x20000.

For the SSA adapter we also changed the following:

scat_gat_pages 35        Pages allocated for scatter/gather True
dma_mem        0x4000000 DMA bus memory length              True

The default values are much smaller. Raising them has the side effect of
using more memory, but since we were nowhere near having to use paging
space, we bumped them up.
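
(A sketch of how those adapter attributes can be applied; the adapter name
ssa0 is an assumption, and -P defers the change to the next reboot since
the adapter is usually busy:)

    # current adapter settings (ssa0 assumed)
    lsattr -El ssa0 -a scat_gat_pages -a dma_mem

    # apply our values; -P makes the change take effect at the next boot
    chdev -l ssa0 -a scat_gat_pages=35 -a dma_mem=0x4000000 -P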

regards, esv.

--
Enrique Sanchez Vela
Unix Consultant      
52+ (8) 280-3689

Posted via dBforums, http://dbforums.com

 
 
 

performance issues with raid5 and I2

Post by Rizwan Abbas » Sun, 01 Jul 2001 14:35:40


Well, in this case, what is the output of iostat on the F80 and the M80?
Also, are you using a large-file-enabled filesystem? Please check the
microcode level of the 6230 adapter card; it should be the latest (the
current level of code for this adapter is A400). Also check the device
driver levels of the SSA filesets and download the latest filesets from the
IBM SSA web site, www.hursley.ibm.com. SSA is always much faster than SCSI
because of the arbitration and multipath differences.
(Just for reference, please also check
http://www.storage.ibm.com/hardsoft/products/ssa/docs/index.html)
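
(A quick way to check these, assuming the adapter shows up as ssa0; lscfg
prints the VPD including the ROS/microcode level, and lslpp shows the
installed SSA filesets:)

    # per-disk throughput during the load, sampled every 5 seconds
    iostat 5 3

    # adapter VPD, including microcode (ROS) level
    lscfg -vl ssa0

    # installed SSA device-driver and microcode filesets
    lslpp -l | grep -i ssa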

> SDRAM?? sounds interesting but to make things worse the F80 Array0 is
> running on SCSI disks not SSA, we changed to SSA Raid0 on the M80, it
> helped but it is still running behind F80 mark.

>  We are thinking of spliting the SSA Loop and doing RAID10 next.

>  Right now we are fiddling with the "scat_gat_pages" and "dma_mem"
>  parameters, also will test with only 4 SSA disks on the RAID0 array.

>  Fast Write Cache Memory did not help since the file is so big , 2.3GB.


>   > Hum! performance problem. Well RAID 0+1 always give better performance
>   > them RAID5 but one moment Can you pls check on F80 with SSA you are
>   > using blue SSA cable or black cable? I suppose it will be blue cable on
>   > F80 and black on M80s. So pls replace the cable with blue ones on M80s.
>   > Also there will be SDRAM card on F80 6230 adapter in addition to FAST
>   > write cache. Also with 18.2GB disks performance will be slow I suggest
>   > to break your loop in to two i.e. 3+4 disks loop so that you get the
>   > performance out of that two loops instead of one. You can use port B1B2
>   > on the 6230 card for that purpose. so try out these then let us know
>   > the results.

> --
> Enrique Sanchez Vela

> Unix Consultant

> 52+ (8) 280-3689

> Posted via dBforums, http://dbforums.com

 
 
 
