StorEdge 6320 low throughput. How to tune?

Post by Supak Lailer » Sun, 09 May 2004 02:01:39



Hi,

I'm setting up an SF12K connected to a SE6320 with 2 storage trays. I
configured the arrays into 2 RAID5+1 volumes as follows.

Array 1:
  Volume 1: RAID5 with 9 drives
  Volume 2: RAID5 with 5 drives
Array 2:
  Volume 1: RAID5 with 9 drives
  Volume 2: RAID5 with 5 drives

Volume 1 on array 1 and volume 1 on array 2 are mirrored. Volume 2 on
array 1 and volume 2 on array 2 are mirrored.

The sequential write test (mkfile 2g bigfile) to the volumes yields only
50 MB/s, which is slower than writing to a single disk!

If I run 1 mkfile process writing to volume 1 or volume 2, I get 50 MB/s
throughput (iostat -xnM 1).
If I run 2 mkfile processes, one writing to volume 1 and another writing
to volume 2, I get 25 MB/s throughput for each volume, which totals
50 MB/s.
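
For reference, the test was essentially the following (the /vol1 and /vol2
mount points are just placeholders for the two mirrored volumes):

  # single sequential stream to one volume, watched from another window
  mkfile 2g /vol1/bigfile &
  iostat -xnM 1

  # two concurrent streams, one per mirrored volume
  mkfile 2g /vol1/bigfile &
  mkfile 2g /vol2/bigfile &
  iostat -xnM 1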

What is imposing this limit? Are there any settings I should look at,
either on the SE6320 or in Solaris?

All FC connections are 2Gbps. HBA is Qlogic QLA2300. Switch is Brocade.
MPxIO is configured.

Supak


StorEdge 6320 low throughput. How to tune?

Post by Elia » Sun, 09 May 2004 03:58:46



> Hi,

> I'm setting up an SF12K connected to a SE6320 with 2 storage trays. I
> configured the arrays into 2 RAID5+1 volumes as follows.

> Array 1:
>   Volume 1: RAID5 with 9 drives
>   Volume 2: RAID5 with 5 drives
> Array 2:
>   Volume 1: RAID5 with 9 drives
>   Volume 2: RAID5 with 5 drives

> Volume 1 on array 1 and volume 1 on array 2 are mirrored. Volume 2 on
> array 1 and volume 2 on array 2 are mirrored.

> The sequential write test (mkfile 2g bigfile) to the volumes yields only
> 50MB/sec which is slower than writing to a single disk!

> If I run 1 mkfile process writing to volume 1 or volume 2, I get 50 MB/s
> throughput (iostat -xnM 1)
> If I run 2 mkfile processes one writing to volume1 and another writing
> to volume 2, I get 25MB throughput for each volume which total to
> 50MB/sec.

> What is imposing this limit? Is there any settings that I should look at
> either on SE6320 or in Solaris?

> All FC connections are 2Gbps. HBA is Qlogic QLA2300. Switch is Brocade.
> MPxIO is configured.

> Supak

Are the two bricks in an HA (partner pair) config, or are they separate?

Are the HBAs in 33 MHz or 66 MHz slots, and do they reside in different
hsPCI assemblies?

Are you running MPxIO in round-robin mode?

Elias


StorEdge 6320 low throughput. How to tune?

Post by Supak Lailer » Sun, 09 May 2004 17:19:54


----- Original Message -----

Newsgroups: comp.unix.solaris
Sent: Saturday, May 08, 2004 1:58 AM
Subject: Re: StorEdge 6320 low throughput. How to tune?
[snip]

> Are the two bricks in a HA (partner pair) config or are they separate?

> Are the HBA's in 33MHz or 66MHz slots, and do they reside in different
> hsPCI assemblies?

> Are you running MPxIO in round robin?
Hi,

The bricks are in an HA config, the HBAs are in 66 MHz slots on different
hsPCI assemblies, and MPxIO is in round-robin mode.
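
By round-robin I mean the load-balance policy in /kernel/drv/scsi_vhci.conf,
roughly as below; treat the exact entries as illustrative for this setup:

  # /kernel/drv/scsi_vhci.conf (excerpt)
  mpxio-disable="no";
  load-balance="round-robin";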

What are your thoughts on the 50 MB/s throughput? Does that sound absurdly
low, or about right?

Thanks,
Supak


StorEdge 6320 low throughput. How to tune?

Post by Elia » Mon, 10 May 2004 03:16:51



> ----- Original Message -----

> Newsgroups: comp.unix.solaris
> Sent: Saturday, May 08, 2004 1:58 AM
> Subject: Re: StorEdge 6320 low throughput. How to tune?
> [snip]

>>Are the two bricks in a HA (partner pair) config or are they separate?

>>Are the HBA's in 33MHz or 66MHz slots, and do they reside in different
>>hsPCI assemblies?

>>Are you running MPxIO in round robin?

> Hi,

> The bricks are in HA config, HBA in 66MHz slot on different hsPCI
> assemblies. MPxIO in roundrobin mode.

> What is your thought on the 50MB/sec throughput? Does that sound absurdly
> low or about right?

> Thanks,
> Supak

It does sound low. I once had a problem with a 6120 where it had poor
performance, and I got it working again by unmounting all filesystems
and doing a hardware reset on the array. Not the most elegant solution,
but it worked in that case.

I would start with the Solaris recommended patch cluster and also
install the latest firmware on the storage bricks.
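
On the host side, the installed patches can be listed with showrev; the
array firmware level can be checked from the 6320's management software or
the array CLI. For the host, something like:

  # list installed Solaris patches
  showrev -p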

Once you have everything patched, if the problem still exists (which it
probably will), then call Sun Service. This is an issue that needs to be
examined more closely than can be accomplished through this type of
question-and-answer session.

Elias


StorEdge 6320 low throughput. How to tune?

Post by Fran » Mon, 10 May 2004 18:46:04



> The sequential write test (mkfile 2g bigfile) to the volumes yields only
> 50MB/sec which is slower than writing to a single disk!

> If I run 1 mkfile process writing to volume 1 or volume 2, I get 50 MB/s
> throughput (iostat -xnM 1)
> If I run 2 mkfile processes one writing to volume1 and another writing
> to volume 2, I get 25MB throughput for each volume which total to
> 50MB/sec.

I did the same test on a 3510 with RAID 5 over 9 disks. The disk array is
direct-attached, so no switches are involved. iostat reports the
following throughputs:

3510 write cache enabled and UFS cache enabled: 150 MB/s until the Minnow
cache is saturated, then it stabilizes at 75 MB/s.

3510 write cache enabled and UFS cache disabled (forcedirectio): 5 MB/s

3510 write cache disabled and UFS cache enabled: 75 MB/s

3510 write cache disabled and UFS cache disabled (forcedirectio): 0.5 MB/s
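
The "UFS cache disabled" cases were mounted with the forcedirectio option,
roughly like this (the device and mount point are placeholders):

  # bypass the UFS page cache for this filesystem
  mount -F ufs -o forcedirectio /dev/dsk/c2t0d0s6 /mnt/test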

Greetz,
Frank


StorEdge 6320 low throughput. How to tune?

Post by Sami.Ket.. » Tue, 11 May 2004 14:17:34



>I'm setting up an SF12K connected to a SE6320 with 2 storage trays. I
>configured the arrays into 2 RAID5+1 volumes as follows.

So you have a Starcat; you must then have a support contract with Sun.
What was Sun support's response to your problem?

>If I run 1 mkfile process writing to volume 1 or volume 2, I get 50 MB/s
>throughput (iostat -xnM 1)
>If I run 2 mkfile processes one writing to volume1 and another writing
>to volume 2, I get 25MB throughput for each volume which total to
>50MB/sec.

mkfile is not a benchmark tool (repeat this ten times). mkfile allocates
<size> of disk space and then fills it with 8 KB chunks from end to start
with a single thread. Is your application workload going to be similar?
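
If you want something closer to a multi-threaded sequential workload, a
rough sketch is several concurrent dd streams; the block size, count and
paths below are only illustrative:

  # four concurrent 1 MB-block sequential writers to the same volume
  for i in 1 2 3 4; do
    dd if=/dev/zero of=/vol1/ddfile.$i bs=1024k count=2048 &
  done
  wait    # then check aggregate throughput with iostat -xnM 1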

Anyway, you did not mention the firmware level of the SE6120 units
inside your SE6320 "package", or whether you're using LUN permission
mapping or just "all_lun rw all_wwn".

There are performance-related bugs in some firmware levels when LUN
permission mappings are in place.

--
  Sami


StorEdge 6320 low throughput. How to tune?

Post by Fredrik Lundho » Tue, 11 May 2004 18:28:44




>Hi,

>I'm setting up an SF12K connected to a SE6320 with 2 storage trays. I
>configured the arrays into 2 RAID5+1 volumes as follows.

What kind of disks are in the SE6320?
Which Solaris version do you run on the SF12K? If Solaris 8, then you need
to tune /etc/system for good performance. If Solaris 9, use a release later
than 12/02, use UFS, and use logging.
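
The usual Solaris 8 sequential-I/O tunables are along these lines; the
values are only illustrative and should be checked against the current
tuning recommendations for your firmware and patch level:

  * /etc/system excerpt
  * allow larger physical transfers for sequential I/O
  set maxphys=1048576
  * per-LUN outstanding command limit for fibre-channel (ssd) devices
  set ssd:ssd_max_throttle=32

  # Solaris 9: UFS logging is a mount option
  mount -F ufs -o logging /dev/dsk/c2t0d0s6 /mnt/test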

>Array 1:
>  Volume 1: RAID5 with 9 drives
>  Volume 2: RAID5 with 5 drives

Pretty sane config. I think I would make one large RAID 5 volume with
one hot spare in each tray and carve out two LUNs from that.

How do you mirror? DiskSuite?
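
If it is DiskSuite/SVM, a host-side mirror of the two array LUNs would
typically look roughly like this; the metadevice names and slices are
only placeholders:

  metainit d21 1 1 c2t1d0s0      # submirror on the array 1 LUN
  metainit d22 1 1 c3t1d0s0      # submirror on the array 2 LUN
  metainit d20 -m d21            # create a one-way mirror
  metattach d20 d22              # attach the second submirror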

/wfr
Fredrik

--
Fredrik Lundholm  


StorEdge 6320 low throughput. How to tune?

Post by McBof » Tue, 11 May 2004 18:56:42





>>I'm setting up an SF12K connected to a SE6320 with 2 storage trays. I
>>configured the arrays into 2 RAID5+1 volumes as follows.
> What kind of disks is in the SE6320?

That's not totally relevant, since there is a whopping great cache
and a fast RAID engine between the host and the actual disks.

> Which Solaris version do you run on the SF12k? if Solaris 8 the you need
> to tune /etc/system for good performance. If Solaris 9 usa a later release
> than 12/02, use ufs, use logging
>>Array 1:
>> Volume 1: RAID5 with 9 drives
>> Volume 2: RAID5 with 5 drives
> Pretty sane config, I think I would make one large RAID 5 volume with
> one hot spare in each tray and carve out two LUN:s from that.
> How do you mirror? DiskSuite?

Please, do _not_ make blithe statements about what a particular
person needs to do in order to improve performance when you have
_no_ information about what they are doing, what they want to
achieve, and what they have already tried. You do considerably more
harm than good!

What is more important is for Supak to remember what Sami
said earlier in the thread:

 >mkfile is not a benchmark tool. repeat this 10 times. mkfile
 >allocates <size> amount of disk space and then starts to fill
 >it with 8k chunks from end-to-start with single thread. Is your
 >application workload going to be similar?

UFS logging would be more likely to increase your overhead if
you insist on using mkfile.

Supak, if you want a _real_ benchmark of how the SE6320 performs,
you should engage your local Sun sales force (Sun does have a
customer benchmark centre, you know) and get their input on how
to appropriately benchmark your configuration given your expected
workload.

mcbofh.