Oracle on VMS performance issue

Post by Zeth Johansson » Fri, 25 Sep 1998 04:00:00



Hi there!

We're running Oracle 7.3.2 on OpenVMS 7.1 (Alpha Server). The database
resides on a stripe-set. We suffer from performance problems related to
disk I/O. How does stripe size affect performance? I believe 64k stripes
are common with VMS. What would be the recommended stripe size, with
Oracle using 4k block-size? Does anyone have experience with this
configuration?

Thank you for your help!

Zeth Johansson

 
 
 

Oracle on VMS performance issue

Post by Dan O'Reilly » Fri, 25 Sep 1998 04:00:00



Quote:>Hi there!

>We're running Oracle 7.3.2 on OpenVMS 7.1 (Alpha Server). The database
>resides on a stripe-set. We suffer from performance problems related to
>disk I/O. How does stripe size affect performance? I believe 64k stripes
>are common with VMS. What would be the recommended stripe size, with
>Oracle using 4k block-size? Does anyone have experience with this
>configuration?

Welcome to "RAID 101".  ;-)

The word you're looking for is "chunksize", not "stripesize".  Here is a
bit from a whitepaper on RAID that should be useful to you.

---------------------------

Crucial to choosing a chunksize is understanding how the chunk size
compares with the size of a typical I/O request (this could be a single
record from a user program, a cluster of database pages, activity to a
system page file, etc.), because that relationship drives performance:

·       A chunksize which is smaller than the average I/O request size to a
volume provides higher data transfer rates (throughput).  A single I/O
request is broken up by the RAID controller into multiple I/O requests,
each of which goes to a different disk within the array, and all of which
are serviced simultaneously.  The time required to break the I/O up into
multiple requests is insignificant; therefore, more data can be
transferred through the array in a given time slice, since more physical
disks are involved in the transfer.  This is usually accomplished at the
expense of the overall I/O rate.

·       A chunksize which is larger than the average I/O request size
provides high I/O rates (bandwidth).  In this case, exactly one physical
drive is involved in servicing the I/O request.  The other drives in the
array are thus free to participate in servicing other I/Os.  Since each
physical drive can handle exactly one data transfer at a time, and more
disks in the array can be servicing more I/Os simultaneously, the overall
I/O rate for the array is high, but the overall throughput can be lower.

Choosing the wrong chunksize can (and typically does) have major
performance implications for a RAID array, depending on whether throughput
or bandwidth is desired.  A rule of thumb for providing a chunksize which
will satisfy the vast majority (over 95%) of all I/O requests (increasing
bandwidth) is to take into account the locality of reference (the
tendency of successive I/Os to access successive disk sectors or chunks)
of the I/Os, using the following:

Where there is high locality of reference - 10x the average I/O size

Where there is low locality of reference - 20x the average I/O size

Where the locality of reference is not known - 15x the average I/O size

It should be mentioned that a common misconception is that splitting up
an I/O request into multiple components for parallel service by the disk
array is always fastest.  However, the overhead required for generating
and servicing the SCSI commands to each member disk of an array can be
non-trivial; in the case of small I/O requests, the time required to
generate SCSI commands to several disks (in addition to the multiple
data transfers) may be greater than that required to service the request
from a single disk.  The break-even point for chunksize is usually in
the 4 KB-8 KB range.
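The rule of thumb above can be applied to the 4 KB Oracle block size from
the original question with a bit of SQL*Plus arithmetic (a sketch only; it
assumes each Oracle I/O moves a single 4 KB block, and uses the multipliers
quoted in the excerpt):

```sql
-- Rule-of-thumb chunksizes for a 4 KB average I/O, per the multipliers above
SELECT 4*10 AS high_locality_kb,     -- 40 KB
       4*15 AS unknown_locality_kb,  -- 60 KB: close to the common 64 KB chunk
       4*20 AS low_locality_kb       -- 80 KB
  FROM dual;
```

So the common 64 KB setting lands near the "locality unknown" case, which
is one reason it makes a reasonable default.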

--
Dan O'Reilly
MCI Telecommunications
Systems Engineering/BT NIP
MS 1183/117
2424 Garden of the Gods Rd
Colorado Springs, CO  80919
719-535-1418


 
 
 

Oracle on VMS performance issue

Post by William L. Gord » Fri, 25 Sep 1998 04:00:00



>Welcome to "RAID 101".  ;-)

>The word you're looking for is "chunksize", not "stripesize".  This is a
>bit from a whitepaper on RAID, that should be useful to you.

Is this whitepaper available anywhere on the 'net?  It looks interesting...

Bill
--
My real e-mail address is wlg AT infinet | Sorry for the trouble, but I HATE
dot com                                  | junk e-mail!
                                         | I do NOT expect e-mail replies,
                                         | follow-ups are just fine, thanks!

 
 
 

Oracle on VMS performance issue

Post by Dan O'Reilly » Fri, 25 Sep 1998 04:00:00




>>Welcome to "RAID 101".  ;-)

>>The word you're looking for is "chunksize", not "stripesize".  This is a
>>bit from a whitepaper on RAID, that should be useful to you.

>Is this whitepaper available anywhere on the 'net?  It looks interesting...

Er, actually, no, it's not.  I wrote it.  I would have to check with my
employer to see about releasing it.

--
Dan O'Reilly
MCI Telecommunications
Systems Engineering/BT NIP
MS 1183/117
2424 Garden of the Gods Rd
Colorado Springs, CO  80919
719-535-1418

 
 
 

Oracle on VMS performance issue

Post by Phil Coo » Sun, 27 Sep 1998 04:00:00


You don't want an Oracle block to span multiple stripes.  Your real problem
is how much I/O your applications are performing.  I would start by looking
at v$sqlarea and the number of I/Os.  Run explain plans on the top
statements, and see if these can be tuned.

If you really want to brute-force the application with hardware, add more
memory cache at the disk or OS level.  Adding more buffer cache to
the SGA may help; look at your buffer hit ratio first.
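A first pass at finding those top statements might look like the sketch
below (DISK_READS, EXECUTIONS and SQL_TEXT are columns of the V$SQLAREA
view; the 1000-read cutoff is an arbitrary assumption to adjust for your
system):

```sql
-- Statements doing the most physical I/O: candidates for EXPLAIN PLAN
SELECT disk_reads, executions, sql_text
  FROM v$sqlarea
 WHERE disk_reads > 1000    -- arbitrary threshold; tune to taste
 ORDER BY disk_reads DESC;
```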


Quote:> Hi there!

> We're running Oracle 7.3.2 on OpenVMS 7.1 (Alpha Server). The database
> resides on a stripe-set. We suffer from performance problems related to
> disk I/O. How does stripe size affect performance? I believe 64k stripes
> are common with VMS. What would be the recommended stripe size, with
> Oracle using 4k block-size? Does anyone have experience with this
> configuration?

> Thank you for your help!

> Zeth Johansson

 
 
 

Oracle on VMS performance issue

Post by Hein RMS van den Heuvel » Wed, 30 Sep 1998 04:00:00



>Hi there!

>We're running Oracle 7.3.2 on OpenVMS 7.1 (Alpha Server). The database
>resides on a stripe-set. We suffer from performance problems related to
>disk I/O. How does stripe size affect performance?

Minimally so, unless you go for the extremes.
Hardware or software striping? Large or small db_block_size?

You want to use UTLBSTAT/UTLESTAT or similar over a representative
test period to find out whether you are doing a lot of sequential reads
or not (compare blocks read with reads!).
If there are lots of sequential reads, then maybe raise the read-ahead
multiblock count and use a smaller stripe size to ATTEMPT to get
parallel I/O in.

If mostly random I/O, then just use a largish stripe ( > 10 * block_size )
to avoid boundary fragmentation.
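The blocks-read-versus-reads comparison above can also be pulled straight
from V$FILESTAT (a sketch; PHYRDS and PHYBLKRD are the physical-reads and
physical-blocks-read counters in that view):

```sql
-- Average blocks per read, per datafile: near 1 means random single-block
-- reads; well above 1 means multiblock (sequential) scans dominate.
SELECT f.name, s.phyrds, s.phyblkrd,
       ROUND(s.phyblkrd / s.phyrds, 1) AS blocks_per_read
  FROM v$filestat s, v$datafile f
 WHERE s.file# = f.file#
   AND s.phyrds > 0;
```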

Quote:> I believe 64k stripes are common with VMS.


Quote:>What would be the recommended stripe size, with
>Oracle using 4k block-size?

64 KB is probably fine (128 disk blocks).

Quote:> Does anyone have experience with this configuration?

Whether you can really exploit striping depends on the exact controllers
and disks being used, and on the excess CPU power and spare I/O bandwidth.

hth,
        Hein.

 
 
 

Oracle on VMS performance issue

Post by Y&A Shova » Wed, 30 Sep 1998 04:00:00


Just remember NOT to put your redo logs and system tablespace
on the striped disk.  Put them only on regular ones.

Asaf.
============================================


>You don't want an Oracle block to span multiple stripes.  Your real problem
>is how much IO your applications are performing.  I would start looking
>at v$sqlarea and the number of IOs.  Run explain plans on the top
>statements, and see if these can be tuned.

>If you really want to brute force the application with hardware add more
>memory cache at the disk or OS level.  Adding more buffer cache to
>the sga may help, look at your buffer hit ratio.




Quote:>> Hi there!

>> We're running Oracle 7.3.2 on OpenVMS 7.1 (Alpha Server). The database
>> resides on a stripe-set. We suffer from performance problems related to
>> disk I/O. How does stripe size affect performance? I believe 64k stripes
>> are common with VMS. What would be the recommended stripe size, with
>> Oracle using 4k block-size? Does anyone have experience with this
>> configuration?

>> Thank you for your help!

>> Zeth Johansson

 
 
 

Oracle on VMS performance issue

Post by Bob Kaplow » Thu, 01 Oct 1998 04:00:00



> You don't want an Oracle block to span multiple stripes.  Your real problem
> is how much IO your applications are performing.  I would start looking
> at v$sqlarea and the number of IOs.  Run explain plans on the top
> statements, and see if these can be tuned.

> If you really want to brute force the application with hardware add more
> memory cache at the disk or OS level.  Adding more buffer cache to
> the sga may help, look at your buffer hit ratio.

More hardware cache probably won't help. You're describing three levels of
cache: the application (Oracle), the host system (VIOC?), and the hardware
(HS controller?). For the most part, unless the outer cache layers are
larger than the inner layers, the contents of the outer cache will do
nothing more than duplicate the inner-layer cache. But the outer cache will
never be "hit", because the request will be satisfied by the inner layer.
It's mostly a waste of $$$ and can actually slow performance.

One feature we're waiting for is supposed to be in the hs*80 controllers:
read-ahead cache. For read transactions with locality, and especially for
sequential reads, this should help performance.

        Bob Kaplow      


 
 
 

Oracle on VMS performance issue

Post by Derek fag » Thu, 01 Oct 1998 04:00:00


I'm not an Oracle expert, but would writeback caching at the
controller level on the transaction-log device not help?

Derek...

> -----Original Message-----

> Sent:      30 September 1998 15:50

> Subject:   Re: Oracle on VMS performance issue



> > You don't want an Oracle block to span multiple stripes.  Your real
> problem
> > is how much IO your applications are performing.  I would start
> looking
> > at v$sqlarea and the number of IOs.  Run explain plans on the top
> > statements, and see if these can be tuned.

> > If you really want to brute force the application with hardware add
> more
> > memory cache at the disk or OS level.  Adding more buffer cache to
> > the sga may help, look at your buffer hit ratio.

> More hardware cache probably won't help. You're describing 3 levels of
> cache: the application (Oracle), the host system (VIOC?), and the
> hardware
> (HS controller?). For the most part, unless outer cache layers are
> larger
> than the inner layers, the contents of the outer cache will do nothing
> more
> than duplicate the inner layer cache. But the outer cache will never
> be
> "hit" because the request will be satisfied by the inner layer. It's
> mostly
> a waste of $$$ and can actually slow performance.

> One feature we're waiting for is supposed to be in the hs*80
> controllers:
> read ahead cache. For read transactions with locality, and especially
> for
> sequential reads, this should help performance.

>    Bob Kaplow      


> MARS!

 
 
 

Oracle on VMS performance issue

Post by Dan Sugalski » Thu, 01 Oct 1998 04:00:00



Quote:>Not being an Oracle expert, but would writeback caching at the
>controller level on the transaction log device not help ?

Writeback caching does help for writes that don't overflow the controller's
cache.  (Well, it helps for those too, but not nearly as much.)  If your
average write rate is lower than the rate at which the controller can empty
its cache, you can completely avoid waiting on writes, which can help out
a bunch.

                                        Dan

---------------------------------------------"it's like this"--------------
Dan Sugalski   (541) 737-3346                even samurai
SysAdmin                                     have teddy bears
Oregon University System                     and even the teddy bears

 
 
 

Oracle on VMS performance issue

Post by Hein RMS van den Heuvel » Thu, 01 Oct 1998 04:00:00



>Not being an Oracle expert, but would writeback caching at the
>controller level on the transaction log device not help ?

As usual, it depends...

Writeback cache on the log file can help individual transaction
response times, but will increase system resource consumption
and decrease throughput.  If CPU power is at a premium, then
it may actually hurt both response time and throughput (at levels of
hundreds, if not thousands, of transactions per second).

This is because Oracle will buffer up fresh commits while the
previous I/O is still going.  So if you need 500 commits in a second,
and your disk responds in 20 ms normally and 2 ms with writeback
caching on, then in the normal case you'll do 50 I/Os and with WB caching
you'll do 500 I/Os.  All transactions are still committed in that second,
but you 'burned' 10 times more I/Os.  Do you have the CPU horses
to spare for that?
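Hein's arithmetic here can be checked with a line of SQL*Plus (the commit
rate and latencies are the hypothetical numbers from the paragraph above;
the model assumes each log write batches every commit that arrived while
the previous write was in flight):

```sql
-- Log I/Os per second = min(commit rate, 1000 ms / write latency)
SELECT LEAST(500, TRUNC(1000/20)) AS ios_per_sec_normal,  -- 50 batched writes
       LEAST(500, TRUNC(1000/2))  AS ios_per_sec_wb       -- 500 writes
  FROM dual;
```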

hth,
        Hein.

 
 
 
