FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by Dirk Moolman » Fri, 31 Aug 2001 21:44:52



One thing - you will need more disk space for the upgrade

We also went from a K250 to an L2000, but from Online5 to IDS2000 (64bit).
We unloaded / loaded and had no data problems.
We just have performance problems - big time. I've been looking at the
system for weeks / months now (with the help of Tech Support), and I don't
know what to do anymore. So ours is not one of the success stories.

-----Original Message-----


Sent: Wednesday, August 29, 2001 10:02 PM

Subject: Re: Migrating from Dynamic Server 7.2 to IDS 9 (2000)


Here's my two cents (US) worth . . .
>We are currently upgrading our hardware from an HP9000 K250 to an L2000, but
>more importantly we are going from Dynamic Server 7.22 to IDS 9 (2000).

Congrats!  We went from a K200 to an L2000 . . . . until they tried to take
it away from me and my users.   But that's a whole
different story.

>I've been informed that the ontape backup utility is not backward
>compatible, although I don't know if this is true.

That is true.  You'd be looking at an in-place upgrade or export / import.

>There is a database-level backup utility, but will this ensure my dbspace /
>tablespace allocations are preserved? We have spent quite a bit of time
>ensuring that the balance of disk utilisation is even.

>If anybody has achieved this migration painlessly I would appreciate any
>pointers,

Not painlessly, but it wasn't a nightmare either.  I ended up running an
HPL export / import after my upgrade.  Some issue with
referential integrity going dumb on me; I could delete a master record
without Informix yelling at me.

Oops, forgot another thing.  We were using KAIO on our production and test
systems.  We weren't able to enable KAIO on test when we
upgraded, but I didn't think anything of it until we went to production.
KAIO had been working on production, so I installed
the engine and brought it up.  All went well until Informix AF'ed on me in
the middle of the conversion.  Part of the instance (the rootdbs) had
been migrated, while the databases hadn't been migrated yet.
I reinstalled 7.30, restored tapes, and restarted the
procedure.

DBA NOTE:  Take a few level 0 backups before the upgrade process . . . . .
you might need them.

JWC

==========================================================
This message contains information intended for the perusal, and/or use (if
so stated), by the stated addressee(s) only. The information is
confidential and privileged. If you are not an intended recipient, do not
peruse, use, disseminate, distribute, copy or in any manner rely upon the
information contained in this message (directly or indirectly). The sender
and/or the entity represented by the sender shall not be held accountable
in the event that this prohibition is disregarded. If you receive this
message in error, notify the sender immediately by e-mail, fax or telephone.
Views or representations contained in this message, whether express or implied, are
those of the sender only, unless that sender expressly states them to be
the views or representations of an entity or person, who shall be named by
the sender and who the sender shall state to represent. No liability shall
otherwise attach to any other entity or person.
==========================================================


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by DJW » Fri, 14 Sep 2001 07:23:01



>One thing - you will need more disk space for the upgrade

>We also went from a K250 to an L2000, but from Online5 to IDS2000 (64bit).
>We unloaded / loaded and had no data problems.
>We just have performance problems - big time. I've been looking at the
>system for weeks / months now (with the help of Tech Support), and I don't
>know what to do anymore. So ours is not one of the success stories.

What does

onstat -p
onstat -D
onstat -c
onstat -g ioq

give?


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by Dirk Moolman » Fri, 14 Sep 2001 17:52:28


A lot of information, but here goes ...

Some comments before the onstat info:
1. We are not using NOAGE - the engine keeps aborting when we do.
2. We are not using RESIDENCY - it slows the system down for some reason.
3. We are not using KAIO - it also slows the system down.

All of the above were tried with the help of Tech Support.

**************************************************************************************
onstat -p
Profile
dskreads pagreads bufreads %cached dskwrits pagwrits bufwrits %cached
389354892 65336210 14193111210 97.26   23259841 25583827 268719155 91.34

isamtot  open     start    read     write    rewrite  delete   commit   rollbk
11948167084 124600934 324408344 10339002630 83545704 42555536 1626765 3868055 4697

gp_read  gp_write gp_rewrt gp_del   gp_alloc gp_free  gp_curs
0        0        0        0        0        0        0

ovlock   ovuserthread ovbuff   usercpu  syscpu   numckpts flushes
0        0            0        441036.75 150612.93 1262     2536

bufwaits lokwaits lockreqs deadlks  dltouts  ckpwaits compress seqscans
13115029 657      14618933717 3        0        8197     4525346  2791475

ixda-RA  idx-RA   da-RA    RA-pgsused lchwaits
37136504 1861327  234036811 271742203  41904784

**************************************************************************************
onstat -D
Chunks
address          chk/dbs offset   page Rd  page Wr  pathname
c00000005e4bf028 1   1   0        479101   1435455  /dev/informix/opsroot01
c00000006076a1b8 2   2   0        1426356  1131237  /dev/informix/opslogical01
c00000006076a338 3   3   0        230020   1399101  /dev/informix/opstemp01
c00000006076a4b8 4   4   0        206022   742549   /dev/informix/opsrtt01
c00000006076a638 5   4   0        2041652  698637   /dev/informix/opsrtt02
c00000006076a7b8 6   4   0        280509   1039967  /dev/informix/opsrtt03
c00000006076a938 7   4   0        2075392  1441873  /dev/informix/opsrtt04
c00000006076aab8 8   4   0        1531402  997163   /dev/informix/opsrtt05
c00000006076ac38 9   4   0        1420184  351938   /dev/informix/opsrtt06
c00000006076adb8 10  4   0        546668   504990   /dev/informix/opsrtt07
c00000006079b028 11  5   0        673073   444157   /dev/informix/opsmach401
c00000006079b1a8 12  5   0        11       1        /dev/informix/opsmach402
c00000006079b328 13  6   0        698023   808743   /dev/informix/opsdebt01
c00000006079b4a8 14  6   0        455052   174269   /dev/informix/opsdebt02
c00000006079b628 15  6   0        2020490  841888   /dev/informix/opsdebt03
c00000006079b7a8 16  7   0        45307    4591     /dev/informix/opssmall01
c00000006079b928 17  8   0        662248   1630849  /dev/informix/opstemp02
c00000006079baa8 18  9   0        50940    1089963  /dev/informix/opstemp03
c00000006079bc28 19  25  0        815527   1596055  /dev/informix/opstempdbs
c00000006079bda8 20  11  0        1062551  14720    /dev/informix/opsrtt201
c00000006079c028 21  11  0        880471   12940    /dev/informix/opsrtt202
c00000006079c1a8 22  11  0        175411   98043    /dev/informix/opsrtt203
c00000006079c328 23  11  0        52413    69777    /dev/informix/opsrtt204
c00000006079c4a8 24  12  0        1976169  4562     /dev/informix/opsrtt301
c00000006079c628 25  12  0        342153   463      /dev/informix/opsrtt302
c00000006079c7a8 26  12  0        846179   103441   /dev/informix/opsrtt303
c00000006079c928 27  13  0        1895301  123211   /dev/informix/opsrtt401
c00000006079caa8 28  13  0        240674   174378   /dev/informix/opsrtt402
c00000006079cc28 29  14  0        1528429  157953   /dev/informix/opsrtt501
c00000006079cda8 30  14  0        844269   40890    /dev/informix/opsrtt502
c00000006079d028 31  15  0        871857   37028    /dev/informix/opsrtt601
c00000006079d1a8 32  15  0        1964984  465      /dev/informix/opsrtt602
c00000006079d328 33  15  0        1767586  2975     /dev/informix/opsrtt603
c00000006079d4a8 34  15  0        18876    188253   /dev/informix/opsrtt604
c00000006079d628 35  16  0        1711690  8105     /dev/informix/opsrtt701
c00000006079d7a8 36  16  0        1986471  115252   /dev/informix/opsrtt702
c00000006079d928 37  12  0        161959   5229     /dev/informix/opsrtt304
c00000006079daa8 38  17  0        2003187  4709     /dev/informix/rtthistdbs101
c00000006079dc28 39  17  0        1351895  118      /dev/informix/rtthistdbs102
c00000006079dda8 40  18  0        946813   1024241  /dev/informix/opsrtt801
c00000006079e028 41  18  0        715542   1002760  /dev/informix/opsrtt802
c00000006079e1a8 42  18  0        873452   690751   /dev/informix/opsrtt803
c00000006079e328 43  18  0        598906   136939   /dev/informix/opsrtt804
c00000006079e4a8 44  19  0        2036852  3622     /dev/informix/rtthistdbs201
c00000006079e628 45  19  0        1553076  0        /dev/informix/rtthistdbs202
c00000006079e7a8 46  17  0        10       0        /dev/informix/rtthistdbs103
c00000006079e928 47  20  0        925508   1023570  /dev/informix/debthistdbs101
c00000006079eaa8 48  20  0        772467   154161   /dev/informix/debthistdbs102
c00000006079ec28 49  5   0        11       1        /dev/informix/opsmach403
c00000006079eda8 50  21  0        1466458  4306     /dev/informix/rtthistdbs301
c00000006079f028 51  21  0        486855   12       /dev/informix/rtthistdbs302
c00000006079f1a8 52  13  0        1392900  456269   /dev/informix/opsrtt406
c00000006079f328 53  13  0        1638975  212998   /dev/informix/opsrtt405
c00000006079f4a8 54  19  0        10       0        /dev/informix/rtthistdbs203
c00000006079f628 55  13  0        1397475  11506    /dev/informix/opsrtt403
c00000006079f7a8 56  16  0        755172   181653   /dev/informix/opsrtt703
c00000006079f928 57  6   0        752858   68423    /dev/informix/opsdebt04
c00000006079faa8 58  14  0        123059   325937   /dev/informix/opsrtt503
c00000006079fc28 59  4   0        11       1        /dev/informix/opsrtt10
c00000006079fda8 60  15  0        521031   169265   /dev/informix/opsrtt609
c0000000607a0028 61  22  0        1510612  23705    /dev/informix/delwaybill101
c0000000607a01a8 62  15  0        10       0        /dev/informix/opsrtt611
c0000000607a0328 63  15  0        10       0        /dev/informix/opsrtt612
c0000000607a04a8 64  23  0        1323224  24256    /dev/informix/delwaybill201
c0000000607a0628 65  13  0        1576481  163881   /dev/informix/opsrtt404
c0000000607a07a8 66  4   0        1632792  633374   /dev/informix/opsrtt08
c0000000607a0928 67  6   0        165678   27018    /dev/informix/opsdebt05
c0000000607a0aa8 68  4   0        1049850  424840   /dev/informix/opsrtt09
c0000000607a0c28 69  10  0        2072164  977635   /dev/informix/opstemp04
c0000000607a0da8 70  24  0        43655    1490     /dev/informix/debtarch01
c0000000607a5028 71  26  0        387361   24308    /dev/informix/delwaybill301
c0000000607a51a8 72  27  0        579221   143088   /dev/informix/delwaybill401
c0000000607a5328 73  27  0        712849   188079   /dev/informix/delwaybill402
 73 active, 2047 maximum

**************************************************************************************
onstat -c

(raw disk, no informix mirroring, using raid01 - mirroring & striping)

# Root Dbspace Configuration
ROOTSIZE        2048000         # Size of root dbspace (Kbytes)

# Physical Log Configuration

PHYSDBS         rootdbs         # Location (dbspace) of physical log
PHYSFILE        80000           # Physical log file size (Kbytes)

# Logical Log Configuration

LOGFILES        300             # Number of logical log files
LOGSIZE         5000            # Logical log size (Kbytes)

# Diagnostics

TBLSPACE_STATS  1               # Maintain tblspace statistics

# System Configuration

SERVERNUM       1               # Unique id corresponding to a OnLine instance
DBSERVERNAME    ops             # Name of default database server
DBSERVERALIASES ops_soc,ops_shm # List of alternate dbservernames

#changed 03/08/2001 - Dirk - on request of Tech Support - case# 265 221
#NETTYPE         ipcstr,4,300,CPU
#NETTYPE         ipcshm,1,100,NET

NETTYPE         ipcstr,4,600,NET
NETTYPE         soctcp,1,100,NET
NETTYPE         ipcshm,1,100,CPU

DEADLOCK_TIMEOUT 60              # Max time to wait of lock in distributed env.
RESIDENT        0               # Forced residency flag (Yes = 1, No = 0)

MULTIPROCESSOR  1               # 0 for single-processor, 1 for multi-processor
#NUMCPUVPS       4               # Number of user (cpu) vps
SINGLE_CPU_VP   0               # If non-zero, limit number of cpu vps to one
VPCLASS         cpu,num=4,aff=0-3

#Changed on 15/09/2000 as per Darren - Tech Support
#NOAGE           1               # Process aging
#disabled again 04/06/2001 - Dirk - engine kept on aborting
#NOAGE           0               # Process aging
#commented out - using VPCLASS instead of NUMCPUVPS
#NOAGE           0               # Process aging

#commented out - using VPCLASS instead of NUMCPUVPS
#AFF_SPROC       1               # Affinity start processor - changed dorian
#AFF_SPROC       0               # Affinity start processor
#AFF_NPROCS      4               # Affinity number of processors

# Shared Memory Parameters

LOCKS           1000000         # Maximum number of locks
#BUFFERS         400000          # Maximum number of shared buffers
# Set to 450000 on 1 Dec 2000 by DWF
#BUFFERS         450000          # Maximum number of shared buffers
#increased on 06/01/2001
#BUFFERS         550000          # Maximum number of shared buffers
#increased on 30/08/2001
BUFFERS         650000          # Maximum number of shared buffers

#changed 02/02/2001,  onstat -g iov showed io/wup > 1
#NUMAIOVPS       50              # Number of IO vps
#changed 06/02/2001,  onstat -g iov showed io/wup > 1
#NUMAIOVPS       70              # Number of IO vps
#changed 08/06 - Dirk
NUMAIOVPS       96              # Number of IO vps

PHYSBUFF        64              # Physical log buffer size (Kbytes)
LOGBUFF         64              # Logical log buffer size (Kbytes)

#Changed 15/08 - database was doing foreground ...


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by Jack Parker » Sat, 15 Sep 2001 00:55:11


You don't indicate how long this box has been up.  How long did it take it
to get to 12 billion ISAM calls?  That would be kind of interesting
to know.  1200 checkpoints - so 1000 hours?  Or are you getting
checkpoints more frequently than CKPTINTVL - my guess is yes, so maybe 200-300
hours?

You don't include an onstat -d - but from the -D I can see some layout
issues.  Your dbspaces are deep - probably designed around tables?  They
are not balanced to the number of CPUs.  A table fragmented over eight or
twelve dbspaces might perform better than one built on a single seven-chunk
dbspace.  I'm not really sure what your schema or application is - so
YMMV.  However, you can see that the load on the disks is not even.  Your
disks are getting hammered.  Your number of temp spaces should be a multiple
of the number of CPUs.  I would pull the whole thing apart and rebuild it
just to get the disks to group on the onstat -d output.

Nice write cache hit rate.  Read cache hit - ok.
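[For reference, those %cached figures are derived from the profile counters: read cache = 100 * (bufreads - dskreads) / bufreads, and likewise for writes. Assuming the sysmaster interface on IDS, something like the query below should show the same raw counters - the counter names here are from memory, so verify them against your version:]

```sql
-- Raw buffer-cache counters behind the onstat -p %cached figures.
-- Read  %cached = 100 * (bufreads - dskreads) / bufreads
-- Write %cached = 100 * (bufwrites - dskwrites) / bufwrites
SELECT name, value
FROM sysmaster:sysprofile
WHERE name IN ('dskreads', 'bufreads', 'dskwrites', 'bufwrites');
```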

4.5 million compresses?  Someone is doing a lot of deletes to cause this.
Find them, blame them, make them pay.  Look into why they are deleting -
perhaps truncates, or table rewrite would be more effective.

2.7 million seq scans?  Do you have a lot of small tables without indices?
Or without stats updated?  If sequential scans are desirable then you might
want to look into light scans and bumping up your DS parameters to support
those.

While we're there, your RA_PAGES and RA_THRESHOLD are way out of whack.  They
should be closer to one another.  With a setting of 128, you are doing a lot
of reading ahead - which goes into your buffers and implies management of
same.  This may be desirable, in which case your threshold should increase
some to meet it - 120, for example.  If you want light scans, then 128/120 is
an ideal setting.  However, if you are doing OLTP work - which is what the rest of
this engine looks like - then you want to decrease these, dependent on the
size of your tables.  Again, I don't know your application or how the data
is being read/used.  You may have a buffer management issue here: 13 million
buffer waits, 42 million latch waits - depending on how long the engine has
been up.  You may have a problem where you are flooding your buffers
with read-ahead for a random-access set of applications.

Impressively low lockwaits - especially considering 14 billion lock
requests.  Why so many?  Is there really a need for all of those locks, or
has some developer gone amok and decided to always lock everything
(s)he reads - even when running a report?  Find them....

Are you really supporting 2600 connections?

First thing I would do is decrease the read ahead.
Next thing I would do is plan a weekend of pulling apart the dbspaces and
rebuilding them.

Then I would re-visit and see what effect that had.

Of course that's my .02 worth.  I'm sure others will gently correct me or
provide other insight.

cheers
j.

----- Original Message -----
From: "Dirk Moolman" <di...@reach.co.za>
To: "DJW" <d...@smooth1.fsnet.co.uk>

Cc: "Informix List" <informix-l...@iiug.org>
Sent: Thursday, September 13, 2001 4:52 AM
Subject: RE: Migrating from Dynamic Server 7.2 to IDS 9 (2000)


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by Dirk Moolman » Sat, 15 Sep 2001 18:24:33


-----Original Message-----
From: owner-informix-l...@iiug.org

[mailto:owner-informix-l...@iiug.org]On Behalf Of Jack Parker
Sent: Thursday, September 13, 2001 5:55 PM
To: Dirk Moolman; DJW
Cc: Informix List
Subject: Re: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

>You don't indicate how long this box has been up.  How long did it take it
>to get to 12 billion ISAM calls?  That would be kind of interesting
>to know.  1200 checkpoints - so 1000 hours?  Or are you getting
>checkpoints more frequently than CKPTINTVL - my guess is yes, so maybe 200-300
>hours?

Yes, sorry - the system had been up for approximately 4 days by the time I sent
the mail.

>You don't include an onstat -d - but from the -D I can see some layout
>issues.  Your dbspaces are deep - probably designed around tables?  They
>are not balanced to the number of CPUs.  A table fragmented over eight or
>twelve dbspaces might perform better than one built on a single seven-chunk
>dbspace.  I'm not really sure what your schema or application is - so
>YMMV.  However, you can see that the load on the disks is not even.  Your
>disks are getting hammered.  Your number of temp spaces should be a multiple
>of the number of CPUs.  I would pull the whole thing apart and rebuild it
>just to get the disks to group on the onstat -d output.

I am starting to see disk bottlenecks, yes. The previous weekend I
unloaded and re-fragmented one of our bigger tables that gets accessed
a lot. I fragmented it by round robin, and the disks seem much better this week.
(This specific table gets a lot of reads and not so many writes.)
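[For anyone following along, the round-robin fragmentation can be done with ALTER FRAGMENT. A minimal sketch - the table name "rttscan" is hypothetical, and the dbspace names are examples taken from the onstat -D listing above:]

```sql
-- Redistribute an existing table round-robin across four dbspaces so
-- reads are spread over more spindles. Rebuilds the table, so plan an
-- outage window and check free space first.
ALTER FRAGMENT ON TABLE rttscan INIT
    FRAGMENT BY ROUND ROBIN IN opsrtt201, opsrtt202, opsrtt203, opsrtt204;
```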

Could you perhaps elaborate a bit more on the issue of balancing dbspaces to
the number of CPUs? Are you just referring to fragmentation of tables so
that multithreading can take place?

>Nice write cache hit rate.  Read cache hit - ok.

>4.5 million compresses?  Someone is doing a lot of deletes to cause this.
>Find them, blame them, make them pay.  Look into why they are deleting -
>perhaps truncates, or table rewrite would be more effective.

In our environment we do a lot of deletes, yes. Our data gets old quickly
and then gets moved to "history" tables. The data is still online, and
reports / queries still run against the older data, especially the
very recent records.
By the way, what exactly are compresses - are these index rebuilds?

>2.7 million seq scans?  Do you have a lot of small tables without indices?
>Or without stats updated?  If sequential scans are desirable then you might
>want to look into light scans and bumping up your DS parameters to support
>those.

Yes, we have some small tables and do a lot of sequential scans.
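[A quick way to spot candidate tables for those scans - a sketch against the system catalogs, assuming the sysindexes view available in 7.x/9.x:]

```sql
-- User tables (tabid > 99) with no index at all - likely sources of
-- sequential scans on anything but the smallest tables.
SELECT t.tabname
FROM systables t
WHERE t.tabid > 99
  AND t.tabtype = 'T'
  AND NOT EXISTS (SELECT 1 FROM sysindexes i WHERE i.tabid = t.tabid);
```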

Update stats runs every night and is up to date. I double-checked this
recently by running this query:

select tabname, max(constructed)
from systables, sysdistrib
where systables.tabid = sysdistrib.tabid
group by 1
order by 2, 1

I have a problem with the DS (PDQ) parameters. When I give a value to
MAX_PDQPRIORITY, all the users start using PDQ, even without PDQPRIORITY
being set in their environments. I had to disable MAX_PDQPRIORITY to get
around this problem.
I did the same for DS_TOTAL_MEMORY - I gave it the lowest value I could.
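[Sessions can also manage this themselves; a sketch of the session-level override - whatever a session asks for is still capped by MAX_PDQPRIORITY in the ONCONFIG:]

```sql
-- Turn PDQ off for the current session only:
SET PDQPRIORITY 0;

-- Or request a share of the DS resources (capped by MAX_PDQPRIORITY):
SET PDQPRIORITY 50;
```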

>While we're there, your RA_PAGES and RA_THRESHOLD are way out of whack.  They
>should be closer to one another.  With a setting of 128, you are doing a lot
>of reading ahead - which goes into your buffers and implies management of
>same.  This may be desirable, in which case your threshold should increase
>some to meet it - 120, for example.  If you want light scans, then 128/120 is
>an ideal setting.  However, if you are doing OLTP work - which is what the rest of
>this engine looks like - then you want to decrease these, dependent on the
>size of your tables.  Again, I don't know your application or how the data
>is being read/used.  You may have a buffer management issue here: 13 million
>buffer waits, 42 million latch waits - depending on how long the engine has
>been up.  You may have a problem where you are flooding your buffers
>with read-ahead for a random-access set of applications.

I will play around with these values - thank you.

>Impressively low lockwaits - especially considering 14 billion lock
>requests.  Why so many?  Is there really a need for all of those locks, or
>has some developer gone amok and decided to always lock everything
>(s)he reads - even when running a report?  Find them....

I do not know where the lock requests come from - is there any way I can trace this?
We do a lot of inserts on this system. To give you a better picture: this is
a transport company scanning thousands of barcodes into and out of vehicles.
Isn't this perhaps where the lock requests come from?
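[One way to get a rough per-session picture of who holds locks - a sketch against the sysmaster pseudo-tables; the column names here are from memory, so verify them on your version:]

```sql
-- Count current locks per session/user. syslocks.owner holds the
-- session id of the lock holder; join it to syssessions for the user.
SELECT s.username, s.sid, COUNT(*) nlocks
FROM sysmaster:syslocks l, sysmaster:syssessions s
WHERE l.owner = s.sid
GROUP BY 1, 2
ORDER BY 3 DESC;
```

This only shows locks held at the moment you run it, so for 14 billion requests over time you would sample it repeatedly during the busy period.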

>Are you really supporting 2600 connections?

My NETTYPE was originally set to 4,300, which I thought meant 1200
connections. Tech Support (and also a recent discussion on this list)
told me that it is in fact only 300 connections, and they requested me
to change it to 4,600, which according to them is now only 600
connections.
Apparently the number of poll threads does not affect the number of
connections like I originally thought.

The change to 4,600 was due to the error
-25580  System error occurred in network function.

>First thing I would do is decrease the read ahead.
>Next thing I would do is plan a weekend of pulling apart the dbspaces and
>rebuilding them.

>Then I would re-visit and see what affect that had.

>Of course that's my .02 worth.  I'm sure others will gently correct me or
>provide other insight.

Thank you for all the input. We are moving from the current L2000 to
an N4000 server next weekend (at long last!) and from there on I think
most of my effort will go into disk reorgs.

Regards
Dirk

>cheers
>j.

----- Original Message -----
From: "Dirk Moolman" <di...@reach.co.za>
To: "DJW" <d...@smooth1.fsnet.co.uk>
Cc: "Informix List" <informix-l...@iiug.org>
Sent: Thursday, September 13, 2001 4:52 AM
Subject: RE: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

> A lot of information, but here goes ...

> but some comments before the onstat info.
> 1. We are not using NOAGE, engine keeps aborting when we do.
> 2. We are not using RESIDENCY, slows system down for some reason
> 3. We are not using KAIO, also slows system down

> All of the above were tried with the help of Tech Support

> ****************************************************************************
> onstat -p
> Profile
> dskreads pagreads bufreads %cached dskwrits pagwrits bufwrits %cached
> 389354892 65336210 14193111210 97.26   23259841 25583827 268719155 91.34

> isamtot  open     start    read     write    rewrite  delete   commit   rollbk
> 11948167084 124600934 324408344 10339002630 83545704 42555536 1626765 3868055 4697

> gp_read  gp_write gp_rewrt gp_del   gp_alloc gp_free  gp_curs
> 0        0        0        0        0        0        0

> ovlock   ovuserthread ovbuff   usercpu  syscpu   numckpts flushes
> 0        0            0        441036.75 150612.93 1262     2536

> bufwaits lokwaits lockreqs deadlks  dltouts  ckpwaits compress seqscans
> 13115029 657      14618933717 3        0        8197     4525346  2791475

> ixda-RA  idx-RA   da-RA    RA-pgsused lchwaits
> 37136504 1861327  234036811 271742203  41904784

> ****************************************************************************
> onstat -D
> Chunks
> address          chk/dbs offset   page Rd  page Wr  pathname
> c00000005e4bf028 1   1   0        479101   1435455  /dev/informix/opsroot01
> c00000006076a1b8 2   2   0        1426356  1131237  /dev/informix/opslogical01
> c00000006076a338 3   3   0        230020   1399101  /dev/informix/opstemp01
> c00000006076a4b8 4   4   0        206022   742549   /dev/informix/opsrtt01
> c00000006076a638 5   4   0        2041652  698637   /dev/informix/opsrtt02
> c00000006076a7b8 6   4   0        280509   1039967  /dev/informix/opsrtt03
> c00000006076a938 7   4   0        2075392  1441873  /dev/informix/opsrtt04
> c00000006076aab8 8   4   0        1531402  997163   /dev/informix/opsrtt05
> c00000006076ac38 9   4   0        1420184  351938   /dev/informix/opsrtt06
> c00000006076adb8 10  4   0        546668   504990   /dev/informix/opsrtt07
> c00000006079b028 11  5   0        673073   444157   /dev/informix/opsmach401
> c00000006079b1a8 12  5   0        11       1        /dev/informix/opsmach402
> c00000006079b328 13  6   0        698023   808743   /dev/informix/opsdebt01
> c00000006079b4a8 14  6   0        455052   174269   /dev/informix/opsdebt02
> c00000006079b628 15  6   0        2020490  841888   /dev/informix/opsdebt03
> c00000006079b7a8 16  7   0        45307    4591     /dev/informix/opssmall01
> c00000006079b928 17  8   0        662248   1630849  /dev/informix/opstemp02
> c00000006079baa8 18  9   0        50940    1089963  /dev/informix/opstemp03
> c00000006079bc28 19  25  0        815527   1596055  /dev/informix/opstempdbs
> c00000006079bda8 20  11  0        1062551  14720    /dev/informix/opsrtt201
> c00000006079c028 21  11  0        880471   12940    /dev/informix/opsrtt202
> c00000006079c1a8 22  11  0        175411   98043    /dev/informix/opsrtt203
> c00000006079c328 23  11  0        52413    69777    /dev/informix/opsrtt204
> c00000006079c4a8 24  12  0        1976169  4562     /dev/informix/opsrtt301
> c00000006079c628 25  12  0        342153   463      /dev/informix/opsrtt302
> c00000006079c7a8 26  12  0        846179   103441   /dev/informix/opsrtt303
> c00000006079c928 27  13  0        1895301  123211   /dev/informix/opsrtt401
> c00000006079caa8 28  13  0        240674   174378   /dev/informix/opsrtt402
> c00000006079cc28 29  14  0        1528429  157953   /dev/informix/opsrtt501
> c00000006079cda8 30  14  0        844269   40890    /dev/informix/opsrtt502
> c00000006079d028 31  15  0        871857   37028    /dev/informix/opsrtt601
> c00000006079d1a8 32  15  0        1964984  465      /dev/informix/opsrtt602
> c00000006079d328 33  15  0        1767586  2975     /dev/informix/opsrtt603

...


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by Jack Parke » Sun, 16 Sep 2001 00:19:40


> >You don't indicate how long this box has been up.  How long did it take it
> >to get to 12 billion isam instructions?  That would be kind of interesting
> >to know. 1200 checkpoint waits - so 1000 hours?  Or are you getting
> >checkpoints more frequently than CKPTINTVL - my guess is yes, so maybe
> >200-300 hours?

> Yes, sorry, the system was up for approximately 4 days by the time I sent
> the mail.

> >You don't include an onstat -d - but from the -D I can see some layout
> >issues.  Your dbspaces are deep - probably designed around tables?  They
> >are not balanced to the number of CPUs.  A table fragmented over eight or
> >twelve dbspaces might perform better than one built on a single seven-chunk
> >dbspace.  I'm not really sure what your schema is or your application - so
> >YMMV.  However you can see that the load on the disks is not even.  Your
> >disks are getting hammered.  Your number of temp spaces should be a multiple
> >of the number of CPUs.  I would pull the whole thing apart and rebuild it
> >just to get the disks to group on the onstat -d output.

> I am starting to see disk bottlenecks, yes. The previous weekend I
> unloaded and fragmented one of our bigger tables that gets accessed
> a lot.
> Fragmented by round robin, and the disks seem much better this week.
> (This specific table gets a lot of reads and not so many writes.)

> Could you perhaps elaborate a bit more on the issue of balancing dbspaces
> to the number of CPUs? Are you just referring to fragmentation of tables
> so that multithreading can take place?

If you have a table fragmented over 4 dbspaces and 4 CPUs, you will get 4
scan threads.  Odds are that they will run together and complete in about
the same amount of time.  If you have 5 dbspaces, then you will get 5 scan
threads - one of those will have to bounce around from CPU to CPU.  8
threads and you would get two per cpu - no bouncing.  Mind you this is best
when your database has to do scans of all fragments.
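The fragment/CPU arithmetic above can be sketched as a toy calculation (the function name is made up; "one scan thread per fragment" is the assumption taken from the paragraph):

```python
# Toy calculation: one scan thread per fragment, spread over the CPU VPs.
# Threads divide evenly only when the fragment count is a multiple of the
# CPU count; any remainder is a thread that bounces between CPUs.
def scan_thread_balance(fragments, cpus):
    threads_per_cpu, bouncing = divmod(fragments, cpus)
    return threads_per_cpu, bouncing

print(scan_thread_balance(4, 4))  # (1, 0) - balanced
print(scan_thread_balance(5, 4))  # (1, 1) - one thread bounces around
print(scan_thread_balance(8, 4))  # (2, 0) - balanced again
```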

Fragmented indices tend to also be shallower - hence not as many reads when
looking for data.  If the index is big........

Round Robin is a lovely thing when it comes time to load, but it means no
fragment elimination when it comes time to read.  Read on.

> >Nice write cache hit rate.  Read cache hit - ok.

> >4.5 million compresses?  Someone is doing a lot of deletes to cause this.
> >Find them, blame them, make them pay.  Look into why they are deleting -
> >perhaps truncates, or table rewrite would be more effective.

> In our environment we do a lot of deletes, yes. Our data gets old quickly
> and then gets moved to "history" tables. The data is still online and
> reports / queries still take place on the older data, especially the
> very recent records.
> By the way, what exactly are compresses - is this index rebuilds?

When you 'delete' a row that space is not automatically re-available.  There
is a 'next space' pointer in the page header which always points to the end
of the data - new data is always placed at the 'end' of the page.  Over time
the engine will recognize that a page is getting 'holey' and will re-write
the page freeing up space at the end - this is a compress.  Note that there
is stuff at the end of the physical page - but let's not go there now -
hence the tic marks around the word 'end'.
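That compress behaviour can be modelled in a few lines (purely illustrative - this is a toy model, not the real Informix page layout):

```python
# Toy model of a data page: a fixed number of slots, where a deleted row
# leaves a hole (None).  A "compress" rewrites the page so the live rows
# are contiguous and all the free space sits together at the 'end'.
def compress_page(slots):
    live = [row for row in slots if row is not None]
    holes = len(slots) - len(live)
    return live + [None] * holes

page = ["r1", None, "r3", None, None, "r6"]
print(compress_page(page))  # ['r1', 'r3', 'r6', None, None, None]
```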

In a delete situation like this consider fragmenting by the date of the data
(provided it is not updated - that would be a disaster).  Then when you need
to delete you can detach the old data fragment and re-attach it as a new
fragment for the new data.  This may imply a lot of fragments - depending on
how low a granularity you need and how long you keep the data.  It may also
mean unbalancing your fragments against the number of CPUs.  But it's way
fast.  I've done 33 fragments on a 16CPU box for this sort of thing (40GB of
data).  The table screamed.

> >2.7 million seq scans?  Do you have a lot of small tables without indices?
> >Or without stats updated?  If sequential scans are desirable then you might
> >want to look into light scans and bumping up your DS parameters to support
> >those.

> Yes, we have some small tables and do a lot of sequential scans.

Check into those and check the size of the tables involved - if they are 8
pages (or whatever your disk read is) then that means one disk access to get
to your data.  If they are 32 pages (e.g.) that means on average two disk
accesses.  An indexed read is always one index read + 1 data read, and for
each level your index descends you may need one more read to traverse it.
Hence sequential scans CAN be more effective in some situations.  Generally
in OLTP they are undesirable.

A quick check might be select tabname from systables where npused > 8 and
nindexes = 0;
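That read-count arithmetic, as a back-of-envelope sketch (it assumes an 8-page read unit and that a sequential scan stops, on average, halfway through the table; the function names are made up):

```python
# Average disk accesses to find one row by scanning: half the table,
# measured in units of pages_per_read, with a floor of one access.
def avg_seq_scan_reads(table_pages, pages_per_read=8):
    return max(1.0, (table_pages / pages_per_read) / 2)

# Indexed read: roughly one read per index level plus one data read.
def indexed_reads(index_levels):
    return index_levels + 1

print(avg_seq_scan_reads(8))   # 1.0 - small table: one disk access
print(avg_seq_scan_reads(32))  # 2.0 - on average two accesses
print(indexed_reads(2))        # 3
```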

> Update stats runs every night and is up to date. I double checked this
> recently by running the query -
> select tabname, max(constructed) from
> systables,sysdistrib
> where systables.tabid = sysdistrib.tabid
> group by 1
> order by 2,1

> I have a problem with the DS (PDQ) parameters. When I give a value  to
> MAX_PDQPRIORITY, all the users  start using PDQ, even without PDQPRIORITY
> being set in their environments. I had to disable MAX_PDQPRIORITY to get
> around this problem.
> I did the same for DS_TOTAL_MEMORY, gave it the lowest value I could.

Yes, it is an issue.  XPS has some stored procedures which will be run
before a user session starts where you can change that, but I see your
problem.  You can also set the environment variable PDQPRIORITY=1 to achieve
the same effect.


> >While we're there, your RA_PAGES and THRESHOLD are way out of whack.  They
> >should be closer to one another.  With a setting of 128, you are doing a lot
> >of reading ahead - which goes into your buffers and implies management of
> >same.  This may be desirable, in which case your threshold should increase
> >some to meet it.  120 for example.  If you want light scans then 128/120 is
> >an ideal setting.  However if you are doing OLTP work - which the rest of
> >this engine looks like, then you want to decrease these dependent on the
> >size of your tables.  Again, I don't know your application or how the data
> >is being read/used.  You may have a buffer management issue here: 13 million
> >buffer waits, 42 million latch waits - depending on how long the engine has
> >been up.  You may have a problem here where you are flooding your buffers
> >with read ahead for a random access set of applications.

> I will play around with these values thank you.

> >Impressively low lockwaits - especially considering 14 billion lock
> >requests.  Why so many?  Is there really a need for all of those locks or
> >has some developer gone amok and decided that he will always lock everything
> >(s)he reads - even when running a report.  Find them....

> I do not know where the lock requests come from - any way I can trace this?
> We do a lot of inserts on this system. To give you a better picture, this is
> a transport company scanning thousands of barcodes into and out of vehicles.
> Isn't this perhaps where the lock requests come from?

You can look at onstat -u and see who has a lot of lock requests and then
see what they're doing.  For something like that I generally do up a script
which scans every 'n' seconds; when it finds something big in the locks
column it calls a function to run onstat -g sql and onstat -g ses for that session.
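A rough sketch of that polling idea in Python (the `onstat -u` column layout assumed below is illustrative, as is the sample data - verify the field positions on your engine before trusting anything like this):

```python
# Flag sessions whose lock count in `onstat -u` output exceeds a threshold.
# Assumed column layout: address flags sessid user tty wait tout locks ...
def heavy_lockers(onstat_u_output, threshold):
    flagged = []
    for line in onstat_u_output.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[7].isdigit() and int(fields[7]) > threshold:
            flagged.append((fields[2], int(fields[7])))  # (sessid, locks)
    return flagged

# Made-up sample output for illustration only.
sample = (
    "c00000005e4bf028 Y--P--- 1042 dirk ttyp1 0 0 152430 401 77\n"
    "c00000005e4bf1a8 ---P--- 1043 ops  ttyp2 0 0 3 12 4"
)
print(heavy_lockers(sample, 1000))  # [('1042', 152430)]
```

In a real loop you would capture `onstat -u` every 'n' seconds (e.g. via subprocess) and feed each flagged session id to `onstat -g sql` / `onstat -g ses`.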

> >Are you really supporting 2600 connections?

> My NETTYPE was originally set to 4,300, which I thought meant 1200
> connections. Tech Support (and also a recent discussion on this list)
> told me that it is in fact only 300 connections, and they requested me
> to change it to 4,600, which according to them is now only 600
> connections.
> Apparently the number of poll threads does not affect the number of
> connections like I originally thought.

Surprises me too.  Of course it's one of those 'set and forget' things.
I'll have to go read Manuella for that one.


> The change to 4,600 was due to the error
> -25580  System error occurred in network function.

> >First thing I would do is decrease the read ahead.
> >Next thing I would do is plan a weekend of pulling apart the dbspaces and
> >rebuilding them.

> >Then I would re-visit and see what affect that had.

> >Of course that's my .02 worth.  I'm sure others will gently correct me or
> >provide other insight.

> Thank you for all the input. We are moving from the current L2000 to
> an N4000 server next weekend (at long last!) and from there on I think
> most of my effort will go into disk reorgs.

"Know thy data and know thy queries".  Once you can figure out what the
users are doing against what data you can plan more effectively on how to
deal with it.

Let me know if the RA helps things.

cheers
j.


> Regards
> Dirk


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by moss.. » Mon, 17 Sep 2001 16:25:17



> I have a problem with the DS (PDQ) parameters. When I give a value  to
> MAX_PDQPRIORITY, all the users  start using PDQ, even without PDQPRIORITY
> being set in their environments. I had to disable MAX_PDQPRIORITY to get
> around this problem.
> I did the same for DS_TOTAL_MEMORY, gave it the lowest value I could.

Are you saying that their SQL statements start running with PDQ, even when
they have *not* set PDQPRIORITY in their environment?  If so, then:
(1) do most of these SQL stmts involve execution of SP's?
(2) do you set PDQPRIORITY when you do the "update statistics for
procedure..."?

If the answer to each of these questions is "yes", then try running the
update stats for procedures *without* setting PDQPRIORITY.  If you set
PDQPRIORITY for the run of the update stats, then that PDQPRIORITY setting
is LOCKED into that procedure, i.e., that SP will run with that PDQ setting
every time it is executed, even if the original caller's environment does
not have it set.  Clear as mud?

HTH,
Paul Mosser


FW: Migrating from Dynamic Server 7.2 to IDS 9 (2000)

Post by DJW » Fri, 21 Sep 2001 07:19:58


Sorry if bits have already been covered but I'm catching up!

Dirk Moolman wrote in message <9nptdj$56...@news.xmission.com>...

>A lot of information, but here goes ...

>but some comments before the onstat info.
>1. We are not using NOAGE, engine keeps aborting when we do.

 ?? What error? Get Informix to fix!
>2. We are not using RESIDENCY, slows system down for some reason

 ?? Are you sure you have enough physical memory on the machine and
  it is not paging/swapping? If it is, add memory or reduce the BUFFERS
  setting.

>3. We are not using KAIO, also slows system down

  ?? Talk to Informix as this should not happen.

>All of the above were tried with the help of Tech Support

>****************************************************************************
>onstat -p
>Profile
>dskreads pagreads bufreads %cached dskwrits pagwrits bufwrits %cached
>389354892 65336210 14193111210 97.26   23259841 25583827 268719155 91.34

>isamtot  open     start    read     write    rewrite  delete   commit   rollbk
>11948167084 124600934 324408344 10339002630 83545704 42555536 1626765 3868055 4697

>gp_read  gp_write gp_rewrt gp_del   gp_alloc gp_free  gp_curs
>0        0        0        0        0        0        0

>ovlock   ovuserthread ovbuff   usercpu  syscpu   numckpts flushes
>0        0            0        441036.75 150612.93 1262     2536

>bufwaits lokwaits lockreqs deadlks  dltouts  ckpwaits compress seqscans
>13115029 657      14618933717 3        0        8197     4525346  2791475

 Seq scans seem too high. What does onstat -u give?

>ixda-RA  idx-RA   da-RA    RA-pgsused lchwaits
>37136504 1861327  234036811 271742203  41904784

>****************************************************************************
>onstat -D
>Chunks
>address          chk/dbs offset   page Rd  page Wr  pathname
>c00000005e4bf028 1   1   0        479101   1435455  /dev/informix/opsroot01

 Do not create chunks directly under /dev. Put them under another area and use links!
 Where is the list of dbspaces from onstat -D??

>c00000006076a1b8 2   2   0        1426356  1131237  /dev/informix/opslogical01
>c00000006076a338 3   3   0        230020   1399101  /dev/informix/opstemp01
>c00000006076a4b8 4   4   0        206022   742549   /dev/informix/opsrtt01
>c00000006076a638 5   4   0        2041652  698637   /dev/informix/opsrtt02
>c00000006076a7b8 6   4   0        280509   1039967  /dev/informix/opsrtt03
>c00000006076a938 7   4   0        2075392  1441873  /dev/informix/opsrtt04
>c00000006076aab8 8   4   0        1531402  997163   /dev/informix/opsrtt05
>c00000006076ac38 9   4   0        1420184  351938   /dev/informix/opsrtt06
>c00000006076adb8 10  4   0        546668   504990   /dev/informix/opsrtt07
>c00000006079b028 11  5   0        673073   444157   /dev/informix/opsmach401
>c00000006079b1a8 12  5   0        11       1        /dev/informix/opsmach402
>c00000006079b328 13  6   0        698023   808743   /dev/informix/opsdebt01
>c00000006079b4a8 14  6   0        455052   174269   /dev/informix/opsdebt02
>c00000006079b628 15  6   0        2020490  841888   /dev/informix/opsdebt03
>c00000006079b7a8 16  7   0        45307    4591     /dev/informix/opssmall01
>c00000006079b928 17  8   0        662248   1630849  /dev/informix/opstemp02
>c00000006079baa8 18  9   0        50940    1089963  /dev/informix/opstemp03
>c00000006079bc28 19  25  0        815527   1596055  /dev/informix/opstempdbs


>c00000006079bda8 20  11  0        1062551  14720    /dev/informix/opsrtt201
>c00000006079c028 21  11  0        880471   12940    /dev/informix/opsrtt202
>c00000006079c1a8 22  11  0        175411   98043    /dev/informix/opsrtt203
>c00000006079c328 23  11  0        52413    69777    /dev/informix/opsrtt204
>c00000006079c4a8 24  12  0        1976169  4562     /dev/informix/opsrtt301
>c00000006079c628 25  12  0        342153   463      /dev/informix/opsrtt302
>c00000006079c7a8 26  12  0        846179   103441   /dev/informix/opsrtt303
>c00000006079c928 27  13  0        1895301  123211   /dev/informix/opsrtt401
>c00000006079caa8 28  13  0        240674   174378   /dev/informix/opsrtt402
>c00000006079cc28 29  14  0        1528429  157953   /dev/informix/opsrtt501
>c00000006079cda8 30  14  0        844269   40890    /dev/informix/opsrtt502
>c00000006079d028 31  15  0        871857   37028    /dev/informix/opsrtt601
>c00000006079d1a8 32  15  0        1964984  465      /dev/informix/opsrtt602
>c00000006079d328 33  15  0        1767586  2975     /dev/informix/opsrtt603
>c00000006079d4a8 34  15  0        18876    188253   /dev/informix/opsrtt604
>c00000006079d628 35  16  0        1711690  8105     /dev/informix/opsrtt701
>c00000006079d7a8 36  16  0        1986471  115252   /dev/informix/opsrtt702
>c00000006079d928 37  12  0        161959   5229     /dev/informix/opsrtt304
>c00000006079daa8 38  17  0        2003187  4709     /dev/informix/rtthistdbs101
>c00000006079dc28 39  17  0        1351895  118      /dev/informix/rtthistdbs102
>c00000006079dda8 40  18  0        946813   1024241  /dev/informix/opsrtt801
>c00000006079e028 41  18  0        715542   1002760  /dev/informix/opsrtt802
>c00000006079e1a8 42  18  0        873452   690751   /dev/informix/opsrtt803
>c00000006079e328 43  18  0        598906   136939   /dev/informix/opsrtt804
>c00000006079e4a8 44  19  0        2036852  3622     /dev/informix/rtthistdbs201
>c00000006079e628 45  19  0        1553076  0        /dev/informix/rtthistdbs202
>c00000006079e7a8 46  17  0        10       0        /dev/informix/rtthistdbs103
>c00000006079e928 47  20  0        925508   1023570  /dev/informix/debthistdbs101
>c00000006079eaa8 48  20  0        772467   154161   /dev/informix/debthistdbs102
>c00000006079ec28 49  5   0        11       1        /dev/informix/opsmach403


>c00000006079eda8 50  21  0        1466458  4306     /dev/informix/rtthistdbs301
>c00000006079f028 51  21  0        486855   12       /dev/informix/rtthistdbs302
>c00000006079f1a8 52  13  0        1392900  456269   /dev/informix/opsrtt406
>c00000006079f328 53  13  0        1638975  212998   /dev/informix/opsrtt405
>c00000006079f4a8 54  19  0        10       0        /dev/informix/rtthistdbs203
>c00000006079f628 55  13  0        1397475  11506    /dev/informix/opsrtt403
>c00000006079f7a8 56  16  0        755172   181653   /dev/informix/opsrtt703
>c00000006079f928 57  6   0        752858   68423    /dev/informix/opsdebt04
>c00000006079faa8 58  14  0        123059   325937   /dev/informix/opsrtt503
>c00000006079fc28 59  4   0        11       1        /dev/informix/opsrtt10
>c00000006079fda8 60  15  0        521031   169265   /dev/informix/opsrtt609
>c0000000607a0028 61  22  0        1510612  23705    /dev/informix/delwaybill101
>c0000000607a01a8 62  15  0        10       0        /dev/informix/opsrtt611
>c0000000607a0328 63  15  0        10       0        /dev/informix/opsrtt612
>c0000000607a04a8 64  23  0        1323224  24256    /dev/informix/delwaybill201
>c0000000607a0628 65  13  0        1576481  163881   /dev/informix/opsrtt404
>c0000000607a07a8 66  4   0        1632792  633374   /dev/informix/opsrtt08
>c0000000607a0928 67  6   0        165678   27018    /dev/informix/opsdebt05
>c0000000607a0aa8 68  4   0        1049850  424840   /dev/informix/opsrtt09
>c0000000607a0c28 69  10  0        2072164  977635   /dev/informix/opstemp04
>c0000000607a0da8 70  24  0        43655    1490     /dev/informix/debtarch01
>c0000000607a5028 71  26  0        387361   24308    /dev/informix/delwaybill301
>c0000000607a51a8 72  27  0        579221   143088   /dev/informix/delwaybill401
>c0000000607a5328 73  27  0        712849   188079   /dev/informix/delwaybill402
> 73 active, 2047 maximum

>****************************************************************************
>onstat -c

>(raw disk, no informix mirroring, using raid01 - mirroring & striping)

># Root Dbspace Configuration
>ROOTSIZE        2048000         # Size of root dbspace (Kbytes)

   Too large - 100Mb should be sufficient as there should be nothing but the
   sysmaster/sysutils (and possibly syscdr) databases in there!

># Physical Log Configuration

>PHYSDBS         rootdbs         # Location (dbspace) of physical log
>PHYSFILE        80000           # Physical log file size (Kbytes)

  Move it to a separate dbspace on its own disk.


># Logical Log Configuration

>LOGFILES        300             # Number of logical log files
>LOGSIZE         5000            # Logical log size (Kbytes)

># Diagnostics

>TBLSPACE_STATS  1               # Maintain tblspace statistics

># System Configuration

>SERVERNUM       1               # Unique id corresponding to a OnLine instance
>DBSERVERNAME    ops             # Name of default database server
>DBSERVERALIASES ops_soc,ops_shm # List of alternate dbservernames

>#changed 03/08/2001 - Dirk - on request of Tech Support - case# 265 221
>#NETTYPE         ipcstr,4,300,CPU
>#NETTYPE         ipcshm,1,100,NET

>NETTYPE         ipcstr,4,600,NET
>NETTYPE         soctcp,1,100,NET
>NETTYPE         ipcshm,1,100,CPU

  ?? Why ipcstr? Local clients should be able to use ipcshm??


>DEADLOCK_TIMEOUT 60              # Max time to wait of lock in distributed env.
>RESIDENT        0               # Forced residency flag (Yes = 1, No = 0)

>MULTIPROCESSOR  1               # 0 for single-processor, 1 for multi-processor
>#NUMCPUVPS       4               # Number of user (cpu) vps
>SINGLE_CPU_VP   0               # If non-zero, limit number of cpu vps to one
>VPCLASS         cpu,num=4,aff=0-3

>#Changed on 15/09/2000 as per Darren - Tech Support
>#NOAGE           1               # Process aging
>#disabled again 04/06/2001 - Dirk - engine kept on aborting
>#NOAGE           0               # Process aging
>#commented out - using VPCLASS instead of NUMCPUVPS
>#NOAGE           0               # Process aging

>#commented out - using VPCLASS instead of NUMCPUVPS
>#AFF_SPROC       1               # Affinity start processor - changed dorian
>#AFF_SPROC       0               # Affinity start processor
>#AFF_NPROCS      4

...
