Solaris Volume Manager Master Replica Database

Solaris Volume Manager Master Replica Database

Post by Brian E. Seppanen » Wed, 17 Mar 2004 02:14:23



Hi Folks:

I have a Solaris 9 E450 with 7 drives; six of those drives are set up
for a RAID 1 mirror.  One of those drives may be experiencing problems,
and currently that drive shows that it holds the master replica of the
state database:

d20: Mirror
     Submirror 0: d21
       State: Okay
     Submirror 1: d22
       State: Okay
     Pass: 1
     Read option: roundrobin (default)
     Write option: parallel (default)
     Size: 213317982 blocks (101 GB)

d21: Submirror of d20
     State: Okay
     Size: 213317982 blocks (101 GB)
     Stripe 0: (interlace: 32 blocks)
         Device     Start Block  Dbase        State Reloc Hot Spare
         c2t0d0s0       8667     Yes           Okay   Yes
         c2t1d0s0       8667     Yes           Okay   Yes
         c2t2d0s0       8667     Yes           Okay   Yes

d22: Submirror of d20
     State: Okay
     Size: 213317982 blocks (101 GB)
     Stripe 0: (interlace: 32 blocks)
         Device     Start Block  Dbase        State Reloc Hot Spare
         c0t1d0s0       8667     Yes           Okay   Yes
         c0t2d0s0       8667     Yes           Okay   Yes
         c0t3d0s0       8667     Yes           Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID






         flags           first blk       block count
      a m  p  luo        16              8192            /dev/dsk/c0t1d0s0
      a    p  luo        16              8192            /dev/dsk/c0t2d0s0
      a    p  luo        16              8192            /dev/dsk/c0t3d0s0
      a    p  luo        16              8192            /dev/dsk/c2t0d0s0
      a    p  luo        16              8192            /dev/dsk/c2t1d0s0
      a    p  luo        16              8192            /dev/dsk/c2t2d0s0
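For anyone decoding that flags column: it can be parsed mechanically. A minimal sketch in Python, assuming the flag letters from metadb(1M) (a = active, m = master replica, p = patched into the kernel, l/u/o = locator / up to date / okay); the `find_master` helper is illustrative, not part of SVM:

```python
# Sketch: find the master replica in `metadb -i`-style output.
# Flag letters assumed from metadb(1M): a = active, m = master replica,
# p = patched into the kernel, l/u/o = locator / up to date / okay.
METADB_OUTPUT = """\
a m  p  luo        16              8192            /dev/dsk/c0t1d0s0
a    p  luo        16              8192            /dev/dsk/c0t2d0s0
a    p  luo        16              8192            /dev/dsk/c0t3d0s0
a    p  luo        16              8192            /dev/dsk/c2t0d0s0
a    p  luo        16              8192            /dev/dsk/c2t1d0s0
a    p  luo        16              8192            /dev/dsk/c2t2d0s0
"""

def find_master(output):
    """Return the device whose flag letters include 'm', or None."""
    for line in output.splitlines():
        fields = line.split()
        # the last three fields are: first blk, block count, device path
        flags, device = fields[:-3], fields[-1]
        if "m" in flags:
            return device
    return None

print(find_master(METADB_OUTPUT))  # /dev/dsk/c0t1d0s0
```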

If there are any further problems with that drive, will the master
database fail over to a different drive, or should I be concerned that
one drive could ultimately cause problems for the remaining five?

Any insight appreciated.

Thanks,
Brian Seppanen

 
 
 

Solaris Volume Manager Master Replica Database

Post by Peter Sattle » Wed, 17 Mar 2004 07:31:23


Hello Brian,

I don't think you're in trouble. As far as I can remember, the "master"
flag is handed to the next replica if the disk fails.

Cheers Peter

On Mon, 15 Mar 2004 12:14:23 -0500, Brian E. Seppanen wrote:

> Hi Folks:

> I have a Solaris 9 E450 with 7 drives; six of those drives are set up
> for a RAID 1 mirror. [metastat and metadb output snipped]

> If there are any further problems with that drive, will the master
> database fail over to a different drive, or should I be concerned that
> one drive could ultimately cause problems for the remaining five?

> Thanks,
> Brian Seppanen

--
Created with M2, Opera's revolutionary e-mail module:
http://www.opera.com/m2/

 
 
 

Solaris Volume Manager Master Replica Database

Post by Mr. Johan Andersso » Wed, 17 Mar 2004 18:05:16



> Hi Folks:

> I have a Solaris 9 E450 with 7 drives; six of those drives are set up
> for a RAID 1 mirror. [metastat and metadb output snipped]

> If there are any further problems with that drive, will the master
> database fail over to a different drive, or should I be concerned that
> one drive could ultimately cause problems for the remaining five?

No, this is an exemplary config; you're safe from single disk failures. In
fact, you could have three disks fail, if they were the right ones :-)

The only thing I personally see as a "fault" is that you keep the state
databases on the same partition as the data. I would recommend a separate
slice for that.
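On the replica count itself: the quorum rule in the Solaris 9 SVM guide, as I understand it, is that the system keeps running while at least half of the replicas are good, and needs a strict majority (half plus one) to boot into multiuser. A quick arithmetic sketch; the `replica_quorum` helper below is illustrative, not an SVM interface:

```python
def replica_quorum(total):
    """Given a replica count, return (max losses while staying up,
    max losses while still able to boot), under the assumed SVM rule:
    at least half the replicas to run, half plus one to boot."""
    min_to_run = (total + 1) // 2   # at least half, rounded up
    min_to_boot = total // 2 + 1    # a strict majority
    return total - min_to_run, total - min_to_boot

# Six replicas, as in the metadb output in this thread:
print(replica_quorum(6))  # (3, 2): lose 3 and stay up, lose 2 and still boot
```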

> Any insight appreciated.

> Thanks,
> Brian Seppanen

/Johan A
 
 
 

1. Solaris Volume Manager - State Database Replicas

Hi,

Please let me first apologise for my lack of knowledge about Solaris and
SoftRAID, but I'm eager to learn.  I've been given a few Sun machines and host
one for my personal website.  I've printed off the Solaris Volume Manager
PDF and read through it a few times, but I confess that I don't fully
understand it, and my attempts to configure SVM via the SMC result in
errors.

First off, I've got a Netra T1 105 server that has two 9.1GB SCSI drives,
with only one of the drives being used to store the OS and data.  I'm
looking to improve its resilience, as it's located about 50 miles away ;)

After reading the SVM manual I'm still not sure whether SVM can be used to
mirror a bootable drive, so that if the primary drive fails, the machine will
keep running, even after a reboot (with the faulty master drive still in the
machine).  If not, does anyone know what additional steps are required, and
do these require console access (which I would have to pay more money for)?

The drives can be pulled out of the Netra machine, each in a caddy, so I
take it that they are hot-swappable?  Hot-swappable in the machine hardware,
or also in Solaris 9 12/03?  If a drive fails and I swap it out, will it be
rebuilt automatically?  I guess it would be for the backup drive, but what if
the master drive failed and was replaced; would it be rebuilt from the
backup drive?

Does anyone actually use SoftRAID in production, or is it too slow /
unreliable in practice?

Out of interest, the replicas are 8192 blocks, but in the SMC you can only
create partitions in MBs and %.  Any idea how to calculate the MB size from
the block size given?
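If it helps: Solaris disk blocks are 512 bytes (DEV_BSIZE), so the conversion is a single multiplication. A quick sketch, assuming 512-byte blocks; an 8192-block replica works out to 4 MB:

```python
BLOCK_SIZE = 512  # bytes per disk block on Solaris (DEV_BSIZE)

def blocks_to_mb(blocks):
    """Convert a disk-block count to megabytes (1 MB = 1024 * 1024 bytes)."""
    return blocks * BLOCK_SIZE / (1024 * 1024)

print(blocks_to_mb(8192))       # 4.0 MB per 8192-block replica
print(blocks_to_mb(213317982))  # ~104159 MB, i.e. the ~101 GB mirror above
```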

Many thanks in advance.

Neil
