more raid fun: stale database

more raid fun: stale database

Post by Mauricio » Mon, 12 Apr 2004 23:48:39



   I am trying to create a new raid1 to replace the one that failed.  I
want to do it from scratch, but the old one seems not to want to die:

# metadb
# metaclear -a
metaclear: biostat: d6: stale databases

#

Why am I getting this stale database error if there are no databases
left (AFAIK)?

--
Mauricio                                raub-kudria-com      
(if you need to email me, use this address =)

 
 
 

more raid fun: stale database

Post by Matt » Tue, 13 Apr 2004 01:04:36



>    I am trying to create a new raid1 to replace the one that failed.  I
> want to do it from scratch, but the old one seems not to want to die:

> # metadb
> # metaclear -a
> metaclear: biostat: d6: stale databases

> #

> Why am I getting this stale database error if there are no databases
> left (AFAIK)?

I would presume it's because the configuration is stored in the
meta state database. If you try to remove items without the
configuration, how would metaclear know what to kill? Someone
please correct me if I am wrong.

 
 
 

more raid fun: stale database

Post by Mauricio » Tue, 13 Apr 2004 05:16:13





> >    I am trying to create a new raid1 to replace the one that failed.  I
> > want to do it from scratch, but the old one seems not to want to die:

> > # metadb
> > # metaclear -a
> > metaclear: biostat: d6: stale databases

> > #

> > Why am I getting this stale database error if there are no databases
> > left (AFAIK)?

> I would presume it's because the configuration is stored in the
> meta state database. If you try to remove items without the
> configuration, how would metaclear know what to kill? Someone
> please correct me if I am wrong.

   If that is the case, what could I do?  I could try creating new
database replicas, but that does not seem to take me very far:

# metadb -a -f -c2 c2t0d0s0 c2t0d0s1
# metadb
        flags           first blk       block count
     a        u         16              1034            /dev/dsk/c2t0d0s0
     a        u         1050            1034            /dev/dsk/c2t0d0s0
     a        u         16              1034            /dev/dsk/c2t0d0s1
     a        u         1050            1034            /dev/dsk/c2t0d0s1
# metaclear -a
metaclear: biostat: d6: stale databases

#

--
Mauricio                                raub-kudria-com      
(if you need to email me, use this address =)

 
 
 

more raid fun: stale database

Post by Matt » Tue, 13 Apr 2004 08:05:36






>> >    I am trying to create a new raid1 to replace the one that failed.  I
>> > want to do it from scratch, but the old one seems not to want to die:

>> > # metadb
>> > # metaclear -a
>> > metaclear: biostat: d6: stale databases

>> > #

>> > Why am I getting this stale database error if there are no databases
>> > left (AFAIK)?

>> I would presume it's because the configuration is stored in the
>> meta state database. If you try to remove items without the
>> configuration, how would metaclear know what to kill? Someone
>> please correct me if I am wrong.

>    If that is the case, what could I do?  I could try creating new
> database replicas, but that does not seem to take me very far:

> # metadb -a -f -c2 c2t0d0s0 c2t0d0s1
> # metadb
>         flags           first blk       block count
>      a        u         16              1034            /dev/dsk/c2t0d0s0
>      a        u         1050            1034            /dev/dsk/c2t0d0s0
>      a        u         16              1034            /dev/dsk/c2t0d0s1
>      a        u         1050            1034            /dev/dsk/c2t0d0s1
> # metaclear -a
> metaclear: biostat: d6: stale databases

> #

If I were required to recover from this situation, I
would do the following:

1) Save the configuration

2) Mount the devices as c0t0d0sX and reboot

3) Create a new set of state databases

4) Mirror the devices from scratch

There may be an easier method depending on how things
are currently arranged, but this will let you
start fresh. You might want to save the output of
"metastat -p" in case this happens again.
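Sketched as commands, the four steps above might look like this. This is a sketch only: the replica slices and the mirror name d6 come from this thread, the submirror slices c2t1d0s0/c2t2d0s0 are placeholders, and the metainit/metattach lines are the standard SDS two-submirror recipe rather than anything verified on this machine.

```shell
# 1) Save the configuration for later reference
metastat -p > /var/tmp/md.config.saved

# 2) Edit /etc/vfstab to mount the raw slices (c0t0d0sX) instead of the
#    metadevices, then reboot so nothing is running through SDS

# 3) Remove the old replicas and create a fresh set
metadb -d -f c2t0d0s0 c2t0d0s1        # -f forces removal of the last replicas
metadb -a -f -c2 c2t0d0s0 c2t0d0s1

# 4) Rebuild the mirror from scratch (placeholder slices shown)
metainit -f d61 1 1 c2t1d0s0          # first submirror
metainit d62 1 1 c2t2d0s0             # second submirror
metainit d6 -m d61                    # one-way mirror on d61
metattach d6 d62                      # attach d62; it resyncs in the background
```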

 
 
 

more raid fun: stale database

Post by Mauricio » Tue, 13 Apr 2004 10:57:41








> >> >    I am trying to create a new raid1 to replace the one that failed.  I
> >> > want to do it from scratch, but the old one seems not to want to die:

> >> > # metadb
> >> > # metaclear -a
> >> > metaclear: biostat: d6: stale databases

> >> > #

> >> > Why am I getting this stale database error if there are no databases
> >> > left (AFAIK)?

> >> I would presume it's because the configuration is stored in the
> >> meta state database. If you try to remove items without the
> >> configuration, how would metaclear know what to kill? Someone
> >> please correct me if I am wrong.

> >    If that is the case, what could I do?  I could try creating new
> > database replicas, but that does not seem to take me very far:

> > # metadb -a -f -c2 c2t0d0s0 c2t0d0s1
> > # metadb
> >         flags           first blk       block count
> >      a        u         16              1034            /dev/dsk/c2t0d0s0
> >      a        u         1050            1034            /dev/dsk/c2t0d0s0
> >      a        u         16              1034            /dev/dsk/c2t0d0s1
> >      a        u         1050            1034            /dev/dsk/c2t0d0s1
> > # metaclear -a
> > metaclear: biostat: d6: stale databases

> > #

> If I were required to recover from this situation, I
> would do the following:

> 1) Save the configuration

   I just tarred/gzipped the entire /etc directory ;)

> 2) Mount the devices as c0t0d0sX and reboot

   c0t0d0sX?  I am confused here; isn't that the first disk (the boot
disk, in my case)?

> 3) Create a new set of state databases

   I thought that was what I tried to do above.

> 4) Mirror the devices from scratch

   Well, as soon as I started building the mirrors, it began nagging me
about d6.  So I have been trying, ever since, to make it forget about
the old databases so I can start fresh. =(

> There may be an easier method depending on how things
> are currently oriented, but this will allow you to
> start fresh. You might want to save the output of
> "metastat -p" in case this happens again.

--
Mauricio                                raub-kudria-com      
(if you need to email me, use this address =)
 
 
 

more raid fun: stale database

Post by Darren Dunham » Wed, 14 Apr 2004 00:40:22




>> I would presume it's because the configuration is stored in the
>> meta state database. If you try to remove items without the
>> configuration, how would metaclear know what to kill? Someone
>> please correct me if I am wrong.
>    If that is the case, what could I do?  I could try creating new
> database replicas, but that does not seem to take me very far:
> # metadb -a -f -c2 c2t0d0s0 c2t0d0s1
> # metadb
>         flags           first blk       block count
>      a        u         16              1034            /dev/dsk/c2t0d0s0
>      a        u         1050            1034            /dev/dsk/c2t0d0s0
>      a        u         16              1034            /dev/dsk/c2t0d0s1
>      a        u         1050            1034            /dev/dsk/c2t0d0s1
> # metaclear -a
> metaclear: biostat: d6: stale databases

Seems pretty strange.  Can you reboot the machine now that you've
created the new databases?  It looks like something's wedged in the
running DB.  Sounds like a bug.

--

Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

 
 
 

more raid fun: stale database

Post by Mauricio » Wed, 14 Apr 2004 00:58:58





>>> I would presume it's because the configuration is stored in the
>>> meta state database. If you try to remove items without the
>>> configuration, how would metaclear know what to kill? Someone
>>> please correct me if I am wrong.

>>    If that is the case, what could I do?  I could try creating new
>> database replicas, but that does not seem to take me very far:

>> # metadb -a -f -c2 c2t0d0s0 c2t0d0s1
>> # metadb
>>         flags           first blk       block count
>>      a        u         16              1034            /dev/dsk/c2t0d0s0
>>      a        u         1050            1034            /dev/dsk/c2t0d0s0
>>      a        u         16              1034            /dev/dsk/c2t0d0s1
>>      a        u         1050            1034            /dev/dsk/c2t0d0s1

>> # metaclear -a
>> metaclear: biostat: d6: stale databases

> Seems pretty strange.  Can you reboot the machine now that you've
> created the new databases?  It looks like something's wedged in the
> running DB.  Sounds like a bug.

        I have been thinking that might be the case myself.  And here is my
concern:  let's say the metadevices are corrupted beyond salvation,
causing the stale databases.  Is there a file where I can kill any
reference to the metadevices, so that when I reboot the machine has no
idea they were ever around?

Since I am running Solaris 8 on this E450, I have tried running metatool and
then deleting the two metadevices (I was going to replace both with a single
larger setup).  All seems to be well when I delete them, but if I quit
metatool and then restart it, the two metadevices are there waiting for
me.  Why would that happen?  Could it be that even metatool is acting up
(which would not surprise me given the history of how this machine was
managed) and is not updating whatever it has to (/etc/system comes to
mind)?

 
 
 

more raid fun: stale database

Post by Darren Dunham » Wed, 14 Apr 2004 02:37:24



>> created the new databases?  It looks like something's wedged in the
>> running DB.  Sounds like a bug.

>            I have been thinking that might be the case myself.  And here is my
> concern:  let's say the metadevices are corrupted beyond salvation,
> causing the stale databases.  Is there a file where I can kill any
> reference to the metadevices, so that when I reboot the machine has no
> idea they were ever around?

Yes.  You've done that by using 'metadb' to remove all replicas.  After
doing that, confirm that there are no 'metadb' lines in /etc/system....

After that, you have no replicas and the system doesn't know of any.
You therefore have no SDS configuration at all.  You'll need to create
new replicas and initialize any metadevices.

> Since I am running Solaris 8 on this E450, I have tried running metatool and
> then deleting the two metadevices (I was going to replace both with a single
> larger setup).  All seems to be well when I delete them, but if I quit
> metatool and then restart it, the two metadevices are there waiting for
> me.  Why would that happen?  Could it be that even metatool is acting up
> (which would not surprise me given the history of how this machine was
> managed) and is not updating whatever it has to (/etc/system comes to
> mind)?

Perhaps.  Especially if 'metadb' doesn't show any replicas.

Check /etc/system to see if it has references to metadb locations.
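For anyone checking: the replica locations SDS records in /etc/system sit between MDD comment markers, so a grep for md:mddb shows whether the kernel still has old replica locations recorded. The sample below is fabricated just to show the shape (the exact bootlist string varies by machine and release); on the real box you would grep /etc/system itself, and look at /etc/lvm/mddb.cf as well.

```shell
# Fabricated sample of the block SDS writes into /etc/system
cat > /tmp/system.sample <<'EOF'
* Begin MDD database info (do not edit)
set md:mddb_bootlist1="sd:7:16 sd:7:1050"
* End MDD database info (do not edit)
EOF

# Any md:mddb lines mean the kernel still has replica locations recorded
grep 'md:mddb' /tmp/system.sample
```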

--

Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

 
 
 

more raid fun: stale database

Post by David Magda » Wed, 14 Apr 2004 05:56:56







> >> >    I am trying to create a new raid1 to replace the one that failed.  I
> >> > want to do it from scratch, but the old one seems not to want to die:

> >> > # metadb
> >> > # metaclear -a
> >> > metaclear: biostat: d6: stale databases

To the OP,

Have you tried the "-d" option for metadb(1M)?

     -d    Deletes all replicas that are located on the specified
           slice.   The /etc/system file is automatically updated
           with the new information and the /etc/lvm/mddb.cf file
           is updated.

A friend of mine sent the following notes he had:

    To clear a drive from the meta database:

        metaclear -r d17
        metadb -d mddb17

    The following will clear one drive from the state replicas

        metadb -d c3t9d0s0
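Applied to this thread's configuration, those notes would come out roughly as follows. This is a sketch, not verified on the machine in question; d6 and the c2t0d0 slices are the names from the posts above.

```shell
# Recursively clear the metadevice and anything configured under it
metaclear -r d6

# Delete the replicas slice by slice; per the man page excerpt above,
# /etc/system and /etc/lvm/mddb.cf are updated automatically.
# -f is needed when removing the last remaining replicas.
metadb -d -f c2t0d0s0
metadb -d -f c2t0d0s1

# Verify nothing is left
metadb
```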

--
David Magda <dmagda at ee.ryerson.ca>, http://www.magda.ca/
Because the innovator has for enemies all those who have done well under
the old conditions, and lukewarm defenders in those who may do well
under the new. -- Niccolo Machiavelli, _The Prince_, Chapter VI

 
 
 
