raid device not mounting at boot

Post by Mr. Happy Flap » Fri, 20 Jun 2003 15:09:59



Hi Unix Folk..!

I have an interesting situation.  I administer an ES40 running Tru64
5.1A with an Indy2400 RAID array attached via SCSI.  The array is
configured with two slices, each a 1-terabyte AdvFS filesystem.
The system and the RAID array were stable: the slices mounted at boot
properly, and everything seemed groovy.

I temporarily moved the RAID array to a different machine (a DS20e)
for a few weeks while I rebuilt the ES40.  Again, no problem; the
DS20e mounted the array just fine.  But now that I've moved the array
back to the rebuilt ES40, it fails to mount during boot.  I get an
error on the console telling me that the device "is an invalid device
or cannot be opened."  I can mount both devices manually, however:
everything looks right in /etc/fstab, and "# mount -a" works.  It
just pukes on boot.
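For context, AdvFS filesystems are mounted by domain and fileset name rather than by raw device, so an /etc/fstab entry looks roughly like this (the domain, fileset, and mount point here are hypothetical, not taken from the poster's setup):

```
raid_dom#data    /raid/data    advfs    rw    0    2
```

Because fstab names only the domain, a boot-time "invalid device" error with a correct-looking fstab often points at the symlinks under /etc/fdmns/<domain> referring to stale device names.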

I have changed the device names with "# dsfmgr -m dsk46 dsk47" and
fixed the /etc/fdmns links with "# advscan -r dsk47", but that didn't
solve the problem.
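If the domain links are still suspect, one quick check is to walk /etc/fdmns and report any symlink whose target device node no longer exists. A minimal sketch in plain POSIX shell (this is not a Tru64-supplied tool, and the optional directory argument exists only so it can be exercised outside /etc/fdmns):

```shell
#!/bin/sh
# Sketch only: scan an AdvFS-style domain directory (default /etc/fdmns)
# and report symlinks whose target device node no longer exists.
check_fdmns() {
    fdmns=${1:-/etc/fdmns}
    for link in "$fdmns"/*/*; do
        # -h: the path itself is a symlink; -e: its target resolves.
        # A dangling link is -h true and -e false.
        if [ -h "$link" ] && [ ! -e "$link" ]; then
            printf 'broken: %s -> %s\n' "$link" "$(readlink "$link")"
        fi
    done
}
```

Running `check_fdmns` with no argument scans /etc/fdmns itself; any line it prints names a domain link that advscan (or recreating the link by hand) would need to repair.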

Any ideas..?

Many thanks in advance..

-flappy


raid device not mounting at boot

Post by Lukasz Tylutki » Tue, 24 Jun 2003 18:01:16


Mr. Happy Flappy wrote:

> Hi Unix Folk..!

> I have an interesting situation.  I administer an ES40 running Tru64
> 5.1a with an Indy2400 raid array attached via SCSI.  The array is
> configured with 2 slices, each one is a 1 Terabyte AdvFS filesystem.
> The system and the raid array were stable.  The slices mounted at boot
> properly, and everything seemed groovy.
> I have changed the device names with "# dfsmgr -m dsk46 dsk47", and
> fixed the /etc/fdmns with "# advscan -r dsk47", but that didn't solve
> the problem.

What system was on the DS20e?  Maybe a clustered system.  If so, try
the "/usr/sbin/cleanPR clean" script; it cleans the cluster marks
(used by the cluster software) from the disks.

Best regards:
                TYlut

--
Łukasz Tylutki, Gdynia, Poland.


raid device not mounting at boot

Post by Mr. Happy Flap » Fri, 27 Jun 2003 14:31:43



> What system was on the DS20e?  Maybe a clustered system.  If so, try
> the "/usr/sbin/cleanPR clean" script; it cleans the cluster marks
> (used by the cluster software) from the disks.

Lukasz,
   I'm not running a cluster, just a single machine.

-flappy


1. Problem with 2.4.14 mounting i2o device as root device Adaptec 3200 RAID controller?


I have an Intel i2o controller, but I cannot install RH 7.2 (the
install itself works, but the machine won't come up after reboot);
RH 7.1 works fine.  Also, RH 7.2's fdisk sees only one cylinder in my
array, while Disk Druid sees it all, so the installer seems confused
by my i2o.  I tried using the kernel from my working RH 7.1 (2.4.12)
with the same hardware (i2o, Intel STL2 mainboard, 512 MB RAM) and
got a kernel panic.  Is the installer in RH 7.2 broken?
Sorry for my English...
Adam

