1. System works ! Interesting challenge with mount points
I have a really interesting problem that is not a technical one but more of
a logistical one.
I am currently building a new file server for work and was talked into an
Adaptec 2100S RAID controller with four IBM Ultrastar 18.1GB SCSI drives in
a RAID 1+0 arrangement (data striped across drives 1 and 2, with drive 3
mirroring drive 1 and drive 4 mirroring drive 2). Hardware-level setup
proceeded with no problems; the other hardware details are not relevant
because it all works.
I previously had a server running RH 6.0, so the RH 7.2 official release was
my first choice of OS.
After some initial research I found that support for the Adaptec i2o
low-level drivers did not make it into the kernel until 2.4.10. I had a new
20GB IDE drive available, so I put that into the machine alongside the SCSI
drives and installed RH 7.2 (kernel 2.4.7) on it to use as a boot drive. I
can't boot from the RAID because the 2.4.7 kernel doesn't know it exists
until the i2o drivers are loaded.
I downloaded the 2.4.10 RH source and after recompiling to include the i2o
device drivers I got the RAID to work fine. I have made a filesystem on it
and had it running alongside the IDE drive. It is *very* fast :-)
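Roughly the steps I followed for the rebuild, in case anyone wants to check
my working (paths and option names are approximate; the i2o options are
under "I2O device support" in menuconfig, and the partition number is from
my setup):

```shell
# Rebuild 2.4.10 with I2O support enabled (steps approximate)
cd /usr/src/linux-2.4.10
make menuconfig             # enable I2O support and the I2O block device OSM
make dep bzImage modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.10
/sbin/lilo                  # re-run LILO after adding the new image
mkfs -t ext2 /dev/sda1      # then make a filesystem on the array
```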
I have put a 500 MB /boot partition on the IDE drive and have LILO set up to
offer the choice between kernels at boot time. After booting the 2.4.10
kernel I made the appropriate changes to /etc/fstab to mount the RAID
filesystem on boot, and it all works.
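For reference, the line I added to /etc/fstab looks roughly like this (the
array shows up as /dev/sda here; partition number and mount point are from
my setup, and ext2 is just what I happened to use):

```
# mount the RAID array at boot (device name from my setup)
/dev/sda1    /mnt/raid    ext2    defaults    1 2
```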
Here is the challenge:
I would really like to use the boot drive only at boot time to minimise the
chance of failure of the IDE drive.
If I mount /dev/sda (the RAID) as /home, most of the program files, such as
those in /bin and /sbin, and all the system files in /var, /etc and /usr
will still be on the IDE, and only the data will end up on the RAID.
During install the system is booted from the 2.4.7 kernel on the install
disk and only knows about the IDE so that's where it puts / and the rest of
the system files.
Hence in normal operation the IDE will probably work as hard as the RAID,
and since it is not mirrored I still have a single point of failure in the
system.
The best I have come up with is to partition the RAID with a number of
partitions, one for each of the mount points /bin, /sbin, /etc ... and to
mount each of these separately (yuk!)
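If I went that route, the fstab would end up looking something like the
following (partition numbers are hypothetical; and note that /etc, /bin and
/sbin would really have to stay on the root partition anyway, since mount
itself and fstab live there, so in practice it would be /usr, /var, /home
and the like):

```
# one RAID partition per mount point (layout hypothetical)
/dev/sda1    /        ext2    defaults    1 1
/dev/sda2    /usr     ext2    defaults    1 2
/dev/sda3    /var     ext2    defaults    1 2
/dev/sda5    /home    ext2    defaults    1 2
/dev/hda5    /boot    ext2    defaults    1 2
```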
I was always told the idea behind having a few partitions is that if you
trash one part of your system it is isolated damage because the partitions
separate the data onto different physical parts of the drive. Doesn't this
become a bit meaningless with data striping? The blocks will be all over
the place so partitioning for data security seems nonsensical.
Ideally I would like to have /dev/sda mounted as / and only /dev/hda5 (the
IDE boot partition) mounted as /boot but I cannot see how to do it.
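What I am picturing is a lilo.conf along these lines, with root pointing at
the RAID while the kernel image is still read from the IDE /boot (device
names are from my setup, and this is untested):

```
# boot the kernel from the IDE, root filesystem on the RAID (untested)
boot=/dev/hda
image=/boot/vmlinuz-2.4.10
        label=linux-raid
        root=/dev/sda1
        read-only
```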
Any ideas, as always, greatly appreciated.
Top of the Range Food Products
2. Finding the difference of 2 DB snapshots
3. Interesting challenge - beeper software
4. Start CDE w/o having root access ?
5. Interesting challenge with Importvg on Aix 4.3
6. Adaptec 1542C BIOS problem
7. Interesting Challenge: Query, how to "Insert (Driver) Module into kernel?
8. Anonymous ftp
9. Interesting ipchains/lpd problem (solved)
10. interesting problem with ipchains
11. ipchains-save, ipchains-restore (and WINS)
12. ipchains: command not found - only sometimes (ipchains newbie)
13. ipchains log analysis tool (ipchains-db.pl)