System works! Interesting challenge with mount points

Post by Ian MacPherson » Tue, 11 Dec 2001 21:22:04



Hi Everybody

I have a really interesting problem that is not a technical one but more of
a logistical one.

I am currently building a new file server for work and was talked into an
Adaptec 2100S RAID controller with four IBM Ultrastar 18.1 GB SCSI drives in
a RAID 1+0 arrangement (drive 3 mirrors drive 1, drive 4 mirrors drive 2,
and data is striped across drives 1 and 2).  Hardware-level setup proceeded
with no problems.  The other hardware details are not relevant because it
all works.

I previously had a server running RH 6.0 so RH 7.2 official release was my
first choice for opsys.

After some initial research I found that support for the Adaptec i2o
low-level drivers was not in the kernel until 2.4.10.  I had a new 20 GB IDE
drive available, so I put that into the machine next to the SCSI drives and
installed RH 7.2 (kernel 2.4.7) to use as a boot drive.  Booting from the
RAID isn't possible because the 2.4.7 kernel doesn't know it is there until
the i2o drivers are loaded.

I downloaded the 2.4.10 RH source and after recompiling to include the i2o
device drivers I got the RAID to work fine.  I have made a filesystem on it
and had it running alongside the IDE drive.  It is *very* fast :-)
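
Roughly what that involves in the 2.4 kernel config (the exact option names
below are from memory and may differ between trees, and which driver applies
depends on whether the array is driven through the generic i2o block layer
or the Adaptec/DPT SCSI-layer driver):

    # "I2O device support" section of the 2.4 config
    CONFIG_I2O=y              # generic I2O core
    CONFIG_I2O_PCI=y
    CONFIG_I2O_BLOCK=y        # array shows up as /dev/i2o/hd*
    CONFIG_I2O_PROC=y
    # or, via the Adaptec/DPT SCSI-layer driver (array shows up as /dev/sd*):
    # CONFIG_SCSI_DPT_I2O=y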

I have put a 500 MB /boot partition on the IDE drive and have LILO set up to
properly offer the choice between kernels at boot time.  After booting the
2.4.10 kernel I made the appropriate changes to fstab to mount the RAID
filesystem on boot and it all works.
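
Roughly what that looks like in lilo.conf (kernel file names, labels and the
root partition below are illustrative, not the exact ones on this box):

    # /etc/lilo.conf
    boot=/dev/hda                    # boot blocks live on the IDE drive
    prompt
    timeout=50
    default=linux-2410

    image=/boot/vmlinuz-2.4.10       # custom build with the i2o drivers
        label=linux-2410
        root=/dev/hda1               # wherever / currently lives on the IDE
        read-only

    image=/boot/vmlinuz-2.4.7        # stock RH 7.2 kernel kept as a fallback
        label=linux-247
        root=/dev/hda1
        read-only

Remember that /sbin/lilo has to be re-run after any change to this file.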

Here is the challenge:

I would really like to use the boot drive only at boot time to minimise the
chance of failure of the IDE drive.

If I mount /dev/sda (the RAID) as /home, the program files such as those in
/bin and /sbin, and all the system files in /var, /etc and /usr, will still
be on the IDE, and only the user data will end up on the RAID.  During the
install the system is booted from the 2.4.7 kernel on the install disk and
only knows about the IDE, so that's where it puts / and the rest of the
system files.
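
In fstab terms that arrangement is simply (partition numbers and filesystem
types illustrative, swap lines omitted):

    /dev/hda1    /        ext2    defaults    1 1
    /dev/hda5    /boot    ext2    defaults    1 2
    /dev/sda1    /home    ext2    defaults    1 2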

Hence for normal operation the IDE will probably work as hard as the RAID,
and it is not mirrored, so I still have a single point of failure in the
system.  The best I have come up with is to partition the RAID into a number
of partitions, one for each of the mount points /bin, /sbin, /etc ... and to
mount each of these separately (yuk!).

I was always told the idea behind having a few partitions is that if you
trash one part of your system it is isolated damage because the partitions
separate the data onto different physical parts of the drive.  Doesn't this
become a bit meaningless with data striping?  The blocks will be all over
the place so partitioning for data security seems nonsensical.

Ideally I would like to have /dev/sda mounted as / and only /dev/hda5 (the
IDE boot partition) mounted as /boot but I cannot see how to do it.

Any ideas, as always, greatly appreciated.

Kind Regards
Ian MacPherson
Sysadmin
Top of the Range Food Products
Toowoomba Australia

 
 
 

System works! Interesting challenge with mount points

Post by Kosh Vade » Tue, 11 Dec 2001 23:54:42


On Mon, 10 Dec 2001 22:22:04 +1000, "Ian MacPherson" wrote:


>Hi Everybody

>I have a really interesting problem that is not a technical one but more of
>a logistical one.

>I am currently building a new file server for work and was talked into an
>Adaptec 2100S RAID controller with four IBM Ultrastar 18.1 GB SCSI drives in
>a RAID 1+0 arrangement (drive 3 mirrors drive 1, drive 4 mirrors drive 2,
>and data is striped across drives 1 and 2).  Hardware-level setup proceeded
>with no problems.  The other hardware details are not relevant because it
>all works.

Good...

>I previously had a server running RH 6.0 so RH 7.2 official release was my
>first choice for opsys.

>After some initial research I found that support for the Adaptec i2o
>low-level drivers was not in the kernel until 2.4.10.  I had a new 20 GB IDE
>drive available, so I put that into the machine next to the SCSI drives and
>installed RH 7.2 (kernel 2.4.7) to use as a boot drive.  Booting from the
>RAID isn't possible because the 2.4.7 kernel doesn't know it is there until
>the i2o drivers are loaded.

Okay...

>I downloaded the 2.4.10 RH source and after recompiling to include the i2o
>device drivers I got the RAID to work fine.  I have made a filesystem on it
>and had it running alongside the IDE drive.  It is *very* fast :-)

Very good...

>I have put a 500 MB /boot partition on the IDE drive and have LILO set up to
>properly offer the choice between kernels at boot time.  After booting the
>2.4.10 kernel I made the appropriate changes to fstab to mount the RAID
>filesystem on boot and it all works.

500MB for a "/boot" partition???  Why?  That's very excessive, and
wastes a lot of disk space for no good reason, regardless of the size
of the drive.  A typical "/boot" partition should only be between 16MB
and 32MB, at most.  Even with 16MB (ext2), you can house five to eight
different kernels.  Keep this partition really small and light (that
is, no journaling).

>Here is the challenge:

>I would really like to use the boot drive only at boot time to minimise the
>chance of failure of the IDE drive.

That is to say, have "/dev/hda" only contain the "/boot" and "/"
partitions, correct?  Having a swap partition on this drive wouldn't
be bad either since it's destined to have lower disk activity than
your RAID drive.  Do have a swap partition on the RAID drive though,
but you might give it a lower priority.  This configuration really
will depend on VM usage and the speed of your IDE drive.
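
For reference, swap priority is set per entry in fstab with the "pri="
option (partition names below are made up); the entry with the higher
number is used first:

    /dev/hda7    swap    swap    defaults,pri=2    0 0   # IDE swap, used first
    /dev/sda2    swap    swap    defaults,pri=1    0 0   # RAID swap, lower priority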

>If I mount /dev/sda (the RAID) as /home, the program files such as those in
>/bin and /sbin, and all the system files in /var, /etc and /usr, will still
>be on the IDE, and only the user data will end up on the RAID.  During the
>install the system is booted from the 2.4.7 kernel on the install disk and
>only knows about the IDE, so that's where it puts / and the rest of the
>system files.

I would move both "/usr" and "/var" off the IDE drive to the RAID
drive, and if you use "/opt" and don't symlink it to somewhere else
like "/usr/local/opt", move that one as well.

>Hence for normal operation the IDE will probably work as hard as the RAID,
>and it is not mirrored, so I still have a single point of failure in the
>system.  The best I have come up with is to partition the RAID into a number
>of partitions, one for each of the mount points /bin, /sbin, /etc ... and to
>mount each of these separately (yuk!).

Sorry, that's a very bad idea.  The root directories "/sbin", "/bin",
"/etc", "/root", "/lib", and "/dev" must stay on the root (/) file
system.  Don't attempt to split them off as separate file systems;
your Linux system will fail to boot if you do that.

>I was always told the idea behind having a few partitions is that if you
>trash one part of your system it is isolated damage because the partitions
>separate the data onto different physical parts of the drive.  Doesn't this
>become a bit meaningless with data striping?  The blocks will be all over
>the place so partitioning for data security seems nonsensical.

Actually, there are a couple of good reasons why partitioning is used.
First, you are logically isolating your files (system and user) to
help protect them from soft errors, such as file system corruption or
an administrative mishap.  For instance, you can protect the "/usr"
file system by mounting it read-only.  Granted, that wouldn't save you
from hard errors, as in physical damage or malfunction, but that's why
backups are performed.  Fault tolerance only gives protection from
hard errors; soft errors can still occur, and partitioning helps to
minimize widespread damage.  It also helps in preventing users,
processes, crackers, and so on from filling up the whole file system,
as they could if "/" were the only file system.  For example, if a
runaway process kept filling up one of the system log files in
"/var/log", and that directory was located on the root "/" file
system, your system would falter and/or fail once all the space was
used up.  But if you had made "/var" or "/var/log" a separate file
system, the rest of your system would continue to function, giving you
the chance to fix the problem cleanly without having to bring down the
whole system.
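
In fstab terms, the sort of layout being described looks like this
(partition names illustrative; a read-only /usr has to be remounted
read-write, "mount -o remount,rw /usr", whenever you upgrade packages):

    /dev/sda2    /usr    ext2    defaults,ro    1 2   # system files, read-only
    /dev/sda3    /var    ext2    defaults       1 2   # logs kept off the root fs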

>Ideally I would like to have /dev/sda mounted as / and only /dev/hda5 (the
>IDE boot partition) mounted as /boot but I cannot see how to do it.

As it is late for me right now, I don't have much time to go into
great detail about this, so I'll provide a HOWTO URL that tells you
how you can push all content from the IDE drive to your RAID drive and
totally free up your IDE drive.

http://www.linuxdoc.org/HOWTO/mini/Hard-Disk-Upgrade/

This plan should work since you already have a kernel that supports
your RAID hardware.  Take a look at it and see what you think.
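
Compressed to a sketch, the procedure in that HOWTO amounts to the following
(device names illustrative; read the real document before trying it):

    # with the 2.4.10 kernel booted and the RAID partitioned
    mke2fs /dev/sda1
    mount /dev/sda1 /mnt
    cp -ax / /mnt                    # copy the root fs; -x stays on one filesystem
    vi /etc/fstab                    # / becomes /dev/sda1, /boot stays /dev/hda5
    vi /etc/lilo.conf                # root=/dev/sda1 under the 2.4.10 image
    cp /etc/fstab /etc/lilo.conf /mnt/etc/   # keep the new root's copies in sync
    lilo                             # /boot and the boot blocks stay on the IDE
    reboot                           # then check with 'df' that / is /dev/sda1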

Chris

>Any ideas, as always, greatly appreciated.

>Kind Regards
>Ian MacPherson
>Sysadmin
>Top of the Range Food Products
>Toowoomba Australia