RAID1 and LVM on RH9.0

RAID1 and LVM on RH9.0

Post by Greg Larkin » Mon, 05 May 2003 09:08:09



Hi all,

I have already installed RH9.0 on a new server, and I think I may have
made a mistake getting LVM configured correctly.  The machine has
identical 60Gb IDE drives configured with RAID1 (software RAID).

Basically, I created all of my partitions and sized them to handle the
initial work I will be doing.  Eventually, I'd like to grow them as
necessary with LVM.  As you can see from the fdisk and parted output
below, /dev/hd[ac]15 still has 43Gb of space to allocate.  I created
/dev/md10 as an LVM partition consisting of /dev/hda15 and /dev/hdc15.
 I thought I would be able to extend the existing partitions by
allocating space from that large partition.
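
Before going any further I wanted to be sure the mirror itself is healthy
and see what LVM already knows about.  A quick check along these lines
should be enough (just a sketch of the commands, I haven't pasted the
output here):

# show the state of all software RAID arrays, including /dev/md10
cat /proc/mdstat

# ask the LVM tools whether any physical volumes exist yet
/sbin/pvscan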

However, it looks like I should have created one giant LVM partition
and then allocated space for /home, /usr, /usr/local etc. from that
during the installation.  I used the RH9.0 text installation, so I
don't think I had the option to create volume groups with Disk Druid.

My question is - can I still allocate space from the /dev/md10 device
and create volume groups to extend the existing partitions?

I tried running "/sbin/pvcreate /dev/md10", but got this:


pvcreate -- invalid physical volume "/dev/md10"

If you look at the parted output below, /dev/hd[ac]15 does not have a
filesystem on it.  Do I need to change something in the partition
table to get pvcreate to run on that partition?  I think I set that
partition up as an LVM volume in Disk Druid during installation, but
I can't remember for sure now.

If anyone has an idea on how I can create volumes and extend the
existing partitions by allocating space from the large partition I
have left, please let me know.
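
For reference, the sort of sequence I was hoping to end up with is roughly
the following (the volume group and logical volume names are just
placeholders I made up):

# initialize the mirrored device as an LVM physical volume
/sbin/pvcreate /dev/md10

# create a volume group on it and carve out a logical volume
/sbin/vgcreate vg_extra /dev/md10
/sbin/lvcreate -L 5G -n lv_work vg_extra

# put a filesystem on the new volume and mount it
/sbin/mkreiserfs /dev/vg_extra/lv_work
mount /dev/vg_extra/lv_work /mnt/newspace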

Thank you,
Greg Larkin

The df output looks like this:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md9                513964    135448    378516  27% /
/dev/md3               2096316   1350152    746164  65% /Work1
/dev/md6               2096316    220688   1875628  11% /Work2
/dev/md5               2096316     32840   2063476   2% /Work3
/dev/md4                513964     41680    472284   9% /Work4
/dev/md8                202111     12252    179424   7% /boot
/dev/md2               2096316    334128   1762188  16% /home
/dev/md1               1052120     32840   1019280   4% /opt
none                    252888         0    252888   0% /dev/shm
/dev/md12               256884     33328    223556  13% /tmp
/dev/md13              2096316   1655960    440356  79% /usr
/dev/md0               2096316    100512   1995804   5% /usr/local
/dev/md11               513964    168808    345156  33% /var

Here's some fdisk output from one of the disks:


Disk /dev/hda: 61.4 GB, 61492838400 bytes
255 heads, 63 sectors/track, 7476 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        26    208813+  fd  Linux raid autodetect
/dev/hda2            27       287   2096482+  fd  Linux raid autodetect
/dev/hda3           288       548   2096482+  fd  Linux raid autodetect
/dev/hda4           549      7476  55649160    5  Extended
/dev/hda5           549       809   2096451   fd  Linux raid autodetect
/dev/hda6           810      1070   2096451   fd  Linux raid autodetect
/dev/hda7          1071      1331   2096451   fd  Linux raid autodetect
/dev/hda8          1332      1592   2096451   fd  Linux raid autodetect
/dev/hda9          1593      1723   1052226   fd  Linux raid autodetect
/dev/hda10         1724      1854   1052226   fd  Linux raid autodetect
/dev/hda11         1855      1918    514048+  fd  Linux raid autodetect
/dev/hda12         1919      1982    514048+  fd  Linux raid autodetect
/dev/hda13         1983      2046    514048+  fd  Linux raid autodetect
/dev/hda14         2047      2078    257008+  fd  Linux raid autodetect
/dev/hda15         2079      7476  43359403+  fd  Linux raid autodetect

And GNU parted:

Using /dev/hda
Information: The operating system thinks the geometry on /dev/hda is
7476/255/63.  Therefore, cylinder 1024 ends at 8032.499M.
(parted) print
Disk geometry for /dev/hda: 0.000-58644.140 megabytes
Disk label type: msdos
Minor    Start       End     Type      Filesystem  Flags
1          0.031    203.950  primary   ext3        boot, raid
2        203.950   2251.296  primary   reiserfs    raid
3       2251.296   4298.642  primary   reiserfs    raid
4       4298.643  58643.525  extended              
5       4298.673   6345.988  logical   reiserfs    raid
6       6346.020   8393.334  logical   reiserfs    raid
7       8393.366  10440.681  logical   reiserfs    raid
8      10440.712  12488.027  logical   reiserfs    raid
9      12488.058  13515.622  logical   reiserfs    raid
10     13515.653  14543.217  logical   linux-swap  raid
11     14543.249  15045.249  logical   reiserfs    raid
12     15045.280  15547.280  logical   reiserfs    raid
13     15547.311  16049.311  logical   reiserfs    raid
14     16049.342  16300.327  logical   reiserfs    raid
15     16300.358  58643.525  logical               raid

 
 
 

RAID1 and LVM on RH9.0

Post by Markku Kolkka » Mon, 05 May 2003 19:35:30



> My question is - can I still allocate space from the /dev/md10 device
> and create volume groups to extend the existing partitions?

Not directly. You can migrate your system to LVM and re-initialize the
existing partitions as LVM PVs.
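
For one filesystem at a time that would look roughly like this (assuming
/dev/md10 has already been made a PV inside a volume group; the VG name,
LV name and temporary mount point are only examples):

# carve a new LV out of the volume group that sits on /dev/md10
/sbin/lvcreate -L 2G -n usrlocal vg00
/sbin/mkreiserfs /dev/vg00/usrlocal

# copy the data across, then point /etc/fstab at the new LV
mount /dev/vg00/usrlocal /mnt/new
cp -a /usr/local/. /mnt/new/
umount /mnt/new

# once the old /dev/md0 is no longer in use, turn it into a PV as well
/sbin/pvcreate /dev/md0
/sbin/vgextend vg00 /dev/md0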

> I tried running "/sbin/pvcreate /dev/md10", but got this:


> pvcreate -- invalid physical volume "/dev/md10"

Did you change the partition type of /dev/md10 to 0x8e?
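
You can see what the array members are currently marked as without
changing anything, e.g.:

# look at the Id column for the partitions underneath /dev/md10
/sbin/fdisk -l /dev/hda | grep hda15
/sbin/fdisk -l /dev/hdc | grep hdc15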

--
        Markku Kolkka


 
 
 

RAID1 and LVM on RH9.0

Post by Greg Larkin » Mon, 05 May 2003 22:49:18


Hi Markku,

Thanks for the reply.  Regarding the partition type of /dev/md10, can
I change the partition type of that device, or do I have to change the
underlying /dev/hda15 and /dev/hdc15 to partition type 0x8e?

When I look at the partition types with fdisk, they are all listed as
"fd" because they are part of the software RAID configuration.  Are
you saying I should change the partition types of /dev/hda15 and
/dev/hdc15 to 0x8e?  Will that break the RAID1 configuration for
/dev/md10 somehow?

Thanks,
Greg



> > My question is - can I still allocate space from the /dev/md10 device
> > and create volume groups to extend the existing partitions?

> Not directly. You can migrate your system to LVM and re-initialize the
> existing partitions as LVM PVs.

> > I tried running "/sbin/pvcreate /dev/md10", but got this:


> > pvcreate -- invalid physical volume "/dev/md10"

> Did you change the partition type of /dev/md10 to 0x8e?

 
 
 

RAID1 and LVM on RH9.0

Post by Markku Kolkka » Tue, 06 May 2003 04:16:52



> Thanks for the reply.  Regarding the partition type of /dev/md10, can
> I change the partition type of that device,

Yes, if you want to initialize that partition as an LVM physical volume.

> or do I have to change the
> underlying /dev/hda15 and /dev/hdc15 to partition type 0x8e?

No, that would break soft-RAID, AFAIK.
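
Once /dev/md10 has been initialized as a PV and put in a volume group,
growing space is done at the LVM level rather than in the partition
table.  A minimal sketch, with made-up VG/LV names:

# give an existing logical volume another 2 GB from the volume group
/sbin/lvextend -L +2G /dev/vg00/work

# then grow the reiserfs filesystem to fill the enlarged LV
/sbin/resize_reiserfs /dev/vg00/work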

--
        Markku Kolkka

 
 
 

Raid1 on RH9?

I have just got a new machine with dual 80GB HDDs, specifically to set up
RAID1 (mirroring) of the drives, with the intention that I can just change
the slave to master, yank the old master and reboot should the original
master fail.  I have done RAID5 before (some time ago, though) but not
RAID1.  Is there a nice, neat way to do this with the Red Hat 9 install
program, or is there some good documentation online to help me through it?
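
The closest I have pieced together so far is the manual route with mdadm,
which seems to be roughly this (the partition names are just guesses for
my box, and I'd still prefer to do it from the installer if possible):

# build a two-disk RAID1 mirror from matching partitions on each drive
/sbin/mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# watch the initial sync, then put a filesystem on the array
cat /proc/mdstat
/sbin/mke2fs -j /dev/md0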

Thanks
Dave
