The box that I'm having trouble with ran RedHat 4.2 for over a year
with no difficulty. I then installed RedHat 5.0, and my logical
partitions failed beyond repair. I chalked it up to a 5.0 conflict with
something on my system and simply installed SuSE Linux 5.2.
Now, after about four days of uptime, the logical partitions have failed
again, and I have absolutely no idea what the cause may be.
My Windows partition on /dev/hda1 is just fine and has survived both of
these partition table failures.
In both instances I had shut down the system normally with shutdown -h now.
In neither case did I do anything at all involving the partition table,
fdisk, etc., that would be an apparent cause of the trouble. I am the only
user of both of my systems, and all my ports are closed to outside access,
so foul play is pretty much ruled out.
There have been no BIOS changes on my system either.
What happens is that at bootup fsck starts and either hits a bad superblock
error (as in the first failure, which was insurmountable even with
e2fsck -b) or an error relating to the destruction of the logical partition
that leaves a partition size of zero blocks.
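(For anyone else hitting the same thing: the backup-superblock recovery can
be rehearsed safely on a scratch image file before touching a real disk.
The paths and sizes below are just illustrative; on the real system the
target would be the actual partition, e.g. /dev/hda5.)

```shell
# Build a scratch ext2 image to practice on -- never the real disk.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 2>/dev/null

# ext2 with 1 KB blocks puts backup superblocks at 8193, 16385, ...
mke2fs -q -F -b 1024 /tmp/scratch.img

# List where the backup superblocks actually are.
dumpe2fs /tmp/scratch.img 2>/dev/null | grep -i 'backup superblock'

# If the primary superblock is damaged, point e2fsck at a backup.
# (On a real partition: e2fsck -b 8193 /dev/hda5)
# Exit status 1 here only means the primary superblock was rewritten.
e2fsck -f -y -b 8193 /tmp/scratch.img || true
```

If e2fsck -b 8193 fails, running mke2fs -n on the partition (the -n means
it writes nothing) prints the other backup superblock locations to try.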
I am aware that if I had written down the exact cylinder assignments for
the table I could simply rewrite it, but unfortunately that is a no-go.
I want to know if there is anything I did wrong, or anything I could check,
before I waste the effort of another full install and configuration. And
yes, the cylinder assignments will be written down this time.
I'm also going to stick with primary partitions only this time, to see
whether the logical partitions had anything to do with it.
Thanks in advance.