I'm running Slackware 1.1.2, with the Linux 1.0 kernel. I have 2 IDE
drives, and a SCSI tape backup. A couple of days ago I tried to make a
full backup of my system onto tape, and I noticed that four files
didn't copy properly, apparently because of bad blocks on the hard drive.
So I decided to get fsck to find and fix the bad blocks, but running
fsck -a (or e2fsck -a) on the drive didn't help. Next I grabbed Ted
Ts'o's new alpha e2fsck and ran it with "e2fsck -c /dev/hdb2". It
would go off and run
badblocks (which is what it is supposed to do), and then I'd get a lot
of HD read_intr: status/error messages, and then get an HD read_error
on the device (and it would give a sector number.. 38600, I believe).
It would then go through this again and give the same sector, then it
would go through it again and give a different sector (38841, I
think), and again repeat the cycle with that sector. After that,
control seemed to return to e2fsck, even though it hadn't run through
the whole disk!
I am currently running badblocks by hand.
Has anyone else seen this problem, or does anyone know of an easy way
to fix bad blocks? (I thought they were supposed to get automatically remapped?)
Granted, the four files affected by this aren't anything important,
and I can easily replace them (they come straight off the
distribution, so I can just untar that file again). However, I'd like
to fix the bad blocks so I don't lose any of my files.
(On another note: I noticed today that there is a new release of Ted's
ext2fs tools, so if I can't get this to work when I get back home, I
plan to grab a fresh copy tonight.)
Any ideas? Thanks ahead of time for your help.
Derek Atkins, SB '93 MIT EE, G MIT Media Laboratory
Member, MIT Student Information Processing Board (SIPB)
Home page: http://www.mit.edu:8001/people/warlord/home_page.html