VOGONS


First post, by Zup

User metadata
Rank: Oldbie

So my laptop hard disk developed a bad block, at about 48% from the beginning of the disk. That disk contains LOTS of compressed files that hold firmware and tools, and I'd like to know whether any of them are damaged.

The laptop has a standard Windows 7 installation (small hidden partition + big system/data partition), so I'd like to get...

  • A tool that can recursively check the compressed files in a directory tree (zip/7z) and tell me which ones are corrupt.
  • A tool that can map a damaged physical block to a file (or to a disk area, like the boot sector, MFT or FAT), so I can know which file(s) are damaged. It should support disks with more than one partition and various kinds of filesystems (ext, NTFS, FAT). I know that ddrescue can do the trick, but it needs to image the whole disk (which takes some time) just to log the bad blocks, and I already know which blocks are bad. (A rough sketch of the block-to-file arithmetic follows this list.)
  • A tool that can create a file placed directly over a given physical block. I know it's not a good idea to keep using a disk with bad blocks, but I'd like to make sure the system won't write valuable data over a known bad block before I replace it. It should also support more than one partition, and at least the FAT and NTFS filesystems.
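For the block-to-file mapping, the first step is plain arithmetic: find which partition the absolute LBA falls into, subtract that partition's starting sector, and hand the partition-relative sector (or cluster) to a filesystem-level tool (on NTFS, ntfscluster from ntfs-3g can report which file owns a given cluster). A minimal Python sketch with made-up partition offsets; the real start sectors and sizes come from your own partition table (fdisk/diskpart):

# Minimal sketch: map an absolute bad-sector LBA to a partition and a
# partition-relative sector. The layout below is hypothetical, not this disk's.
SECTOR_SIZE = 512  # bytes per logical sector (assumption; some disks use 4096)

# (name, first_sector, sector_count) -- placeholder values from a typical Win7 layout
PARTITIONS = [
    ("System Reserved", 2048, 204800),
    ("Windows (C:)",    206848, 976566272),
]

def locate(bad_lba):
    """Return (partition_name, partition-relative sector) for an absolute LBA."""
    for name, start, count in PARTITIONS:
        if start <= bad_lba < start + count:
            return name, bad_lba - start
    return None, None

if __name__ == "__main__":
    part, rel = locate(488397000)  # example absolute LBA, e.g. from a SMART log
    if part:
        print("Bad sector is inside '%s': partition-relative sector %d (byte offset %d)"
              % (part, rel, rel * SECTOR_SIZE))
    else:
        print("Bad sector lies outside any partition (gap or partition table area)")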

Thanks in advance.

I have traveled across the universe and through the years to find Her.
Sometimes going all the way is just a start...

I'm selling some stuff!

Reply 1 of 3, by canthearu

User metadata
Rank: Oldbie

My quick suggestions.

a) Backup software will tell you which files have unreadable sectors when you run it. If the bad block only affects a file and not any directory structures, this should be sufficient.
b) Don't try to save the disk; replace it with an SSD so you suffer fewer of these sorts of problems in the future.

If you need to go deeper than this, a script could be written in Linux to verify the compressed files recursively, especially if directory structures are damaged and you have to use a tool like ddrescue to make a bit-for-bit copy. Something along the lines of the sketch below.
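A rough sketch of such a script, in Python rather than shell; it assumes p7zip's 7z binary is on the PATH for the .7z archives, and should be treated as a starting point rather than a finished tool:

#!/usr/bin/env python3
"""Recursively test zip/7z archives under a directory and report corrupt ones."""
import os
import subprocess
import sys
import zipfile

def test_zip(path):
    """Return True if the zip archive passes its internal CRC checks."""
    try:
        with zipfile.ZipFile(path) as zf:
            return zf.testzip() is None  # testzip() returns the first bad member, or None
    except (zipfile.BadZipFile, OSError):
        return False

def test_7z(path):
    """Return True if '7z t' (from p7zip, assumed installed) reports the archive as OK."""
    result = subprocess.run(["7z", "t", path],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def main(root):
    bad = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            lower = name.lower()
            if lower.endswith(".zip") and not test_zip(path):
                bad.append(path)
            elif lower.endswith(".7z") and not test_7z(path):
                bad.append(path)
    for path in bad:
        print("CORRUPT:", path)
    print("%d corrupt archive(s) found" % len(bad))

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")

Point it at the root of the tree to scan, and it prints every archive whose CRC test fails (including ones that can't be read at all because of the bad block).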

Reply 2 of 3, by retardware

User metadata
Rank: Oldbie

What you are suffering from is the (justified!) fear of data rot.
The best way to protect yourself is to use a checksummed filesystem that supports RAID configurations.
ZFS is my favorite for that, and regularly scrubbing the data will give you early warning if anything is wrong, so you can take measures (replace drives that have failed or are degrading, then resilver) before data loss/corruption reaches an uncorrectable level.
For this reason, using Windows in a VM on top of an operating system that can use ZFS, on a computer with ECC RAM, is the safest way to go.

Reply 3 of 3, by SirNickity

User metadata
Rank: Oldbie

🤣 -- well, yeah, it sure is. But how many individuals are going to do all of that? 😀 As someone who is technically competent enough to do it, I still can't be bothered. Moving to ECC and deploying underlying VM hosts also isn't really viable for laptops like the OP's. Granted, ZFS or similar on a NAS isn't too big of an ask, and I've been thinking about transitioning my old JFS RAID to something more proactive for a while.

I don't know of any tools that will do the things the OP is asking for out of the box. It sounds like script territory to me as well, but given that it's a Windows box, that would imply PowerShell and tools that expose the relevant API functionality to it. It's definitely easier to do in Linux, if you're fluent, since scriptable CLI tools are a given there. (But if that were an option for you, you probably wouldn't be asking.)

There's no need to write a file over a bad block, though. The file system itself is capable of marking unreadable sectors as unusable. If the block isn't technically unreadable (and thus hasn't been marked as bad by the FS), things get trickier. Hacking FAT tables is something tools like Norton Disk Edit used to do, but there's just not a lot of call for that kind of low-level access anymore (rather, people can't be expected to know how to use such tools without making a royal mess of things), and the OS has abstracted away so much that applications rarely have any notion of what's going on at the disk level. That's generally for the best; think of brain-dead applications that refused access to perfectly good storage space because they couldn't resolve UNC paths, or something along those lines.

Probably the easiest solution to this is the old standby ... write a backup to DVD (or a thumb drive, or a NAS, or...). There are tools like Beyond Compare that can scan a directory tree and tell you whether a mirror copy is still identical or has changed. Don't ever trust something important, stagnant, and irreplaceable to a single storage medium. When you find out a file is damaged, it's inherently too late to prevent it. A homegrown version of that comparison is sketched below.
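If you'd rather not rely on a GUI tool, the same mirror check is easy to approximate with a short script. A hedged Python sketch that hashes both trees with SHA-256; a read error on the source side is itself a strong hint that the file sits on the bad block:

#!/usr/bin/env python3
"""Compare a directory tree against a mirror copy by hashing every file."""
import hashlib
import os
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large firmware archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def compare(original_root, mirror_root):
    for dirpath, _dirnames, filenames in os.walk(original_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, original_root)
            dst = os.path.join(mirror_root, rel)
            try:
                src_hash = sha256_of(src)
            except OSError:
                print("UNREADABLE (possible bad block):", rel)
                continue
            if not os.path.exists(dst):
                print("MISSING IN MIRROR:", rel)
            elif src_hash != sha256_of(dst):
                print("DIFFERS:", rel)

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])

Run it with the original tree and the backup copy as the two arguments; anything reported as UNREADABLE or DIFFERS is a candidate for restoring from the mirror.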