Grr. Another Backup Drive Failure…

I don’t know what it is, but the backup drives I use, from various vendors and on different motherboards, all end up giving me I/O errors, and some time after that they are gone. They won’t show up in the BIOS anymore, and even DiskWarrior can’t fix them.

Lots of work gone. Again. I need to think this over. I need a break. Enough is enough…

17 thoughts on “Grr. Another Backup Drive Failure…”

    • It’s not the method that is causing the problems. It’s the massive amount of data that I back up. This causes the drives’ wear-level indicator to drop too quickly for my needs, and I need new drives every 8 months or so.

  1. Sad 😞 when good drives die. Losing important data is not fun. Hopefully these were redundant backup drives, i.e. not your only copy of that data. I’m paranoid about backups: local backups, cloud, etc.

  2. Hi Pike, sorry to read that.

    Since 10.5 I have been using my own backup strategy:

    1. I don’t rely on Time Machine.
    2. I leave my backup disks unpowered when I’m not using them.
    3. I don’t do any backup over Wi-Fi, even though it works with my tool.
    4. I only use two drives: a 2 TB Hitachi SATA (most reliable) in a Craft SATA enclosure
    and a 2 TB Western Digital USB 3 drive.
    5. I only do differential backups.

    And in more than ten years I have never had the kind of failure you have now had twice.

    Here is a summary of my tool:

    ~ : sudo rsyncFF -h

    Usage : rsyncFF [-q|-v] [-n] [-e|E] [-r] [-h]

    Compares the SOURCE and TARGET disk/directory.
    Erases items in the TARGET disk/directory that do not exist on SOURCE.
    Copies missing items to the TARGET disk/directory, except ‘.DS_Store’ files.

    Notice that ‘.DS_Store’ files are always treated as excluded items and
    should not be put in the ‘Excluded.txt’ file.

    Use sudo to suppress warnings about forbidden items.

    Filesystem mount points are not traversed and no symbolic links are followed.

    The options are as follows:

    -q Do not display an entry for each copied item in the TARGET disk/directory.
    -v Display an entry for each deleted and copied item in the TARGET disk/directory.
    If -q and -v are specified together, the -q option is ignored.
    -n Dry run: do not delete or copy any item.
    -E|-e Read the excluded items from the external file /usr/local/Excluded.txt.
    The -E option does not delete the excluded items in the TARGET disk/directory,
    while the -e option deletes them and should only be used to
    synchronize a disk with a directory.
    -r Run rsyncFF remotely to build ‘/private/var/tmp/listFF.txt’ file.
    -h Show this help screen.

    Version : 2.2.0

    This tool is built on available Apple source code. It is very fast: it builds a list of 500,000 items
    in 4 or 5 seconds.

    On my machine with medium size files I reach around 100MB/sec copy speed with Sata II disk.

    You give away so much of your work that I feel I should share a bit of mine; feel free to send me an email and I will give you the source code of my tool.
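    rsyncFF is the commenter’s private tool, but the behaviour its help text describes can be roughly approximated with stock rsync. This is only a sketch: the paths are placeholders, and rsyncFF’s exact semantics may differ.

```shell
# Approximation of rsyncFF's default behaviour with stock rsync:
#   -a          archive mode (copies symlinks as symlinks, never follows them)
#   --no-links  skip symbolic links entirely, as rsyncFF does
#   -x          do not cross filesystem mount points
#   --delete    erase items in TARGET that no longer exist in SOURCE
# Excluded items are protected from deletion by default (like rsyncFF's -E);
# adding --delete-excluded would behave like rsyncFF's -e instead.
sudo rsync -a --no-links -x --delete \
    --exclude '.DS_Store' \
    --exclude-from /usr/local/Excluded.txt \
    /Volumes/Source/ /Volumes/Backup/
```

    Adding `-n` gives the same kind of dry run as rsyncFF’s `-n`: it reports what would be deleted or copied without touching anything.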

  3. Well, I had an issue with similar symptoms recently, after changing pretty much every other component in my system, including the motherboard, the drives themselves, and the SATA cables. In the end it turned out to be a dodgy SATA power cable: the connection from the PSU through the cable would fluctuate intermittently, which is why it was so difficult to pin down. The power fluctuation would occur randomly, at any time while the drive was powered on, and it was enough to register many thousands of I/O errors in just a couple of months (as SATA errors/warnings in ZFS and dmesg).

    (Unlike yours, however,) my drive was an SSD. So instead of an outright physical failure, the SSD simply ended up really confused, and could be saved with a ‘reset’ (all sectors fully cleared/zeroed). That stopped all the disk I/Os hanging, and the problem drive started working fine again, like new. No further problems.
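    A “reset” of that sort is just a full zero-fill. The idea can be tried safely on a throwaway file-backed image, as below; on a real drive you would point dd at the device node instead, which irreversibly wipes it, so the path must be triple-checked.

```shell
# Zero-fill "reset" demonstrated on a file-backed image, NOT a real disk.
# On a real SSD the of= target would be the device node (e.g. /dev/sdX).
IMG=$(mktemp)                                              # stand-in for the drive
dd if=/dev/urandom of="$IMG" bs=1048576 count=4 2>/dev/null  # simulate old contents
dd if=/dev/zero of="$IMG" bs=1048576 count=4 conv=notrunc 2>/dev/null  # the reset
# tr deletes every NUL byte; an empty result means every sector is zero
[ -z "$(tr -d '\0' < "$IMG")" ] && echo "fully zeroed"
rm -f "$IMG"
```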

    • I also got an SSD replacement from OCZ recently, for the third time, but the boot sector was damaged beyond repair. It simply wouldn’t boot anymore.

      Note: I said “different motherboards”, but that should have read “different computers”.

  4. Piker, you could just use a NAS. Buy one like a Synology RS3614xs+ and set up RAID 5/6 or Btrfs with two cache SSDs,
    or build your own with FreeNAS and a RAID-Z3 or something like that. Put a 10 Gb Intel NIC inside, and either way you get really fast, redundant storage :-)
    This is the second time this has happened to you :-)
    Just my 2 cents.


    • I use 10 Gbps NICs for my local network, but they are nowhere near as fast as Thunderbolt 3.

      P.S. My Internet connection should be 1 Gbps, but it is usually only 400~500 Mbps (the router can’t handle it).

  5. I too use 10 Gb NICs on my LAN, Chelsio cards, but only point to point at the moment.
    I had similar problems and moved to ZFS to better identify when bad sectors were popping up. (HFS isn’t exactly great for data storage, from what I’ve read.)
    After scratching my head and swapping out just about everything, I put an active UPS in place for my home server and backup machine, and I’ve never had bad sectors since. I now rotate/decommission all my server drives after 4 years, i.e. relegate server drives to backup drives and retire old backup drives. This works reasonably well: as drives increase in size over time, I can choose the best spindle count versus size to achieve the I/O I want/need.
    I deploy a lot of SSDs, and in the last 3-4 years Samsung and SanDisk are the only two brands we rarely get back dead from customers.

    Always luv your efforts Pike, I selfishly wish you had more time to post even more!

  6. Had exactly the same issue with local HDDs before, but I don’t want to bother with hard drives anymore.

    I use Arq, a Mac application for creating encrypted backups in the cloud (Amazon, Google Drive, Dropbox, OneDrive, etc. are all supported). I think it’s totally worth the money; I’ve been using Arq for a long time now and can only recommend it!

    The data you upload is not readable or viewable in your cloud storage’s web manager: it uploads only an encrypted “mess”, so if someone hacks the Google Drive or Dropbox account you set as the backup destination, the data is not readable. If you want to recover files, you type your encryption password into Arq, and it collects the data and restores it to your file system.
