VOGONS

Life expectancy of an "old" HDD
Reply 40 of 47, by CwF

Rank: Member

Power quality and keeping things cool are very true points. Just about anything that moves wears the most on start-up and shut-down.

I used only SCSI until the 2000s. I bought 20+. Then I used some IDEs, maybe 8, and then a few SATAs. All but 2 of the IDEs died in use, and 2 SATAs died within warranty. Only 1 SCSI died. All of the 80-pin drives I had still work; a few are approaching 100,000 hours.

Another killer is altitude. At 10,500 ft, only the old SCSIs made it past 3-4 years.

I used to know what I was doing...

Reply 41 of 47, by waterbeesje

Rank: Oldbie

SCSI is another thing. These are designed to run 24/7 under a constant high load. More often than not they ran for years without problems, but wear was high. Once shut off for server maintenance they wouldn't spin up any more afterwards. Gone. Pray your backup was good.

Stuck at 10MHz...

Reply 42 of 47, by CwF

Rank: Member

I remember the shutdown stiction being a Seagate thing? All mine are IBMs. The 1-inch 80-pin drives start up like a turbine: a full second of something whining, a click, and a pitch change as it ramps up. Ya, an APU sound for a few seconds, then the turbine fires!

I used to know what I was doing...

Reply 43 of 47, by techgeek

Rank: Newbie

There are two failure modes for hard drives: controller failure and mechanical failure.

The controller fails mostly due to aged capacitors or firmware bit rot. Expect a lifetime of about 15 to 30 years depending on the capacitor type. The first firmware that used flash (for the purpose of updates) or EPROM, rather than ROM, started appearing in the early 90s. These will start failing soon - I already have two hard drives which are OK mechanically but whose firmware was corrupt. In some cases the EPROMs are easily removable and you can reprogram them if you have an image.

Mechanical failure results from head stiction, platter damage (especially to the system area, which contains data critical for HDD operation) or PFPE lubricant aging. To prevent stiction in old hard drives you need to power them up at least once a year. However, the lubricants age and will become hardened after about 25-40 years. Then it doesn't matter what you do; your hard drive will die.
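
For what it's worth, that yearly power-up can be scripted. A minimal sketch, assuming Linux raw device nodes and root privileges; the device list and read offsets are hypothetical, so fill in your own and shrink the offsets for very small drives:

```python
#!/usr/bin/env python3
# Minimal sketch of the "power them up once a year" routine: spin up each
# drive and read from a few spots so the heads actually move. Assumes Linux
# raw device nodes and root; the DEVICES list is hypothetical - fill in your
# own, and shrink the offsets for very small drives.

DEVICES = ["/dev/sdb", "/dev/sdc"]      # hypothetical: your old drives
OFFSETS = (0, 64 * 2**20, 128 * 2**20)  # byte positions to read from
CHUNK = 2**20                           # read 1 MiB at each position

def exercise(dev: str) -> None:
    with open(dev, "rb", buffering=0) as disk:
        for offset in OFFSETS:
            disk.seek(offset)   # move the heads
            disk.read(CHUNK)    # force a real platter read

if __name__ == "__main__":
    for dev in DEVICES:
        try:
            exercise(dev)
            print(f"{dev}: spun up and read OK")
        except OSError as exc:
            print(f"{dev}: FAILED ({exc})")
```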

Reply 44 of 47, by Caluser2000

Rank: l33t

I've got perfectly functioning old drives well over 25 years old. The 420 meg one in my 286 IRC client system is on 24/7.


There's a glitch in the matrix.
A founding member of the 286 appreciation society.
Apparently 32-bit is dead and nobody likes P4s.
Of course, as always, I'm open to correction...😉

Reply 45 of 47, by probnot

Rank: Member
maxtherabbit wrote on 2021-04-22, 02:08:

Hard drive sounds are a feature, not a bug. Not liking them is an incorrect opinion.

I love the clicky-click sounds, and even some bearing whine. But I have some drives that have gotten so loud they drown out the speakers.

Reply 46 of 47, by shamino

Rank: l33t
starhubble wrote on 2021-04-22, 09:06:

Okay, so the spindowns: Good or bad? Is it better to have the drive spin constantly or spin down when idle? Someone mentioned that unnecessary spinups might put extra strain on the drive.

I think you have to judge whether it's useful for your application.
On a desktop PC with 1 hard drive, I think it's useless and will only serve to annoy you. If the desktop is actually idle, then I would use standby, which will do more than just turn off the hard drive.

In a media center or other home-server situation, there will be huge periods of time where the system is running but a drive isn't needed. In that case it makes sense to spin it down, IMO, because it will dramatically reduce the running hours of the drive. In that situation I think it would extend the drive's life, as long as the change in state doesn't happen very frequently.
On my server (which has multiple drives, mostly holding media files) I use a software utility with a delay of something like 30 minutes. A couple of drives probably get woken 2-3 times per day; others can go days or weeks between wakeups. Given that schedule, I think 30 minutes is fine and keeps a drive from falling asleep if I'm still semi-actively using it.
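
In rough terms, that sort of delay-based utility boils down to a loop like the sketch below (a minimal illustration, assuming Linux's /proc/diskstats and the hdparm tool; the drive name is hypothetical):

```python
#!/usr/bin/env python3
# Sketch of a delay-based spin-down utility: if a disk has seen no I/O for
# IDLE_SECS, ask it to enter standby. Assumes Linux (/proc/diskstats) and
# the hdparm tool; "sdb" is a hypothetical drive name.
import subprocess
import time

DISK = "sdb"          # hypothetical device to manage
IDLE_SECS = 30 * 60   # 30-minute delay, as described above

def io_counters(disk: str) -> tuple[int, int]:
    """Return (reads completed, writes completed) for one disk."""
    with open("/proc/diskstats") as stats:
        for line in stats:
            fields = line.split()
            if fields[2] == disk:
                return int(fields[3]), int(fields[7])
    raise ValueError(f"{disk} not found in /proc/diskstats")

last = io_counters(DISK)
idle_since = time.monotonic()
while True:
    time.sleep(60)
    now = io_counters(DISK)
    if now != last:                    # any I/O resets the idle clock
        last, idle_since = now, time.monotonic()
    elif time.monotonic() - idle_since >= IDLE_SECS:
        # hdparm -y = put the drive into standby (spun down) immediately
        subprocess.run(["hdparm", "-y", f"/dev/{DISK}"], check=False)
        idle_since = time.monotonic()  # don't re-issue every minute
```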

I had a new 2.5" laptop hard drive in that server at one point. After several months the drive started failing to respond. I dug into the SMART data and realized it had racked up a stupidly high number of start/stop cycles. It was one of those 8-second wonders. I basically ruined it.
It mostly still works, but if it ever parks or spins down it's always questionable whether it will wake up again. I took it out of the server.
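
Catching that kind of runaway counter early is just a matter of polling the SMART attributes. A minimal sketch, assuming smartmontools is installed; attribute names vary by vendor, and the device path is hypothetical:

```python
#!/usr/bin/env python3
# Sketch: pull the start/stop and head-load counters out of smartctl output
# so a runaway park/spin cycle shows up early. Assumes smartmontools is
# installed; attribute names vary by vendor, and /dev/sda is hypothetical.
import subprocess

WANTED = ("Start_Stop_Count", "Load_Cycle_Count", "Power_On_Hours")

def smart_counts(dev: str) -> dict[str, str]:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    counts = {}
    for line in out.splitlines():
        fields = line.split()
        # attribute table rows look like: ID# NAME FLAG VALUE ... RAW_VALUE
        if len(fields) >= 10 and fields[1] in WANTED:
            counts[fields[1]] = fields[9]  # raw value, as reported
    return counts

print(smart_counts("/dev/sda"))
```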

This automatic 8-second behavior on some hard drives I think just parks the heads - not sure if it actually spins down the platters. Maybe it does both with different delays?
I object to the basic idea of this being an internal function of the drive. The job of hardware is to obey commands, not make up its own.
The only reason for the firmware to do this kind of thing autonomously would be if it somehow had better knowledge than the OS of when it made sense to park the heads (or spin down entirely). But reality is the opposite: the hard drive is blind to what's going on in the rest of the system.

The operating system is supposed to operate the hardware and provide the interaction between that hardware and the user. It's talking to everything in the box and knows what's going on with every device, software service, and user. It has lots of RAM, a powerful processor, graphics, and user input devices to make "power management" behavior adjustable and/or intelligent. Software can make these decisions as simple or complex as somebody believes is worthwhile to program.
Instead we have a little microcontroller on the back of a hard drive that has stolen this decision because it thinks it knows best.

I have to assume the reason the manufacturers started doing this was so they could advertise lower power usage. By internalizing power management, they can take credit for it. There might be some regulatory motivations involved also, who knows. It's a specs/marketing gimmick that is simply a regression when compared with the pre-existing ability to implement this behavior in software.

There are some WD drives with this behavior from the factory where it can be disabled, or at least made to wait longer, using "WDIDLE". It doesn't work on the newest drives anymore, but it did work for a lot of them in the past.
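
For the newer drives where WDIDLE no longer works, a reasonably recent hdparm build can at least report the same idle3 timer. A minimal query-only sketch (assumes a WD drive, a hdparm build with -J support, and a hypothetical device path):

```python
#!/usr/bin/env python3
# Sketch: report the WD idle3 (head-park) timer via hdparm -J. Assumes a
# reasonably recent hdparm build with -J support and a WD drive; /dev/sdb
# is a hypothetical device. Query only - no setting, no risk.
import subprocess

result = subprocess.run(["hdparm", "-J", "/dev/sdb"],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```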

Reply 47 of 47, by Horun

Rank: l33t++

I have some very old IDE and SCSI drives that still work perfectly. The oldest is a Seagate ST351A/x (stepper-motor type), which works like it's only a few years old.
As others have mentioned, the hours on the drive and start/stop cycles, plus the heat and vibration it was exposed to, are the main killers of old drives (those not afflicted with some manufacturing defect)....

Hate posting a reply and then having to edit it because it made no sense 😁 First computer was an IBM 3270 workstation with CGA monitor. Stuff: https://archive.org/details/@horun