VOGONS


Which Drives Dies Sooner? SMR HDDs or SSDs?


First post, by Sabina_16bit.

Rank: Member

Greetings.
I have read several articles questioning not just SMR HDDs' performance (which is irrelevant to me for upgrading old systems with a SATA 1 or 2 bus; I have known about it all along, I am using them, and I have no performance problems that bother me, even on the newest systems with SATA 3; maybe Windows 7 puts no relevant load on them), but also their lifespan, sorting them into the write-once, read-many category, i.e. suitable as data drives only. They state that when SMR drives are used as OS drives, the heads become so overloaded that they tend to fail sooner than an SSD under the same load is depleted of its P/E cycles; not wearing out an SSD is the very reason I do not use SSDs for the OS.
If an OS can kill an SMR HDD as easily as it eventually kills an SSD, is that also true for legacy OSs, which write much less than new OSs that never let any hardware rest?
The newest OSs I have installed on SMR HDDs are Windows 7 Enterprise and Windows 8.1 Pro UltraLite. I chose SMR because I thought it was still better than an SSD (slower, but not limited in rewrites), and because in some form factors and capacities I had no choice other than an SSD or an SMR HDD; for example, there is no CMR option if I want a laptop or a compact desktop with a 2 TB hard drive. On systems with huge RAM (10 GB is really huge for me; for 32-bit versions I consider 4 GB huge, once I installed the extender to actually allocate all 4 GB) I disabled or minimized the pagefile and disabled hibernation (the latter on all multiboot systems, as it is only safe with a single OS), and I also disabled indexing. Were these precautions sufficient to keep my SMR HDDs from losing their heads, or are they doomed either way?
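For what it's worth, the precautions described above (hibernation, indexing, pagefile) map to commands along these lines on Windows 7/8.1. This is only a sketch, run from an elevated command prompt; the pagefile is more commonly changed through System Properties > Performance > Virtual Memory.

```shell
:: Disable hibernation (deletes hiberfil.sys)
powercfg /h off

:: Disable the Windows Search indexer service
sc config WSearch start= disabled
sc stop WSearch

:: Stop automatic pagefile management, then remove the existing pagefile
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" delete
```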
Under these conditions, are they still as bad as SSDs in terms of longevity?
I found various articles. Some report only performance issues for SMR HDDs and consider their other properties similar to CMR HDDs (something like: if you are not in a hurry, they are OK, which is what I used to think too), but others state that their longevity/endurance is as bad as, or even worse than, SSDs'.
What is the truth?
Are my OSs on SMR drives sitting on a time bomb, as if they were on an SSD?

I am new to this strange 64-bit SATA era; I came from the IDE age, where HDDs were almost immortal except in extreme cases like an earthquake or an EMP, or some really bad models, like most Conner drives and the very last IDE drive model, Seagate's 750 GB, famous for overheating and dying early because it could not be cooled down sufficiently. So these SMR problems are new to me. I thought SMR drives were just slow, which is no problem, but I still trusted an HDD more than an SSD, because I thought an HDD is an HDD, long believed to be better for archiving than CDs/DVDs/BDs/floppies/flash drives of any kind... everything else...
So which is better for legacy OSs?
An SSD, or an SMR HDD (if I have no CMR option)?
I am asking about the survival of the drive, not its performance.

Reply 1 of 30, by agent_x007

Rank: Oldbie

Refusing to use SSDs due to write cycles, and using SMR drives as legacy OS drives ?
Wow, someone likes to write SF novels here 😁

You are not "old" enough to be from the IDE era if you write like this (plus the block text).
ChatGPT too much ?
Good luck buddy.

@admin I'd mark this user as a troll or a plain data miner for AI (one that is lazy AF).

Last edited by agent_x007 on 2025-02-27, 21:49. Edited 1 time in total.

Reply 2 of 30, by jakethompson1

Rank: Oldbie

SMR seems like combining the downsides of SSDs and the downsides of HDDs in a single product. It seems like they're fine as an enterprise tape-replacement, but the sale to individuals as a main OS drive seems to be through deception.

Reply 3 of 30, by chinny22

Rank: l33t++

My experience working in IT is that SSDs die without warning. It's a case of one day it's working, the next day it's dead.
Spinning rust typically gives some kind of warning: a SMART error, funny noises, or it slows right down. SMR vs. CMR I haven't really been tracking.

I think the failure rates are still about the same, but typically the HDD lasts the life of the PC.
This is on modern PCs running modern OSs.

Retro rigs get very little usage compared to daily drivers, so it's difficult to get any meaningful statistics.
For example, how many reads/writes does a retro PC that's only used for a few hours a couple of days a week make, compared to a work computer that's on 8 hours a day, 5 days a week?
There's a good chance the old OS's lack of TRIM isn't really going to matter much on that retro rig.

I'd pick the storage type based on price, performance, or capacity, whichever is most important for the person.
For me, I'm cheap. All my computers, new and old, have spinning rust salvaged for free from old work computers, with the size of the drive being the next deciding factor.

Reply 4 of 30, by Sabina_16bit.

Rank: Member
jakethompson1 wrote on 2025-02-27, 21:38:

SMR seems like combining the downsides of SSDs and the downsides of HDDs in a single product. It seems like they're fine as an enterprise tape-replacement, but the sale to individuals as a main OS drive seems to be through deception.

Thanks for the qualified response I needed, and for not attacking me or falsely accusing me like the aggressor who replied first.
Just to defend myself (even though I have no such obligation; those who accuse must prove what they accuse of): I am as old as Windows 1.0, and unfortunately I am not an AI. Unfortunately, because when under attack, as from your rude colleague, I would rather be an AI or a Vulcan and have no emotions, so that such rude and disrespectful behavior could not hurt me. I have never used ChatGPT, I do not use social networks, and I guess an AI cannot use slang, abbreviations, and the like. False accusation is a criminal act in some countries.
I was working in a PC shop until the dawn of the UEFI era; then I left, as I considered the upcoming changes in IT harmful to both machines and users, and I could not take part in selling garbage to customers while pretending it was computers. Since leaving the professional IT sector for these ethical reasons, I have focused on legacy systems only, in my free time, so I missed some of the newest technologies, or deceptions, as was well pointed out here.
I wish to thank all the other users of this forum for their polite and respectful behavior. It is very rare that a forum has only one aggressor or misogynist while all the others behave respectfully and constructively, for which I am grateful. All the other users who did not attack me are invited to visit me personally (one by one, not all at once, please), see my collection, play some games, and see that I am unfortunately coded in As, Cs, Gs, and Ts and not in 1s and 0s. I am thus very legacy software (this coding has been in use for about 4 billion years) and not some nasty 64-bit AI that surely knows everything about SMR technology and has no reason to ask such "stupid" questions.
When I left my IT job, there were still no SMR HDDs, and old people like me really did not expect the invention of an HDD that is worse than a flash drive, so I needed to ask some up-to-date IT people.
As for the deception, it is also supported by the fact that some manufacturers did not list this parameter in their data sheets until there was a lawsuit over it.
So is it so bad that I should decommission all the WD20SPZX drives and reinstall all OSs onto CMR HDDs, thus downgrading the laptops from 2 TB to 500 GB?
Another thing I forgot to ask:
If the OS is unaware that a DM-SMR HDD is moving data from track to track, then I guess that when I shut it down, the OS may not wait for the HDD to finish moving data, and may power off while the HDD is still working internally, corrupting the data. Am I right?

Reply 5 of 30, by Sabina_16bit.

Rank: Member
chinny22 wrote on 2025-02-28, 01:06:

My experience working in IT is that SSDs die without warning. It's a case of one day it's working, the next day it's dead.
Spinning rust typically gives some kind of warning: a SMART error, funny noises, or it slows right down. SMR vs. CMR I haven't really been tracking.

I think the failure rates are still about the same, but typically the HDD lasts the life of the PC.
This is on modern PCs running modern OSs.

Retro rigs get very little usage compared to daily drivers, so it's difficult to get any meaningful statistics.
For example, how many reads/writes does a retro PC that's only used for a few hours a couple of days a week make, compared to a work computer that's on 8 hours a day, 5 days a week?
There's a good chance the old OS's lack of TRIM isn't really going to matter much on that retro rig.

I'd pick the storage type based on price, performance, or capacity, whichever is most important for the person.
For me, I'm cheap. All my computers, new and old, have spinning rust salvaged for free from old work computers, with the size of the drive being the next deciding factor.

Thanks for your response.
Most of my hardware is, as you described, rust saved from scrapping for museum reasons, but I wanted to upgrade the newest machines to top specs, so I wanted 2 TB HDDs for some laptops and compact desktops. I read references and user reviews, and among Toshiba/Seagate/WD, WD looked best. I also asked WD whether the selected WD20SPZX was suitable for legacy OSs that are not aware of physical sectors and thus do more read-modify-writes, and they replied that it is suitable and that a retro OS will not shorten its lifespan. To be safe, I bought them in a German e-shop to avoid fakes or drives that did not pass properly when leaving the plant, which are typical for the Czech market (I try to buy system drives in a Western country, as a lot of garbage is sold on the Eastern market, exploiting that we are poor here). But I was not aware then of the downsides of SMR. I was also thinking like you: that an HDD warns me with changes in its sound when it is about to fail, while SSDs are silent and I cannot see any warning sign, so thanks for confirming that at least this is still true.
The usage pattern for the affected laptops and compact desktops is, on average, about twice per month, but often 2 to 4 days in a row, sometimes 2 days continuously. Average uptime is about 40 hours/month.
Should I never let them run more than 8 hours continuously?
I am watching their temperature and sounds.
Now that I know about SMR, I will certainly try to avoid it for future projects, but I dislike dismantling a finished system.

Reply 6 of 30, by wierd_w

Rank: Oldbie

SMR drives are trash.

They spend an inordinate amount of time re-layering their shingled data tracks, and this CANNOT properly be accomplished given the way Windows, and software written for Windows, wants to CONSTANTLY WRITE TO THE DRIVE.

This leads to incomplete reshingling operations, and ultimately, data-integrity failures, and corrupt file systems.

Microsoft COULD mitigate this with changes to how their disk subsystem operates, but they have been completely unwilling to do so.

This same "WRITE ALL THE TIME! **ALL THE TIME!! WHEEEEE!!! HDD IS CHEAP, ABUNDANT, AND FREEEEEEEEEEEE!!!!!**" ideation by both the OS vendor and the software ecosystem in general makes it highly destructive to SSDs as well.

Again, Microsoft could introduce mitigations into how their disk subsystem handles and batches writes, but they have been unwilling to do so.

On operating systems where such mitigations can be introduced without having to go through nearly as many hoops, SSDs work fantastically, and are very reliable. Even SMR drives can be used effectively, because they are given suitable time to re-shingle their data tracks, and the data does not lose integrity.

In the use case that SMR drives are normally placed into, they are simply inferior tech with no suitable utility.

Reply 7 of 30, by davidrg

Rank: Member
wierd_w wrote on 2025-02-28, 02:18:

Microsoft COULD mitigate this with changes to how their disk subsystem operates, but they have been completely unwilling to do so.

And understandably so. Mechanical hard drives have been rapidly retreating from the consumer space for a few years now. Is it really worth them spending time optimising Windows for a technology few people are buying today and almost no one (outside of perhaps enterprises) will be buying in a couple years time?

Reply 8 of 30, by wierd_w

Rank: Oldbie

This post is spoilerite because it waxes philosophical about Linux and other filesystem flavors that are kinder to SSDs. Many people don't like such topics and would prefer not to see them. This topic is, however, situationally appropriate, given the question.

Spoiler

A good part of the problem with the "Write all the time! HDD is abundant, cheap and free!" mantra from software makers is that flash memory array designs evolve faster than the filesystems that live on top of them, and this leads either to very inefficient use of disk space (as with exFAT) or to very destructive write amplification (as with NTFS). However, it seems pretty much impossible to convince software developers to simply not write things to the disk all the time. They want persistence of data, and Windows gives them ample facilities to accomplish that. Being respectful of the end user's computer and its resources is never a priority, and the results are entirely expected, obvious, and harmful.

The reason NTFS is very harmful for SSDs is how its cluster size arrangement works. The largest cluster size you can use with NTFS is 64 KB, and if you do, quite a few features of the filesystem simply break spectacularly, like compression or encryption.

Understanding why this is requires understanding why the flash memory consortia out there want all disposable flash memory devices to use exFAT: exFAT allows you to set cluster sizes in the multi-megabyte range.

Why is this preferable for flash memory, and SSDs in general, you might ask? I'll do my best to explain.

Flash memory, be it an SD card, an eMMC chip glued to an embedded device, a USB stick, or an NVMe SSD, is arranged in discretely erasable units, and larger units are easier to manufacture because they can use less sophisticated controllers. Flash thus performs at its best when full erase blocks are written contiguously, all at once: this wears the flash array the least, data throughput is the best, and all around, it performs better in every capacity. The *ISSUE* is that these blocks are *NOT* friendly sizes for those older, well-established file systems.

NTFS has a preference for 512-byte clusters: the raw sector size of old CMR spinning disks. And again, flash memory wants to be allocated in blocks SIGNIFICANTLY bigger than this.

Since Microsoft does not want to introduce batching mitigations into how its disk subsystem dispatches writes, so that large blocks are committed contiguously, and instead wants to pretend the world still uses 512-byte spinning rust (because that is what plays nicest with its filesystem of choice), when files get written to an NTFS volume, a "partial write" of the erase block happens, and write amplification occurs. This is especially true when disk compression is turned on, because of how NTFS writes compressed files: not contiguously! They are written as small, allocation-unit-sized file fragments, with enough free space between them that uncompress-in-place is possible. Now consider that in conjunction with how Microsoft automatically compresses the Component-Based Servicing log (CBS.log), how VERY VERBOSE they make that log (even compressed, it routinely grows to multiple megabytes), and how VERY VERY OFTEN they push out updates.
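To make the cluster-vs-erase-block mismatch concrete, here is a toy worst-case calculation. The 1 MiB erase block is a hypothetical figure; real sizes vary per drive and are rarely published.

```shell
# Toy worst case: any write smaller than the erase block forces a full
# read-modify-write of that block (ignoring controller caching).
# Assumption (hypothetical): a 1 MiB erase block.
ERASE_BLOCK=$((1024 * 1024))

write_amplification() {
  # $1 = cluster size in bytes; prints erase_block / cluster_size
  echo $(( ERASE_BLOCK / $1 ))
}

write_amplification 512            # 512-byte clusters -> 2048
write_amplification $((64 * 1024)) # 64 KiB clusters   -> 16
```

Even the maximum 64 KB NTFS cluster still falls well short of such an erase block, which is why the batching has to happen above the filesystem.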

Windows is thus horrifically destructive to SSDs all by itself, without any software running.

It does not have to be that way: the disk subsystem could very easily identify that the disk it is working with is an SSD, determine the ideal erase-block size, and then batch write operations appropriately. It could likewise inform the filesystem to commit compressed file fragments in a non-fragmented manner, but MS has consistently resisted and argued against this for several decades now.

Then, on top of that bundle of bologna, you have browser makers wanting to write multiple redundant copies of small files from websites, constantly, without checking whether the locally cached version is actually still valid, and retaining old copies basically forever until manually cleared--

AND--

System process makers, like antivirus vendors, that keep very aggressive statefulness logs which scattershot the drive and are continually and heavily written to, relentlessly. (Both Sentinel and Webroot security suites do this, VERY aggressively and egregiously.)

But, AGAIN, it does not HAVE to be this way. I'll get to this in a bit.

Now, the reason why this is all spoilerite.

Other operating systems, where you don't have an opaque and obstinate developer population and a tone-deaf management culture, allow more direct control over these aspects of the disk subsystem and how their filesystems operate.

Take for instance, Linux.

Its default filesystem does not, at the outset, look all that much better for SSDs than NTFS; only very modestly so, with its default allocation unit size of 4 KB. Arguably, that's ideally suited for "slightly more modern" spinning rust of the "Advanced Format" variety, and would be just as harmful to SSDs as NTFS is.

However, unlike NTFS, EXT4 can be told to batch writes according to user-defined sizes and parameters. Setting that up requires the user to know what they are doing, but even that is leaps and bounds beyond what Microsoft offers, which is "NO! WON'T DO!".

Specifically, you can enable features that were initially intended for use with a RAID controller; specifically, a hardware RAID controller, one that stripes data across the array in a way that is completely opaque to the host, but which benefits from having data structures committed to it in larger, more structured batches. Since the RAID controller's logical volume appears to be a single disk, the RAID features of EXT4 let you set the "stripe" and "stride" sizes even with only one physical disk defined. This is explicitly and specifically for those black-box hardware RAID array controllers.

It also happens to let flash memory devices that "want very large, contiguous writes" be fed what they actually want, while retaining the smaller allocation unit sizes desirable for the filesystem.

The disk subsystem caches, batches, and dispatches writes in chunks that are friendly to the underlying flash device, and the user is completely unaware of this. Further, the filesystem driver itself is more intelligent than NTFS and is aware when these options are enabled, so blocks of free space for writes are chosen more intelligently to minimize issues with re-writes or incomplete writes. This means even abusive applications that want to be "OH SO VERY STATEFUL!" don't clobber the shit out of the medium.
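As a concrete sketch of the stride/stripe trick described above (assuming, hypothetically, an SSD that prefers 512 KiB contiguous writes; the real figure must be measured per drive, and the exact option spelling can vary between e2fsprogs versions):

```shell
# Hypothetical example: 4 KiB filesystem blocks, and a device that wants
# 512 KiB contiguous writes.  stride/stripe-width are given in filesystem
# blocks: 512 KiB / 4 KiB = 128.  /dev/sdX1 is a placeholder.
mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/sdX1
```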

Additionally, this filesystem allows you to mount other filesystems at folder mountpoints cleanly and efficiently, and the OS provides very easy-to-set-up facilities to host such additional filesystems entirely out of RAM to begin with, which is useful for stopping the deleterious effects of browser caches.
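For example, the RAM-backed mount described above can be declared with a single /etc/fstab line (the path and size are illustrative):

```shell
# Keep the browser cache on tmpfs: its constant small writes stay in RAM
# and never touch the SSD (contents are discarded on reboot).
tmpfs  /home/user/.cache  tmpfs  rw,nosuid,nodev,size=512m  0  0
```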

NTFS *DOES* allow you to mount additional NTFS filesystems at folders, as mount points, but does not offer an easy-to-define, ephemeral, RAM-backed filesystem provider to do this with; instead, it permits the abusive writes, and further, writes them in the most egregiously stupid ways possible.

NTFS *COULD* very much receive an update that lets it understand user-defined parameters for batching, staging, and arbitrarily large atomic writes, similar to how the RAID functions of EXT4 work, but Microsoft does not want to "confuse" its users and is adamant that everything is fine. (Even while the SD Card Association goes out of its way to tell you that if you are not using exFAT with huge clusters on Windows, YOU ARE NOT IN SPEC. Again, BECAUSE the media NEEDS large atomic writes.)

Reply 9 of 30, by Sabina_16bit.

Rank: Member

Thanks for all the deep analysis of SMR and SSDs. It will be very useful, at least for my next projects...
But I still did not get an answer to one essential question:
When Windows shuts down, unaware of the SMR HDD's internal processes, is there some protection against shutting down while the HDD is not done reordering its data? Or do I risk data corruption on every shutdown, if Windows may finish shutting down before the HDD is done rewriting its tracks internally?
Or, even when I just put the laptop to sleep, may power to the HDD be cut before it finishes reordering its tracks?
If it is so fragile, this is literally silicon-based Alzheimer's...

Reply 10 of 30, by wierd_w

Rank: Oldbie

The way SMR works, goes a bit like this:

The data tracks "overlap" by some percentage, and subsequent tracks continue this trend. Outermost tracks "should" get written first, then progressively more inward tracks after.

Read operations can get the data from the part of the track that doesn't get overwritten by subsequent passes.

This allows more data to be shoehorned onto the drive than would otherwise fit, if the tracks did not overlap like this.

For data that gets written in an orderly, sequential, and rarely overwritten manner, this SHOULD work great.

As I yammered about earlier, though, this IS MOST CERTAINLY NOT what windows does, nor what windows software does.

SMR drives wait for periods of inactivity to "reshingle" their data, so that the structures on the platter are the desired orderly ones needed for this tech to work correctly. If they get this time, they can subsequently read and re-write successive tracks of the disk to accomplish this, even though something got written after the next track was laid down.

This is similar in nature to SSDs doing wear-leveling operations. The TRIM command tells the drive that certain areas no longer store useful data and can be treated as "empty", and thus used for wear-leveling purposes, or exempted from reshingling attempts.

As per my spoilerite above, good use of cache and dispatch can make the os much less destructive to these storage devices, but MS refuses to accept this.

When either is prevented from happening (an SSD too full, with excessive writes unrelenting; or, for SMR, excessive random writes in general unrelenting), areas of the drive get "hot traffic", and the medium starts degrading.

In the case of flash, the actual gate array stops being able to separate and hold charges due to ion migration in the gate itself. With SMR, the disorderly, repeated writes with the comparatively large write head, plus the lack of reshingling, lead to bit-pattern inversions and stuck magnetic domain islands, i.e. corrupted data.

'Being polite and respectful of disk resources with your software' is the actually *correct* take away here, but again, developers hear that and act like you suggested pimping out their mother.

Failing that, 'the OS should cache and dispatch write operations in ways that play nice with the hardware underneath, and give it time to do its needed housekeeping operations' becomes the requirement.

But again, "no, I need immediate committal!" gets screamed. (Databases, etc., that can't afford to lose data in a power failure because it was batched for dispatch but had not yet been committed.)

The reality of "No. You simply cannot have what you want; it will destroy the disk." never sinks in.

CMR is an old concept, and likely is never coming back.

Small-cell flash is prohibitively expensive.

These things being simple facts of the modern market means the notions of "immediate committal" and "small filesystem structures" need to get buried. It's dead, Jim.

The insistence on trying to retain these, leads to premature death of the medium.

Demand better written software, and disk subsystems that are written for THIS century.

Reply 11 of 30, by MikeSG

Rank: Member
Sabina_16bit. wrote on 2025-02-28, 14:23:

But I still did not get an answer to one essential question:
When Windows shuts down, unaware of the SMR HDD's internal processes, is there some protection against shutting down while the HDD is not done reordering its data? Or do I risk data corruption on every shutdown, if Windows may finish shutting down before the HDD is done rewriting its tracks internally?
Or, even when I just put the laptop to sleep, may power to the HDD be cut before it finishes reordering its tracks?
If it is so fragile, this is literally silicon-based Alzheimer's...

Windows always finishes writes before shutting down. You can also audibly hear the HDD turn off.

In any version of Windows with power saving, if the HDD has been idle for x minutes, it audibly shuts off.

Most Windows versions also have a setting in the HDD's Device Manager properties about using the cache: either quick removal (all writes are done immediately), or a mode that stacks writes for performance but means the drive can't be switched off at just any moment. Windows still finishes all writes when shutting down. Power blackouts are the only risk.

For an old PC, a laptop HDD can be a good solution. Cheap SSDs can lose data (all of it at once) if left unpowered for a year.

Reply 12 of 30, by wierd_w

Rank: Oldbie

It's important to understand that the Windows setting relates to the write and read cache baked into the drive itself, while the "cache and batch" operation I am referring to is done in system memory before a disk I/O operation is even issued.

This is because such features exist for the black-box RAID controllers of yesteryear, which may or may not have cache RAM installed on them, and still need the writes "atomized" in sizes appropriate for their stripe length and width.

Because those lengths and widths are not communicated upstream and can be any number of combinations depending on the RAID level and the number of drives connected, administrators have to define them themselves, and are presumed to know this information and to know what they are doing. It's not end-user friendly.

In modern flash designs, the "analogous" sizes of the page and erase block can change at the manufacturer's whim, based on what is currently available or cheapest for that production run. These manufacturers usually don't want to communicate this to end users, or to the operating system, leading to a situation where the ideal alignment is "not easy to know" and "specific to THIS SSD".

There ARE ways to derive this information empirically, with tools like flashbench, but this gets progressively further out of mainstream users' reach.
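For reference, the empirical probing mentioned above is typically done with flashbench's alignment test (the device name is a placeholder; use a disposable, unmounted device):

```shell
# Time reads that straddle power-of-two boundaries; a jump in access time
# at some boundary hints at the erase block / allocation unit size.
flashbench -a /dev/sdX --blocksize=1024
```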

Flash drive makers really should just report page and erase-block sizes as extended SMART table entries, IMO. Then the OS could ask, get the info, and cache/batch appropriately without the user even having to be involved.

But they dont.

Because they don't, and the OS cannot be clairvoyant about the manufacturer's whims on the production run a given drive was made in, "simply not doing that" is how Windows handles it, because "asking the user to do black-magic voodoo to tune the subsystem" is not within its desired market niche.


Again, though, linux totally lets you set that.

Setting that up, greatly enhances performance and write life.

Reply 13 of 30, by Sabina_16bit.

Rank: Member

I did not ask about OS-aware writes; I know the OS switches power off just after it has finished all writes.
I asked about OS-unaware writes: those happening only within the SMR HDD. We established here that after the OS has done its writes, the SMR HDD keeps rewriting its tracks, rearranging data internally, of which the host is unaware. So for Windows it is done, there is no data flow between the HDD and the motherboard, but the HDD is still realigning its data, as described above. In this phase, all data operations are between the HDD's own controller and its heads and platters; the OS knows nothing about it, because it is DM-SMR, not HM-SMR. The OS just sees there is no data flow between the motherboard and the HDD, so it thinks everything is done and can power off the PSU. Or can it be informed somehow?
Or does the HDD signal to the BIOS something like "I am still working, do not power off the PSU!"?
Again, I am asking about writes inside the HDD, after the OS is done and the disk is realigning its shingles.
Maybe I should compare the sound of the HDD with the HDD LED activity: if I hear the heads working but do not see the HDD LED flashing, it is surely a write operation inside the HDD of which the host is unaware, with no data flow between the HDD and the motherboard. If I never notice head sounds without the LED blinking, then it is clear the HDD signals to the host that it is working, even when it is only working internally and not exchanging data with the host. So I will test it myself if no one answers me. Again, I am not asking about OS-to-HDD-PCB writes; I am asking about HDD-PCB-to-platter writes, those managed by the HDD's PCB, not those managed by the OS. How are they protected, if at all, against the OS powering off the PSU during this latter stage?

Reply 14 of 30, by jakethompson1

Rank: Oldbie
davidrg wrote on 2025-02-28, 03:24:
wierd_w wrote on 2025-02-28, 02:18:

Microsoft COULD mitigate this with changes to how their disk subsystem operates, but they have been completely unwilling to do so.

And understandably so. Mechanical hard drives have been rapidly retreating from the consumer space for a few years now. Is it really worth them spending time optimising Windows for a technology few people are buying today and almost no one (outside of perhaps enterprises) will be buying in a couple years time?

As there are two kinds of SMR drives, ones where the SMR part is exposed to the OS (the enterprise ones) and the consumer ones that try to hide it, maybe the takeaway is that the ones that try to cover it over are a failure and, IMO, were deceptively sold in the first place.

Isn't one of the main consumer use cases for HDDs a home RAID? And isn't RAID one of the worst things you can do to an SMR drive?

Reply 15 of 30, by wierd_w

Rank: Oldbie

That is correct, but SMR drives are often sold in workstation bundle deals, in place of a proper storage device, for their reduced price.

They do not serve well in that role.

Case in point: I just recently reloaded a Lenovo that came stock with a 1 TB SMR drive as the OS volume. Upon discovering this catastrophe, I cloned it to a CMR drive I happened to have in my pile of junk. (I have long since switched to SSDs, and use compressed RAM-backed swap on my Linux boxes, so I had no need for a CMR drive.)

Putting one behind a RAID controller is simply idiocy.

Sadly, modern drive makers REALLY REALLY REALLY want to stop making CMR drives, and are only doing so because RAID actually NEEDS them. They have been trying (unsuccessfully!) to foist their SMR drives onto this market with deceptive labelling; the consumer demographic catches wind almost immediately, and the affected models simply don't sell, much to the chagrin of these makers. Both Seagate and Western Digital have tried this in recent years.

---

To answer the question, "What happens when I power down the drive in the middle of a shingling operation?"

Not a whole lot. The drive is reading a track, then re-writing the exact same data that was there previously. It is just trying to move the "flipped-up" shingle further down the drive, where it belongs, through subsequent reads and re-writes.

The deleterious thing is that if the shingle does not get moved where it belongs, it exerts stronger magnetic forces against its neighbors than is really desired, since data is stored on the smaller fraction of the track that does not get overwritten by the next track laid down. (There is this big, full-write-head-width area exerting magnetic forces against its neighbors, which are only "very thin remnant of a track" in size; they don't have the strength to resist their neighbor's coercivity and have an increased risk of flipping.) This enables bit inversions to happen on the surface, causing data corruption.

On SMR drives, the reshingling operation needs to be allowed to complete undisturbed in order for things to work "as expected". RAID scrubs, constant OS writes, and power-save operations spinning the drive down all basically prevent this from happening, and are bad for SMR drives.

Reply 16 of 30, by Sabina_16bit.

Rank: Member

So to ensure reshingling finishes when I am shutting down the PC, I would need to disable the ACPI function or the like, to force the PC to behave like an AT PC: not let the OS turn off the PSU, and then shut it down with a long press once I no longer hear the heads working?
To ensure reshingling is finished, I would need some time between the OS being ready to shut down and the disk being ready to shut down...
But can I force Windows 7 to behave like Windows 95 on an AT PSU, except that instead of displaying the notification "It's now safe to turn off your computer", it would display a modified one: "Wait until you no longer hear the HDD heads working; then it will be safe to turn off the computer."?
But still, what about sleep mode?
If I have a full taskbar and need to continue later from where I left off, it is quite impractical to shut down instead of just closing the lid and letting the laptop sleep... Or must I wait for a rare moment when the HDD is silent to enter sleep mode safely?
But who has several hours to dedicate just to waiting for a silent moment to close the lid?
It would probably require disabling the antivirus before entering sleep mode to even get a chance of a silent moment.
...

Reply 17 of 30, by wierd_w

Rank: Oldbie

Uhm...

About that... (and why I feel SMR drives are just SHIT in general)

To reshingle correctly, it would have to completely re-write the whole drive. If that is a 1 TB or larger drive, that's a pretty protracted process, and it isn't like it goes gangbusters to do it, either.

They tend to do it rather lazily, so that they can be interrupted and still do whatever the host actually wants done in a reasonably timely manner.

The best-case use for an SMR drive looks like this:

The drive serves read-only data, and is always powered on, constantly.

Think of some kind of embedded application, where the drive is used to load an image into memory, and things then run from there.

Reply 18 of 30, by Sabina_16bit.

Rank: Member

So the most deceptive thing is to market these as laptop drives. Toshiba even calls them L200 and markets them as laptop drives; WD also markets the WD20SPZX as meant for laptops, and when I asked them upon buying whether these drives are suitable for running legacy OSs, they wrote "Yes". In fact, by your description, they are suitable only as external backup drives and surveillance drives, nothing else. Now that I am clear about the deception, I know I must use only CMR HDDs for laptops, which limits me to 500 GB, and use SD cards for movies and the like, but it is too late for the finished main laptop... So it is rather a miracle that I have had no data corruption yet on this machine, completed 4 years ago...

Reply 19 of 30, by wierd_w

Rank: Oldbie

No, there are/were some WD Blue 1 TB laptop drives in CMR.

That is exactly what I put in the previously mentioned Lenovo.

[Take, for instance, the WD10TPVT. This is a Scorpio Blue 1 TB, and it came out in 2011. SMR did not hit shelves until 2013.]

https://techgage.com/article/western_digital_ … d_drive_review/

https://en.wikipedia.org/wiki/Shingled_magnetic_recording