VOGONS


SMR drives


First post, by ncmark

User metadata
Rank Oldbie

I have been doing a lot of reading about storage lately.
It seems like you can no longer get any bus-powered external HDDs without getting into SMR (shingled magnetic recording).
I have read multiple things about this - some say there's not much difference, others say it makes the drive unusable. It doesn't sound good. Does anyone have experience with this?
The fear of SMR has largely shifted me toward SSDs. Are we really to the point where SSDs have replaced HDDs for external (USB) storage?

Reply 1 of 23, by javispedro1

User metadata
Rank Member

To escape SMR HDDs by going to SSDs, isn't it like going from the frying pan into the fire? I would presume whatever problems one has with SMR's "shingles", you're also likely to have with SSDs' large erase block sizes...

In fact, I always thought SMR was a reaction to how little people care about the problems SSDs have with large erase block sizes (TRIM, write amplification, etc.)
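The erase-block parallel can be made concrete with a little arithmetic: in the worst case, a small in-place update forces the device to rewrite an entire erase block (SSD) or shingled zone (SMR). The sizes below are illustrative, not taken from any specific drive:

```python
# Worst-case write amplification: a tiny update dirties a whole block,
# so the device must relocate block_bytes of data per user write.
def write_amplification(user_write_bytes: int, block_bytes: int) -> float:
    return block_bytes / user_write_bytes

SSD_ERASE_BLOCK = 4 * 1024 * 1024    # 4 MiB SSD erase block (illustrative)
SMR_ZONE = 256 * 1024 * 1024         # 256 MiB SMR zone (typical order of magnitude)

print(write_amplification(4 * 1024, SSD_ERASE_BLOCK))  # 1024.0
print(write_amplification(4 * 1024, SMR_ZONE))         # 65536.0
```

In practice both device types hide this behind caches and garbage collection, which is exactly why both degrade under sustained small random writes.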

Reply 2 of 23, by darry

User metadata
Rank l33t++

IMHO, suitability depends on use case more than anything.

For portable, bus-powered use - which presumably implies writing data to the drive, moving the drive somewhere else and reading the data back as the primary use case - I favor SSDs because they are not shock-sensitive, are lighter and are typically much faster. If one chooses one with decent write and endurance characteristics for one's needs (decent-size pSLC write cache, avoiding HMB NVMe drives in a USB enclosure, etc.), I don't see an issue. Data retention of an SSD during long periods of being unplugged is a consideration, however.

For long-term storage in a primarily read-centered scenario, SMR drives might work well enough, but I don't really see the point personally, as CMR drives are available in 3.5" form factors (external power required) and I don't see a practical reason for using a bus-powered drive for this.

If one actually wants to run software from, or read/write large files on, a bus-powered device, the only options that make sense to me are NVMe drives with a DRAM buffer, or possibly SATA ones (also with a DRAM buffer), in a USB enclosure. Using a Thunderbolt enclosure with an NVMe drive might be an option too (I suspect HMB works over Thunderbolt, but I could be wrong).

In all cases, if the data is irreplaceable or at least not expendable, one should always have multiple copies of it (backups) and keep those up to date/synced.
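The advice about keeping backup copies synced can be automated with a checksum comparison; a minimal sketch (directory paths would be supplied by the user):

```python
# Minimal backup-verification sketch: hash every file under two trees
# and report relative paths that are missing or whose contents differ.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict:
    """Map relative path -> SHA-256 hex digest of file contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def diff_trees(a: dict, b: dict) -> set:
    """Relative paths present on only one side or differing in content."""
    return {path for path in a.keys() | b.keys() if a.get(path) != b.get(path)}
```

`diff_trees(tree_hashes(Path("/mnt/primary")), tree_hashes(Path("/mnt/backup")))` returns the set of out-of-sync paths; an empty set means the copies match.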

Getting back to your initial inquiry, the first thing to ask is how you expect to use your drives.

Reply 3 of 23, by st31276a

User metadata
Rank Member

I detest SMR drives for a multitude of reasons.

However -

In a typical external drive workload, the difference would probably not be noticeable.

Reply 4 of 23, by ncmark

User metadata
Rank Oldbie

I have numerous 2.5-inch bus-powered HDDs. I have some older Toshiba ones (circa the XP era) and some newer WD My Passport ones. The former run circles around the latter. I threw out a couple of the Passports for simply being too slow to even be usable.
Knowing what I know now, I suspect the Passports are SMR.
This is one case where I wish I could travel back in time and buy more of something.

Reply 5 of 23, by wierd_w

User metadata
Rank Oldbie
st31276a wrote on 2025-03-25, 08:58:

I detest SMR drives for a multitude of reasons.

However -

In a typical external drive workload, the difference would probably not be noticeable.

I disagree here, as SMR drives need long periods of being powered and spun up, without accesses happening, to reshingle. Otherwise data rots in the CMR cache area, and it never gets vacated to the SMR area.

Its best-use scenario is as a read-only volume serving boot images for emergency recovery, or for embedded platforms, IMO.

Some place where the drive can stay powered up, very few (if any) writes occur, disk read speed is only marginally important, and it needs to retain data for long periods.

Think for instance, 'bootable ramdisk images for grub+memdisk' or 'large initial ramdisk for embedded device, as /boot mountpoint backend.'

Etc.

The next best use is: "It's bulk local mass storage for all my old junk data that I don't want to store on a NAS, or that needs faster access than a NAS, like a big collection of ISO images."
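A toy model of a drive-managed SMR disk illustrates the idle-time argument: writes land in a CMR cache region, and the cache only drains back to shingled zones while the drive sits idle. All sizes and rates here are invented for illustration, not measured:

```python
# Toy model of a drive-managed SMR disk's persistent (CMR) cache.
# Writes land in the cache; the cache only drains to shingled zones
# during idle seconds. If the cache fills, further writes miss the
# fast path. All numbers are illustrative assumptions.
def simulate(cache_gib=30, drain_gib_per_idle_s=0.1, events=()):
    """events: sequence of ('write', GiB) or ('idle', seconds)."""
    used = 0.0
    stalled = 0.0                      # GiB that overflowed the cache
    for kind, amount in events:
        if kind == "write":
            fits = min(amount, cache_gib - used)
            used += fits
            stalled += amount - fits   # overflow: slow direct-to-SMR writes
        else:                          # idle time lets the drive reshingle
            used = max(0.0, used - drain_gib_per_idle_s * amount)
    return used, stalled

# Sustained writes with no idle time overflow the cache...
print(simulate(events=[("write", 50)]))  # (30.0, 20.0)
# ...while the same volume interleaved with idle periods does not.
print(simulate(events=[("write", 25), ("idle", 250), ("write", 25)]))
```

The second run stalls nothing because the idle period drains the cache first, which is the behavior wierd_w's usage advice is built around.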

Reply 6 of 23, by darry

User metadata
Rank l33t++
wierd_w wrote on 2025-04-11, 15:22:
st31276a wrote on 2025-03-25, 08:58:

I detest SMR drives for a multitude of reasons.

However -

In a typical external drive workload, the difference would probably not be noticeable.

I disagree here, as SMR drives need long periods of being powered and spun up, without accesses happening, to reshingle. Otherwise data rots in the CMR cache area, and it never gets vacated to the SMR area.

Its best-use scenario is as a read-only volume serving boot images for emergency recovery, or for embedded platforms, IMO.

Some place where the drive can stay powered up, very few (if any) writes occur, disk read speed is only marginally important, and it needs to retain data for long periods.

Think for instance, 'bootable ramdisk images for grub+memdisk' or 'large initial ramdisk for embedded device, as /boot mountpoint backend.'

Etc.

The next best use is: "It's bulk local mass storage for all my old junk data that I don't want to store on a NAS, or that needs faster access than a NAS, like a big collection of ISO images."

I use a set of SMR drives in RAID-1 in a NAS for non-critical stuff, mostly as an experiment. The use case is highly read-centric. These have been working fine for about 6 years (they're overdue for replacement) and will likely be replaced by CMR drives.

Reply 7 of 23, by wierd_w

User metadata
Rank Oldbie

It will suffer terribly during routine RAID scrubs, which are run to catch/combat bitrot.

Reply 8 of 23, by javispedro1

User metadata
Rank Member
wierd_w wrote on 2025-04-11, 15:22:

I disagree here, as SMR drives need long periods of being powered and spun up, without accesses happening, to reshingle. Otherwise data rots in the CMR cache area, and it never gets vacated to the SMR area.

I do not see why data would rot any faster in the CMR cache area (if any!). Is there any data on this? It is the complete opposite of the experience with SSDs...

wierd_w wrote on 2025-04-11, 18:14:

It will suffer terribly during routine RAID scrubs, which are run to catch/combat bitrot.

Unless you're imagining that these "routine" scrubs continuously end up having to rewrite a significant portion of the disk (which should most definitely NOT be happening), I don't see why either...

Reply 9 of 23, by darry

User metadata
Rank l33t++
javispedro1 wrote on 2025-04-11, 21:53:
wierd_w wrote on 2025-04-11, 15:22:

I disagree here, as SMR drives need long periods of being powered and spun up, without accesses happening, to reshingle. Otherwise data rots in the CMR cache area, and it never gets vacated to the SMR area.

I do not see why data would rot any faster in the CMR cache area (if any!). Is there any data on this? It is the complete opposite of the experience with SSDs...

wierd_w wrote on 2025-04-11, 18:14:

It will suffer terribly during routine RAID scrubs, which are run to catch/combat bitrot.

Unless you're imagining that these "routine" scrubs continuously end up having to rewrite a significant portion of the disk (which should most definitely NOT be happening), I don't see why either...

Mine seem to be doing well. When mdraid runs a check, they don't seem to take an eternity to finish.
I only run these drives in a mirror. Nobody should run these in parity RAID, ZFS, or anything that is inherently write-heavy.
EDIT: Hardware_ECC_Recovered on one of the drives seems like it might be a potential concern. I believe one of the drives has that much more write usage because the array had to be rebuilt a while back due to human error (I don't recall what I did, but no data was lost).

Device Model:     ST8000DM004-2CX188

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-- 100 064 006 - 324
3 Spin_Up_Time PO---- 092 091 000 - 0
4 Start_Stop_Count -O--CK 100 100 020 - 228
5 Reallocated_Sector_Ct PO--CK 100 100 010 - 0
7 Seek_Error_Rate POSR-- 085 060 045 - 312780016
9 Power_On_Hours -O--CK 044 044 000 - 49765h+11m+32.965s
10 Spin_Retry_Count PO--C- 100 100 097 - 0
12 Power_Cycle_Count -O--CK 100 100 020 - 206
183 Runtime_Bad_Block -O--CK 090 090 000 - 10
184 End-to-End_Error -O--CK 100 100 099 - 0
187 Reported_Uncorrect -O--CK 100 100 000 - 0
188 Command_Timeout -O--CK 100 096 000 - 8 8 8
189 High_Fly_Writes -O-RCK 100 100 000 - 0
190 Airflow_Temperature_Cel -O---K 065 046 040 - 35 (Min/Max 31/38)
191 G-Sense_Error_Rate -O--CK 100 100 000 - 0
192 Power-Off_Retract_Count -O--CK 100 100 000 - 121
193 Load_Cycle_Count -O--CK 098 098 000 - 4057
194 Temperature_Celsius -O---K 035 054 000 - 35 (0 24 0 0 0)
195 Hardware_ECC_Recovered -O-RC- 100 064 000 - 324
197 Current_Pending_Sector -O--C- 100 100 000 - 0
198 Offline_Uncorrectable ----C- 100 100 000 - 0
199 UDMA_CRC_Error_Count -OSRCK 200 200 000 - 51
240 Head_Flying_Hours ------ 100 253 000 - 3460h+31m+23.484s
241 Total_LBAs_Written ------ 100 253 000 - 30556566585
242 Total_LBAs_Read ------ 100 253 000 - 1008395805154
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning



Device Model: ST8000DM004-2CX188

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-- 083 064 006 - 179201505
3 Spin_Up_Time PO---- 092 091 000 - 0
4 Start_Stop_Count -O--CK 100 100 020 - 229
5 Reallocated_Sector_Ct PO--CK 100 100 010 - 0
7 Seek_Error_Rate POSR-- 085 060 045 - 312674115
9 Power_On_Hours -O--CK 044 044 000 - 49765h+04m+37.926s
10 Spin_Retry_Count PO--C- 100 100 097 - 0
12 Power_Cycle_Count -O--CK 100 100 020 - 207
183 Runtime_Bad_Block -O--CK 100 100 000 - 0
184 End-to-End_Error -O--CK 100 100 099 - 0
187 Reported_Uncorrect -O--CK 100 100 000 - 0
188 Command_Timeout -O--CK 100 096 000 - 5 9 9
189 High_Fly_Writes -O-RCK 100 100 000 - 0
190 Airflow_Temperature_Cel -O---K 066 044 040 - 34 (Min/Max 31/38)
191 G-Sense_Error_Rate -O--CK 100 100 000 - 0
192 Power-Off_Retract_Count -O--CK 100 100 000 - 105
193 Load_Cycle_Count -O--CK 098 098 000 - 4081
194 Temperature_Celsius -O---K 034 056 000 - 34 (0 24 0 0 0)
195 Hardware_ECC_Recovered -O-RC- 083 064 000 - 179201505
197 Current_Pending_Sector -O--C- 100 100 000 - 0
198 Offline_Uncorrectable ----C- 100 100 000 - 0
199 UDMA_CRC_Error_Count -OSRCK 200 200 000 - 0
240 Head_Flying_Hours ------ 100 253 000 - 3973h+18m+27.422s
241 Total_LBAs_Written ------ 100 253 000 - 14990077908
242 Total_LBAs_Read ------ 100 253 000 - 1027078862233
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
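Assuming 512-byte logical sectors (which `smartctl -i` would confirm for these drives), the Total_LBAs_Written/Read counters above can be converted to lifetime totals:

```python
# Convert the SMART Total_LBAs_Written/Read raw values above to TB,
# assuming 512-byte logical sectors (verify with `smartctl -i`).
SECTOR_BYTES = 512

def lbas_to_tb(lbas: int) -> float:
    return lbas * SECTOR_BYTES / 1e12

for name, written, read in [
    ("drive 1", 30_556_566_585, 1_008_395_805_154),
    ("drive 2", 14_990_077_908, 1_027_078_862_233),
]:
    print(f"{name}: {lbas_to_tb(written):.1f} TB written, "
          f"{lbas_to_tb(read):.1f} TB read")
# drive 1: 15.6 TB written, 516.3 TB read
# drive 2: 7.7 TB written, 525.9 TB read
```

Under 16 TB written against over 500 TB read on each drive, consistent with the read-centric workload described.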

Reply 10 of 23, by zyzzle

User metadata
Rank Member

It baffles my mind that people can actually justify SMR on any level.

SMR is one of the absolute worst marketing devolutions of all time. It simply shouldn't exist. All spinning-rust hard drives should be CMR. The only reason SMR exists is because the companies are cheap asses and foisted SMR upon us without being either fair or transparent.

Of course, now most drives are becoming SMR. We've been groomed and "weaned" onto them with marketing garbage, and most people simply didn't have the technical knowledge to know they've been utterly and resolutely scammed.

Reply 11 of 23, by darry

User metadata
Rank l33t++
zyzzle wrote on 2025-04-12, 00:30:

It baffles my mind that people can actually justify SMR on any level.

SMR is one of the absolute worst marketing devolutions of all time. It simply shouldn't exist. All spinning-rust hard drives should be CMR. The only reason SMR exists is because the companies are cheap asses and foisted SMR upon us without being either fair or transparent.

Of course, now most drives are becoming SMR. We've been groomed and "weaned" onto them with marketing garbage, and most people simply didn't have the technical knowledge to know they've been utterly and resolutely scammed.

I use SSDs for speed, CMR drives for reliable and fast data storage, and SMR drives for some low-importance, read-heavy scenarios.

There are reasonable use cases for practically everything and then there are OEMs (morons) who put an SMR drive as a boot drive in a laptop.

Reply 12 of 23, by zyzzle

User metadata
Rank Member

Well, the same thing could be said for rubbish QLC SSDs. They didn't use to exist. They exist now because companies are cheap and cater to the lowest common denominator. Talk about selling a defective product. These drives (and SMR hard drives) are defective products which never should have been foisted upon a gullible public. We simply shouldn't stand for them. Unfortunately, few seem to care. Morons just want their computers "to work" and don't care about data integrity or longevity.

It has become very difficult and expensive to find even a TLC SSD with a DRAM cache. Almost all SSDs offered these bare-minimum features 4 years ago. And, 2 years ago, they were *less* expensive than today's trash DRAM-less QLC drives! Talk about being scammed.

Reply 13 of 23, by Trashbytes

User metadata
Rank Oldbie
zyzzle wrote on 2025-04-12, 01:28:

Well, the same thing could be said for rubbish QLC SSDs. They didn't use to exist. They exist now because companies are cheap and cater to the lowest common denominator. Talk about selling a defective product. These drives (and SMR hard drives) are defective products which never should have been foisted upon a gullible public. We simply shouldn't stand for them. Unfortunately, few seem to care. Morons just want their computers "to work" and don't care about data integrity or longevity.

It has become very difficult and expensive to find even a TLC SSD with a DRAM cache. Almost all SSDs offered these bare-minimum features 4 years ago. And, 2 years ago, they were *less* expensive than today's trash DRAM-less QLC drives! Talk about being scammed.

I agree that QLC is terrible if you try to use it as if it's a TLC/SLC/MLC DRAM-cached SSD, but I own an 8 TB Samsung QLC SSD and it's the perfect drive for storing data that is infrequently used, or for programs that don't write to the drive often. I've had it for 4 years now and it's never skipped a beat; it was worth exactly what I paid for it.

Is it as fast as the above TLC/SLC/MLC drives? Nope, but I bought it fully knowing that.

I also disagree that they are catering to the lowest denominator; they are catering to a wide market, with drive types and prices for all users from enthusiasts to budget buyers. Not everyone can afford or even needs the expensive TLC/SLC DRAM SSDs, so QLC fits into that budget category along with HMB drives and cacheless drives; if you think they are trash, then you are obviously not the intended market. SSDs are just like GPUs: many different models and makers cater to a huge market at all price points, and there is nothing wrong with this.

I do own a crazy-fast PCIe 5.0 SSD that happily hits 14 GB a second in both directions, but it was as expensive as many budget systems out there, and it's not a drive I would ever, under any circumstances, recommend to anyone, though I can understand why people would buy one. I also own a bunch of lower-end SSDs, and for their price point they perform exactly as I expected; none of them are rubbish for what they cost.

So instead of acting all snobbish about low-cost SSDs, perhaps take a look at why they even exist and realise that you are not the intended market for them.

Reply 14 of 23, by wierd_w

User metadata
Rank Oldbie

I prefer to think that QLC SSDs just need special care and feeding, which is how I approach them. (I have waxed enough about this topic in previous threads. There are technological ways to properly handle the foibles of an SSD, and those don't really translate to a slow SMR drive.)

If you approach them the right ways, they are actually quite performant.

Reply 15 of 23, by Trashbytes

User metadata
Rank Oldbie
wierd_w wrote on 2025-04-12, 01:58:

I prefer to think that QLC SSDs just need special care and feeding, which is how I approach them. (I have waxed enough about this topic in previous threads. There are technological ways to properly handle the foibles of an SSD, and those don't really translate to a slow SMR drive.)

If you approach them the right ways, they are actually quite performant.

Yup: don't over-provision, don't fill them up, and don't use them for any application that is going to read and write to the drive constantly. For a long-term data store, backup drive, or application-and-program drive they are just fine.
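For that kind of mostly-read use, a quick endurance sanity check puts the wear in perspective; the TBW rating and daily write volume below are hypothetical, so check the actual drive's datasheet:

```python
# Rough SSD endurance check: years until a rated TBW (terabytes
# written) is exhausted at a given average daily write volume.
# Both inputs are hypothetical examples, not datasheet values.
def years_to_tbw(rated_tbw: float, gb_written_per_day: float) -> float:
    return rated_tbw * 1000 / gb_written_per_day / 365

# e.g. a hypothetical 8 TB QLC drive rated for 2880 TBW, used as a
# mostly-read archive seeing ~20 GB of new data per day:
print(round(years_to_tbw(2880, 20)))  # ~395 years of rated endurance
```

At archive-style write rates the rated endurance is nowhere near the limiting factor, which matches the experience described above.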

All that said, 8 TB NVMe drives have come down in price quite a lot over the last couple of years, to the point that you can buy a DRAM-cached NVMe drive for about what I paid for the 8 TB SATA QLC SSD; if I had to buy again, I would go for the NVMe drive.

Still, IIRC 16 TB NVMe drives should be on the market soon, so I'm waiting on that before I retire the QLC drive.

Still, none of this has anything to do with SMR HDDs, so I'll leave any further discussion for a different thread.

Reply 16 of 23, by wierd_w

User metadata
Rank Oldbie

Sadly, there are application developers that have chosen to die on the hill of "Disk is plentiful, cheap, and I can write all over it as much as I want!"

This is especially true in the Windows ecosystem. There are at least two very high-profile antivirus and security suites that are guilty as hell of this: Webroot antivirus and SentinelOne security.

Both of them like to take very excessive liberties with your hard drive, to the point of being destructive. Webroot, for instance, keeps about 20 GB of database files that it scribbles on nonstop. The changes it makes are very small atomic writes - the very things that are deadly as fuck to SSDs.

SentinelOne is not much better, and keeps about 50 log files of various sizes that it also scribbles all over constantly. Additionally, it takes control of the Volume Shadow Copy service and makes lots of highly fragmented system restore points. If you use both of them together, they will trash your SSD's performance in mere days, because they both want to turn on Windows drive compression to deal with the fact that they are writing excessive amounts of log data to the disk. The problem is that Microsoft's disk compression technology NEVER WRITES CONTIGUOUSLY. It always writes in small 512-byte fragments, in the nearest free space it can find. It's basically hot wind and sand in the face of these kinds of drives.

Good luck getting them to stop doing that. 😜
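Some rough arithmetic shows why scattered 512-byte writes are so punishing on an SSD: each tiny fragment can still cost a full NAND page program. The page size and write rate below are assumptions for illustration, not measurements of either product:

```python
# Illustrative cost of sustained tiny writes on an SSD: each 512 B
# fragment still programs a whole NAND page (sizes are assumptions).
PAGE = 16 * 1024    # 16 KiB NAND page (common, but device-specific)
FRAG = 512          # fragment size Windows compression tends to emit

def nand_gb_per_day(frags_per_second: float) -> float:
    """GB of NAND actually programmed per day if every fragment
    lands in a different page (worst case)."""
    return frags_per_second * PAGE * 86_400 / 1e9

print(f"{nand_gb_per_day(100):.0f} GB/day")  # ~100 fragments/s -> ~142 GB/day
```

Even a modest 100 fragments per second can translate to well over 100 GB of NAND programs a day in this worst case, a 32x amplification over the 512-byte payloads themselves.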

Reply 17 of 23, by Trashbytes

User metadata
Rank Oldbie
wierd_w wrote on 2025-04-12, 03:31:

Sadly, there are application developers that have chosen to die on the hill of "Disk is plentiful, cheap, and I can write all over it as much as I want!"

This is especially true in the Windows ecosystem. There are at least two very high-profile antivirus and security suites that are guilty as hell of this: Webroot antivirus and SentinelOne security.

Both of them like to take very excessive liberties with your hard drive, to the point of being destructive. Webroot, for instance, keeps about 20 GB of database files that it scribbles on nonstop. The changes it makes are very small atomic writes - the very things that are deadly as fuck to SSDs.

SentinelOne is not much better, and keeps about 50 log files of various sizes that it also scribbles all over constantly. Additionally, it takes control of the Volume Shadow Copy service and makes lots of highly fragmented system restore points. If you use both of them together, they will trash your SSD's performance in mere days, because they both want to turn on Windows drive compression to deal with the fact that they are writing excessive amounts of log data to the disk. The problem is that Microsoft's disk compression technology NEVER WRITES CONTIGUOUSLY. It always writes in small 512-byte fragments, in the nearest free space it can find. It's basically hot wind and sand in the face of these kinds of drives.

Good luck getting them to stop doing that. 😜

Don't use any program that does this... it's not hard. Or force it to install onto a spinning-rust HDD... an exceptionally slow one.

Reply 18 of 23, by wierd_w

User metadata
Rank Oldbie

Mentioned because I have to contend with them at work.

They are enforced by our RMM provider, and my company-issued laptop ONLY has an SSD. I noticed this was happening when I checked the fragmentation level and just about had my eyeballs pop out.

Forensic investigation revealed these two as the culprits. Webroot will write over 1000 fragments in literal seconds, after the service is restarted.

SentinelOne will tattle on you to the RMM provider if you tamper with its service daemon in any way, and I don't fancy getting a phone call that my boss won't understand.
(We use an RMM provider for insurance and liability reasons, as we handle medical records, and this means we have to make... sacrifices... for legal reasons. I would never allow these services to run on any computer I owned, but I must allow them to run on the computers I maintain at work, for this reason.)

Instead, I just have the bristles on the back of my neck stand up, and bare my teeth in irritation.

(And the next time an application programmer says disk is cheap and convenient, I will have to fight very hard not to punch him.)

Reply 19 of 23, by Trashbytes

User metadata
Oldbie
wierd_w wrote on 2025-04-12, 04:51:

Mentioned because I have to contend with them at work.

They are enforced by our RMM provider, and my company-issued laptop ONLY has an SSD. I noticed this was happening when I checked the fragmentation level and just about had my eyeballs pop out.

Forensic investigation revealed these two as the culprits. Webroot will write over 1000 fragments in literal seconds, after the service is restarted.

SentinelOne will tattle on you to the RMM provider if you tamper with its service daemon in any way, and I don't fancy getting a phone call that my boss won't understand.
(We use an RMM provider for insurance and liability reasons, as we handle medical records, and this means we have to make... sacrifices... for legal reasons. I would never allow these services to run on any computer I owned, but I must allow them to run on the computers I maintain at work, for this reason.)

Instead, I just have the bristles on the back of my neck stand up, and bare my teeth in irritation.

(and the next time an application programmer says disk is cheap and convenient, I will have to fight very hard not to punch him.)

The only silver lining here is that you don't have to pay any costs associated with them, other than your mental health 😁

Personally, I would keep a spare dead laptop handy to bash the hell out of to relieve my frustration.