VOGONS


Which operating system does not destroy C.F.?

Reply 20 of 37, by douglar

User metadata
Rank l33t

I think the important questions here are:

1) How many write cycles is my device rated for? CF devices vary from roughly 1,000 to 1,000,000+ write cycles depending on what you own.
2) How many writes have already been applied to my device? Most CF devices support some sort of S.M.A.R.T. reporting (see the sketch below).
3) How many writes per hour does my workload generate?
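For question 2, here's a rough sketch of pulling the write counter via smartmontools (just an illustration; the attribute name varies by vendor, "Total_LBAs_Written" is common on SSDs, and plenty of CF cards don't report anything useful at all):

# Sketch: read an approximate total-bytes-written figure from smartctl.
# Assumes smartmontools is installed and the device exposes a
# total-written attribute; many CF cards don't.
import subprocess

def total_bytes_written(device="/dev/sda", sector_size=512):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            # the raw value is the last column of the attribute table
            return int(line.split()[-1]) * sector_size
    return None  # attribute not reported by this device

written = total_bytes_written()
print("unknown" if written is None else "%.2f TB written so far" % (written / 1e12))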

Reply 21 of 37, by jmarsh

User metadata
Rank Oldbie

Tracking writes is pretty pointless when the device is using a translation layer to spread them over unused blocks.

Reply 22 of 37, by douglar

User metadata
Rank l33t
jmarsh wrote on 2026-01-14, 01:16:

Tracking writes is pretty pointless when the device is using a translation layer to spread them over unused blocks.

I'm curious. Why do you think it's pointless? It's got some sort of correlation with the life of the device, yes? Is there a better metric?

Reply 23 of 37, by jmarsh

User metadata
Rank Oldbie
douglar wrote on 2026-01-14, 02:26:

I'm curious. Why do you think it's pointless? It's got some sort of correlation with the life of the device, yes? Is there a better metric?

Because different devices will use different wear-levelling algorithms, and how much of the storage is in use also has an effect on how the writes are spread around. There are too many variables to extract relevant data.

Say your device is rated for 10,000 writes and you perform 10,000 writes; you have no idea whether they all hit one sector or were spread over the entirety of the disk, resulting in possibly <1 write per sector.

Reply 24 of 37, by Jo22

User metadata
Rank l33t++

What I also remember is that the default NTFS settings cause a lot of writes.
Things like timestamps recording when a file was last accessed, and so on.
There are registry settings to change that behavior, though.
https://www.forensicfocus.com/forums/general/ … tfs-timestamps/
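A minimal sketch of one of those settings (assumes a Python with winreg on the Windows box, run as Administrator; the same thing can be done with regedit or "fsutil behavior set disablelastaccess 1", and a reboot is needed to take effect):

# Disable NTFS last-access timestamp updates via the registry.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\FileSystem"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NtfsDisableLastAccessUpdate", 0, winreg.REG_DWORD, 1)
print("NtfsDisableLastAccessUpdate set to 1 - reboot to apply.")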

But still, NTFS at least supports creating an alignment that matches the flash cells.
FAT32, the default for USB pen drives and CF cards, was worse here.
It wrote data all over the place and was hard to align in any consistent way.
I guess it was mainly used because many outdated OSes had FAT32 support (Win 9x, MacOS 9, Linux).

Edit: The Windows Explorer caches previews in thumbs.db by default.
It's possible to disable this feature in the Windows Explorer options.
If disabled, Windows will re-examine each file for an icon or a picture preview every time, which causes lots of read access.
Bad for HDDs, which are mechanically stressed in that situation,
but that's no problem for SSDs and other flash media, after all.

Edit: Also useful, maybe:

How to kill CF cards ?
https://www.pcengines.ch/cfwear.htm

Microsoft about SSDs vs swap file (Windows 7):

Should the pagefile be placed on SSDs?
Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that:

Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1.
Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns,
there are few files better than the pagefile to place on an SSD.

https://learn.microsoft.com/de-de/archive/blo … id-state-drives

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 25 of 37, by DaveDDS

User metadata
Rank Oldbie
jmarsh wrote on 2026-01-14, 05:46:

.... different devices will use different wear-levelling algorithms and how much of the storage is in use also has an effect on how the writes are spread around ....

Exactly! This is why I still tend to prefer "spinning rust" when running commercial OSs.

In the cases where I need to use flash (usually for low-power or reliability reasons), I use one of my own OSs if I can, where I control exactly how and how often that flash gets written. For example, I know of many systems which run my own ARMOS (full multitasking embedded OS) and have been in 24/7 operation without any flash failures for >20 years!

In cases where I do have to use a commercial OS (esp. a consumer oriented one), I tend to use what many would consider a truly "excessive" amount of RAM. For a couple of reasons:

- Disable paging.

- Have a large RAMdisk where transient/working files can be placed.

I also try to keep flash <50% full, and try to set up regular "ghost" type backups.
24/7 systems running a commercial OS *will* fail eventually.

Dave ::: https://dunfield.themindfactory.com ::: "Daves Old Computers"->Personal

Reply 26 of 37, by douglar

User metadata
Rank l33t
DaveDDS wrote on 2026-01-14, 08:23:
jmarsh wrote on 2026-01-14, 05:46:

.... different devices will use different wear-levelling algorithms and how much of the storage is in use also has an effect on how the writes are spread around ....

Exactly! This is why I still tend to prefer "spinning rust" when running commercial OSs.

In the cases where I need to use flash (usually for low-power or reliability reasons), I use one of my own OSs if I can, where I control exactly how and how often that flash gets written. For example, I know of many systems which run my own ARMOS (full multitasking embedded OS) and have been in 24/7 operation without any flash failures for >20 years!

In cases where I do have to use a commercial OS (esp. a consumer oriented one), I tend to use what many would consider a truly "excessive" amount of RAM. For a couple of reasons:

- Disable paging.

- Have a large RAMdisk where transient/working files can be placed.

I also try to keep flash <50% full, and try to set up regular "ghost" type backups.
24/7 systems running a commercial OS *will* fail eventually.

All that stuff is good advice but .....

Don't all computers fail eventually? The big difference I see with SSDs is that it's possible to estimate when your SSD will fail with reasonable confidence, if you know your media and you know your write cycles. With decent quality CF devices, that estimated failure date is going to be many years in the future even if you are running XP. And the spinning disks? Just because you can't estimate when your spinning disk is going to fail doesn't mean it isn't going to fail first, especially if that disk is >20 years old.

https://www.enterprisestorageforum.com/hardwa … ncy-of-a-drive/

Reply 27 of 37, by DaveDDS

User metadata
Rank Oldbie

Fair enough...

BTW: I just sold my Altair last year, a computer I got in 1978 and still going strong!
(5.25" SS SD diskettes at 90k/disk = slightly less capacity than these days)

But yes, all machines do eventually fail - and you MUST plan for that in essential systems (at least regular backups) - but why take measures that intentionally shorten that life? Running a high-swap, disk-write-intensive "modern" consumer OS with low RAM and "full" flash drives is pretty much guaranteed to do this.

When I design/build a control system, I try my best not to have it require "scheduled maintenance" every few weeks/months, or have much higher than industry-average instances of "sudden death".

But ... I do admit to being pretty "old school"!

Dave ::: https://dunfield.themindfactory.com ::: "Daves Old Computers"->Personal

Reply 28 of 37, by douglar

User metadata
Rank l33t

Let's look at it this way: suppose you are using Windows XP and it generates 50MB of writes per day, and let's assume 10x write amplification because you are not using TRIM. Here are some simple calculations:

If you have a 32GB Transcend CF170 CompactFlash card with pseudo-SLC, you are looking at 30,000 write cycles. If the device is 1/2 full, you have 16GB of free space. That gives you about 480TB of life, and you are looking at >2000 years before you exhaust the flash.

Now suppose you are using a cheaper 32GB Sandisk Extreme with MLC; you are looking at 3,000 write cycles. If the device is 1/2 full, you are still looking at >200 years before you exhaust the flash.

Now suppose a worst case like a generic 64GB SD card with TLC rated at 500 write cycles and limited wear leveling, so let's say 50x write amplification. If the device is 1/2 full, you should still get somewhere around 18 years of life out of it.
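The arithmetic behind those numbers, as a quick sketch (same assumptions as above: 50MB/day of host writes, device half full; the write-amplification factors are guesses):

# Back-of-the-envelope flash lifetime estimate.
def years_of_life(free_gb, rated_cycles, host_mb_per_day=50, write_amplification=10):
    endurance_mb = free_gb * 1000 * rated_cycles           # MB the free area can absorb
    days = endurance_mb / (host_mb_per_day * write_amplification)
    return days / 365

print(years_of_life(16, 30000))                            # pseudo-SLC CF  -> ~2600 years
print(years_of_life(16, 3000))                             # MLC CF         -> ~260 years
print(years_of_life(32, 500, write_amplification=50))      # generic TLC SD -> ~18 years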

Am I off base here?

Reply 29 of 37, by Tiido

User metadata
Rank l33t

It very much depends on where those writes go. On a full drive, as most of these embedded things I see tend to be, you will not see anywhere near those sorts of timeframes before a failure happens.
In the few cases where I had to change out the storage device (DOM and CF) in some industrial thing running XP at one of my previous jobs, the failure was caused by unwritable blocks. I could clone the card to another and things continued to work without needing to do much more than that.

T-04YBSC, a new YMF71x based sound card & Official VOGONS thread about it
Newly made 4MB 60ns 30pin SIMMs ~
What are you reading? You won't understand it anyway 😜

Reply 30 of 37, by DaveDDS

User metadata
Rank Oldbie

Probably not ... but I do suspect 50MB/day could be low - it depends on what is running, how much RAM, etc... there could be a fair bit of swapping...

And I admit to being "a bit" biased! - see "Daves Old Computers" - I've collected many systems that are very old (at least by computing standards), and many parts are "irreplaceable" .. so I'm very sensitive to intentionally shortening life spans.

And I don't really trust manufacturers' "could go as long as" numbers.. I've seen many devices that were taken care of and didn't get close to the published MTBF (and to be fair, I've seen many that exceeded MTBF) - Mean Time Before Failure does not indicate that a device will last at least that long .. "mean" = average.

And I have seen at least one flash device fail - I have an Acer W500 Windows 8 tablet; fortunately the flash device is removable .. I had replaced the original 32G with a 128G, which should have lasted much longer ... but even though it was only used while I travelled (not a "daily driver"), it failed before 5 years.

But.. it was running Win8 .. Fortunately my "Digital2" Win8 tablet with 16G of non-replaceable flash is still going! (and both support SD cards, and I go "out of my way" to put user/application files on that replaceable media)

Dave ::: https://dunfield.themindfactory.com ::: "Daves Old Computers"->Personal

Reply 31 of 37, by jmarsh

User metadata
Rank Oldbie

I have seen CF cards suffer from bitrot when left unpowered for several years. They can still be rewritten afterwards, but the original data is gone.

Reply 32 of 37, by douglar

User metadata
Rank l33t
Tiido wrote on 2026-01-14, 16:37:

It very much depends on where those writes go. On a full drive, as most of these embedded things I see tend to be, you will not see anywhere near those sorts of timeframes before a failure happens.
In the few cases where I had to change out the storage device (DOM and CF) in some industrial thing running XP at one of my previous jobs, the failure was caused by unwritable blocks. I could clone the card to another and things continued to work without needing to do much more than that.

If you have to fail, that's a graceful way to go.

And it is quite true that the life expectancy of your SSD storage is going to be shorter if the device is 95% full vs 50% full.

And it's true that if your flash storage doesn't support any wear leveling and you run Windows NT, the pagefile.sys and NTuser.dat files will burn into the flash like the Start button burning into a CRT monitor. And you probably want to use FAT32, not NTFS, to avoid metadata burn-in.

Reply 33 of 37, by douglar

User metadata
Rank l33t

I'm willing to sacrifice a pair of cheapo generic SD cards out of curiosity.

  1. Set up a Windows XP SP3 computer with 1GB RAM on a 64GB SD card, default page file, NTFS.
  2. Fill the SD card half full with files, then start rebooting the system.
  3. Script the computer to reboot every 10 minutes and log the count in a file (see the sketch below).

That should stress the registry hives, event logs, & NTFS journal ($LogFile) pretty well. Maybe I'll even see which aspect fails first.
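Something like this for step 3, as a sketch (assuming a Python old enough to run on XP is installed and the script is launched from the Startup folder):

# Bump a boot counter, wait ten minutes, then restart.
import os, time

COUNT_FILE = r"C:\reboot_count.txt"

count = 0
if os.path.exists(COUNT_FILE):
    with open(COUNT_FILE) as f:
        count = int(f.read().strip() or 0)

count += 1
with open(COUNT_FILE, "w") as f:
    f.write(str(count))

print("Boot number %d" % count)
time.sleep(600)                   # let the system run for 10 minutes
os.system("shutdown -r -t 0")     # XP-style immediate restart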

And then I can do a second build with the page file on a separate device and FAT32 for the boot drive and see how long it takes for the registry hives to burn a hole in the media.

Any other suggestions?

Reply 34 of 37, by eM-!3

User metadata
Rank Newbie
douglar wrote on 2026-01-15, 20:19:

Any other suggestions?

Try the Enhanced Write Filter (EWF), which was already mentioned by Jo22. It will help a lot.

Reply 35 of 37, by douglar

User metadata
Rank l33t
DaveDDS wrote on 2026-01-11, 16:23:

Flash degrades slightly with each write. A technique called "wear levelling" is used to spread writes around, so rewriting the same logical sector does not write the same physical sector. The more sectors are free, the easier/better the wear-levelling.

I was skeptical about wear leveling on SD cards. Maybe I should not have been. Looks like consumer SD cards have had internal wear leveling for at least a decade.

Not sure this is an authoritative response, but:
https://forums.sandisk.com/t/which-32gb-micro … -features/35314
"the wear leveling feature is supported by all the flash memory devices, so all cards, flash drives and ssds support this feature."

There is an interesting post here about static vs dynamic wear leveling. "Dynamic wear leveling" only spreads out data that gets rewritten. "Static wear leveling" will also move unchanged data from time to time to even things out.

https://forums.sandisk.com/t/internal-wear-le … er-loss/34477/3

https://wiki.linaro.org/WorkingGroups/Kernel/ … FlashCardSurvey:

"The card performs wear leveling by keeping a pool of physical allocation that are invisible to the user and choosing a new group from that pool when writing to a new logical group, putting the previous physical group back into the pool after either the new group has been completely written, or the old data been moved over to the new group as part of garbage collection. This method is called dynamic wear leveling groups and guarantees that all logical allocation groups that sometimes get written to are aging at about the same rate.

The dynamic wear leveling can be easily observed by analyzing the timing for the garbage collection.

Very few SD cards use static wear leveling, which would also puts allocation groups back into the pool that are only written once during initialization of the card but remain stable afterwards. This would be necessary to maximize the expected lifetime of a memory card that is mostly filled with a typical root file system but has a few files being written constantly.

However, some CF cards are known to do static wear leveling in a way that leads to data loss when the supply voltage gets lost while the card is doing static wear leveling. This is even the case for read-only cards, since the static wear leveling can get triggered by read accesses on those cards. "
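To make the dynamic scheme described there a bit more concrete, here's a toy simulation (purely illustrative; it isn't any real card's firmware):

# Toy dynamic wear levelling: writes to a logical group are redirected to a
# physical group taken from a spare pool; the previously mapped group goes
# back into the pool.
import random
from collections import deque

LOGICAL, SPARE = 8, 4
mapping = {lg: lg for lg in range(LOGICAL)}                 # logical -> physical
pool = deque(range(LOGICAL, LOGICAL + SPARE))
erase_counts = [0] * (LOGICAL + SPARE)

def write(logical_group):
    new_pg = pool.popleft()              # fresh physical group from the pool
    pool.append(mapping[logical_group])  # recycle the old physical group
    mapping[logical_group] = new_pg
    erase_counts[new_pg] += 1

# Hammer two "hot" logical groups; the wear spreads over those plus the spares,
# while groups holding never-rewritten data stay at zero - exactly the gap
# that static wear levelling is meant to close.
for _ in range(10000):
    write(random.choice([0, 1]))
print(erase_counts)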

Reply 36 of 37, by atar

User metadata
Newbie
douglar wrote on 2026-01-11, 17:26:

Two notes about your SD-CF bridge:
1) it is almost certainly a sinitechi device just like the Pata-SD bridges with the same firmware & same issues
2) my experience is that the “type I” SD-CF bridges require a CF adapter that provides 3.3v power and won’t work at 5v

How does it behave with 5v only? Is the card not visible at all? In a neighbouring topic I tried to use it as PCMCIA->CF->SD and it failed, while CardBus->CF->SD worked just fine. PCMCIA->CF worked fine with two different CF cards, so it's definitely the CF->SD part that fails. So I don't know yet whether the reason is the missing 3.3v or 16-bit vs. 32-bit access. Have you by any chance tried CF->SD on a 16-bit controller (XT or something pre-PCI/pre-VLB)?

Reply 37 of 37, by douglar

User metadata
Rank l33t
atar wrote on 2026-01-26, 14:51:
douglar wrote on 2026-01-11, 17:26:

Two notes about your SD-CF bridge:
1) it is almost certainly a sinitechi device just like the Pata-SD bridges with the same firmware & same issues
2) my experience is that the “type I” SD-CF bridges require a CF adapter that provides 3.3v power and won’t work at 5v

How does it behave with 5v only? Is the card not visible at all? In a neighbouring topic I tried to use it as PCMCIA->CF->SD and it failed, while CardBus->CF->SD worked just fine. PCMCIA->CF worked fine with two different CF cards, so it's definitely the CF->SD part that fails. So I don't know yet whether the reason is the missing 3.3v or 16-bit vs. 32-bit access. Have you by any chance tried CF->SD on a 16-bit controller (XT or something pre-PCI/pre-VLB)?

My CF-SD bridge does not report any storage when running at 5v on an ISA controller.