VOGONS


CF, SD, SSD, etc for Win9x and WinXP


First post, by KT7AGuy

Rank: Oldbie

Over the past two years or so I've been noticing posts here on VOGONS (and videos on YouTube) where fellow enthusiasts advocate the use of CF, SD, and SSD storage devices under Win9x and WinXP. Phil also seems to like the idea of using them for his various videos and projects.

I was always under the impression that CF and SD were a bad idea for DOS and Win9x due to their slow write speeds and limited number of re-write cycles. It was only due to a shortage of cheap, period-correct, and functional hard drives that users were being forced to consider them as second-choice alternatives. Have things changed? Are modern flash memory alternatives now superior to original low-capacity mechanical hard drives?

SSD devices were also a no-no under WinXP due to that operating system's lack of TRIM support, wear leveling, and garbage collection. Have things changed? Are certain models of SSD now safe to use under WinXP and even possibly Win9x? Do modern SSD devices have wear leveling, garbage collection, and TRIM built into their firmware or something?

Thank you

Reply 1 of 24, by stamasd

Rank: l33t

I'll be interested to hear answers to this question also. I'm using CF cards for DOS, but for 98/2000/XP/etc I use microdrives (still have a number of them around). This of course limits me to 4GB (or in some cases 6GB) usable space per drive. Right now I'm building an AlphaPC and I'm debating whether to use a CF/SSD or go with a microdrive for NT and Linux.

I/O, I/O,
It's off to disk I go,
With a bit and a byte
And a read and a write,
I/O, I/O

Reply 2 of 24, by mcfly

Rank: Newbie

I wouldn't worry about SSD longevity. I used a 32GB Samsung SSD (MLC) on XP for a couple of years without a problem. You just need to be aware of a few things: disable prefetch, hibernation, indexing, and scheduled defrag, put the pagefile on an HDD, and you should be ready to go. The most common scenario is to use the SSD for XP and an HDD for storage - if someone is afraid for their precious data. Another key factor is that an SSD on a SATA3 or better interface will feel much snappier than on SATA1. If I remember correctly, OCZ or Corsair included some applications in their bundles that worked under XP, and TRIM could be performed via this software. Anyway, stick with a good brand and just use it. Good brands only start showing their age after a few hundred terabytes written (from what I read in the longevity tests from 2015, https://techreport.com/review/27909/the-ssd-e … theyre-all-dead), so I can't see myself writing that much data, especially on retro hardware.
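As a minimal sketch of how a couple of those tweaks could be scripted (assuming Python with its standard winreg module is available on the XP box and the script is run as Administrator; the registry path, the cisvc service name and the powercfg switch are the usual WinXP ones, and doing the same by hand in regedit, services.msc and powercfg works just as well):

# Hypothetical sketch: disable the XP prefetcher, the Indexing Service and
# hibernation. Standard XP locations/names; run as Administrator.
import subprocess
import winreg

PREFETCH_KEY = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
                r"\Memory Management\PrefetchParameters")

# EnablePrefetcher: 0 = disabled, 1 = application launch only,
# 2 = boot only, 3 = both (the XP default)
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PREFETCH_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnablePrefetcher", 0, winreg.REG_DWORD, 0)

# Stop and disable the Indexing Service (cisvc), then turn hibernation off.
subprocess.run(["sc", "stop", "cisvc"])
subprocess.run(["sc", "config", "cisvc", "start=", "disabled"])
subprocess.run(["powercfg", "/hibernate", "off"])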

Reply 4 of 24, by canthearu

Rank: Oldbie

Only thing I would really suggest if you are going to use an SSD for DOS/Win98 or WinXP is to do the drive partitioning in a program that can align the partitions to 4K boundaries to help the SSD with writes.
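As a minimal sketch of the arithmetic behind that advice (the start sectors 63 and 2048 below are just the classic CHS-era and modern 1 MiB defaults, assumed here for illustration):

# Minimal sketch: check whether a partition start is aligned for an SSD.
SECTOR_SIZE = 512  # bytes per logical sector (typical)

def is_aligned(start_lba: int, boundary: int = 4096) -> bool:
    """True if the partition's byte offset falls on the given boundary."""
    return (start_lba * SECTOR_SIZE) % boundary == 0

for lba in (63, 2048):
    print(lba, "4K:", is_aligned(lba), "1MiB:", is_aligned(lba, 1 << 20))
# 63   -> False  (classic CHS-style default, misaligned)
# 2048 -> True   (modern 1 MiB convention, aligned)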

Otherwise, the lack of TRIM, while obviously not optimal, won't significantly screw over the life of any SSD you use on an old computer. Drives will work fine without TRIM, but might be a little slower under highly sustained write loads. (which is basically never)

I generally would suggest SSD over CF cards, as SSDs have more sophisticated controllers that will take better care of the NAND flash. It probably doesn't matter that much though, particularly in DOS as it isn't very write heavy anyway!

Reply 5 of 24, by dr_st

Rank: l33t
KT7AGuy wrote:

SSD devices were also a no-no under WinXP due to that operating system's lack of TRIM support, wear leveling, and garbage collection. Have things changed? Are certain models of SSD now safe to use under WinXP and even possibly Win9x? Do modern SSD devices have wear leveling, garbage collection, and TRIM built into their firmware or something?

Wear-leveling is built into the firmware most of the time; TRIM cannot be built into the firmware, since it is an ATA command that relies on the software telling the drive when sectors can be recycled. Some vendors have their own utilities that sort of substitute for TRIM / send it manually, or so I understood.
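As a side note, here is a minimal sketch of checking that capability, assuming the drive is temporarily attached to a modern Linux machine and shows up as /dev/sda; a non-zero discard_max_bytes in sysfs means the drive advertises TRIM, and a utility such as fstrim could then send it manually:

# Minimal sketch: does the kernel report TRIM (discard) support for a drive?
from pathlib import Path

def supports_trim(device: str = "sda") -> bool:
    path = Path(f"/sys/block/{device}/queue/discard_max_bytes")
    return path.exists() and int(path.read_text()) > 0

print(supports_trim("sda"))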

mcfly wrote:

I wouldn't worry about SSD longevity. I used a 32GB Samsung SSD (MLC) on XP for a couple of years without a problem. You just need to be aware of a few things: disable prefetch, hibernation, indexing, and scheduled defrag, put the pagefile on an HDD, and you should be ready to go.

In other words, disable everything that makes SSD useful? Pagefile on HDD? Come on. It doesn't get any more pointless than that.

The only thing that really and definitely makes sense to disable is defragmentation (SSD-aware OSes handle this automatically, but 9x/XP are not SSD-aware). Indexing and prefetch - you either want them or you don't, and if you don't, it makes sense to disable them whether you use an SSD or an HDD. Same for hibernation.

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 6 of 24, by Jo22

Rank: l33t++
canthearu wrote:

Only thing I would really suggest if you are going to use an SSD for DOS/Win98 or WinXP is to do the drive partitioning in a program that can align the partitions to 4K boundaries to help the SSD with writes.

This is true for NTFS, but several months ago another VOGONS member explained that this has little effect on FATx volumes.
Long story short, FAT doesn't store things in a strict, predefined manner. The beginning of it (the file table) can vary,
and so can the start of the data area and the files within the partition. So even if the partition is aligned nice and tidy to 4K boundaries, things can still be off. 🙁
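A rough sketch of why that is, using made-up but typical FAT16 numbers (real volumes vary, which is exactly the problem):

SECTOR = 512                 # bytes per logical sector

part_start   = 2048          # partition start LBA (nicely 4K/1MiB aligned)
reserved     = 1             # reserved sectors (FAT16 default)
num_fats     = 2
fat_sectors  = 246           # sectors per FAT (depends on volume size)
root_entries = 512           # root directory entries (FAT16 default)

root_dir_sectors = (root_entries * 32 + SECTOR - 1) // SECTOR   # = 32
data_start = part_start + reserved + num_fats * fat_sectors + root_dir_sectors

print("first cluster at byte", data_start * SECTOR,
      "4K-aligned:", (data_start * SECTOR) % 4096 == 0)
# 2048 + 1 + 492 + 32 = 2573 sectors, not a multiple of 8, so clusters start
# in the middle of a 4K flash page even though the partition itself is aligned.

So alignment really has to happen at format time (padding the reserved area so the data region starts on a 4K boundary), not just when the partition is created.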

Edit: That being said, I still use CF cards for DOS PCs and have never had a card die on me so far.
Even the oldest CF cards that I own apparently do internal housekeeping by implementing a simple form of wear-leveling.
See Re: Generic Compact Flash Cards

Last edited by Jo22 on 2018-08-07, 12:21. Edited 1 time in total.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 8 of 24, by stamasd

Rank: l33t

SLC really? I can't afford SLC drives on my main computer. 🙁

I/O, I/O,
It's off to disk I go,
With a bit and a byte
And a read and a write,
I/O, I/O

Reply 9 of 24, by mcfly

Rank: Newbie
dr_st wrote:

In other words, disable everything that makes SSD useful? Pagefile on HDD? Come on. It doesn't get any more pointless than that.

The only thing that really and definitely makes sense to disable is defragmentation (SSD-aware OSes handle this automatically, but 9x/XP are not SSD-aware). Indexing and prefetch - you either want them or you don't, and if you don't, it makes sense to disable them whether you use an SSD or an HDD. Same for hibernation.

Indexing and prefetch services were meant to diminish the access/seek/loading-time problems of typical consumer-tier HDDs. That is not a problem for an SSD; they just cause unnecessary I/O operations and may be disabled. Whether somebody likes it or not, the pagefile is used constantly. E.g. my system with 5GB RAM running XP (+Linux on another HDD) shows 270 MB of pagefile use after boot-up, at idle. AFAIK XP defaults to half of the RAM for paging space, and aside from some permanently occupied space, pages are constantly being pushed in and out by the system. That means more pressure on SSD wear leveling (when no TRIM is available), and to remedy this the pagefile usually goes on an HDD - you won't notice a speed decrease anyway (did you while using an HDD only? - only if RAM was full). Besides, I did not invent these tricks; they were born from the experience of many people back when SSDs were sucky, small, and expensive and people still used OSes older than Win7 without TRIM. Even Linux has only had TRIM support since circa 2010 (kernel 2.6.30 or so). So not that pointless. I even disable some services to increase available RAM. But you may enable all of them - it is your PC.

stamasd wrote:

SLC really? I can't afford SLC drives on my main computer. 🙁

I can recommend two approaches:
1) Buy a recent MLC drive from Samsung, Crucial or another reputable brand: 120GB if the system goes on the SSD and data on an HDD, or 250GB+ for system+data.
2) Budget solution: an mSATA SSD + mSATA-to-SATA adapter from eBay + an HDD for data. You can cheaply score used drives from old netbooks on eBay, anywhere from 8-32 or even 64GB, sometimes even SLC if you're lucky (those from the first netbooks); employ a few tricks to prolong your SSD's life and you're ready to go. Some old drives do not have TRIM at all, just garbage collection, so don't worry as long as you don't chew through a few hundred terabytes of data on your retro system. I myself have one XP system on an 8GB Toshiba SSD which is MLC, has a single flash chip, and is slow (for an SSD), but it's still running fine. Your choice and money.

From experience: I used a Samsung 840 Pro SSD on SATA1 (150 MB/s) and only felt its full speed on a SATA3 interface. Using an SSD on a slow interface won't be a superb experience, but it's still noticeably faster than an HDD, even considering access time alone. Keep that in mind though. My 2 cents.

Reply 10 of 24, by dr_st

Rank: l33t
mcfly wrote:

Whether somebody likes it or not, the pagefile is used constantly. E.g. my system with 5GB RAM running XP (+Linux on another HDD) shows 270 MB of pagefile use after boot-up, at idle. AFAIK XP defaults to half of the RAM for paging space, and aside from some permanently occupied space, pages are constantly being pushed in and out by the system. That means more pressure on SSD wear leveling (when no TRIM is available), and to remedy this the pagefile usually goes on an HDD - you won't notice a speed decrease anyway (did you while using an HDD only? - only if RAM was full). Besides, I did not invent these tricks; they were born from the experience of many people back when SSDs were sucky, small, and expensive and people still used OSes older than Win7 without TRIM.

You are right that these were common suggestions back in the day. But just because something is common does not make it good. I don't disagree about indexing/prefetch (I even disable indexing on my HDD most of the time), but I disagree about the pagefile. I guess it depends on the basic assumptions - if you assume that the RAM never gets full and the pagefile is used gratuitously, then maybe there is no point in keeping it on the SSD, or keeping it at all - just trim it down to the lowest possible size to prevent some stupid apps from crashing and to allow kernel dumps to be saved in the event of a system crash.

However, as you said, Windows does use the pagefile from time to time even if there is free RAM, and if you go back to a program that was stuck in the background for a while, it was probably paged out; it seems that you would be willing to accept the slower page-in times to save SSD write cycles, whereas I prefer to have my hardware work for me, and take advantage of the SSD speed in this case.

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 12 of 24, by mcfly

Rank: Newbie
dr_st wrote:

However, as you said, Windows does use the pagefile from time to time even if there is free RAM, and if you go back to a program that was stuck in the background for a while, it was probably paged out; it seems that you would be willing to accept the slower page-in times to save SSD write cycles, whereas I prefer to have my hardware work for me, and take advantage of the SSD speed in this case.

This is such a pessimistic approach 😀 I wouldn't worry about any significant delays. Page in/out times are in the range of milliseconds, and it is very rare for the system to exchange pages non-stop, grinding the platters at 50MB/s, when 90% of RAM is free. Also, some applications will never touch the pagefile; that depends on the application. But OK, everyone has their own priorities. I always move the pagefile to some other location on every system with an 8GB SSD, because it is freaking huge. I also double-checked the pagefile size: Microsoft recommends 1.5*RAM.

swaaye wrote:

I wouldn't be worried about any general OS activity significantly impacting a SSD's lifetime.

CF and SD on the other hand are a very different story.

I get the impression we are heading in the direction of 'all OSes'. On OSes without TRIM support (< Win7) it could make an impact; from Win7 onwards it doesn't matter anymore. About CF/SD I agree; however Win9x, 3.11 or DOS may be OK if you do not attempt to write multiple random files to it. Once I attempted to put XP on SD on an old ThinkPad X41 (SD to PATA converter, and another PATA to SATA adapter soldered to the motherboard - some experiment; I wasn't brave enough to remove it). It crawled at best; on CF it was better (I used a Kingston 266x), but not perfect due to NTFS features (the journal) and the specifics of the CF controller.

Reply 15 of 24, by WildW

Rank: Member

For quite a while I've been using old 30GB and 40GB SSDs, which cost me more than I care to remember when they were new, in Windows XP and 98 machines, and I haven't managed to kill them yet. You just don't write as much data to them as you might think.

Reply 16 of 24, by canthearu

Rank: Oldbie

I wouldn't worry about moving the pagefile to another drive. Just leave it on the SSD for performance.

The write wear from a pagefile simply won't matter unless you leave the computer running 24/7 for 25 years, which is definitely not the use case for retro computers. Worrying about TRIM here is also pointless: TRIM does NOT help pagefile performance in any way, as the pagefile is a fixed area of the drive and TRIM is never issued while the computer is using the pagefile.
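As a rough back-of-the-envelope sketch (both numbers below are illustrative assumptions, not measurements from this thread):

# Illustrative assumptions only.
pagefile_writes_gb_per_day = 5     # generous guess for an XP retro box
rated_endurance_tbw = 70           # typical rating for a small MLC drive

days = rated_endurance_tbw * 1024 / pagefile_writes_gb_per_day
print(f"{days / 365:.0f} years of daily use")   # roughly 39 years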

Reply 17 of 24, by dr_st

Rank: l33t
mcfly wrote:

This is such a pessimistic approach 😀 I wouldn't worry about any significant delays.

You wouldn't worry about significant delays, and I wouldn't worry about any effects on SSD longevity. 😀

mcfly wrote:

I always move the pagefile to some other location on every system with an 8GB SSD, because it is freaking huge. I also double-checked the pagefile size: Microsoft recommends 1.5*RAM.

I wasn't under the impression that we were talking about 8GB SSDs. Where do you even find 8GB SSDs? 😳 In any case, Microsoft's default recommendation of 1.5*RAM is outdated bullcrap, and even Microsoft's own experts (e.g., Mark Russinovich) will tell you that. This was perhaps useful in the days when RAM was low. Today RAM is abundant, and most systems can really do with no pagefile at all (except that it's architecturally required in some cases). The correct size is whatever makes RAM+pagefile big enough for your most extreme workflows.

In fact, modern OSes use smarter values automatically. On my current Win10 PC with 16GB RAM it shows 2.9GB recommended, 2.4GB actual.
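As a hypothetical sketch of that sizing rule (the RAM and peak-commit figures below are just example assumptions):

def pagefile_gb(ram_gb: float, peak_commit_gb: float, floor_gb: float = 0.5) -> float:
    """Pagefile just big enough that RAM + pagefile covers the peak commit
    charge, with a small floor so crash dumps and picky apps still work."""
    return max(peak_commit_gb - ram_gb, floor_gb)

for ram, peak in [(0.5, 1.2), (3.5, 4.0), (16, 12)]:
    print(f"RAM {ram} GB: old 1.5x rule {1.5 * ram:.1f} GB, "
          f"workload rule {pagefile_gb(ram, peak):.1f} GB")

On the 16 GB example the workload rule collapses to the small floor instead of the 24 GB the old 1.5x rule would give.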

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 18 of 24, by mcfly

Rank: Newbie
dr_st wrote:
mcfly wrote:

This is such a pessimistic approach 😀 I wouldn't worry about any significant delays.

You wouldn't worry about significant delays, and I wouldn't worry about any effects on SSD longevity. 😀

mcfly wrote:

I always move the pagefile to some other location on every system with an 8GB SSD, because it is freaking huge. I also double-checked the pagefile size: Microsoft recommends 1.5*RAM.

I wasn't under the impression that we were talking about 8GB SSDs. Where do you even find 8GB SSDs? 😳 In any case, Microsoft's default recommendation of 1.5*RAM is outdated bullcrap, and even Microsoft's own experts (e.g., Mark Russinovich) will tell you that. This was perhaps useful in the days when RAM was low. Today RAM is abundant, and most systems can really do with no pagefile at all (except that it's architecturally required in some cases). The correct size is whatever makes RAM+pagefile big enough for your most extreme workflows.

In fact, modern OSes use smarter values automatically. On my current Win10 PC with 16GB RAM it shows 2.9GB recommended, 2.4GB actual.

Agreed, in modern days on modern systems. But in this thread we're talking about the WinXP world that the OP asked about, an almost 18-year-old system where you are stuck with 3.5GB max RAM (minus some for VRAM if you have an integrated video card) on 32-bit (64-bit XP is a very rare thing), which isn't much by today's standards and even 10 years back could easily be filled by some applications. On my 32GB Linux workstation I use something like 8GB of swap, so yes, the modern situation looks better, but it didn't back then. And I'm not even touching Win9x, where without some dirty tricks the max RAM is 512MB. So yes, I would agree with you about Win7-10 class OSes or any modern Linux distribution, but this is old XP technology. 8GB SSDs? On the internet, eBay and such. That was just an example; I've got another system on a 32GB mSATA and I did exactly the same thing. Very useful, credit-card-sized drives. Besides, this is a retro hardware forum - why shouldn't we use old tech?

PS. Microsoft's experts should update this document, which was last updated in 2017: https://support.microsoft.com/en-us/help/2160 … ment-in-windows

"Users frequently ask "how big should I make the pagefile?" There is no single answer to this question because it depends on the amount of installed RAM and on how much virtual memory that workload requires. If there is no other information available, the typical recommendation of 1.5 times the installed RAM is a good starting point. On server systems, you typically want to have sufficient RAM so that there is never a shortage and so that the pagefile is basically not used. On these systems, it may serve no useful purpose to maintain a really large pagefile. On the other hand, if disk space is plentiful, maintaining a large pagefile (for example, 1.5 times the installed RAM) does not cause a problem, and this also eliminates the need to worry over how large to make it."

Reply 19 of 24, by dr_st

Rank: l33t
mcfly wrote:

PS. Microsoft's experts should update this document, which was last updated in 2017: https://support.microsoft.com/en-us/help/2160 … ment-in-windows

"Users frequently ask "how big should I make the pagefile?" There is no single answer to this question because it depends on the amount of installed RAM and on how much virtual memory that workload requires. If there is no other information available, the typical recommendation of 1.5 times the installed RAM is a good starting point. On server systems, you typically want to have sufficient RAM so that there is never a shortage and so that the pagefile is basically not used. On these systems, it may serve no useful purpose to maintain a really large pagefile. On the other hand, if disk space is plentiful, maintaining a large pagefile (for example, 1.5 times the installed RAM) does not cause a problem, and this also eliminates the need to worry over how large to make it."

Yes, perhaps they should. Even though the article was "updated" in 2017, it only contains information up to 2010. In any case, I submitted feedback on it. 😀

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys