VOGONS


Reply 20 of 50, by cyclone3d

User metadata
Rank l33t++
elianda wrote on 2020-06-04, 19:06:

Good memory managers had native support for the most common disk compression drivers and could load them nearly fully into XMS.

So the memory argument is only an issue if you have a game that requires real mode and you stored that game on the compressed disk.
And the solution to that is rather easy: just put the game on the uncompressed disk and don't load (or unload) the disk compression.

That I did not know. I'm not finding any reference to this after a quick search. Could you point me in the right direction?

In any case, I thought the whole purpose was to reduce the space needed, in order to keep people from having to buy a larger disk.

If they have to buy a second disk to use for non-compressible files or for games/programs that require a lot of free conventional memory, then again, what would the point be?
I guess partitioning would be useful here if you have only one disk, but it still seems like a pretty useless feature, just like it did back then.

I tried it more than once and each time it left me not wanting to use it after a short while.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 21 of 50, by elianda

User metadata
Rank l33t
cyclone3d wrote on 2020-06-04, 19:50:

That I did not know. I'm not finding any reference to this after a quick search. Could you point me in the right direction?

In any case, I thought the whole purpose was to reduce the space needed, in order to keep people from having to buy a larger disk.

If they have to buy a second disk to use for non-compressible files or for games/programs that require a lot of free conventional memory, then again, what would the point be?
I guess partitioning would be useful here if you have only one disk, but it still seems like a pretty useless feature, just like it did back then.

I tried it more than once and each time it left me not wanting to use it after a short while.

Here, from the QEMM97 tech notes for Stacker:

 Q. What are the different sizes of the Stacker driver?

A. The size of the driver is strongly dependent on the size of
your hard drive and the size of Stacker's compressed clusters.
If you are using Stacker with DPMS.EXE and the /QD parameter,
the driver's resident size will be as little as 10K. Without
the /QD parameter, the driver will typically be at least 8K
larger. If you are using Stacker's /EMS switch, the driver
should be at least 25K. If you are not using DPMS.EXE or the
/EMS switch, the driver should be at least 44K. The
initialization size, the size necessary to load the driver
before it shrinks down to its resident size, is 87K no matter
what parameters you use.

So the driver takes up 10 kB that can be loaded into UMBs.
If you don't use any memory manager or Novell's DOS Protected Mode Services (DPMS), it takes 44 kB.
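
For illustration, a small-footprint setup along the lines the tech note describes could look something like this in CONFIG.SYS. Treat it purely as a sketch: the driver file names (STACHIGH.SYS, the DPMS.EXE that shipped with Stacker 4.x) and the load order vary between Stacker and QEMM versions, and only the /QD switch is taken from the tech note above.

 REM QEMM provides the upper memory blocks (exact switches depend on the machine)
 DEVICE=C:\QEMM\QEMM386.SYS RAM
 REM DPMS server, so the Stacker driver can keep most of itself out of conventional memory
 DEVICE=C:\STACKER\DPMS.EXE
 REM Stacker driver loaded into a UMB, roughly 10K resident with /QD per the tech note
 DEVICE=C:\QEMM\LOADHI.SYS C:\STACKER\STACHIGH.SYS /QD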

Also, the compressed drive is just a fixed-size file on the uncompressed disk. It is not required to set up a whole partition as a compressed drive.
For example, if you have a single hard disk with a single primary partition, you can create a compressed drive that takes up 50% of the capacity. The compressed drive is then a file on the partition occupying 50% of the partition's size.
Then one can, e.g., install Windows and Office etc. into it, and put TEMP and the swap file on the uncompressed disk. If the setup is good, one can shrink the compressed drive to its minimum size afterwards... This frees up space in the uncompressed part of the HDD again.
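
A rough sketch of the arithmetic, purely illustrative (the 340 MB partition is a made-up example, and the 2:1 figure is only the default estimate DoubleSpace/DriveSpace use when reporting free space, not a guarantee):

 # Hypothetical sizing of a compressed volume file (CVF) on a host partition.
 partition_mb = 340            # single primary partition (made-up size)
 cvf_mb = partition_mb // 2    # create the CVF at 50% of the partition, as in the example above

 est_ratio = 2.0               # default compression estimate used when reporting free space
 est_capacity_mb = cvf_mb * est_ratio
 host_space_left_mb = partition_mb - cvf_mb

 print(f"CVF on host drive:            {cvf_mb} MB")
 print(f"Estimated compressed drive:   {est_capacity_mb:.0f} MB")
 print(f"Uncompressed host space left: {host_space_left_mb} MB")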

Retronn.de - Vintage Hardware Gallery, Drivers, Guides, Videos. Now with file search
Youtube Channel
FTP Server - Driver Archive and more
DVI2PCIe alignment and 2D image quality measurement tool

Reply 22 of 50, by appiah4

User metadata
Rank l33t++
elianda wrote on 2020-06-04, 19:06:

Good memory managers had native support for the most common disk compression drivers and could load them nearly fully into XMS.

So the memory argument is only an issue if you have a game that requires real mode and you stored that game on the compressed disk.
And the solution to that is rather easy: just put the game on the uncompressed disk and don't load (or unload) the disk compression.

Back in those days I almost always used QEMM, and I never had conventional memory issues with DriveSpace.

Reply 23 of 50, by kixs

User metadata
Rank l33t

Used Stacker and DBLspace on a 286/16... not at the same time of course 🤣 A 40MB HDD was pretty small and giving around 20MB more was nice... but it took a toll on the performance. I only noticed that when I removed it. Later I used compressed files/directories on NT4 - mostly on system files.

Visit my AmiBay items for sale (updated: 2025-03-14). I also take requests 😉
https://www.amibay.com/members/kixs.977/#sales-threads

Reply 24 of 50, by Jo22

User metadata
Rank l33t++
kixs wrote on 2020-06-04, 22:57:

Used Stacker and DBLspace on a 286/16... not at the same time of course 🤣 A 40MB HDD was pretty small and giving around 20MB more was nice... but it took a toll on the performance. I only noticed that when I removed it. Later I used compressed files/directories on NT4 - mostly on system files.

I guess it depends on the system in question. If the HDD is slow (say a wrong interleave factor, 8-bit IDE, 8-bit BIOS/not shadowed, etc.), compression can also have a positive effect.
After all, it effectively causes less data to be read and written most of the time. Anyway, it also depends on other factors. Games are perhaps not good candidates for compression.
But if a lot of small text files, databases (dBase files) or source code (C, BASIC, Pascal) etc. are being processed, compression isn't that bad, IMHO.
- Remember, even a small file (say an ASCII file containing "Hello World") wastes a lot of space in FAT12/FAT16.
In such a case, a compressed HDD makes better use of the free space than an uncompressed HDD. 😉
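
To put a rough number on the slack-space point, here is a small sketch (the cluster sizes are the usual FAT16 defaults for 512-byte sectors, used purely for illustration):

 # Slack-space illustration for FAT16: a file always occupies whole clusters.
 cluster_sizes = {
     "partition up to 128 MB": 2 * 1024,   # 2 KB clusters
     "partition up to 256 MB": 4 * 1024,   # 4 KB clusters
     "partition up to 512 MB": 8 * 1024,   # 8 KB clusters
     "partition up to 1 GB":  16 * 1024,   # 16 KB clusters
     "partition up to 2 GB":  32 * 1024,   # 32 KB clusters
 }

 file_size = len(b"Hello World")  # an 11-byte ASCII file

 for partition, cluster in cluster_sizes.items():
     on_disk = ((file_size + cluster - 1) // cluster) * cluster  # at least one full cluster
     print(f"{partition}: {on_disk // 1024} KB on disk, {on_disk - file_size} bytes wasted")

A compressed volume, by contrast, backs each cluster with only as many 512-byte sectors as the (compressed) data actually needs, so the same tiny file wastes at most a few hundred bytes rather than a whole cluster.
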
(By the way, DoubleSpace/DriveSpace also have a defrag command. It shouldn't hurt to use that from time to time, just like MS Defrag or PC Tools' "Compress".)

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 25 of 50, by appiah4

User metadata
Rank l33t++

Most people who used these seem to have used them on <100MB hard drives, which makes sense, because a Windows 3.1 install on these took up a substantial amount of space and it could be significantly compressed. The first time I used them was on my father's 386SX laptop, and that thing had a 40MB HDD IIRC. Believe me, it really made a life-saving difference at that size. I later used it for a while with my desktop 486DX's 213MB HDD and found it to be of questionable worth. There was a window of time when hard drive capacity was scarce and very expensive, and at the time these products had their value. It's hard to understand today, looking back...

Reply 26 of 50, by Keatah

User metadata
Rank Member

Defragging a compressed volume was a two-step process. You had to defrag the host drive to make sure the compressed volume file was contiguous. Then you had to defrag within the compressed volume to make sure it was internally contiguous too, and in order. Only then did head movement decrease and access times improve. It didn't really matter which one you did first, the end result was the same. But I believe it was more time-efficient to do the host drive first.
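
In plain MS-DOS 6.x terms, the two steps look roughly like this (H: as the host drive letter is only the common default, and Keatah used Norton Speedisk rather than MS DEFRAG, so take this as a generic sketch):

 REM Step 1: defragment the host drive so the compressed volume file is laid out contiguously
 DEFRAG H:

 REM Step 2: defragment inside the compressed volume itself
 DBLSPACE /DEFRAGMENT C: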

At the time I liked using Norton Speedisk to do the job. I was learning Norton Utilities and I liked the interface. Made me feel super professional at the time.

Reply 27 of 50, by chinny22

User metadata
Rank l33t++
appiah4 wrote on 2020-06-05, 06:37:

It's hard to understand today, looking back...

Yep.
I compressed a 420MB drive with the version that came with Win95 Plus! and used it as an archive drive.
Late '90s, high school student with no income, 1GB HDD as the primary drive. Disk space was always tight.

Performance hit was worth it.

Reply 28 of 50, by esbardu

User metadata
Rank Newbie
Keatah wrote on 2020-06-03, 01:04:

Anyone ever use DoubleSpace or DriveSpace back in the day?

Back then they were near godsends. Realtime disk compression seemed to work well and provided tangible benefits. Both then and now I was impressed with how transparent it all was despite the amount of convoluted shenanigans that went on "behind the scenes". Renaming and redirecting of drives. Hiding certain files. All that.

I found it particularly valuable in the 486 era. The DX2 processors were fast enough to decompress on-the-fly with power left over. These systems were often left waiting on disk transfers to finish. So the less data transferred the faster your system ran. And you got about 40% extra storage space, the main advertised benefit!

So what were your experiences with DoubleSpace & DriveSpace?
https://en.wikipedia.org/wiki/DriveSpace

I used it a lot. I had a 386SX at 16 MHz with 2 MB of RAM and a 40 MB hard drive for several years. I didn't lose data; the computer was running MS-DOS 6.2 and Windows 3.1.

I was a teenager and used the PC to play graphic adventures and program in Turbo Pascal; the computer was slow with and without DoubleSpace 😀. For me it was really useful - I think I was able to get more than 60 MB out of it. I cannot recall using it with my second PC (486DX2-66).

Reply 29 of 50, by jesolo

User metadata
Rank l33t

I never used it, mainly because I used my PC for playing games, and most game files were already compressed, so compression would have added very little benefit.
I also heard some stories about corrupted data, etc., so I decided to stay away from it.

Reply 30 of 50, by yawetaG

User metadata
Rank Oldbie
jesolo wrote on 2020-06-13, 20:31:

I also heard some stories about corrupted data, etc., so I decided to stay away from it.

It was one of the "best" ways to end up with a system that needed a full reinstall after a lock-up. 😜

Reply 31 of 50, by Carrera

User metadata
Rank Member

I did it once and found it was not worth the hassle of losing data, and I think it made things slower.
I did use a program called "FreeSpace" in Windows 95 and I liked it a lot at the time.
It basically shrank individual files and hardly added to the load time.
It caused problems on Windows 98, though, so I stopped using it.

It was pretty much the same as a feature that existed on Windows NT 4.0, I think.

Reply 32 of 50, by appiah4

User metadata
Rank l33t++

I used a game called Freespace on Windows 98 too and it was awesome.

Reply 33 of 50, by swaaye

User metadata
Rank l33t++

The more interesting aspect of filesystem compression to me was the speed boost. The CPU is usually waiting on the drive, and compression can really boost the data rate of reads.
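
A rough back-of-the-envelope example, with made-up numbers:

 uncompressed read:  1000 kB at 500 kB/s                        -> 2.0 s
 compressed read:    the same 1000 kB stored as ~667 kB (1.5:1) -> ~1.33 s off the drive
 effective rate:     1000 kB / 1.33 s                           ≈ 750 kB/s, as long as decompression keeps up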

But I think fragmentation is even more of a problem with compressed volumes.

And it probably wasn't reliable enough until NTFS.

Reply 34 of 50, by BoraxMan

User metadata
Rank Newbie

I did use both of them on a 386 which had a small 65 MB hard drive. It worked OK, but eventually I stripped back the files so I could get rid of it. The most annoying aspect was having 5 MB reported free but running out of space when adding a 100 KB file. Defragmenting the hard drive also took much longer.

I think it is a 'last resort' option, to be avoided until absolutely necessary. However, I use transparent file compression on BTRFS today with no problem.

Reply 35 of 50, by m82

User metadata
Rank Newbie

I was a bit fascinated with this tech as a kid. Our first family PC came with DoubleSpace enabled by default. In fact, if you reformatted and reinstalled everything from the supplied floppy disks, you could see it compressing the drive and enabling DoubleSpace. By the way, I never had an issue with it or corruption. There were corruption issues in DOS 6, but Microsoft ironed most of them out in 6.2. Win95 had native support with a 32-bit driver, so you could seamlessly upgrade (or compress from Win95). Further, the Win95 Plus! package (and Win98) had DriveSpace 3, which added 2GB support, two new compression formats, a recompression agent, etc. I'm impressed by how long Microsoft kept investing in this.
Anyway, fast forward to 2024 and I had to scratch this itch. So I've reimplemented support for reading (and limited writing of) DoubleSpace/DriveSpace (incl. 3) compressed volume files! And it works 😉 I don't know what to use it for, but it was fun. It's implemented in Java currently, but I could quite easily port it to practically any other platform if there were a use for it. If anyone has an idea for a use, or thinks it could be beneficial in some nostalgia/preservation kind of way, let me know.

Reply 36 of 50, by m82

User metadata
Rank Newbie

Regarding the fragmentation issue, it is indeed bigger in compressed volumes; someone mentioned that compressed drives can be fragmented in two ways - on the uncompressed host drive and inside the compressed drive. That is true, but also a bit simplified. In fact there are 4 levels of fragmentation compared to just 1 on a normal drive. Here's why:

1) The compressed volume file (CVF) is stored as an (often huge) plain file on a normal disk (called the host drive)... this CVF can itself be fragmented (DOS had a limit on how many fragments it could handle, but I think you could configure it - at the cost of more RAM usage). So there's one level. You can defrag this by defragging the host drive (with the volume unmounted).
2) Inside the CVF there's a normal FAT file system with a FAT, clusters and directories in completely standard format (the FAT itself is stored uncompressed, and so is the root dir - subdirectories are stored in the usual format in clusters but MAY be compressed; it depends on the version whether compression happens or not). Here the well-known type of fragmentation can occur, where a file can be split into non-contiguous clusters. When you run defrag on a compressed volume it will run in two phases, and this part is handled by the first phase.
3) Below the normal FAT there's a structure specific to compressed volumes, the MDFAT (Microsoft DoubleSpace FAT). It has an entry for every cluster # in the normal FAT - it is organized as just a fixed array of such entries, one per cluster. The key information for each entry is 1) where the compressed data for the given cluster is physically stored on disk, 2) how many sectors this cluster occupies in compressed form, 3) whether the cluster is compressed (a cluster could be stored uncompressed). So on a disk with a cluster size of 8K = 16 sectors, one cluster might occupy 16 sectors (non-compressible) whereas another might occupy 8 sectors (50% compressible, etc.). DoubleSpace allocated those sectors, backing the clusters, from a huge sector heap making up the majority of the CVF file. But as clusters were rewritten with varying levels of compressibility, and thereby a varying number of sectors occupied, the physical locations of clusters became non-contiguous. Cluster #5 might be stored at sectors 2010-20 and cluster #6 at sectors 210-225, and so on. So even a file that is contiguous at level #2 might be fragmented at this level #3. In fact it is ultimately the level of fragmentation at level #3 (taking into account the cluster number sequence and their mapping to physical sectors in the MDFAT) that determines the performance, i.e. whether it is 'truly' fragmented physically on the disk (see the sketch after this list). The second phase of defrag takes care of this (in DOS 6 this phase is non-graphical but is started automatically by defrag after the first phase is complete).
4) The fragmentation discussed under #3 can be so bad that the free space on the drive is all in isolated islands of a few sectors. Imagine if we first filled the disk with incompressible data (e.g. every 8K cluster occupied 16 sectors) and then rewrote all clusters with data that is slightly compressible, down to 15 sectors. The result will be a disk with every 16th sector free. So there are only single isolated free sectors, but a lot of them - you could have many MBs of free space, but all isolated like this. If you then store something that requires a cluster that takes more than a single sector to store, what should DoubleSpace do? In MS-DOS 6 and 6.2 the user would simply get an error that there was no more disk space (even though there was plenty). If the user defragged the disk as in #3, this issue would be solved, because all clusters would be moved together and contiguous. But DriveSpace 3 improves on this situation and allows the cluster to be split "on the fly" into multiple sets of non-contiguous sectors via an additional level of data structure. So now, not only can the different clusters making up a file be stored non-sequentially, even a single cluster may be internally fragmented and stored non-sequentially. The defrag tool in DriveSpace 3 also takes care of this type.
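
Since the MDFAT description above is fairly complete, here is a minimal sketch of what resolving and checking clusters could look like in code. It is purely illustrative: the field and function names are invented for the example, the real on-disk bit packing of MDFAT entries is not shown, and the actual DoubleSpace/DriveSpace 3 compression formats are left as a stub.

 from dataclasses import dataclass
 from typing import List

 SECTOR_SIZE = 512

 @dataclass
 class MdfatEntry:
     """One entry per cluster of the inner FAT, per the description above (names are illustrative)."""
     start_sector: int     # where the cluster's data physically sits in the CVF's sector heap
     sector_count: int     # how many sectors the cluster occupies in compressed form
     is_compressed: bool   # an incompressible cluster is stored as-is

 def decompress(raw: bytes, cluster_bytes: int) -> bytes:
     # Stub: decoding the DoubleSpace/DriveSpace 3 token formats is out of scope for this sketch.
     raise NotImplementedError

 def read_cluster(cvf, entry: MdfatEntry, cluster_bytes: int) -> bytes:
     """Fetch one logical cluster from the sector heap of an open CVF file object."""
     cvf.seek(entry.start_sector * SECTOR_SIZE)
     raw = cvf.read(entry.sector_count * SECTOR_SIZE)
     return raw[:cluster_bytes] if not entry.is_compressed else decompress(raw, cluster_bytes)

 def is_physically_fragmented(fat_chain: List[int], mdfat: List[MdfatEntry]) -> bool:
     """Level-3 fragmentation: a file contiguous in the FAT chain can still be scattered in the heap."""
     for prev, nxt in zip(fat_chain, fat_chain[1:]):
         if mdfat[nxt].start_sector != mdfat[prev].start_sector + mdfat[prev].sector_count:
             return True
     return False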

Reply 37 of 50, by theelf

User metadata
Rank Oldbie

I've used compression all my life: in the DOS era Stacker, DblSpace, etc., in Win9x DriveSpace, and on NT I always compress the disk. In fact, I have all SSDs compressed in all my computers.

Reply 38 of 50, by kixs

User metadata
Rank l33t
Jo22 wrote on 2020-06-05, 03:22:
kixs wrote on 2020-06-04, 22:57:

Used Stacker and DBLspace on a 286/16... not at the same time of course 🤣 A 40MB HDD was pretty small and giving around 20MB more was nice... but it took a toll on the performance. I only noticed that when I removed it. Later I used compressed files/directories on NT4 - mostly on system files.

I guess it depends on the system in question. If the HDD is slow (say a wrong interleave factor, 8-bit IDE, 8-bit BIOS/not shadowed, etc.), compression can also have a positive effect.
After all, it effectively causes less data to be read and written most of the time. Anyway, it also depends on other factors. Games are perhaps not good candidates for compression.
But if a lot of small text files, databases (dBase files) or source code (C, BASIC, Pascal) etc. are being processed, compression isn't that bad, IMHO.
- Remember, even a small file (say an ASCII file containing "Hello World") wastes a lot of space in FAT12/FAT16.
In such a case, a compressed HDD makes better use of the free space than an uncompressed HDD. 😉
(By the way, DoubleSpace/DriveSpace also have a defrag command. It shouldn't hurt to use that from time to time, just like MS Defrag or PC Tools' "Compress".)

A few years ago I tested DoubleSpace on a Pentium 133 machine and a pretty slow 210MB Conner drive. Even though the P-133 is much, much faster than anything back in 1992/93, the performance was still worse than without DoubleSpace. Heck, if I still have the drive somewhere, I can put it in a P3 or even faster machine. Theoretically it's like you've said: if the drive is slow, reading & decompressing should be faster when the CPU is fast enough.

I plan on redoing my 286-16 soon and abusing it like in the old days. Even installing Stacker/DoubleSpace on the 40MB Conner and running Windows 3.1 with 1MB of memory 🤣

What showed the most on my 286 was doing a DIR listing. On the DBLSpaced drive it took some time, while without it, it was a lot faster. I should make a YT video of the horrible experience on the 286 😉

PS:
On small drives the sector overhead isn't a concern. Of course I used defrag regularly.

Visit my AmiBay items for sale (updated: 2025-03-14). I also take requests 😉
https://www.amibay.com/members/kixs.977/#sales-threads

Reply 39 of 50, by BitWrangler

User metadata
Rank l33t++

I found it very useful in the mid-90s. As an impoverished student, I could not afford much HDD space; imagine it like 10 years ago when you were trying to get everything onto a 64GB SSD, only worse. Anyway, I had a bunch of sub-200MB drives, several under 100MB, and 80% of them came from computer fair junkboxes... so yah, the worst state drives you could imagine: slow, flaking out, dying. Now their smallness made one use of DriveSpace obvious, but the other? Reliability testing. ScanDisk quite frequently did not pick up bad sectors until there was something on them, so I'd create compressed volumes then scandisk again and again and again; eventually I'd get a good map of bad sectors, partition out any big chunks, and end up with a sorta reliable drive... I scandisked them frequently. ... It also helped a bit with the data juggling: I would zip compressed volume files and move them around. Yah, zip didn't gain much, maybe 2-5% if you put it on max compression, but you could use zipsplit tools etc. then; it just made them easier to handle as packages.

Speed-wise, the MS versions seemed faster than uncompressed on DX33-and-up CPUs, and not much unrecoverable trouble was had with crashes because I knew enough to turn off write-behind caching. That 80kB RAM hit was annoying though, so I had multiple boot configs.

The actual space gain used to run at about 1.5x for me. I didn't find that many of the games I had were much compressed already. I know a lot of the "unconventional distributors" ran EXE packers on their "releases" to package stuff smaller.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.