VOGONS


First post, by Robin4

User metadata
Rank l33t

I'd like to know how big a hard disk my systems can handle before the CPU gets overloaded/bottlenecked.

I have these systems and am looking for how big a drive I can go with so the systems can handle it easily.

I know these were the original sizes these systems used back in the day:

XT machine: 5 MB - 30 MB
286-12: 40 MB - 60 MB
386-40: 40 MB - 150 MB
486-66: 120 MB - 540 MB
Pentium 166: 800 MB - 3.2 GB

And yes, it's just an indication. I don't want period-correct drives in my systems, because I want plenty of storage to put a lot of data on (software and games, of course).
How far can I go with the drive size before it bottlenecks the processor, i.e. before calculating the free disk space takes too long and gets annoying?

What I personally have in mind:

XT machine: I know it chokes on an 80 MB hard disk, so I probably wouldn't go further than a 60 MB disk, or I'd use a bigger disk and partition it down to 60 MB.
286-12: I'm thinking it could handle up to a 500 MB HDD.
386-40 MHz: in the past I would have gone with a 1.3 or 1.7 GB disk, but today I'm guessing an 800 MB disk would be plenty.
486-66: could use at least a 1.2 GB disk or somewhat bigger (perhaps 3.2 GB would do as well).
Pentium 133 / 150 / 166: guessing a 6.4 GB disk would do the trick (not going bigger than 10 GB).

What would you advise on these ideas? I'd like to hear some suggestions from your side.

Thanks in advance for any insight.

~ At least it can do black and white~

Reply 2 of 9, by utahraptor

User metadata
Rank Newbie
gdjacobs wrote on 2018-12-10, 21:08:

These are some common disk size limits you'll encounter. It's a guideline only with plenty of workarounds and exceptions.

286/386 generation IDE controllers:

The 528 MB limit

If the same values for c,h,s are used for the BIOS Int 13 call and for the IDE disk I/O, then both limitations combine, and one can use at most 1024 cylinders, 16 heads, 63 sectors/track, for a maximum total capacity of 528482304 bytes (528 MB), the infamous 504 MiB limit for DOS with an old BIOS. This started being a problem around 1993, and people resorted to all kinds of trickery, both in hardware (LBA), in firmware (translating BIOS), and in software (disk managers). The concept of "translation" was invented (1994): a BIOS could use one geometry while talking to the drive, and another, fake, geometry while talking to DOS, and translate between the two.

486 generation IDE controllers:

The 8.4 GB limit

Finally, if the BIOS does all it can to make this translation a success, and uses 255 heads and 63 sectors/track ("assisted LBA" or just "LBA") it may reach 1024*255*63*512=8422686720 bytes, slightly less than the earlier 8.5 GB limit because the geometries with 256 heads must be avoided. (This translation will use for the number of heads the first value H in the sequence 16, 32, 64, 128, 255 for which the total disk capacity fits in 1024*H*63*512, and then computes the number of cylinders C as total capacity divided by (H*63*512).)

Pentium generation IDE controllers:

The 33.8 GB limit (August 1999)

The next hurdle comes with a size over 33.8 GB. The problem is that with the default 16 heads and 63 sectors/track this corresponds to a number of cylinders of more than 65535, which does not fit into a short. Many BIOSes couldn't handle such disks. (See, e.g., Asus upgrades for new flash images that work.) Linux kernels older than 2.2.14 / 2.3.21 need a patch. See IDE problems with 34+ GB disks below.

This should help I think.
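
To make the arithmetic behind those limits concrete, here is a minimal C sketch (illustration only, not code from any real BIOS; the constants and the 16/32/64/128/255 head sequence come straight from the quoted text) that prints the three ceilings and performs the assisted-LBA translation described above:

/*
 * Illustration only: the BIOS geometry ceilings from the quote above,
 * plus the "assisted LBA" translation it describes.
 */
#include <stdio.h>
#include <stdint.h>

#define SECTOR  512ULL   /* bytes per sector */
#define SPT      63ULL   /* sectors per track */
#define MAXCYL 1024ULL   /* Int 13h cylinder limit */

/* Pick the first head count from 16, 32, 64, 128, 255 for which the
 * disk fits in 1024 cylinders, then derive the cylinder count. */
static void assisted_lba(uint64_t bytes)
{
    static const unsigned heads[] = { 16, 32, 64, 128, 255 };
    for (unsigned i = 0; i < 5; i++) {
        uint64_t ceiling = MAXCYL * heads[i] * SPT * SECTOR;
        if (bytes <= ceiling) {
            uint64_t cyl = bytes / (heads[i] * SPT * SECTOR);
            printf("%llu bytes -> C/H/S = %llu/%u/63\n",
                   (unsigned long long)bytes,
                   (unsigned long long)cyl, heads[i]);
            return;
        }
    }
    printf("%llu bytes is beyond the 8.4 GB assisted-LBA ceiling\n",
           (unsigned long long)bytes);
}

int main(void)
{
    printf("528 MB limit : %llu bytes\n",   /* 1024*16*63*512  */
           (unsigned long long)(MAXCYL * 16 * SPT * SECTOR));
    printf("8.4 GB limit : %llu bytes\n",   /* 1024*255*63*512 */
           (unsigned long long)(MAXCYL * 255 * SPT * SECTOR));
    /* 33.8 GB: with 16 heads / 63 spt the cylinder count no longer
     * fits in a 16-bit field (65536 * 16 * 63 * 512 bytes). */
    printf("33.8 GB limit: %llu bytes\n",
           (unsigned long long)(65536ULL * 16 * SPT * SECTOR));

    assisted_lba(2111864832ULL);  /* e.g. a ~2.1 GB drive */
    return 0;
}

For a ~2.1 GB drive, for example, it settles on 64 heads and reports a translated geometry of 1023/64/63.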

Reply 3 of 9, by Kreshna Aryaguna Nurzaman

User metadata
Rank l33t
Robin4 wrote on 2020-08-31, 17:24:

I'd like to know how big a hard disk my systems can handle before the CPU gets overloaded/bottlenecked.

I have these systems and am looking for how big a drive I can go with so the systems can handle it easily.

I know these were the original sizes these systems used back in the day:

XT machine: 5 MB - 30 MB
286-12: 40 MB - 60 MB
386-40: 40 MB - 150 MB
486-66: 120 MB - 540 MB
Pentium 166: 800 MB - 3.2 GB

A Pentium 100 is naturally slower than a Pentium 166. Yet mine had a 4 GB hard drive, and I never experienced a hard drive-related bottleneck or slowdown.

Never thought this thread would be that long, but now, for something different.....
Kreshna Aryaguna Nurzaman.

Reply 4 of 9, by gdjacobs

User metadata
Rank l33t++
Robin4 wrote on 2020-08-31, 17:24:

I'd like to know how big a hard disk my systems can handle before the CPU gets overloaded/bottlenecked.

HDD PIO modes use the same amount of CPU time for a given transfer rate, but drive size doesn't really affect that metric. You will run into issues with controller, BIOS, and operating system compatibility, but that has essentially nothing to do with the CPU. For example, an XT machine can run just fine with an XTIDE adapter and a massive hard disk. This works because the adapter handles modern LBA addressing and includes a ROM that extends the built-in BIOS for modern hard drive formats.

All hail the Great Capacitor Brand Finder

Reply 6 of 9, by Jorpho

User metadata
Rank l33t++
Thallanor wrote on 2020-09-10, 23:25:

I can tell you from personal experience that an IBM 5150 w/ a 1 GB CF is _not_ acceptable when it does the free space calculation. 😉

How would you even get an IBM 5150 to recognize that..? Wouldn't you need a version of DOS that exceeds the system requirements in terms of RAM?

Reply 7 of 9, by Thallanor

User metadata
Rank Member
Jorpho wrote on 2020-09-10, 23:31:

How would you even get an IBM 5150 to recognize that..? Wouldn't you need a version of DOS that exceeds the system requirements in terms of RAM?

I use a simple 8-bit ISA XT-IDE. 1 GB was the smallest CF card I had at the time. I am using MS-DOS 6.22. My 5150 itself has an AST SixPakPlus to bring total RAM up to 640K. 😀

Reply 9 of 9, by Jinxter

User metadata
Rank Member

FREESP - https://youtu.be/k71q5CYrlt0
-----------------

Calculates the free space on a FAT16 disk very fast.
On XT-class systems with large FAT16 hard disks the initial DIR
command can take upwards of 15-30 seconds to complete. This is due to
the slow method DOS uses for calculating free space on the hard disk.
However, once DOS has calculated this value it stores a cached copy,
which it keeps updated unless an application performs raw disk
accesses (as CHKDSK does). This means every subsequent DIR displays
free space with no delay. This program quickly calculates the free
space on the disk by reading through the FAT and counting the used
clusters, then calculating the total number of clusters on the disk
and subtracting to get the number of free clusters. Once calculated,
this value is stored in the free-clusters field of the DOS Disk
Parameter Block, which DIR uses when reporting free space. With a
2 GB FAT16 partition on a CF card in an XTIDE on my Turbo XT (running
at 4.77 MHz) this program takes <2 s to complete, while the initial
DIR call takes 25 s.

This program should run on any PC or compatible with DOS 4.0 or
later; it has only been tested on MS-DOS 6.22 and 5.0. It should be
unnecessary on DOS 3.2 and earlier, as DIR there does not perform a
free space calculation; those versions also use FAT12 (or FAT16 only
for disks under 32 MB), for which DOS keeps more metadata in memory
and avoids the slowdown. It requires 66 kB of free memory to run, but
uses none once complete; it is not a TSR.
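
For anyone curious what "reading through the FAT and counting the used clusters" looks like in practice, here is a minimal C sketch of the idea using a toy in-memory FAT (this is not FREESP's actual source; a real DOS tool would read the FAT sectors off the drive and poke the result into the Disk Parameter Block as described above):

/*
 * Illustration only: in FAT16 a free cluster's entry is 0x0000, so
 * counting free clusters is a single linear scan of the FAT.
 */
#include <stdio.h>
#include <stdint.h>

static uint32_t count_free_clusters(const uint16_t *fat, uint32_t data_clusters)
{
    uint32_t free_count = 0;
    /* FAT entries 0 and 1 are reserved; data clusters occupy
     * entries 2 .. data_clusters + 1. */
    for (uint32_t n = 2; n < data_clusters + 2; n++)
        if (fat[n] == 0x0000)
            free_count++;
    return free_count;
}

int main(void)
{
    /* Toy FAT with 8 data clusters, 3 of them free. A real tool would
     * take the cluster count and cluster size from the BIOS Parameter
     * Block instead of hard-coding them. */
    uint16_t fat[10] = {
        0xFFF8, 0xFFFF,          /* reserved entries 0 and 1 */
        0x0003, 0xFFFF, 0x0000,  /* clusters 2-4 (4 is free) */
        0x0000, 0x0007, 0xFFFF,  /* clusters 5-7 (5 is free) */
        0x0000, 0xFFFF           /* clusters 8-9 (8 is free) */
    };
    uint32_t free_clusters = count_free_clusters(fat, 8);
    uint32_t bytes_per_cluster = 32768;  /* 32 KiB clusters, as on a 2 GB FAT16 partition */
    printf("%u free clusters = %u bytes free\n",
           (unsigned)free_clusters,
           (unsigned)(free_clusters * bytes_per_cluster));
    return 0;
}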

Check out my YouTube channel: Retro Erik https://www.youtube.com/c/RetroErik
My collection: https://retro.hageseter.com