Reply 20 of 45, by Anonymous Coward
Yes, but which VLB SCSI controller is faster?
"Will the highways on the internets become more few?" -Gee Dubya
V'Ger XT|Upgraded AT|Ultimate 386|Super VL/EISA 486|SMP VL/EISA Pentium
I've heard good things about some ACard chips, but Windows drivers are almost nonexistent for a lot of them.
I personally found the 284x to be solid, but one notable disadvantage is that the SCSI chip sits a very long physical distance from the VL bus. That said, I generally got about 9MB/sec sustained uncached reads, which was pretty much maxing out the SCSI interface the last time I played with it.
Most of the issue with caching controllers nowadays is that their cache algorithms are extremely outdated, and they use very slow RAM compared to a drive's onboard cache, so modern EIDE or SCSI drives will start to perform better on non-caching controllers for random reads.
For a CF card I can't say, as I try to avoid them in desktop systems, but I'd suspect it wouldn't impact performance much either way. Your advantage with a caching controller then becomes not needing to run SMARTDRV or other software caching, freeing up your PC's main memory for other things.
There was a thread some years back that I remember reading where someone benchmarked the controllers, and the regular Promise 20230 was outperforming the caching one in random reads on CF: VLB IDE cache controllers, benchmark
At a glance, random access was better on the non-caching controllers, with buffered sustained reads giving cache an advantage in a synthetic benchmark, but this won't translate to real-world performance.
Someone probably needs to sit down and do a real, thorough, scientific benchmark of all these controllers using a fixed PC platform, OS, benchmark tool, set of loaded drivers, CF card and EIDE/SCSI drive, or have folks with the various cards run the benchmark themselves against those specific criteria (something like the sketch below).
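For the "fixed benchmark tool" part, something like the following could work as a common baseline: a minimal DOS benchmark sketch in Borland/Turbo C style, assuming biosdisk() from <bios.h> (raw INT 13h CHS reads). The drive geometry and test sizes below are placeholder assumptions, not from any real card's docs. It only reads, so it's safe, but it should be run without SMARTDRV loaded or the software cache will skew the numbers.

/* Hypothetical benchmark sketch: times sequential vs. pseudo-random
 * raw sector reads through the BIOS. Adjust DRIVE/SPT/HEADS for the
 * actual test drive. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <bios.h>

#define DRIVE   0x80    /* first fixed disk */
#define SPT     63      /* assumed sectors per track */
#define HEADS   16      /* assumed head count */
#define PASSES  2048    /* number of 512-byte reads per test */
#define SPAN    100000L /* LBA span for random reads (~49MB) */

static char buf[512];

static double bench(int randomize)
{
    long i, lba;
    clock_t t0;

    t0 = clock();
    for (i = 0; i < PASSES; i++) {
        lba = randomize ? ((long)rand() * 8L) % SPAN : i;
        /* cmd 2 = read; CHS computed from the LBA, sectors are 1-based */
        if (biosdisk(2, DRIVE, (int)((lba / SPT) % HEADS),
                     (int)(lba / SPT / HEADS),
                     (int)(lba % SPT) + 1, 1, buf) != 0)
            fprintf(stderr, "read error at LBA %ld\n", lba);
    }
    return (clock() - t0) / (double)CLK_TCK;
}

int main(void)
{
    printf("sequential: %.2f s for %d reads\n", bench(0), PASSES);
    printf("random:     %.2f s for %d reads\n", bench(1), PASSES);
    return 0;
}

Running the exact same binary on every card/drive combination would at least make the random vs. sequential numbers comparable across posts.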
Well, I commend pshipkov’s disk controller benchmarking effort in his mega-thread: Re: 3 (+3 more) retro battle stations
There's a bunch of soft facts and very little good data in this area in general. It didn't really matter for DOS or most 90s-era use cases.
I was under the impression that since all the caching controllers have their own CPU, they would be less stressful on the 486 CPU when doing transfers. Plus, if you have limited RAM on the 486 motherboard, you could save a few MB by not having to use smartcache with main RAM.
The SCSI cards seem to be faster from what I recall compared to the IDE ones (probably because I was using newer drives).
Collector of old computers, hardware, and software
Unknown_K wrote on 2021-11-07, 18:48:I was under the impression that since all the caching controllers have their own CPU, they would be less stressful on the 486 CPU when doing transfers. Plus, if you have limited RAM on the 486 motherboard, you could save a few MB by not having to use smartcache with main RAM.
The SCSI cards seem to be faster from what I recall compared to the IDE ones (probably because I was using newer drives).
The controller chip is used whether the cache is enabled or not; in the case of controllers using a dedicated 80186 or whatever, that CPU was just handling cache-algorithm crunching in lieu of a custom chip, which would have been much more expensive at the time.
All PIO transfers are harder on the host CPU, even on PCI controllers, which is why SCSI was always so much better at the time even when its bus was slower. Any bus-mastering Fast-10 SCSI controller and drives mopped the floor with all EIDE controllers for general Windows performance for a while in the later 90s, until ATA-33 hit. Even then it took until about 2003 and SATA before conventional desktop drives were competitive with SCSI drives years older than them, by which time SCSI was being phased out and merged with SATA into SAS.
The only reason EIDE is competitive with SCSI nowadays is the cheapness of UDMA-capable CF cards, whose raw seek-time advantage renders the SCSI benefits moot.
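To illustrate the PIO point above: with programmed I/O the host CPU itself moves every word of every sector through an I/O port, which is exactly the work a bus-mastering controller takes off its hands. A rough sketch of the inner loop, using Borland-style port intrinsics and the standard primary ATA task file addresses (command setup and error handling omitted, so this is illustrative, not a driver):

/* Why PIO eats the host CPU: after the drive accepts a READ SECTORS
 * command, the CPU polls status and then performs 256 word-wide port
 * reads per sector. A bus-mastering controller would instead write
 * the data into memory itself while the CPU does other work. */
#include <dos.h>    /* inport()/inportb() in Borland-style compilers */

#define ATA_DATA    0x1F0   /* 16-bit data register */
#define ATA_STATUS  0x1F7   /* bit 7 = BSY, bit 3 = DRQ */

void pio_read_sector(unsigned short *buf)
{
    int i;
    while (inportb(ATA_STATUS) & 0x80)      /* wait while busy */
        ;
    while (!(inportb(ATA_STATUS) & 0x08))   /* wait for data request */
        ;
    for (i = 0; i < 256; i++)
        buf[i] = inport(ATA_DATA);          /* CPU-driven transfer */
}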
On something as slow as a 486, the main bottlenecks are the bus and the CPU, so even with SCSI you can only go so fast.
Once SATA 2 or SAS 300 came out and there were faster drives like the 10k RPM WD Raptors, SATA became competitive with U320 SCSI. Part of that was that most SCSI setups at the time were stuck on the PCI-X 133 bus. It also wasn't uncommon at the time for a SATA 300 interface, and more common for SATA 1, to be on a single PCI-E 1x lane, bottlenecking that SATA to roughly 1.5Gb/s, but once host cards were inserted into 4x lanes it was all over for SCSI.
Then SSDs came out, and that was the death of mechanical drives in general, except for storing large amounts of data.
CF cards are also slower compared to SSDs: first, they have no DRAM cache, and second, their read and write speeds are significantly slower, which is a bottleneck. A traditional UDMA ATA hard drive can beat a CF card in certain scenarios, depending on how good the drive is and what it's connected to.
You need to be using something faster than a 486 to get a fair comparison of SCSI vs CF. Once you eliminate the bus and CPU bottlenecks and the CF becomes the main bottleneck, CF doesn't hold a candle to the speed of SCSI. CF reads might be OK-ish, but the writes will be garbage and random R/W will be terribad.
If all I cared about was performance, I'd take a proper SCSI controller like a U160 on the PCI bus over a CF card on IDE on the PCI bus any day. But these days I'd much prefer a SATA controller on a PCI bus with a SATA HDD or SSD.
Warlord wrote on 2021-11-07, 20:30:You need to be using something faster than a 486 to get a fair comparison of SCSI vs CF. Once you eliminate the bus and CPU bottlenecks and the CF becomes the main bottleneck, CF doesn't hold a candle to the speed of SCSI. CF reads might be OK-ish, but the writes will be garbage and random R/W will be terribad.
That's a fair point... The tests should be conducted with something like a Cyrix 5x86 variant... I've got an Asus VL/I-486SV2GX4 system in the works with such a CPU. I might do some benchmarking.
sounds like there's probably a market for a simple bus mastering VL SATA controller if someone were to develop one somehow
mockingbird wrote on 2021-11-07, 21:17:Warlord wrote on 2021-11-07, 20:30:You need to be using something faster than a 486 to get a fair comparison of SCSI vs CF. Once you eliminate the bus and CPU bottlenecks and the CF becomes the main bottleneck, CF doesn't hold a candle to the speed of SCSI. CF reads might be OK-ish, but the writes will be garbage and random R/W will be terribad.
That's a fair point... The tests should be conducted with something like a Cyrix 5x86 variant... I've got an Asus VL/I-486SV2GX4 system in the works with such a CPU. I might do some benchmarking.
I'd say at least a Pentium 4... a Cyrix 5x86 is still a 486. The point is to remove all bottlenecks so you measure everything the same way. Like putting a PCI Voodoo 2 card in a Core 2 Duo, where it's not the CPU holding the card back; then you can see where the Voodoo 2 caps out on its own.
Well, in this case it's specific to VLB cards, which are (more or less) only in 486s, so testing the cards in a Pentium 4 would be a bit... impossible.
The results of CF versus SCSI in a P4 are probably obvious anyway.
What exactly would show any perceivable performance increase from faster drive throughput on a 486-class system over a regular VLB controller and solid-state storage?
Games and other programs already load extremely fast compared to spinning rust.
Have any tests been done with solid-state storage on a VLB caching controller? Is there a speed increase with the cache enabled vs disabled?
No, but I have seen tests where a newer traditional HDD was faster than a CF card on a 586 on the same controller, because the newer HDD had a large disk buffer cache. Since CF cards don't have any buffer cache, the HDD was faster. It still didn't really matter, because the bus hamstrung the throughput of both.
SSDs don't need a caching controller, and a caching controller, especially an old one, is likely to be slower than just accessing the SSD's own DRAM. Never mind that some vintage controllers probably don't have a write-back cache anyway, only write-through.
CF cards, on the other hand, might be helped by a caching controller.
Warlord summarized it well in one of his previous posts.
With 486 class hardware late EIDE adapters + CF cards hold the high ground.
Also, this configuration handles overclocking better than the SCSI controllers.
But yes, there is much ambiguity around the subject.
Warlord wrote on 2021-11-07, 23:20:I'd say at least a Pentium 4... a Cyrix 5x86 is still a 486. The point is to remove all bottlenecks so you measure everything the same way. Like putting a PCI Voodoo 2 card in a Core 2 Duo, where it's not the CPU holding the card back; then you can see where the Voodoo 2 caps out on its own.
I don't see what P4 has to do with any of this, please elaborate sir.
libby wrote on 2021-11-07, 21:27:sounds like there's probably a market for a simple bus mastering VL SATA controller if someone were to develop one somehow
Something along those lines is in my todo list but nobody knows when the todo list reaches this point 🤣
T-04YBSC, a new YMF71x based sound card & Official VOGONS thread about it
Newly made 4MB 60ns 30pin SIMMs ~
what are you reading? you won't understand it anyway 😜
bakemono wrote on 2021-11-07, 09:42:EIDE2300+ ....
In every case, HDTach showed high CPU usage, so I assume that DMA is not used under Windows.
mockingbird wrote on 2021-11-07, 15:17:Regarding DMA and Windows, do you have the "DMA" box checked in the HDD properties of device manager (or is that only a Windows 98 thing)?
DMA is not used at all because the PDC20630 is not a bus-mastering controller. It doesn't support DMA on the CPU side; it only supports DMA when talking to the disk and translates this invisibly into PIO CPU accesses.
>Note: DMA transfer is between disk and PDC20630 only, for transfer from PDC20630 to computer standard ins/outs are used.
linux/blob/master/drivers/ata/pata_legacy.c
>The 20620 DMA support is weird being DMA to controller and PIO’d to the host and not supported.
I found this interesting post on Usenet (comp.sys.ibm.pc.hardware.storage) from 1995 by John Wehman (the dude worked at Seagate from 1988 for ~30 years, and is currently at Phison):
wrote:
Here's the tricky part about Promise... they clock the I/O cycles off the motherboard bus clock, and blindly say "Fastest transfer shall be: three clock cycles for assertion of IO (R/W) and three clock cycles for de-assertion of same." On a 33MHz bus clock (486-33DX, 486-66DX-2), this computes to 90ns on, 90ns off, or a 180ns cycle time (1/33MHz = 30ns). That's fine, except where 1) the drive can support PIO Mode 3, but at a reduced t0 cycle time (as reported in ID Data), or 2) the drive can achieve faster transfer rates (as your drive below).
However, on a 50MHz bus, this timing becomes (20ns*3) = 60ns on and 60ns off, or 120ns, Mode 4 (sort of). With a faster drive (as below), this would be okay, but put just about any other drive on, and you're talking C.O.R.R.U.P.T.I.O.N.
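To make Wehman's arithmetic concrete, here's a trivial check of those numbers in plain C. The 3-clocks-on/3-clocks-off behavior is taken from the quote; the MB/s figure assumes one 16-bit word moved per I/O cycle:

/* I/O cycle time and peak PIO throughput for a fixed 3+3 clock timing
 * at common VL-Bus clock rates. For reference, the ATA spec minimums
 * are 180ns for PIO mode 3 and 120ns for PIO mode 4. */
#include <stdio.h>

int main(void)
{
    double mhz[] = { 33.0, 40.0, 50.0 };
    int i;
    for (i = 0; i < 3; i++) {
        double ns_clk = 1000.0 / mhz[i];       /* ns per bus clock */
        double cycle  = 6.0 * ns_clk;          /* 3 on + 3 off */
        double mbs    = 2.0 / cycle * 1000.0;  /* 2 bytes per cycle */
        printf("%2.0f MHz bus: %4.1f ns/clk, %5.1f ns cycle, %4.1f MB/s\n",
               mhz[i], ns_clk, cycle, mbs);
    }
    return 0;
}

This reproduces the quote: 33MHz gives ~182ns (PIO 3 territory, ~11 MB/s), while 50MHz gives 120ns, i.e. PIO mode 4 timing forced onto whatever drive is attached, hence the corruption warning.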
That thread also mentions the Acculogic 4VL and Adaptec AVA-2825 both supporting PIO Mode 4:
Standard IDE or EIDE (AVA-2825 only)
■ 3.3 MBytes/sec standard IDE data transfer rate
■ 5.0, 8.3, 11.1, or 16.0 MBytes/sec EIDE data transfer rate, depending on PIO mode (must have an EIDE drive that supports the transfer rate, and the FLEXI-Driver software must be installed)
Tiido wrote on 2021-11-08, 03:18:libby wrote on 2021-11-07, 21:27:sounds like there's probably a market for a simple bus mastering VL SATA controller if someone were to develop one somehow
Something along those lines is in my todo list but nobody knows when the todo list reaches this point 🤣
I'd do it, but building the FPGA to translate between a PCI SATA controller chip and the VL bus is outside my skillset. I doubt any SATA controller chips support direct local-bus operation in that manner.
(I say PCI because PCIe is vast overkill and probably harder, and there's probably someone out there with an inventory of 1000 or whatever otherwise-worthless Silicon Image or Promise PCI SATA chips sitting around for a couple bucks apiece. Also, PCIe would require a clock doubler and a more complicated design than PCI, which could just operate at the local bus speed.)
I have such a card I planned on using in my 486 build, but my Alaris Cougar already has this controller integrated on board, so it's just sitting in storage.
PS: the Appian ADI/2, CL-PD7220, and AIC-25VL01 are all the same chip. They can all use the same drivers.
libby wrote on 2021-11-08, 05:43:Tiido wrote on 2021-11-08, 03:18:libby wrote on 2021-11-07, 21:27:sounds like there's probably a market for a simple bus mastering VL SATA controller if someone were to develop one somehow
Something along those lines is in my todo list but nobody knows when the todo list reaches this point 🤣
I'd do it, but building the FPGA to translate between a PCI SATA controller chip and the VL bus is outside my skillset. I doubt any SATA controller chips support direct local-bus operation in that manner.
There are cheap bidirectional PATA-to-SATA bridge chips on the market. For example http://www.users.on.net/~fzabkar/temp/JM20330 … ec_Rev.-2.3.pdf
You can buy whole $4 adapters on eBay. This means a bus-mastering ISA/VLB PATA UDMA4 controller is the optimal solution. In theory you could achieve 133MB/s transfers on VLB from an SSD 😉
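For what it's worth, a quick back-of-envelope check on that figure (my arithmetic, not from the post: 133MB/s matches the theoretical 32-bit/33MHz VLB peak, while UDMA mode 4 itself tops out lower, so my assumption is the PATA link would be the practical ceiling):

/* Theoretical peaks only; real transfers see protocol overhead. */
#include <stdio.h>

int main(void)
{
    double vlb   = 4.0 * 33.0;  /* 32-bit bus, one transfer per 33MHz clock */
    double udma4 = 66.7;        /* ATA/ATAPI-5 Ultra DMA mode 4 limit */
    printf("VLB peak:   %.0f MB/s\n", vlb);
    printf("UDMA4 peak: %.1f MB/s\n", udma4);
    return 0;
}

Either way, both numbers are far beyond what any VLB-era storage stack actually delivered, so an SSD behind a JM20330-style bridge would still be a huge jump.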
An $8 MAX10 FPGA has an application note (aka a copy & paste block), but it's super basic, PIO mode 0 only: https://www.intel.com/content/dam/www/program … re/an/an495.pdf
The biggest problem with any hardware project is that you can't buy FPGAs anymore as the world burns around us; you can only buy $100 Chinese dev boards with them... The industry is in such a stupid state that the best way to get your hands on reasonably priced FPGAs is reusing $30 Chinese LED panel controllers: https://github.com/q3k/chubby75