soggi wrote on 2024-12-18, 06:29:
OK, did some research...I knew those controllers are "cheap" ones for consumer desktop boards, but didn't know it's that disastrous. So it's even quite better to have a real IDE RAID controller - a PCI card f.e. with i960 and its own RAM. And much better would be a SCSI controller on PCI-X, but this has other pitfalls sneaking around the corner...
It's actually not that disastrous, depending on your use case.
The use case for these chips was to get some extra I/O speed with RAID 0 striping. RAID 1 mirroring was also supported, as was mirroring+striping. None of this is CPU-intensive, unlike the parity calculations needed for RAID 3, 5 or 6. So the performance hit on the CPU isn't huge, and it's even smaller when you consider that the CPU is usually waiting for I/O at precisely the times it would be called on to facilitate that I/O. Conversely, the system CPU runs much faster than an i960 or similar on a card, so I/O performance can be higher, and with less latency - as long as the CPU impact isn't the bottleneck, it's actually faster.
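To make the cost difference concrete, here's a minimal sketch (my own illustration, not anything these chips actually ran): RAID 0 only needs an index calculation to route a block to a disk, while RAID 5 has to XOR every byte of a stripe to produce parity - which is also what makes rebuilds possible.

```python
# Sketch of why RAID 0/1 is cheap for the CPU but parity RAID is not.
# Function names are illustrative, not from any real driver.

def raid0_target(block: int, n_disks: int) -> tuple[int, int]:
    """RAID 0 striping: map a logical block to (disk, stripe).
    Just arithmetic on the block number - the data itself is untouched."""
    return block % n_disks, block // n_disks

def raid5_parity(stripe_blocks: list) -> bytes:
    """RAID 5 parity: XOR every byte of every block in the stripe.
    CPU work proportional to the amount of data written."""
    parity = bytearray(len(stripe_blocks[0]))
    for block in stripe_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Writing a stripe of three data blocks also means computing parity:
data = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
p = raid5_parity(data)

# If one disk dies, XORing the survivors with the parity recovers it:
recovered = raid5_parity([data[0], data[2], p])
assert recovered == data[1]
```

The same XOR trick that recovers a lost block is what the host CPU (or a dedicated I/O processor on a real RAID card) has to grind through on every write, which is why fake-RAID chips stuck to levels 0 and 1.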
Of course all of that applies to full software RAID too - but that requires an OS that supports it, which Win9x didn't, and you couldn't boot from the RAID partition. Using these 'fake' RAID chips pushed management of the array into firmware, letting any OS use it and boot from it, and avoiding the need for complex partitioning (which was beyond the abilities of the average desktop user - and, for that matter, the installers of contemporary operating systems).
For server tasks none of this applies. There you want more complex RAID modes, and overall load would be more I/O-intensive (and less latency-sensitive), so dedicated hardware RAID cards with their own I/O processors were a far better idea. And if you still wanted/needed software RAID, server OSs supported that long before desktop OSs did.
TLDR: in their day and for their intended purpose, these solutions made sense.