VOGONS


First post, by feipoa

User metadata
Rank l33t++

I've noticed that some of these U320 SCSI cards, e.g. the Adaptec ASC-39320A, are dual channel, that is, they have two 68-pin U320 connectors. The manual mentions that this is done so that you can keep one channel for U320 and use the other channel for legacy devices. This way, your U320 channel isn't slowed down by sharing a channel with legacy devices.

This card also supports RAID 0, 1, and 10. I was thinking of setting up RAID 0 (stripe), which is supposed to increase performance compared to mirror. The manual doesn't specifically say anything about setting up RAID with one HDD on channel A and the other on channel B. My question is: is there any performance or reliability benefit to having both HDDs on one channel, or to splitting the HDDs between the two channels?

Plan your life wisely, you'll be dead before you know it.

Reply 1 of 15, by FuzzyLogic

User metadata
Rank Member

You should split the drives between the channels. Even though your drives will probably never saturate a 320 MB/s channel, it actually gives you a little performance boost, and it really helps if you are doing a RAID setup. Why share the lane when you can have one all to yourself?
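The bandwidth point above can be sanity-checked with a bit of arithmetic. A quick sketch; the ~80 MB/s sustained rate for a 10K U320-era drive is an assumed ballpark, not a measured figure:

```python
# Rough bus-utilisation check: how much of one 320 MB/s U320 channel do
# N drives consume during sustained sequential reads?
# DRIVE_MBS is an assumed ballpark for a 10K U320-era drive.

CHANNEL_MBS = 320.0
DRIVE_MBS = 80.0

def channel_utilisation(drives_on_channel: int) -> float:
    """Fraction of the shared bus consumed by sustained sequential reads."""
    return drives_on_channel * DRIVE_MBS / CHANNEL_MBS

print(channel_utilisation(1))  # 0.25
print(channel_utilisation(2))  # 0.5 -- two drives still leave half the bus idle
```

So even with both drives hammering one channel, the bus itself isn't the bottleneck; the gain from splitting comes from avoiding per-command arbitration overhead, not raw bandwidth.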

As for reliability, probably no difference at all - unless you hot-swap. Hot-swapping might cause the HBA to reset the bus on that channel and might even rescan it. So again, one drive per channel if you can afford it.

Reply 2 of 15, by elod

User metadata
Rank Member

SCSI RAID was all drives on a single cable. With 2 drives from that era you cannot saturate the bus. With better cards you have a nice large cache on a DIMM right on the controller, so it's even less of an issue.
The specific card you are talking about offers fake (CPU-assisted) RAID, which kind of defeats the whole purpose of SCSI - it's supposed to offload the burden from the CPU.

Reply 3 of 15, by feipoa

User metadata
Rank l33t++

Oh, is it better to use this SCSI card in non-RAID mode then? Or is it still sucking up CPU even in non-RAID mode? It can't be worse than IDE, can it?

SCSI RAID was all drives on a single cable

I don't quite follow this sentence. Are you saying that I cannot split the RAID up between the channels?

Is there a better Adaptec U320 PCI-X card you can recommend with non-fake RAID?

Concerning DIMM cache on the PCI-X card - I thought hard drives contained cache on them? I remember buying a SATA drive in the late 2000s; I think it had 32 MB of cache. So is there still a benefit to having on-card cache?

Looking at the datasheet for the Seagate U320 drives, the quantity of cache is quite a bit smaller than I'd have thought.

In general, 6,991 kbytes of the physical buffer space in the drive can be used as storage space for cache operations.

Plan your life wisely, you'll be dead before you know it.

Reply 4 of 15, by elod

User metadata
Rank Member
feipoa wrote:

Oh, is it better to use this SCSI card in non-RAID mode then? Or is it still sucking up CPU even in non-RAID mode? It can't be worse than IDE, can it?

It will be better than IDE for sure; SCSI drives have very fast seeks (and the exquisite noise that comes with them). It's a bit more problematic to find good drives (large capacity, 36-72 GB and up, with no motor whine). They are way rarer than good IDE drives, and I expect most to be equipped with the hotplug (SCA?) connector and not 68-pin.

In non-RAID mode it will work as a standard Adaptec.

SCSI RAID was all drives on a single cable
I don't quite follow this sentence. Are you saying that I cannot split the RAID up between the channels?

I've never seen one. Hardware RAID controllers have large caches in DIMM format, the standard controllers do not.

Reply 5 of 15, by feipoa

User metadata
Rank l33t++

Thanks for your reply, but I am still confused. Is it better (overall faster) to use this Adaptec SCSI host controller in RAID or non-RAID mode?

To clarify: you've never seen a RAID built across two SCSI channels and are, therefore, uncertain whether it will be of any speed benefit? Could you please confirm? Thanks.

Plan your life wisely, you'll be dead before you know it.

Reply 6 of 15, by PC Hoarder Patrol

User metadata
Rank l33t

In a desktop environment I'm not sure you'd see useful performance gains, if any, by splitting (only) two drives across two channels, but what you are doing is adding another potential point of failure to the least redundant/resilient RAID level.

Reply 7 of 15, by dionb

User metadata
Rank l33t++
feipoa wrote:

Thanks for your reply, but I am still confused. Is it better (overall faster) to use this Adaptec SCSI host controller in RAID or non-RAID mode?

To clarify, you've never seen a RAID built across two SCSI channels and are, therefore, uncertain if it will be of any speed benefit? Could you please confirm. Thanks.

Define "speed" and the answers start to line up.

If you're measuring throughput, RAID 0 can nearly double it. However, for a snappy-feeling system it's the read and write latencies that matter, and they don't improve with RAID. So from that perspective you're just reducing reliability.
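The throughput-vs-latency distinction can be sketched numerically. The seek time and transfer rate below are assumed ballpark figures for a single U320-era drive, and the model idealises RAID 0 as splitting the payload perfectly across spindles:

```python
# Back-of-envelope model: RAID 0 helps sequential throughput, not latency.
# SEEK_MS and THROUGHPUT_MBS are assumed ballpark figures, not measurements.

SEEK_MS = 4.7          # assumed average seek time of one U320 drive, ms
THROUGHPUT_MBS = 80.0  # assumed sustained transfer rate of one drive, MB/s

def transfer_ms(size_mb: float, drives: int) -> float:
    """Time for one read: a seek, then the payload split evenly across
    `drives` spindles (idealised RAID 0)."""
    return SEEK_MS + (size_mb / (THROUGHPUT_MBS * drives)) * 1000.0

# Large sequential read: striping nearly halves the time.
print(transfer_ms(100, 1))    # one drive
print(transfer_ms(100, 2))    # two-drive stripe

# Tiny random read (one 4 KB block): the seek dominates, RAID 0 barely helps.
print(transfer_ms(0.004, 1))
print(transfer_ms(0.004, 2))
```

The big copy is close to twice as fast on the stripe, while the small random read improves by a few hundredths of a millisecond - which is why the system doesn't feel any snappier.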

Bottom line is what you're trying to achieve. Generally I'd side with the "don't bother" crowd. RAID is a bit like SLI - it only really makes sense if the fastest single drive you could use does not offer the throughput you need. And it only really works with completely identical drives. Otherwise it's just a PITA with little or no benefits.

Reply 8 of 15, by feipoa

User metadata
Rank l33t++

The question is largely theoretical. I'm in the process of setting up two Ultra320, 146 GB hard drives using a PCI-X LSI MegaRAID 320-2 with 256 MB cache. It has two SCSI channels. Is there any theoretical performance benefit in connecting each hard drive to a separate channel on the host card, as opposed to connecting both hard drives to the same channel, if a) using a RAID 0 (stripe) setup or b) using a RAID 1 (mirror) setup? Second question: what stress tests can I use to compare these conditions?

Because people like photos,

Attachments:
LSI_MegaRAID_320-2_Card.jpg (365.48 KiB)
LSI_MegaRAID_320-2_Card_256MB_cache.jpg (367.08 KiB)
Dual_channel_RAID.jpg (1.7 MiB)
LSI_MegaRAID_320-2_BIOS.jpg (328.85 KiB)

Plan your life wisely, you'll be dead before you know it.

Reply 9 of 15, by chinny22

User metadata
Rank l33t++

Mirror is slower, as it has to write everything twice, once to each HDD. In theory this is what the 256 MB cache helps with: the OS writes to the cache, and the cache hands it down to the HDDs.
It was always a case of sacrificing speed for redundancy.

I'd be interested in seeing how much faster a stripe array is vs. just having 2 separate disks; I'm betting not much in real terms for a general-use PC.
Copy a large file, say a DVD ISO, and compare the difference (that would simulate, say, a workstation grinding away at a Photoshop image or something - basically one large continuous file, traditionally the stripe array's arena).
Next copy the same file but also a bunch of smaller files; this simulates a fileserver, database server, etc. - lots of read/write requests all over the place. This is where a mirror is handy, as files can be read off each disk.
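The two tests above can be scripted rather than timed by hand. A minimal sketch using Python's standard library; the paths in the commented example are placeholders you'd point at the array under test:

```python
import shutil
import time

def time_copy(src: str, dst: str) -> float:
    """Copy one large file and return elapsed wall-clock seconds
    (the 'DVD ISO' sequential test)."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

def time_copy_tree(src_dir: str, dst_dir: str) -> float:
    """Copy a directory full of small files and return elapsed seconds
    (the 'fileserver' scattered-I/O test)."""
    start = time.perf_counter()
    shutil.copytree(src_dir, dst_dir)
    return time.perf_counter() - start

# Example usage (paths are placeholders):
# print(time_copy("D:/test/dvd.iso", "E:/dvd.iso"))
# print(time_copy_tree("D:/test/small_files", "E:/small_files"))
```

Run each test a few times and discard the first pass, since the OS file cache will skew repeat runs.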

I don't know about the dual channel thing. I just put the drives into the front of the server 😉

Also worth mentioning: RAID arrays are cool. I've got a few machines set up with a stripe array on a crappy onboard SATA controller just to see the message at POST. The performance hit isn't much in real-world terms, and OK, if one disk fails I've lost everything on both, but I'd be in the same boat if the PC only had one drive anyway. It's not like I keep anything on them except games.

Reply 10 of 15, by epicbrad

User metadata
Rank Newbie

To be honest, I'm not sure there would be a major benefit; it depends on your usage. I would, however, like to see some benchmarks of how it performs - that would really answer the question of whether there is any significant improvement making this a worthy setup. I am in the process of building a retro rig myself (planning stage) and have a few boards I can try out; SCSI could be an option, as I do have quite a few spare SCSI/SAS drives (SAS card needed for the latter).

This is somewhat like the card I used to have:
ftp://ftp.packardbell.com/pub/itemnr_old/NECD … si_320-2_hg.pdf, which describes in a fair amount of detail the specifications for cable lengths, transfers, etc. Actual performance would come down to quite a few factors.

Just remember one thing if you've ever had to architect a system: there is no such thing as hardware RAID, no matter if you spend $1 or $1,000,000, because at some point there is software involved to run it, be it on a dedicated card or not. Ideally you would want some very fast U320 15K hard drives and a file system optimized for what you are using it for. You could easily grab a cheap used SSD these days and it would blow away SCSI in most retro applications with a lot less fuss.

But having said that, it's still a good experience 😀

Reply 11 of 15, by feipoa

User metadata
Rank l33t++

Isn't the main benefit of 15K vs. 10K for U320 SCSI the reduced seek time? I went with 10K U320 SCSIs in this setup because of a) less heat, b) less noise potential, c) lower cost. I also recall reading somewhere that later-generation 10K drives will outperform earlier-generation 15K drives. What the reality is, I don't know. I have another dual Tualatin with a 15K drive, but not set up in RAID - just a single drive. I'd eventually like to benchmark it against my current build.
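For what it's worth, part of the spindle-speed difference is fixed by physics: on average the head waits half a revolution for the sector to come around, so rotational latency follows directly from RPM (seek time and areal density are the parts where a newer 10K drive can claw it back):

```python
# Average rotational latency: half a revolution, converted to milliseconds.
# This is the component of access time that RPM buys you directly.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in ms = (one revolution / 2)."""
    return 60_000.0 / rpm / 2.0

print(avg_rotational_latency_ms(10_000))  # 3.0 ms
print(avg_rotational_latency_ms(15_000))  # 2.0 ms
```

So 15K saves about a millisecond per random access; a later 10K drive with faster seeks and denser platters can make up that millisecond elsewhere, which fits what you recall reading.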

chinny22: Are you saying that the 256 MB cache on my RAID controller doesn't help much in a stripe setup? I don't want to do mirror unless I'm doing RAID 10, but then I'd have to buy 2 more drives. There are various RAID configuration options in which the large cache may come into play, but I'm not familiar enough with RAID performance to determine what manual settings to use. There is an option for direct I/O or cache I/O. I read that for 99% of cases you want direct I/O, so this is how I have it set up now. I think someone said that the OS cache would be larger and faster anyway. I don't know. 256 MB seems huge, plus the HDD has 8 MB of cache. I'm basically using the LSI wizard's recommendations for the setup.

RAID = cool, haha. Well, it is something I've always been curious about, and I decided it was a good time to go all out if possible. If I have a non-everyday motherboard with built-in ATA RAID, I try to use it in stripe mode for the sheer purpose of using it. For a large part of this hobby, practicality is nearly irrelevant.

Plan your life wisely, you'll be dead before you know it.

Reply 12 of 15, by oohms

User metadata
Rank Member

Cache and processing power on a RAID card are used most for parity RAID, e.g. RAID 5 or 6. Mirror and stripe aren't very processor intensive and don't require much cache.
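The parity point is easy to illustrate: RAID 5 has to XOR every data block to produce parity (and to rebuild a lost block from it), whereas stripe and mirror just route whole blocks with nothing to compute. A toy sketch with made-up two-byte blocks:

```python
# RAID 5 parity in miniature: parity = XOR of all data blocks.
# Losing any one block, you recover it by XORing parity with the survivors.

def xor_parity(blocks: list) -> bytes:
    """Compute the XOR parity block for equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]  # three made-up data blocks
parity = xor_parity(data)

# Simulate losing data[0]: rebuild it from parity plus the surviving blocks.
rebuilt = xor_parity([parity, data[1], data[2]])
print(rebuilt == data[0])  # True
```

Every byte written to a parity array passes through that XOR, which is why cheap cards lean on the host CPU for RAID 5/6 while RAID 0/1 costs them almost nothing.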

DOS/w3.11/w98 | K6-III+ 400ATZ @ 550 | FIC PA2013 | 128mb SDram | Voodoo 3 3000 | Avancelogic ALS100 | Roland SC-55ST
DOS/w98/XP | Core 2 Duo E4600 | Asus P5PE-VM | 512mb DDR400 | Ti4800SE | ForteMedia FM801

Reply 13 of 15, by epicbrad

User metadata
Rank Newbie

Yeah, it's true, some 10K drives can outperform their 15K counterparts. It depends on the drives in question and you'd need to compare benchmarks. Now, to be transparent - I don't know about any SCSI performance reviews over the last xx years, as it's been a very long time; I left the IT world and came back to it, and it seems everything is Fibre Channel or SAS/SATA these days, with only older servers retaining SCSI.

So what you are saying makes sense, feipoa 😀
I would like to see some benchmarks - as, TBH, I just like benchmarks. 🤣
I have an 80 TB RAID 0 on my home workstation. They are just 7200 rpm disks, but even a single one of those drives would blow any U320 SCSI drive out of the water simply due to the later technology. Hard drives have come a long way.

One thing I definitely remember is staggered SCSI spin-up, due to the power draw of those old systems. Followed by the sheer LOUDNESS of it all. The noise coming out of a case with even just 4x 15K SCSI drives was like an aeroplane - just madness!

Reply 14 of 15, by feipoa

User metadata
Rank l33t++
oohms wrote:

Cache and processing power on a RAID card are used most for parity RAID, e.g. RAID 5 or 6. Mirror and stripe aren't very processor intensive and don't require much cache.

Could there be any performance hit by using 256 MB of RAID cache vs. 64 MB? I can even install as little as 32 MB. The two drives I am using, Cheetah 10K.7, ST3146707LW, contain about 7 MB of cache, while my 15K.5 drive, ST373455LW, contains 13 MB.

Concerning loudness, these drives are quiet. It can be hit or miss when buying used working pulls, however I got lucky. I have some U160 SCSI drives from old rack mount servers and those make a whistling screech just with idle spin. I guess this is some kind of bearing going bad, or lube which has dried out? Any means for the end-user to re-grease these?

Benchmarks. Not there yet. I'm going to get started on the OS. I'm hoping I can get most of the setup going from a HDD clone and that Windows will auto detect the differences in motherboard hardware (ServerSet III LE vs. ServerSet III HE SL). It is a tri-boot configuration with W2K, W2k3, and XP Pro. So far, W2K didn't cooperate.

Once I get it setup and benchmarked, I'm wondering how I can easily re-bench with the two HDD's on a single channel, because won't recreating the stripe array mess up the data on the drives?

Plan your life wisely, you'll be dead before you know it.

Reply 15 of 15, by oohms

User metadata
Rank Member
feipoa wrote:

Could there be any performance hit by using 256 MB of RAID cache vs. 64 MB? I can even install as little as 32 MB. The two drives I am using, Cheetah 10K.7, ST3146707LW, contain about 7 MB of cache, while my 15K.5 drive, ST373455LW, contains 13 MB.

I don't think you would notice a difference in normal use

feipoa wrote:

Concerning loudness, these drives are quiet. It can be hit or miss when buying used working pulls, however I got lucky. I have some U160 SCSI drives from old rack mount servers and those make a whistling screech just with idle spin. I guess this is some kind of bearing going bad, or lube which has dried out? Any means for the end-user to re-grease these?

They are all sealed units. It would be great if there was a way, as nearly all the old IDE drives I have spin very loudly.

feipoa wrote:

Once I get it setup and benchmarked, I'm wondering how I can easily re-bench with the two HDD's on a single channel, because won't recreating the stripe array mess up the data on the drives?

Imaging, maybe? I have no idea if you need drivers for that SCSI card in RAID mode, but you could always experiment before benchmarking.

DOS/w3.11/w98 | K6-III+ 400ATZ @ 550 | FIC PA2013 | 128mb SDram | Voodoo 3 3000 | Avancelogic ALS100 | Roland SC-55ST
DOS/w98/XP | Core 2 Duo E4600 | Asus P5PE-VM | 512mb DDR400 | Ti4800SE | ForteMedia FM801