VOGONS


Reply 20 of 33, by Horun

User metadata
Rank l33t++

Yes, it is possible it does not like double-sided or "double-banked" DIMMs; I have seen that issue before, and most double-sided modules are actually also double-banked. Some older motherboards come to mind that would accept 2 or 4 single-sided modules but only one or two if double-sided.

Hate posting a reply and then have to edit it because it made no sense 😁 First computer was an IBM 3270 workstation with CGA monitor. Stuff: https://archive.org/details/@horun

Reply 22 of 33, by Beluga

User metadata
Rank Newbie

To be more precise: I am pretty sure that this is NOT a 64bit PCI card. This is a card that (as the name says) uses the rare "RAID PORT" slot extension. This would be labeled as such on a motherboard that supports it. This part of the slot would usually be brown.

Reply 23 of 33, by feipoa

User metadata
Rank l33t++

Beluga: It would be advisable to read the thread from the beginning before posting. It has already been revealed in the OP that this card is a RAIDPort III card and is being used in an appropriate motherboard. The question at hand now is the differences in ECC EDO DIMMs.

Horun: Any chance you have any 64 MB ECC EDO unbuffered DIMMs which are single-sided? Finding this may prove challenging.

Plan your life wisely, you'll be dead before you know it.

Reply 24 of 33, by maxtherabbit

User metadata
Rank l33t
feipoa wrote on 2020-07-05, 01:35:

Horun: Any chance you have any 64 MB ECC EDO unbuffered DIMMs which are single-sided? Finding this may prove challenging.

There's a chance I may have one; I'll have to look.

Non-SDRAM DIMMs are pretty sparse in my collection, though, and probably everywhere else too.

Reply 25 of 33, by feipoa

User metadata
Rank l33t++

This is curious. I was running some HDD benchmarks on my Dell Precision Workstation 410, which has that Adaptec ARO-1130U2 64-bit RAID card installed. If you recall, that card works in conjunction with the motherboard's onboard Adaptec 2940U2W controller. I ran the benchmarks using three different WinXP apps, then compared the results to a stand-alone PCI Adaptec 2940U2W card. The RAID has two ST373207LW drives connected in a stripe, while the Adaptec 2940U2W has a single ST373207LW connected to it. I was surprised by the results: the RAID is slower!

ARO-1130U2 RAID -----> Adaptec 2940U2W

ATTO Harddisk Benchmark
I used the 32 MB length and ran block sizes from 0.5 KB to 8192 KB
Max Read = 44 MB/s -----> 69 MB/s
Max Write = 38 MB/s -----> 53 MB/s

Roadkil DiskSpeed
Max Linear Read = 48.6 MB/s -----> 48.4 MB/s
Cached Read = 123 MB/s -----> 126 MB/s
Overall Score = 798.5 -----> 779.5

HDTune
Min Read = 33.6 MB/s -----> 37.1 MB/s
Max Read = 42.7 MB/s -----> 62.1 MB/s
Ave Read = 41.7 MB/s -----> 57.8 MB/s
Burst Read = 55.7 MB/s -----> 56.7 MB/s

Any idea why the stand-alone 2940U2W is faster compared to a 64-bit RAID 2940U2W in stripe? I thought maybe the RAID card's 16 MB of RAM might be slowing it down, but the card won't function without a DIMM installed.
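For a sense of scale, the relative slowdown of the striped array can be computed directly from the figures quoted above; a quick sketch (numbers copied from the table, nothing new assumed):

```python
# Relative slowdown of the ARO-1130U2 stripe vs. the stand-alone 2940U2W,
# using the benchmark figures quoted above (all in MB/s).
results = {
    "ATTO max read":   (44.0, 69.0),
    "ATTO max write":  (38.0, 53.0),
    "HDTune max read": (42.7, 62.1),
    "HDTune avg read": (41.7, 57.8),
}

for name, (raid, single) in results.items():
    slowdown = (single - raid) / single * 100
    print(f"{name}: RAID is {slowdown:.0f}% slower")
```

So the stripe is roughly a quarter to a third slower than the single drive in the sequential tests, which is well outside benchmark noise.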


Reply 26 of 33, by darry

User metadata
Rank l33t++
feipoa wrote on 2021-06-14, 04:49:
This is curious. I was running some HDD benchmarks on my Dell Precision Workstation 410 which has that Adaptec ARO-1130U2 64-bi […]

Could one of the ST373207LW drives on the RAID card be deteriorating/failing?

Reply 27 of 33, by feipoa

User metadata
Rank l33t++

I don't know. There are no obvious signs of it failing. All three drives are relatively quiet; that is, they don't have that characteristic high-pitched squeal of worn-out drives. I haven't had any read or write errors pop up. I suspect the 16 MB of RAM on the RAID card might be slower than the HDDs' onboard RAM, so it is creating a bottleneck with these much newer SCSI HDDs.

I did recently order a 64 MB EDO DIMM for these which has 50 ns chips on it, but the SPD is programmed for 60 ns, based on the module's sticker. Perhaps if I can find some DIMMs which have 50 ns programmed into them, I might get the speed up?

Attachment: EDO_DIMM_ECC_60ns_single-sided.jpg (139.32 KiB, fair use/fair dealing exception) - seller's image

EDIT: I remember there was some post somewhere about reprogramming the SPDs on SDRAM modules. I wonder if I should attempt this on this EDO DIMM...
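Reprogramming aside, the SPD contents can at least be inspected if you can get a raw dump of the EEPROM. Below is a hypothetical sketch of decoding a few fields; the byte offsets follow my reading of the JEDEC SPD rev 1.x layout (byte 2 = fundamental memory type, byte 9 assumed to hold the access time in ns for FPM/EDO modules) and should be verified against the actual specification. The example dump is fabricated.

```python
# Hypothetical SPD decoder sketch. Offsets per my reading of JEDEC SPD
# rev 1.x; treat them as assumptions, not a verified implementation.
MEMORY_TYPES = {0x01: "FPM DRAM", 0x02: "EDO", 0x04: "SDRAM"}

def decode_spd(spd: bytes) -> dict:
    return {
        "memory_type": MEMORY_TYPES.get(spd[2], f"unknown (0x{spd[2]:02x})"),
        "row_address_bits": spd[3],
        "col_address_bits": spd[4],
        "access_time_ns": spd[9],  # assumed: access time in ns for FPM/EDO
    }

# Fabricated 128-byte dump for a 60 ns EDO module:
spd = bytes([0x80, 0x08, 0x02, 0x0C, 0x0A, 0x01, 0x48, 0x00, 0x01, 60] + [0] * 118)
print(decode_spd(spd))
```

If the module really were reprogrammed for 50 ns, byte 9 of the dump should read 50 instead of 60.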


Reply 28 of 33, by feipoa

User metadata
Rank l33t++

Small update to this thread...

My AVICC shipment has arrived, and among the loot I had two 64 MB EDO DIMMs, both with 50 ns ICs. For one of them, I had specifically asked this memory manufacturer on eBay to have the SPD programmed for 50 ns rather than 60 ns. They said they could do that. Note that many of these modules have 50 ns chips but are still programmed for 60 ns. Anyway, I don't think I have any motherboards which can read EDO DIMMs, so I'm not sure how to verify whether the SPD is really programmed for 50 ns. Can some SDRAM motherboards read EDO DIMMs without it being mentioned in the manual?

Nonetheless, I tested both modules in this RAID controller and they both function, unlike the double-sided 64 MB parity DIMMs I had tested a year earlier. Unfortunately, both 64 MB EDO DIMMs demonstrated the same benchmark results as the 16 MB 60 ns DIMM which came with the RAID card. This is probably as far as I can take things here. I don't understand why the 2940U2W RAID controller benchmarks slower than the stand-alone PCI 2940U2W card, but at least I have 64 MB RAID cache working now rather than the 16 MB.


Reply 29 of 33, by darry

User metadata
Rank l33t++
feipoa wrote on 2021-07-27, 10:37:

Small update to this thread... My AVICC shipment has arrived and among the loot I had two 64 MB EDO […]

For that performance difference, some ideas:

a) To compare apples to apples, re-run the benchmarks comparing single drive vs. single drive and RAID vs. RAID, preferably using the same drive (or drives, when testing striping) for all tests.
b) Is the onboard SCSI controller disabled while testing the PCI one? Maybe there is some weird driver issue if both are active at the same time.
c) Could you use something like HWiNFO to check whether your PCI card is actually operating at its expected PCI speed and bus width? (I doubt that is the issue, but why not check just in case.)

Reply 30 of 33, by feipoa

User metadata
Rank l33t++

No, I don't disable the onboard SCSI when testing the PCI one. However, when I view the hard drives via the Device Manager, I see that the stand-alone HDD on the 2940U2W controller has write cache enabled, while the HDD on the RAID controller has the "write cache" option greyed out and unchecked. When I set up the RAID arrays, I don't remember seeing a write cache option to enable. Could this be the cause of the speed discrepancy?


Reply 31 of 33, by feipoa

User metadata
Rank l33t++

I have made some progress on this effort. It seems like the bootable RAID setup diskette doesn't expose all the array configuration features, even if you go to the custom build. The Express RAID building option has some settings, but they remain hidden from the user. You get basic options like optimisations for "Database application", "Technical/graphics application", and "Other", but it won't tell you what cache settings it is using. As for the block size, it seems that "Database application" uses 64K, while "Technical" uses 32K. "Custom" is fixed at 64K.

To view the cache settings, one has to install the Adaptec CI/O Management Software. v4.01 is available on the Adaptec website: https://storage.microsemi.com/en-us/support/_ … _raid/cio_4.01/

v4.02 can be found from Dell: https://delldriverdownload.info/precision-410 … nt-4-0-drivers/

The Adaptec website mentions a version 4.03, but when trying to download it, the site says the file was not found.
https://storage.microsemi.com/en-us/support/_ … _raid/cio_4.03/

Here are some of the visible options in the software. It works in NT4, W2K, and XP. It supposedly works in Win9x too, but there aren't any ARO-1130U2 RAID drivers for Win9x.

Attachment: ARO-1130U2_Caching_A.png (43.61 KiB, public domain)
Attachment: ARO-1130U2_Caching_B.png (31.29 KiB, public domain)
Attachment: ARO-1130U2_Caching_C.png (32.17 KiB, public domain)
Attachment: ARO-1130U2_Caching_D.png (35.06 KiB, public domain)

The default cache size per array is 12284 KB. This can be increased to 64 MB minus 4 KB.

There are various pre-set cache optimisations, for example, "Browsers for Win/NT NTFS file system", etc. All this does is change the number of cache blocks for:
Read Caching: Demand Caching
Read Caching: Look-Ahead Caching Factor
Write Caching: Write-back cache
Write Caching: Write-through cache

Each cache block is 4 KB.
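Since each cache block is 4 KB, those block counts translate directly into cache sizes; a quick sketch of the arithmetic, using only figures stated above:

```python
BLOCK_KB = 4  # each cache block is 4 KB, per the CI/O software

def blocks_to_kb(blocks: int) -> int:
    """Convert a cache block count into KB of cache."""
    return blocks * BLOCK_KB

# The default 12284 KB cache per array corresponds to:
print(12284 // BLOCK_KB, "blocks")         # 3071 blocks
# The stated maximum, 64 MB minus 4 KB:
print(64 * 1024 - BLOCK_KB, "KB maximum")  # 65532 KB
# A setting of 128 blocks corresponds to:
print(blocks_to_kb(128), "KB")             # 512 KB
```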

I have run various benchmarks with different values, and it seems that having any non-zero value in the "Look-Ahead Caching Factor" decreases performance. In fact, the best performance is with all caching disabled (all values set to 0). When the cache is disabled, the benchmark performance of the RAID array matches that of a basic disk on the 2940U2W controller. I guess the cache on the HDD is too fast for the cache on a RAID card to make any difference, and in this case, it hurts performance.

Here are the benchmarks using Roadkil with the cache disabled:

Attachment: ARO-1130U2_Caching_0-0-0-0.png (27.83 KiB, public domain)


Reply 32 of 33, by feipoa

User metadata
Rank l33t++

However, disabling the cache entirely makes the RAID controller nearly useless. To make myself feel better, I tried to determine values that would come closest to matching the performance with the cache disabled. For this, I have determined that

Read Caching: Demand Caching: 0
Read Caching: Look-Ahead Caching Factor: 0
Write Caching: Write-back cache: 128
Write Caching: Write-through cache: 128

yields the best results. Below are the Roadkil results.

Attachment: ARO-1130U2_Caching_0-0-128-128.png (27.84 KiB, public domain)

I've also included results for some other cache settings here:

Attachment: ARO-1130U2_Caching_16-0-128-128.png (27.88 KiB, public domain)
Attachment: ARO-1130U2_Caching_16-0-16-16.png (27.79 KiB, public domain)
Attachment: ARO-1130U2_Caching_0-128-16-16.png (28.03 KiB, public domain)
Attachment: ARO-1130U2_Caching_0-0-16-16.png (27.61 KiB, public domain)


Reply 33 of 33, by feipoa

User metadata
Rank l33t++

I ran a few more tests with RAID cache enabled (0/0/128/128) vs. disabled. I did tests with a 175 MB file and a 2 GB file.

From ARO-1130U2 PCI RAID ---> 2940U2W PCI card
Cache enabled = 28.3 MB/s
Cache disabled = 36.2 MB/s
[this test used the 2 GB file]

From ARO-1130U2 PCI RAID partition A ---> ARO-1130U2 PCI RAID partition B
Cache enabled = 6.4 MB/s
Cache disabled = 6.0 MB/s
[this test only used the 175 MB file; the 4 s and 5 s readings are within human error]

1 gigabit PCI ethernet transfer
From Ubuntu 18.04 LTS ---> ARO-1130U2 PCI RAID
Cache enabled = 11.2 MB/s
Cache disabled = 18.6 MB/s
[this test used the 175 MB file]
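Those rates translate into a noticeable wall-clock difference for the 175 MB test file; a quick check of the arithmetic:

```python
# Transfer time for the 175 MB file at the measured network rates above.
FILE_MB = 175

for label, rate_mb_s in [("cache enabled", 11.2), ("cache disabled", 18.6)]:
    print(f"{label}: {FILE_MB / rate_mb_s:.1f} s")  # ~15.6 s vs. ~9.4 s
```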

The network test showed the most difference in speed and ultimately made me decide to disable the RAID card's 64 MB cache. I guess I need to use some old period-correct HDDs to fully realise the advantage of RAID cache.
