VOGONS


Post your 'current' PC


Reply 140 of 642, by bristlehog

User metadata
Rank: Oldbie
kithylin wrote:

Most likely upgrading it to a Dell H700 SAS 6 Gbps PCI-E 2.0 card, and looking into 8 x 300GB 6 Gbps drives

H700 controllers are about $90 on eBay, while PERC 6/i cards are about $30. What's there that's worth the extra $60? Are you going to use the CacheCade feature?

Hardware comparisons and game system requirements: https://technical.city

Reply 141 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:
kithylin wrote:

Most likely upgrading it to a Dell H700 SAS 6 Gbps PCI-E 2.0 card, and looking into 8 x 300GB 6 Gbps drives

H700 controllers are about $90 on eBay, while PERC 6/i cards are about $30. What's there that's worth the extra $60? Are you going to use the CacheCade feature?

PERC 6/i cards are still SAS 3 Gbps controllers, their onboard cache is limited to a permanently affixed 256 MB, and while the interface is native PCI-Express, it's still PCI-E 1.0.

The H700 is a PCI-Express 2.0 controller, supports SAS 6 Gbps, and comes with a 512MB cache module, upgradable to 1GB.

And yes, the cache module is the entire point; it should be (and usually is) always used with these cards.

My older PERC 5/i is a first-generation PCI-E controller.. and it's not even native PCI-E; it's a PCI-X chip connected across a PCI-X-to-PCI-E bridge. Even despite all of this, and using the older low-buffer 15K drives, the performance is still faster than any of the mid-range SATA-III SSDs sold today, and once it's up and running it's only surpassed by the high-end Samsung Pro series SSDs. And with RAID-5 it can sustain a failure and keep on truckin'. The only problem is that my PERC 5/i only goes up to RAID-5; using either the H700 or the PERC 6/i would let me run up to RAID-6 and survive double failures.
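
To put rough numbers on that trade-off, here's a quick back-of-the-envelope sketch in Python (the drive count and size are the ones being discussed above; the formulas are just the generic RAID ones, nothing PERC-specific):

def raid_usable(drives, size_gb, level):
    # RAID-5 spends one drive's worth of space on parity and survives
    # one failure; RAID-6 spends two and survives a double failure.
    parity = {5: 1, 6: 2}[level]
    return (drives - parity) * size_gb, parity

for level in (5, 6):
    usable, survives = raid_usable(8, 300, level)
    print(f"RAID-{level}: {usable} GB usable, survives {survives} failure(s)")

# RAID-5: 2100 GB usable, survives 1 failure(s)
# RAID-6: 1800 GB usable, survives 2 failure(s)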

Also, the larger the cache on the card, the lower the access times across the array overall, which significantly boosts performance. Across 8 drives in RAID-5 I get 7ns access times with the 512MB cache on my 5/i.

EDIT: And all of these controllers will also accept normal SATA hard drives and normal SATA SSDs, and do all the RAID work with them. The PERC 5/i is limited to 2TB drives, and I don't know what the newer ones support. Cheap SSDs on these things might not last long without TRIM support, though; I haven't tried it myself.
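
By the way, if that 2TB cap is the classic 32-bit LBA ceiling (my guess as to the cause, not something I've confirmed for these cards), the number falls straight out of the sector math, e.g. in Python:

max_bytes = 2**32 * 512    # 32-bit sector count x 512-byte sectors
print(max_bytes / 2**40)   # 2.0 TiB exactly
print(max_bytes / 10**12)  # ~2.2 TB in decimal units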

Reply 142 of 642, by bristlehog

User metadata
Rank: Oldbie
kithylin wrote:

The PERC 5/i is limited to 2TB drives, and I don't know what the newer ones support. Cheap SSDs on these things might not last long without TRIM support, though; I haven't tried it myself.

I can only speak of the Dell H710p; I saw a configuration of an H710p with 10x 4TB SAS drives + 2x TLC SSDs for cache.

However, rumour has it that the H700 also supports drives larger than 2TB with a firmware update, which is nice, because an H710 is about $400 versus $90 for an H700.

Last edited by bristlehog on 2015-03-09, 19:05. Edited 2 times in total.

Hardware comparisons and game system requirements: https://technical.city

Reply 143 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:
kithylin wrote:

The PERC 5/i is limited to 2TB drives, and I don't know what the newer ones support. Cheap SSDs on these things might not last long without TRIM support, though; I haven't tried it myself.

I can only speak of the Dell H710p; I saw a configuration of an H710p with 10x 4TB SAS drives + 2x TLC SSDs for cache.

However, rumour has it that the H700 also supports drives larger than 2TB with a firmware update, which is nice, because an H710 is about $400 versus $90 for an H700.

Thanks, nice to know. The PERC 5/i cards work in most normal Intel desktop boards, at least the X58 and Z68 series, and some older Intel boards (my file server is a P45-based 775 board and uses a PERC 5/i too). They're quite cheap for a full hardware RAID card, and the performance is rather good.

The only thing is that for desktop use you have to buy a 40mm fan and bolt it onto the heatsink; they're designed for the forced-airflow chassis of rackmount servers, but a small fan in a desktop tower is enough. I've been running mine in my file server 24/7 for a little over 2 years now, 8 x 500GB drives, my own little "personal cloud".

Last edited by kithylin on 2015-03-09, 19:15. Edited 2 times in total.

Reply 144 of 642, by bristlehog

User metadata
Rank: Oldbie
kithylin wrote:

The H700 is a PCI-Express 2.0 controller, supports SAS 6 Gbps

Does it really matter for a home PC, 6 Gbps vs. 3 Gbps? What are you using your PC for?

Hardware comparisons and game system requirements: https://technical.city

Reply 145 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:
kithylin wrote:

The H700 is a PCI-Express 2.0 controller, supports SAS 6 Gbps

Does it really matter for a home PC, 6 Gbps vs. 3 Gbps? What are you using your PC for?

Occasional video editing, plus working in 3ds Max and some Adobe tools on large uncompressed videos, for friends and for personal projects. A friend of mine has two Samsung Pro SSDs in RAID-0 (probably close in performance to an H700 + 8x 6 Gbps 15k drives), and his load times into Star Citizen are 2-2.5 minutes, while people he knows on single spinner drives see 20-25 minute load times.

So it's good for gaming, too, and there's peace of mind knowing I won't lose anything to a hard drive failure (I still do backups, though). All in all, it's my preferred storage option for computers. I have my gaming machine and my file server on PERCs with RAID-5, and I'd like to put my secondary web-browsing / backup gaming computer on one too, some day.

I've had failures happen to me a few times, by the way. Playing a WoW raid with friends, where you can't stop or you'll miss out on things.. multi-hour multiplayer sessions, and a drive died on me. I'd just reach into the tower, unplug it, and keep going as if nothing had happened, then buy a replacement drive and swap it in later when I had time: 30 minutes of rebuilding and it was ready to go. It's pretty nice.
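
The reason you can yank a dead drive and keep playing is that RAID-5 parity lets the controller recompute any one missing member on the fly. A toy sketch of the XOR idea in Python (just the principle, nothing PERC-specific):

from functools import reduce
from operator import xor

# Three "drives" worth of data plus one parity stripe:
d0, d1, d2 = b"\x10\x22\x35", b"\x07\x41\x0c", b"\x99\x00\xff"
parity = bytes(reduce(xor, stripe) for stripe in zip(d0, d1, d2))

# Drive 1 dies; XOR of the survivors plus parity rebuilds its contents:
rebuilt = bytes(reduce(xor, stripe) for stripe in zip(d0, d2, parity))
assert rebuilt == d1  # reads keep working as if nothing happened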

Reply 146 of 642, by bristlehog

User metadata
Rank: Oldbie

Sounds attractive.

How do you choose a motherboard for such a project? Most home-oriented mobos don't support Xeons and server SAS controllers, I believe.

Hardware comparisons and game system requirements: https://technical.city

Reply 147 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:

Sounds attractive.

How do you choose a motherboard for such a project? Most home-oriented mobos don't support Xeons and server SAS controllers, I believe.

I'm using one in an EVGA X58 Classified 3-way SLI desktop motherboard, and my "file server" is a Gigabyte EP45-UD3P desktop 775 motherboard, with one in there too. I've used one just fine in an older dual-Socket-940 AMD workstation board, and people on overclock.net have reported success using them in multiple Z68 desktop Sandy Bridge motherboards. No modifications needed; stick it in and go, usually.

I don't know about the later cards, but the PERC 5/i cards use Microsoft drivers built into the OS: no driver disk needed to install Windows on them, at least for Win7 and Vista. Dunno about 8.

Reply 148 of 642, by bristlehog

User metadata
Rank: Oldbie

What drawbacks of such a build can you think of? I see high CPU TDP, but that can possibly be solved with just a different CPU.

Hardware comparisons and game system requirements: https://technical.city

Reply 149 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:

Sounds attractive.

How do you choose a motherboard for such a project? Most home-oriented mobos don't support Xeons and server SAS controllers, I believe.

bristlehog wrote:

What drawbacks of such a build can you think of? I see high CPU TDP, but that can possibly be solved with just a different CPU.

Xeon chips have the exact same TDP as desktop chips. And you're right in the post above.. (sorry I never replied to it). After the X58 / first-generation i7 platforms (Sandy Bridge and later), we can no longer use Xeon chips in desktop motherboards, as the newer-generation Xeons have an entirely different platform controller hub (PCH, a.k.a. chipset) and a different socket (LGA-2011). LGA-775 boards can use a lot of LGA-775 Xeons, and if you go "advanced" you can modify the socket, put adapter stickers on the chips, and install LGA-771 server Xeons in some desktop 775 boards; my file server is one, running a Harpertown-core 12MB-cache quad-core 771 chip in a normal 775 board.

Some advantages: Xeon chips are really dirt cheap today; the 771 quad core for my file server was $24 six months ago. And for overclocking, the Xeons can usually handle higher thermals than desktop chips, so we can clock them further without sustaining damage. AMD's Opterons are the same way, with some models rated for +20C over the desktop chips. With X58 there are two kinds of Xeons: ones designed for dual-socket use, and single-socket ones. Most X58 motherboards will take the single-socket ones without any issue, but only a rare few can take the dual-socket ones.

Newer LGA-2011 Xeons can only be overclocked on single-socket platforms too; dual-socket = no overclocking. For LGA-1366 / X58 there was -ONE- motherboard ever sold, EVGA's SR-2, that allowed dual-socket Xeons and full overclocking, all the way up to dual 6-core/12-thread chips.

Reply 150 of 642, by Skyscraper

User metadata
Rank: l33t
kithylin wrote:
bristlehog wrote:

Sounds attractive.

How do you choose a motherboard for such a project? Most home-oriented mobos don't support Xeons and server SAS controllers, I believe.

bristlehog wrote:

What drawbacks of such a build can you think of? I see high CPU TDP, but that can possibly be solved with just a different CPU.

Xeon chips have the exact same TDP as desktop chips. And you're right in the post above.. (sorry I never replied to it). After the X58 / first-generation i7 platforms (Sandy Bridge and later), we can no longer use Xeon chips in desktop motherboards, as the newer-generation Xeons have an entirely different platform controller hub (PCH, a.k.a. chipset) and a different socket (LGA-2011). LGA-775 boards can use a lot of LGA-775 Xeons, and if you go "advanced" you can modify the socket, put adapter stickers on the chips, and install LGA-771 server Xeons in some desktop 775 boards; my file server is one, running a Harpertown-core 12MB-cache quad-core 771 chip in a normal 775 board.

Some advantages: Xeon chips are really dirt cheap today; the 771 quad core for my file server was $24 six months ago. And for overclocking, the Xeons can usually handle higher thermals than desktop chips, so we can clock them further without sustaining damage. AMD's Opterons are the same way, with some models rated for +20C over the desktop chips. With X58 there are two kinds of Xeons: ones designed for dual-socket use, and single-socket ones. Most X58 motherboards will take the single-socket ones without any issue, but only a rare few can take the dual-socket ones.

Newer LGA-2011 Xeons can only be overclocked on single-socket platforms too; dual-socket = no overclocking. For LGA-1366 / X58 there was -ONE- motherboard ever sold, EVGA's SR-2, that allowed dual-socket Xeons and full overclocking, all the way up to dual 6-core/12-thread chips.

Asus, Gigabyte and EVGA boards should all support socket 1366 dual-QPI Xeons with the latest BIOS. The first-generation EVGA socket 1366 boards couldn't run any 6-core chips without a mod, though. The Asus P6T line of boards doesn't support disabling the TDP-induced turbo throttling, so when overclocking with those it's best to stick with non-turbo multipliers, which limits the use of the cheapest Xeons somewhat.

If someone wants a system both fast enough for high-end gaming and suitable for video editing and other really demanding tasks, I can really vouch for the EVGA SR-2. A warning when it comes to the SR-2 and SAS controllers, though: the EVGA SR-2 uses Nvidia NF200 PCI-E bridge chips, so it's best to check that the SAS controller can work with those before buying.

Last edited by Skyscraper on 2015-03-09, 20:43. Edited 1 time in total.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 151 of 642, by kithylin

User metadata
Rank: l33t
Skyscraper wrote:

Asus, Gigabyte and EVGA boards should all support socket 1366 dual-QPI Xeons with the latest BIOS. The first-generation EVGA socket 1366 boards couldn't run any 6-core chips without a mod, though. The Asus P6T line of boards doesn't support disabling the TDP-induced turbo throttling, so when overclocking with those it's best to stick with non-turbo multipliers, which limits the use of the cheapest Xeons somewhat.

If someone wants a system both fast enough for high-end gaming and suitable for video editing and other really demanding tasks, I can really vouch for the EVGA SR-2. A warning when it comes to the SR-2 and SAS controllers, though: the EVGA SR-2 uses Nvidia NF200 PCI-E bridge chips, so it's best to check that the SAS controller can work with those before buying.

The SR-2 boards weren't really feasible back when they were sold new and the Xeon chips were nearly $3000 each. But now that dual-QPI Xeon prices have fallen through the floor (just $400 for a pair of used 3600 MHz 6-core chips!), it's a much more appetizing option. Sadly, the motherboards themselves usually go for $600+ when they do appear used today. Almost makes me wish I'd had the foresight to buy a board new when it came out and sit on it for a few years while Xeon prices fell.. 🙁

Reply 152 of 642, by Skyscraper

User metadata
Rank: l33t
kithylin wrote:
Skyscraper wrote:

Asus, Gigabyte and EVGA boards should all support socket 1366 dual-QPI Xeons with the latest BIOS. The first-generation EVGA socket 1366 boards couldn't run any 6-core chips without a mod, though. The Asus P6T line of boards doesn't support disabling the TDP-induced turbo throttling, so when overclocking with those it's best to stick with non-turbo multipliers, which limits the use of the cheapest Xeons somewhat.

If someone wants a system both fast enough for high-end gaming and suitable for video editing and other really demanding tasks, I can really vouch for the EVGA SR-2. A warning when it comes to the SR-2 and SAS controllers, though: the EVGA SR-2 uses Nvidia NF200 PCI-E bridge chips, so it's best to check that the SAS controller can work with those before buying.

The SR-2 boards weren't really feasible back when they were sold new and the Xeon chips were nearly $3000 each. But now that dual-QPI Xeon prices have fallen through the floor, it's a much more appetizing option. Sadly, the motherboards themselves usually go for $600+ when they do appear used today. Almost makes me wish I'd had the foresight to buy a board new when it came out and sit on it for a few years while Xeon prices fell.. 🙁

The CPU prices didn't make me think twice when it came to buying an EVGA SR-2 when it was released. I started out with two E5620 4-core CPUs, which I ran @3.8 GHz; then I upgraded to the E5620 + an X5670 6-core when I found one cheap, then to X5650 + X5670, which I ran for years @4000 MHz. A fun fact: the SR-2 does allow the CPUs to run at different speeds, turbo and all, but I settled for 20x200, which both CPUs accepted. Then the CPUs got cheaper and I got another X5650, so I could use turbo multipliers with both CPUs at the same speed, and with the CPUs at 4.4 GHz the system was mighty fast. That is now my spare rig.

The SR-2 with dual X5690s that I'm using now was an offer I couldn't refuse; I didn't even pay the list price of one CPU and got the whole complete system with 96GB of memory, case, SSDs, Corsair AX750 PSU and all. I downgraded the memory to 48GB to be able to run a 200 BCLK with the memory at 2000 MHz. With the full 96GB the system would only POST at stock speed, and the POST was a 5-minute ordeal, though that could be because the memory was a mix of Samsung and Hynix sticks.
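
For anyone not used to LGA-1366 overclocking, the figures in these posts are just BCLK times multiplier, with the memory clock hanging off the same base clock. A quick sanity check in Python (my own arithmetic; the multipliers are the ones implied by the speeds quoted above):

def clocks(bclk_mhz, cpu_mult, mem_mult):
    # On LGA-1366 both the core and memory clocks derive from BCLK.
    return bclk_mhz * cpu_mult, bclk_mhz * mem_mult

print(clocks(200, 20, 10))  # (4000, 2000) -- the 20x200 / DDR3-2000 setup above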

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 153 of 642, by kithylin

User metadata
Rank: l33t
Skyscraper wrote:

The CPU prices didn't make me think twice when it came to buying an EVGA SR-2 when it was released. I started out with two E5620 4-core CPUs, which I ran @3.8 GHz; then I upgraded to the E5620 + an X5670 6-core when I found one cheap, then to X5650 + X5670, which I ran for years @4000 MHz. A fun fact: the SR-2 does allow the CPUs to run at different speeds, turbo and all, but I settled for 20x200, which both CPUs accepted. Then the CPUs got cheaper and I got another X5650, so I could use turbo multipliers with both CPUs at the same speed, and with the CPUs at 4.4 GHz the system was mighty fast. That is now my spare rig.

The SR-2 with dual X5690s that I'm using now was an offer I couldn't refuse; I didn't even pay the list price of one CPU and got the whole complete system with 96GB of memory, case, SSDs, Corsair AX750 PSU and all. I downgraded the memory to 48GB to be able to run a 200 BCLK with the memory at 2000 MHz. With the full 96GB the system would only POST at stock speed, and the POST was a 5-minute ordeal, though that could be because the memory was a mix of Samsung and Hynix sticks.

I now have a dual-Xeon machine, a SuperMicro motherboard with two LGA-1366 sockets, and it will run the entire line-up of 1366 Xeons. But this one needs matched pairs of chips to work together and only runs at stock speeds (no overclocking). The nice part is that the motherboard + tower were gifted to me for free. I've populated it with 12 x 1GB sticks (enough for what I need), and even so it still takes a good 1-2 minutes to complete POST and power on, but it's nice once it's up and running.

Oh, and the BIOS on my big i7 gaming machine with the PERC in it takes a full 2.5 minutes from cold boot to desktop. But I rarely shut it off once it's up and running, usually leaving it on all day, so I don't mind a slow start-up; it stays on once it's on.

Reply 154 of 642, by Skyscraper

User metadata
Rank: l33t
kithylin wrote:

I now have a dual-Xeon machine, a SuperMicro motherboard with two LGA-1366 sockets, and it will run the entire line-up of 1366 Xeons. But this one needs matched pairs of chips to work together and only runs at stock speeds (no overclocking). The nice part is that the motherboard + tower were gifted to me for free. I've populated it with 12 x 1GB sticks (enough for what I need), and even so it still takes a good 1-2 minutes to complete POST and power on, but it's nice once it's up and running.

Oh, and the BIOS on my big i7 gaming machine with the PERC in it takes a full 2.5 minutes from cold boot to desktop. But I rarely shut it off once it's up and running, usually leaving it on all day, so I don't mind a slow start-up; it stays on once it's on.

Tylersburg is a great platform; its stability has never been matched.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 155 of 642, by kithylin

User metadata
Rank: l33t
Skyscraper wrote:

Tylersburg is a great platform; its stability has never been matched.

The RAM speed is nice too. With dual-socket triple-channel + NUMA, I'm getting 26.7 GB/sec of RAM bandwidth just from DDR3-1066. I can't imagine what an SR-2 with 2 GHz RAM could do :> I have to use SiSoft Sandra to test RAM speed; nothing else supports multi-threading + NUMA for RAM tests.
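
For reference, the theoretical peak behind that figure is just transfer rate x bus width x channels x sockets; standard DDR arithmetic, nothing Sandra-specific, e.g. in Python:

def peak_gb_s(mt_per_s, channels, sockets, bus_bytes=8):
    # Each DDR3 channel moves 8 bytes per transfer.
    return mt_per_s * bus_bytes * channels * sockets / 1000

print(peak_gb_s(1066, 3, 2))  # ~51.2 GB/s peak; 26.7 GB/s measured is roughly half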

Reply 156 of 642, by bristlehog

User metadata
Rank: Oldbie
kithylin wrote:

So it's good for gaming, too, and there's peace of mind knowing I won't lose anything to a hard drive failure (I still do backups, though).

You can still lose everything due to a controller failure. How do you mitigate that risk?

Hardware comparisons and game system requirements: https://technical.city

Reply 157 of 642, by kithylin

User metadata
Rank: l33t
bristlehog wrote:
kithylin wrote:

So it's good for gaming, too, and there's peace of mind knowing I won't lose anything to a hard drive failure (I still do backups, though).

You can still lose everything due to a controller failure. How do you mitigate that risk?

Backups of the entire array onto another computer, rotated from there to external media every so often? It's not hard. Plus, with the PERC series the array configuration gets written to the hard drives themselves. So if the controller dies, you just buy another one, plug the drive cables in, import the foreign configuration, rebuild, and you're up and running without having lost anything.
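
Since the PERCs are LSI MegaRAID silicon underneath, that recovery flow can even be scripted with LSI's MegaCli tool. A minimal sketch in Python, assuming MegaCli is on the PATH and the replacement card is adapter 0 (the binary is sometimes named MegaCli64 depending on the package):

import subprocess

def megacli(*args):
    # Run one MegaCli command against adapter 0 and return its output.
    result = subprocess.run(["MegaCli", *args, "-a0"],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(megacli("-CfgForeign", "-Scan"))     # does the new card see the old array's metadata?
print(megacli("-CfgForeign", "-Preview"))  # dry run of what would be imported
megacli("-CfgForeign", "-Import")          # adopt the configuration stored on the drives
print(megacli("-LDInfo", "-Lall"))         # confirm the logical drive is back online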

Reply 158 of 642, by smeezekitty

User metadata
Rank: Oldbie

8 x 15,000 rpm, 16MB buffer, SAS 3 Gbps @ 73.4 GB Seagate Cheetah 15k.5 hard drives in RAID-5, 512GB primary storage.

Wowzer!

Anyway:

Core2 Quad Q9550
Asrock G41M-S3
4 GB 1066MHz Mushkin DDR3
Asus DirectCU AMD Radeon 7850 1GB
Several mixed hard drives
PCI Wifi card
Corsair CX430
Compaq OEM case
Windows Vista 😵

Current monitor setup is 1920x1080 primary and 1280x1024 secondary.