First post, by Kahenraz
I've mentioned this system before and I'm often asked specific questions about it, so I decided to put together this thread to use as a reference for those who may be interested.
This is a system that I have built and upgraded several times over the past 10 years or so. I have accumulated a number of spare parts as they became available cheaply, and am very familiar with its hardware, compatibility, and various failure modes as things have gone wrong in the past, either by chance or a fault of my own during maintenance. It's no longer a competitive system to build today, other than the fact that it has a lot of PCIe I/O and supports a lot of inexpensive DDR3 memory, up to 512GB. It can support up to 32 threads across two processors, but one or two 8-core processors are optimal for single-threaded performance.
The motherboard I use is a Supermicro H8DG6-F. This board is otherwise identical to the H8DGI-F, but also includes two onboard SAS ports. This is an important feature to maximize the size of available disk arrays, depending on how the other PCIe slots are used.
My primary NAS array is made up of 8x 4TB disks in a RAID-Z3 configuration (three-disk parity), with a 9th disk as a hot spare and an SSD for a read cache. This was my original array that I built years ago, when this particular configuration was economical. A year or two ago, I expanded the redundancy with a second pair of 18TB disks in RAID-1, which mirrors the larger array periodically but remains unmounted when not in use. The choice of ZFS protects against bit rot and includes compression, as well as block-level deduplication to make optimal use of the available space. The system has 256GB of memory, since only one socket is in use at this time. This is an excellent configuration for ZFS, and between the large read cache and memory, network transfers are very fast.
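For anyone curious what a layout like this looks like in ZFS terms, here is a rough sketch. The pool names and device names are hypothetical placeholders, not my actual configuration, and the exact commands may vary by platform:

```shell
# Hypothetical devices: sdb-sdi are the 8x 4TB data disks,
# sdj is the hot spare, sdk is the SSD read cache (L2ARC).
zpool create tank raidz3 sdb sdc sdd sde sdf sdg sdh sdi \
  spare sdj \
  cache sdk

# Enable compression and block-level deduplication on the pool.
zfs set compression=lz4 tank
zfs set dedup=on tank

# The 18TB RAID-1 pair lives in a separate pool (hypothetical devices sdl/sdm)
# that is only imported for the periodic sync, then exported again.
zpool create backup mirror sdl sdm
zfs snapshot -r tank@sync
zfs send -R tank@sync | zfs receive -F backup/tank
zpool export backup
```

Note that dedup in ZFS is memory-hungry, which is one reason a large amount of RAM (256GB here) pairs well with this kind of setup.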
This is what I've been using to manage all of my data for a number of years now, and I'm very happy with it. There are certainly faster, better, and more optimal platform configurations available now, but upgrading would require a large investment that would not affect my use case at all. I have upgraded this setup incrementally several times over the years and have a small inventory of spare parts that I can swap in immediately in the event of a failure, including CPUs, memory, and spare motherboards.
The next planned upgrade is to replace the 4TB spinning disks with an array of SSDs, once it becomes affordable to do so. But I have no plans to replace the CPU, motherboard, or memory any time soon.
I used to use a Supermicro 4U tower, but I eventually replaced this with a custom-modified Rosewill Thor V2-W, which supports much better airflow, is shorter so it fits into my hot water closet, and has more room for expansion. The modifications I made were to add 9x removable expansion bays to the front for the primary array, and to drill holes in the bottom to mount wheels. The wheels are actually very important, since the machine is too heavy to lift. The Supermicro case had to be "pivoted" to move, so I was dead set on wheels for my next case.
I also built custom fan mounts for the chipsets, which get very hot on their own. This is a server motherboard, and it likely expects far more airflow across the chipset heatsinks than a typical desktop case provides.
I'm very happy with this system, and I plan to run it into the ground before I replace it.