VOGONS

First post, by deksar

Hello everyone,

Which is the most durable PC motherboard and power supply for non-stop, 24/7 use?

In the past, I remember there were "Military Grade" Gigabyte boards even for desktops/regular consumers, but I don't see them around anymore.

And how about PSUs? Has anyone tried Seasonic, FSP, or Corsair units for 24/7 use? Any advice on both, please?

Would be much appreciated!

Best.

Reply 1 of 16, by bakemono

I've seen high-durability portable PCs, but I don't recall seeing anything specifically marketed that way for desktops. Anything that isn't complete junk should be able to run 24/7. You just have to be mindful of thermal and mechanical stresses: don't use components that get super hot, don't use some big heavy cooler that warps the motherboard over time, etc.

GBAJAM 2024 submission on itch: https://90soft90.itch.io/wreckage

Reply 2 of 16, by thp

Depending on what you want to do (and if you don't specifically need PC-compatible hardware), ARM-based SBCs like the Raspberry Pi 3 or newer might be a great alternative for noiseless, low-power 24/7 computing/server use without any moving parts. They also take up minimal space.

Reply 3 of 16, by darry

Well, Gigabyte did have a line of PSUs with a tendency to go "boom" [1], which I imagine could have military applications. 😉

In all seriousness though, the keys to long-term reliability, AFAIU, are:
- build quality (well designed and built, including long-life fans and capacitors)
- over-speccing
- thermal management (keep it cool AND free of dust).

[1]
https://www.theverge.com/2021/8/17/22628465/g … xchange-returns

Reply 4 of 16, by pentiumspeed

365-days-a-year uptime means OEM computers. Consumer machines, particularly pre-builts, don't belong on this uptime bandwagon, since they were built down to a price point and that unreliability shows. Build your own with the parts listed below:

A Seasonic PSU, a middle-of-the-road Asus motherboard, and high-end heatsinks from Noctua or be quiet!. For storage: WD Gold and Black hard drives, Toshiba hard drives, and Samsung SSDs (except QVO); Crucial and Micron SSDs are great too. Put heatsinks on NVMe SSDs. Use OEM-oriented memory modules (Micron, Samsung, Hynix, Crucial) for reliability. Never third-party memory, even gamer-oriented, not even Kingston!

For the processor: an i5, i7 or Xeon.

For quick repairs and something ready to run, used OEM PCs are hard to beat for the price: lots of replacement parts are available, and you get a partial build to start with (CPU, a small amount of memory, small storage) that you can upgrade as you go. I like the G3 to G4 generation of HP models for this, particularly the EliteDesk 800 series in tower form. I have two 65W mini models, since those come with a copper heatsink and a vented top cover if you want the 65W mini. This is good if you intend to run Windows 10, and these come with Pro already activated, saving 199 dollars on a COA.

For Windows 11 users, HP G5 and later are great computers, with the COA also built in.

For gaming or heavy use with a beefy CPU or a lot of RAM, you will want an HP workstation with a 700W supply: a Z440 for Windows 10, or a Z4 G2, G3 or later (the G4 will support Windows 11). These are the ones to have, with a Xeon and ECC memory, because you wanted uptime.

For notebooks: HP EliteBook G3 or G4 for Windows 10, G5 for Windows 11. In the 8x0 model numbers, the middle digit denotes screen size: "2" means 12.1", 3 is 13.3", 4 is 14.1", 5 is 15", and so on. They support NVMe, have a good strong frame, and are available with i5 and i7.

This comes from my experience repairing notebooks for customers and owning them myself over the years. ThinkPads and Dell notebooks are not as good as you think, due to poor, goofy heatsink designs and weak frames. Ditto for consumer-oriented notebooks in general: they are not a good choice, the plastic isn't durable, the common failure point is the hinge mounts breaking out of the base, and parts are hard to find.

Avoid all school-surplus student notebooks and all Chromebooks from there. Speaking from experience; among the other notebooks I had was a Dell Latitude 3350, and now I can no longer find any parts for it.

PS: I just finished setting up a Z230 from my herd because one of the computers at work failed (a USB issue, and it is a consumer computer), still a major problem since it needs to keep running and be readily available. My HP 800 G1 kept going 5 years straight at my work with no issues. I invested heavily back in mid-2018 when I got hired; good thing I did. At my work, the boss mis-spent on consumer computers, and about 5 of them had to be replaced in a 5-year period. Very bad from a financial and reliability viewpoint: the computers had to be replaced several times for being too old or for issues, and they were consumer machines, not oriented for business.

Cheers,

Great Northern aka Canada.

Reply 5 of 16, by BitWrangler

If I just wanted something somewhat grunty, not top-end or bleeding edge, and didn't wanna mess around too much, I'd just get an off-lease ThinkStation. The Dell equivalent is probably fine too. Professional-range HPs can be good, but IDK, sometimes they have some real lemons, sometimes they're bulletproof.

I've had wild and weird stuff running 24/7 for me over the years, usually doing something servery or routery. The 5+ year 24/7 stars go to a LuckyStar 486 board, an XFX nForce SLI socket 775 board (only had a Celeron 440 in it though), an HP "Nettle" 6100 MCP board running an X2 4400, and a Gateway-by-Acer Sandy Bridge laptop, not stuff everyone would tell you is reliable. 3+ year 24/7 honorable mentions: a Dell Dementia 5100, a Dell Optiplex 320 (still works, just only had about 3 years in 24/7 use), an Abit socket A 761 board removed from service with cap bloat, and a Compaq DV2000 laptop that still runs, though the backlight broke.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 6 of 16, by darry

My home NAS has been running pretty much 24x7 since early 2020 on a Gigabyte B450 Aorus Pro WiFi Mini-ITX motherboard in a Supermicro Mini-ITX 4-bay case ( https://www.supermicro.com/en/products/archiv … is/SC721tq-250b ). EDIT: The CPU is a Ryzen 7 5700G with a 65W TDP. EDIT: I recommend Noctua fans for quietness and durability.

It got a CPU, RAM and NVMe upgrade, and a discrete GPU (GT 730) added, about 6 months ago, but it has been reliable all this time. I also have 4 drives in a Mediasonic eSATA 4-bay enclosure. That enclosure has been running 24x7 for about 7 or 8 years. It got a fan replacement at installation, but is otherwise stock.

I have added some extra low-noise fans here and there to keep temperatures in check.

Here are some idle temperatures:

#CPU
zenpower-pci-00c3
Adapter: PCI adapter
SVI2_Core: 738.00 mV
SVI2_SoC: 994.00 mV
Tdie: +37.0°C (high = +95.0°C)
Tctl: +37.0°C
SVI2_P_Core: 0.00 W
SVI2_P_SoC: 2.34 W
SVI2_C_Core: 0.00 A
SVI2_C_SoC: 2.35 A

#drives
root@omv:~# for i in $(smartctl --scan | awk '{print $1}'); do smartctl -a $i ; done | egrep "Device Model|Model Number|^194|^Temperature"
Device Model: ST8000DM004-2CX188
194 Temperature_Celsius 0x0022 035 056 000 Old_age Always - 35 (0 24 0 0 0)
Device Model: ST8000DM004-2CX188
194 Temperature_Celsius 0x0022 035 054 000 Old_age Always - 35 (0 24 0 0 0)
Device Model: WDC WD40EFRX-68N32N0
194 Temperature_Celsius 0x0022 118 108 000 Old_age Always - 32
Device Model: WDC WD40EFRX-68N32N0
194 Temperature_Celsius 0x0022 119 107 000 Old_age Always - 31
Device Model: ST4000VN008-2DR166
194 Temperature_Celsius 0x0022 039 049 000 Old_age Always - 39 (0 22 0 0 0)
Device Model: ST4000VN008-2DR166
194 Temperature_Celsius 0x0022 042 054 000 Old_age Always - 42 (0 23 0 0 0)
Device Model: ST4000VN008-2DR166
194 Temperature_Celsius 0x0022 043 055 000 Old_age Always - 43 (0 22 0 0 0)
Device Model: ST4000VN008-2DR166
194 Temperature_Celsius 0x0022 041 053 000 Old_age Always - 41 (0 22 0 0 0)
Model Number: T-FORCE TM8FP7001T
Temperature: 36 Celsius

Reply 7 of 16, by midicollector

There is a design of computers where you have two (or more) machines executing identical code simultaneously, with identical contents of RAM, etc. A computer like this has 24/7 uptime even if you need to repair it or replace parts: you can repair one while the other is still running, and vice versa, so there's never any downtime. The computer version of a mirrored RAID array.

Anyway, maybe someone could do something like that with a series of Raspberry Pis.

Reply 8 of 16, by darry

midicollector wrote on 2023-10-10, 05:39:

There is a design of computers where you have two (or more) machines executing identical code simultaneously, with identical contents of RAM, etc. A computer like this has 24/7 uptime even if you need to repair it or replace parts: you can repair one while the other is still running, and vice versa, so there's never any downtime. The computer version of a mirrored RAID array.

Anyway, maybe someone could do something like that with a series of Raspberry Pis.

Without needing to go that far, running a cluster or having an active/standby node setup (if the planned workload can be made to run that way) could be an option.

Then there is the even simpler option of having a ready-to-go cold standby node stored on a shelf. If the active one fails, you just plug the standby in and restore the latest backed-up data (assuming the type of workload even needs persistent data).

Reply 9 of 16, by deksar

Cool suggestions, but @pentiumspeed said:

"Use OEM oriented memory modules; Micron, Samsung, Hynix and Crucial for reliability. Never the third-party memory even gamer oriented, even not Kingston!"

And I was about to go with Kingston memory. So why not Kingston, but OEM modules?

This is a playout computer for a satellite TV channel, by the way.

Reply 10 of 16, by BitWrangler

Do you mean you are locally streaming a satellite channel, or is it going to be running a satellite channel?

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 11 of 16, by pentiumspeed

Kingston is not an OEM, but it is considered good, if very boring. The OEM names I mentioned both produce the memory chips and assemble their own memory modules, and they program the SPD with reliable settings.

The difference is that Kingston buys memory chips from others, purchases module PCBs, and has a contract manufacturer assemble them, to a better standard than generic brands but still not the same thing as an OEM. The main problem is: how do you know whether Kingston purchased poorer chips? You would want to know which chips Kingston is using, and that is now impossible to tell by looking at their modules, because Kingston uses blank-branded chips with the "KINGSTON" name lasered onto them; you can only rely on specifying a particular Kingston part number in a purchase order.

A good organization always orders by part number, for specific memory modules with the same chips that it has already validated in its own quality and reliability testing before putting them into use. If the OEM revises or changes anything, it generates a new part number, and the organization has to redo the process to re-validate the new part; if it passes, the new part can be used for the next batch of orders.
Kingston cannot meet that requirement, because Kingston keeps using the same part numbers with different chips and different PCBs. I know Kingston does change the suffixes after the part number, but Kingston will not disclose which chips it was using.

This comes from my personal experience. One time a memory module looked good on paper, and I used it; it failed about a year later. It was a brand quite similar to Mushkin (which was as good as Kingston) but really of generic quality, and the resemblance confused me into thinking it was a good one. I no longer buy third-party memory modules. Now I insist on OEM and Crucial, which is made by Micron for the consumer-facing market; Crucial also makes gamer modules, which are excellent to get.

Cheers,

Great Northern aka Canada.

Reply 12 of 16, by Sphere478

Any hardware era? Or does it have to be retro? If retro, how old?

If no restrictions,

I would go fanless: a desktop board with a mobile CPU/GPU.

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)

Reply 13 of 16, by darry

Sphere478 wrote on 2023-10-12, 18:35:

Any hardware era? Or does it have to be retro? If retro, how old?

If no restrictions,

I would go fanless: a desktop board with a mobile CPU/GPU.

If there is no need to go noiseless or low-noise (OP did not state that as a requirement, unless I missed it), there is no need to go fanless. Fans help lower temperatures, and heat usually affects reliability.

If the TDP is really low, to the point that heatsinks and convection bring temperatures down far enough that fans really are unnecessary, I would understand. But even in that scenario, there may be relative hotspots (linear regulators, etc.) that will benefit from even slight airflow, even if they are operating well within their rated temperature range without active cooling (EDIT: nearby temperature-sensitive components like capacitors could also be getting unnecessarily heated up).

Then there are the questions of environment, maintenance and accessibility. If this will be running in an isolated environment with filtered air, like a datacenter, dust is not really a concern. If there is going to be dust, it will affect cooling as it accumulates on heatsinks over time. Fans increase the airflow and will bring in more dust than a passive cooling setup would. How often dusting maintenance is needed will also vary with how overspecced the cooling setup is to begin with. And, of course, whether the equipment will be easily accessible (for dusting) is another variable to consider.

Oh, and monitoring/alerting is important too. For example, a partially clogged air intake might be easily fixed by a quick dusting if one notices temperatures rising above the expected baseline, whereas an unmonitored system in the same situation might well throttle, crash or even fry itself (or all three) over time as the issue worsens unnoticed.

EDIT: Corrected typos after doing a better job at proof-reading.

Reply 14 of 16, by Sphere478

darry wrote on 2023-10-16, 21:44:
If there is no need to go noiseless or low-noise (OP did not state that as a requirement, unless I missed that), there is no nee […]

Fanless components have low watt density and don't have a fan as a failure point, and fans are among the first things to fail in many computers. I suggested it as a way to make the machine more reliable and use less power.

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)

Reply 15 of 16, by darry

Sphere478 wrote on 2023-10-16, 23:28:
darry wrote on 2023-10-16, 21:44:
If there is no need to go noiseless or low-noise (OP did not state that as a requirement, unless I missed that), there is no nee […]

Fanless components have low watt density and don't have a fan as a failure point, and fans are among the first things to fail in many computers. I suggested it as a way to make the machine more reliable and use less power.

I have seen some fanless stuff get quite hot (not even thinking of certain Apple products here 😉 ). Consumer-grade fanless products built to a price point are expected to survive for a "commercially viable" amount of time (i.e. until after the warranty runs out). TVs and home gateway routers are examples of products that are often designed to be fanless, but some of them get quite hot, which is never good for their reliability. I agree that fans can and do fail, but the crappy ones fail sooner. Fans used in datacenter-grade equipment, or even some of the higher-end consumer ones, are quite reliable and long-lasting. Redundant fans are an option too. Of course, any fan running in a dusty, nicotine- and pet-hair-rich area will die sooner rather than later. A piece of hardware without fan cooling will get dust/hair inside it too eventually, and while it may run perfectly when new and clean, albeit near or at its thermal limits, it is going to get hotter as it cakes up and won't last as long as hardware that isn't operating close to its thermal limits.

That being said, if OP can source a fanless unit that draws little power, has low component density, has decent enough passive cooling (massive heatsinks, for example) and runs the intended workloads reasonably without getting close to its thermal limits, that would be great. My experience with several consumer-oriented fanless designs, however, is that they run just cool enough not to self-destruct immediately. Consequently, I would rather have something with a fan or a few, but that can operate safely with one or two failed fans; that is just my opinion though.

Reply 16 of 16, by zolli

Supermicro all the way. I've built several 24/7 file servers over the years, and all the non-server motherboards have lasted around 3-6 years before blowing capacitors, having instability issues, etc. I now have two machines with a Supermicro X9SCM + Xeon, and they've been rock solid for many years. (You can buy these components off eBay for really reasonable money; I recently bought a backup board at a flea market for $15.) I run Debian on them with ZFS, which means ECC memory. I tried FreeNAS but it didn't work well for me. In particular, I like to ssh into the file server and script file copying, renaming and archiving. Try installing Python, or tkdiff, or any number of developer tools onto FreeNAS... no good for me. Nothing against FreeNAS, which is great for what it does.