VOGONS


Ultimate DirectX-9 Setup


First post, by kithylin

Rank: l33t

Okay, well... I was going to almost hijack someone else's thread about this (the P4 Cedar Mill thread that got up to the DX9 video cards), but I've decided it would be better to start my own discussion. All my current funds are being funneled into this project.

I have a system I've been building and working on over the past several months. It's older by today's standards, but far from "retro" by any stretch of the word. It's also pretty much a "wet dream" build for most people in the 2007-2009 era, the kind of machine I had only dreamed of at the time but would never have been able to afford. With today's prices on that hardware, though, I can actually go in and build this sort of thing without breaking the bank. And... I'm not entirely sure where to go with it from where it currently is. I thought I would post what I've accomplished with its design so far, state my plans and what I want from it in the future, and then get some input from you guys on here.

First off, the ultimate plan: the absolute maximum DirectX 9 gaming performance possible, with the caveat that whatever I choose will probably do poorly in more modern DirectX 10 and DirectX 11 titles. This machine is for older DX9 titles like Morrowind, Oblivion, Race Driver: GRID, etc. And by "absolute maximum" I do mean maximum: custom-loop water cooled and overclocked within an inch of its life.

This is the machine currently:
voNoskC.jpg
Large: http://i.imgur.com/KPJlUlK.jpg

Currently: custom-loop water cooling on the CPU only, 2 x 120mm radiator mounted on the back, butchered-up Antec Three Hundred case, LGA 775 CrossFireX motherboard.

Current hardware:
Intel Xeon LGA775 E3110 (equivalent to the Core 2 Duo E8400), Wolfdale 6MB dual core, currently clocked at 504x9 (4536 MHz), Intel SpeedStep off, C1E off.
RAM: Kingston HyperX DDR2-1066 4GB dual channel kit, currently running at DDR2-1008, 5-5-5-15-1T
Hard drive: Western Digital 300 GB VelociRaptor, SATA 6 Gbps, 64 MB buffer, 10,000 RPM (even though the onboard controller is only SATA 3 Gbps)
Video: Currently a pair of AMD Radeon HD4890's.
Power supply: Corsair 700 watt unit, currently unable to support the entire system on its own (the system draws over 700 watts), so it's hybridized with a Thermaltake PowerExpress 250 watt video card power supply for the second 4890.
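For reference, the CPU clock in the list above is just the FSB base clock times the multiplier; a quick sketch using this build's figures:

```python
# Core 2 / LGA775 clock math: core clock = FSB base clock (MHz) x CPU multiplier.
# Figures are the ones quoted for this build (504 MHz FSB, 9x multiplier).
fsb_mhz = 504
multiplier = 9
core_clock_mhz = fsb_mhz * multiplier
print(core_clock_mhz)  # 4536
```

(Stock for this chip would be 333 x 9 = 3000 MHz, so 504 MHz FSB is a hefty bus overclock.)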

This system has been six months stable as described above, and I run it 14-18 hours a day for web browsing, chatting, and general internet stuff (I'm typing on it right now).

Immediate concern: I'm using a cheap brand-less $25 Chinese water block for the CPU off eBay. In the next couple of months I plan to replace it with an Enzotech Stealth water block. I've read these plastic-top water blocks can crack and leak, so switching over to an all-metal (all-copper) one is the current priority.

Beyond that, the next step is getting a power supply that can handle quite a lot of GPU power, and the entire system, on one unit. I'm thinking of aiming for a 1250 watt 94% efficiency unit from Seasonic, or a 1200 watt one from PC Power & Cooling. Either would easily handle the entire current system, and could possibly handle 3-way SLI of some rather hungry cards in the future. Either way, I'm going to save up and get a large-capacity, major-label Tier 1 unit some time.

After that's settled I have to decide what to do with the system. I'll most likely be replacing the whole motherboard, so I'm not sure where to go. I for sure want to go nvidia. No discussing that, it's a personal choice: I want NVIDIA Inspector for forcing modes on games, and I want PhysX for some games. End of discussion, that's my choice.

What -IS- up for discussion, and the main reason I started this thread: what is the ultimate fastest nvidia video card for DirectX 9? I'm not concerned with how hot they'll get, how much power they'll use, or any of that. Just what is the fastest, setting all of that aside.

At first I was thinking a pair of GTX 295s; pretty beefy together, and there are some pre-built EVGA Hydro Copper boards that come already designed for water, which would be neato. But then I was considering a 3-way GTX 285 setup: three cards with 512-bit wide memory buses all together would be pretty interesting in SLI. Then I was almost thinking of a 3-way GTX 480 setup; they seem to have decent DX9 performance. I'm not going too new with this, because I already have a GTX 560 Ti 448 in my i7, and right at the introduction of the GTX 500 series is when nvidia dropped compatibility with most older DX9 games. I've spoken with a lot of people using a wide range of 500, 600, and 700 series cards, and they all have problems running older DX9 games. The problems range from games refusing to run at all, to red screen of death, to light-teal screen of death (any of which usually requires a hard reset of the entire system or a power cycle), etc. So I'm going to be building up a special system for DX9 and older stuff, and also a backup to my big i7 in case it goes tits-up some day.

Besides all of that, I have to think about which platform to use for this build: stay with 775 and go with an nvidia 680i or 690i board and a 12MB LGA775 quad core, or move on up and just get a bottom-of-the-line X58 system and a cheap bottom-tier i7 quad core. A nice 680i system that supports CPU overclocking and 3-way video is going to be the much more expensive option. And I already have a high-end exotic i7, I don't really need another one, so I'm kinda leaning towards 775.

Oh, and at some point I'm of course thinking of switching to a bigger case with room for 2-3 more radiators, a second pump, and ducting for looping all the video cards in. I'm not sure what yet, but that'll go in it too; what I have right now is just for starters.

Anyway.. thoughts, input? And don't bother speculating that I'm crazy, I already know that. 🤣 😵

Reply 1 of 61, by Tetrium

Rank: l33t++

I was going to speculate you were crazy, but now you left us with nothing to discuss 🤣

Anyway, your system looks pretty mindblowing! I'm not sure, but from several forums I've learned that if you want backwards compatibility for games, then Nvidia was the way to go.
Don't even the current Nvidia cards work fine with DX9? I'm not so sure about the AMD ones though (and the problem is that I recently bought one, and afterwards I read its driver support is somewhat terrible atm).

What's missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 2 of 61, by kithylin

Rank: l33t
Tetrium wrote:

I was going to speculate you were crazy, but now you left us with nothing to discuss 🤣

Anyway, your system looks pretty mindblowing! I'm not sure, but from several forums I've learned that if you want backwards compatibility for games, then Nvidia was the way to go.
Don't even the current Nvidia cards work fine with DX9? I'm not so sure about the AMD ones though (and the problem is that I recently bought one, and afterwards I read its driver support is somewhat terrible atm).

I'm currently at a draw between 2-way GTX295 (quad-SLI), 3-way GTX-285, or 3-way GTX-480. Still not sure yet which would result in the best DX9 performance, probably the 480's I'd lean towards.

Likely aiming towards 1920x1080 monitor with heavy AA. And likely using whatever older drivers are necessary, something around that period. Probably vista too. I don't know very much about nvidia around that time frame though, I was all AMD back then and sailed over like 5 nvidia families... so.. my knowledge is kinda limited in that area.

Reply 3 of 61, by obobskivich

Rank: l33t

Some thoughts from owning such a machine:

- On the motherboards:

  • For LGA 775 the king-of-the-world for nVidia is the nForce 790i Ultra, which supports DDR3. There are only a few boards I'm aware of that ship with that chipset - eVGA and XFX make two of them. The nForce 780i is basically the same chip, but supports DDR2 and doesn't officially support FSB1600, and there are more board options (I don't remember the 790 being on the market that long). However I've never heard consistently good things about nForce boards for Intel (600 or 700 series); I hear they're more of a mixed bag. The few that I've worked on in the past pretty much lived up to that - sometimes they work, sometimes they belly flop.

    You might consider switching over to AMD if you really want nForce and SLI - there's some various nifty nForce boards from the 2006-2008 period (remember that with AMD CPUs, the age of the board has less bearing on things, as long as the CPU and board socket are compatible) - the AMD QuadFX system for example is pretty slick, and the Asus board that supports it is very nice. Along with the nForce 900 series boards that support all of the SLI and other nVidia functionality (if you want the entire "system management" package and all that). Gives you an abundance of CPU options too.

    Alternatively, if you don't mind spending a reasonable amount more money, track down an Intel D5400XS. It supports all CF and SLI modes in one, and is generally a very snazzy board. It loads either a pair of Xeons or (as marketed by Intel) the Core 2 Extreme QX9775 (among the fastest LGA 771 chips ever made); in 2008 it was better than $3000 for the CPUs and around $600 for the board. Such a system *will* need a 1200W PSU if you want multiple graphics cards (along with the dual CPUs and all that). As far as performance goes, Guru3D showed that it did next to nothing for games compared to the X48 and higher-tier Core 2 Quad chips, but it will give you a lot of options for expansion.

- On the graphics cards:

  • PhysX is tied to nVidia if you're doing it via GPU (to the point that you couldn't just drop a GeForce in your current setup and enable it), but what about the Aegia cards? (I don't know if this would be a workable solution; but it might be worth looking into). If the answer is "no" and you have to go nVidia, then I'd look at one of a few cards: the GTX 280 or 285, or the 8800 GTX or Ultra. In general I'd steer away from dual-GPU cards (having owned one) because their biggest "catch" is you can never disable the multi-GPU operation (and some games really just don't like multi-GPU), whereas having multiple single GPU boards gives you more flexibility (it also means you can run more monitors if you want; I'm pretty sure nVidia supports (3D) Vision Surround on anything GeForce 8 or higher (the Quadros support Mosaic, which is similar)). It also has benefits for cooling and noise (basically there's absolutely no downsides to just having multiple single cards, budget and physical space aside).

    GTX 285 is the "top of the mark" as a single GPU board pre-Fermi (it should be generally a few % faster than HD 4890 as well; although that PowerColor you've got in there will give it a serious run for its money), and a very powerful board in its own right. The GTX 280 is a bit older, a bit slower, and uses a bit less power. The 8800's are also very respectable, although not quite as powerful - the 8800 GTX may save you some money though. Anything lower than the GTX won't do Tri-SLI.

    As far as "3 512-bit interfaces" - it won't work like that. With multi-GPU setups you only get to count the memory of a single card, because that's how it's presented to the drivers/system/application. Objects are duplicated across the cards' buffers, not spanned (so having 3x1GB is probably more powerful than one of the 1GB cards, but it isn't the same as a 3GB card). The performance returns in Tri-SLI will usually be minimal compared to normal SLI; the big "gain" is that it will limit/curb the micro-stutter phenomenon (Tom's Hardware has an article (with measurements) on this). Performance hang-ups will occur in situations where the application or card is CPU bound, where driver/application enhancements are unavailable or impossible, and where memory is a bottleneck.

    If you're going to stick with AMD, the HD 4890 is what I'd suggest, hands down. They're faster than the 4870, and more power efficient to boot. You can run up to four of them if the motherboard will support it - again the primary advantage going beyond 2 is to curb the micro-stutter phenomenon. It will also make AA cheaper (with 3 GPUs I had no issues running 24x AA for most games); my understanding is nVidia has a similar feature for GeForce (I know Quadro FX does).

    If memory serves, the GTX 200 series is not capable of running PhysX and 3D all at once (be it in SLI or as a single card), so you'd need a dedicated GPU for that. It doesn't have to be as powerful as what you're running for 3D; a mid-level GeForce 8 or 9 would be a good choice. Like the 8600GTS or 9600GT. This may have changed with newer drivers or updates though.
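The memory-duplication point a few lines up can be illustrated with a quick sketch (the card count and the 1GB buffer size are hypothetical examples, not a claim about any specific board):

```python
# In SLI/CrossFire (AFR-style rendering), every GPU keeps its own copy of
# textures and buffers, so usable VRAM is that of ONE card, not the sum.
def vram_summary_mb(per_card_mb, num_cards):
    installed = per_card_mb * num_cards  # what the spec sheets add up to
    usable = per_card_mb                 # what the game can actually address
    return installed, usable

installed, usable = vram_summary_mb(1024, 3)  # e.g. three 1GB GTX 285s
print(installed, usable)  # 3072 1024
```

So a "3GB" triple setup still behaves like a 1GB card as far as texture budget goes, even though it has three times the fill-rate behind it.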

On the games and support:

  • - Morrowind is not a DirectX 9 game. It runs terribly under Windows Vista/7 as well (I'm currently in the process of building an XP machine primarily because of Morrowind). Under Windows XP I had no issues with it on multi-GPU setups (like Oblivion, it does benefit, and doesn't seem to care if it's run on 1, 2, 20, whatever GPUs), beyond its normal propensity to crash.

    - I've had no issues with DX9 game support with 4870X2 or 4890 under Windows 7 (any issues are tied to a specific game or bad configuration settings), and under Windows XP I had minimal issues with DX8 (usually just had to set a frame limiter if anything). DX8 under Windows 7 seems to be very hit and miss (usually miss) - a lot of things won't install or won't run (or won't run stable), or if they do, they perform terribly (for example Empire Earth is a DX8 game, and installs and runs in 7 with compatibility settings enabled, but the performance is AWFUL despite the machine's capabilities).

    - Multi-GPU systems will have compatibility squabbles with some games. The worst offenders I've yet encountered are Skyrim (ironic, since Oblivion works beautifully with multiple GPUs, and was a poster child for both nVidia and S3 for multi-GPU scaling and performance) and Mass Effect 2. Skyrim's menus and overlays will flicker on multi-GPU systems (some videos on YT also show graphical corruption happening - I've never experienced it that severely, but it certainly is very annoying). Mass Effect 2 became very unstable and exhibited a "hall of mirrors" effect on a number of textures/shaders on triple-GPU, was relatively unstable on dual-GPU, and will run 24x7 and look flawless on a single GPU. The most common issue I've observed with multiple GPUs is flickering or tearing (not as severe as Skyrim).

    I'm not saying that multi-GPU is "bad" - most other games will run very well with it. Oblivion, for example, is a model citizen. BioShock also seems to work quite nicely. The "catch" in my experience is just to have a system where you can disable the multi-GPU features down to a single GPU for games that end up having problems, and to ensure that the single GPU is still powerful enough to handle the game. That's where something like the HD 4890, GTX 280, etc. comes in.

Other stuff to consider:

- You may consider adding more RAM if this machine is going to be used as an every-day system on top of gaming, at least if you're going with a 64-bit operating system that will support it.

- There are various peripherals from around that time that were "hip" to have on a top-end machine, like the Sound Blaster X-Fi and Philips amBX (it's very nifty for Oblivion!).

- If you have the time to sink into it looking for parts, why not do a complete ESA SLI+PhysX build? You'd absolutely have to go with an nVidia based motherboard (and I think it has to come off of a list of ESA hardware at that), but it'd give you basically every possible option, menu, sub-menu, etc that can be enabled in the nVidia control panel, and be pretty slick to boot. More about it: http://hothardware.com/articles/NVIDIA_ESA__E … m_Architecture/

- You shouldn't need a 1200W PSU for twin 4890s (they use less than 400W flat out; most of the time closer to 300W); I have an 860W in mine, and ran 3-GPU with a 4870X2 and 4890 with no problems. PC Power & Cooling said it shouldn't be a problem with two 4870X2s either, as long as I wasn't tasking it with dual Xeons or a dozen hard drives as well. As far as going with PC Power - the TurboCool units are fantastic imho, but with OCZ's fate up in the air, you might want to consider another manufacturer that isn't potentially going to vanish in the next six months (on the other hand, you might be able to get a PSU on the cheap if they do go under). Seasonic or Corsair would be good choices.

kithylin wrote:

I'm currently at a draw between 2-way GTX295 (quad-SLI), 3-way GTX-285, or 3-way GTX-480. Still not sure yet which would result in the best DX9 performance, probably the 480's I'd lean towards.

Likely aiming towards 1920x1080 monitor with heavy AA. And likely using whatever older drivers are necessary, something around that period. Probably vista too. I don't know very much about nvidia around that time frame though, I was all AMD back then and sailed over like 5 nvidia families... so.. my knowledge is kinda limited in that area.

The pragmatist in me wants to tell you to just get a single GTX 285 and be done with it; at 1080p anything else won't be noticeable for the games you want to play. But I understand what you're trying to do here, the point is excess, so I'd say go with 2 or 3 and be happy. The GTX 480 is kind of like the "modern day" FX 5800 Ultra - it runs very loud, very hot, and sucks down a ton of power. Here's a guru3d review (about the 580) that provides some insight into that: http://www.guru3d.com/articles_pages/geforce_ … 0_review,1.html

The "generations" for nVidia went:

GeForce 8 -> GeForce 9. There isn't much difference there: the 9800GT/GTX are derived from the second-generation GeForce 8800GTS (the G92); they're clocked higher, but otherwise similar.

Then came GTX 200 series, which are different at the top-end. The mid-range GTS 250 is G92 based.

From there we had Fermi (400/500), and then Kepler (600/700).

For DirectX 9, you usually won't see much benefit from the DX11-era hardware, in some cases it actually loses out to the older DX10 parts. Most of the optimizations since around the GTX 285/GTX 480 time-frame have been towards improving GPU Computing performance, power efficiency (ha!), and supporting DirectX 11. You can see this with AMD too - the HD 5870 is a performance improvement over the 4870, but the 6870 is generally a step back (from the 5870) for DirectX 9 games.

The games you're talking about, however, will generally run pretty seriously maxed out on final-era DX9 hardware, like the 7900GTX. I'm assuming, however, that you'll probably move into later DX9 games like Mass Effect, Fallout 3, and Skyrim at some point, so the 8800/200 series is a good place to be. The 8800 will fulfill system requirements for any DirectX 9 game yet released, and the 200 will be more powerful (and if you go with the GTX 280, use less power too).

Reply 4 of 61, by vetz

Rank: l33t

This thread gives me a smile.

I still use an LGA775 system as my main computer (Asus P5Q-Pro, Intel Core 2 Quad Extreme Q9650, 2x XFX 7970 in CrossFire, 8GB DDR2-1066 RAM, SSD), and now you and other threads are talking about the LGA775 platform like it's retro... I guess it just shows the huge versatility of the platform.

I can run BF4 on Ultra on that setup and score 10,000 3DMarks in the newest version, so I don't see any reason to call it "retro" just yet, even though the board is from 2008 (Intel P45).

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 5 of 61, by Skyscraper

Rank: l33t

Nice!
I have a system just like this!

E8600 @ 4500 MHz. No water cooling, but a Thermalright Ultra-120 Extreme.
Gigabyte GA-X48-DS4.
4GB DDR2 memory @ ~1066 MHz (5 5 5 15), sometimes 2GB @ 1200+ MHz (5 5 5 15) if I am benching.
HD 4870 or HD 5770 Crossfire (the 5770 cards are in the system at the moment).
80 GB X25-M G2 Intel SSD with Windows 7.
80 GB Samsung SP0802N HDD with Windows XP 32bit.
Fractal Design Newton R2 1000W

I agree that these systems are not retro yet.
They are however very fun to play around with.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 6 of 61, by nforce4max

Rank: l33t

I used to have the XFX 780i and, needless to say, it was a very power hungry board; plus the DDR2 memory controller just couldn't cope with the CPU, even at very high speeds. The 790i is a different beast and is noticeably improved, but it's not the same chipset as the 780i, and it has even higher power consumption. The south bridge on these boards had been around for years, since the nForce 4 days, and dumps some heat on its own. In general these Nvidia boards are close to Intel boards in performance, but they are not as simple to work with. The LLC on these boards isn't that great and causes a lot of fuss when doing heavy overclocking.

On a far away planet reading your posts in the year 10,191.

Reply 7 of 61, by kithylin

Rank: l33t

Quite a lot of interesting points here and I'll respond to them in sections.

obobskivich wrote:

Some thoughts from owning such a machine:
- On the motherboards:

  • For LGA 775 the king-of-the-world for nVidia is the nForce 790i Ultra, which supports DDR3. There are only a few boards I'm aware of that ship with that chipset - eVGA and XFX make two of them. The nForce 780i is basically the same chip, but supports DDR2 and doesn't officially support FSB1600, and there are more board options (I don't remember the 790 being on the market that long). However I've never heard consistently good things about nForce boards for Intel (600 or 700 series); I hear they're more of a mixed bag. The few that I've worked on in the past pretty much lived up to that - sometimes they work, sometimes they belly flop.

    You might consider switching over to AMD if you really want nForce and SLI - there's some various nifty nForce boards from the 2006-2008 period (remember that with AMD CPUs, the age of the board has less bearing on things, as long as the CPU and board socket are compatible) - the AMD QuadFX system for example is pretty slick, and the Asus board that supports it is very nice. Along with the nForce 900 series boards that support all of the SLI and other nVidia functionality (if you want the entire "system management" package and all that). Gives you an abundance of CPU options too.

    Alternatively, if you don't mind spending a reasonable amount more money, track down an Intel D5400XS. It supports all CF and SLI modes in one, and is generally a very snazzy board. It loads either a pair of Xeons or (as marketed by Intel) the Core 2 Extreme QX9775 (among the fastest LGA 771 chips ever made); in 2008 it was better than $3000 for the CPUs and around $600 for the board. Such a system *will* need a 1200W PSU if you want multiple graphics cards (along with the dual CPUs and all that). As far as performance goes, Guru3D showed that it did next to nothing for games compared to the X48 and higher-tier Core 2 Quad chips, but it will give you a lot of options for expansion.

I currently already own an Intel DX48BT2 "Bonetrail" LGA775 motherboard. I originally bought it cheap for $30 off eBay (new old stock, bulk pack) and thought I might use it for a while for this plan, since it uses DDR3. But once I got it here and had a lot of time to spend with it, I found it's -VERY- poor for overclocking: my 6MB Wolfdale chip will not get stable above 3.6 GHz in this board no matter what I do with it, even with water cooling. On the other hand, in my Gigabyte GA-EP45-UD3P motherboard the CPU just dials right up to 4.5 GHz with no hesitation and runs stable at 1.45v vcore without a problem. So after this experience, I'm never touching another Intel motherboard again.

Also of note: while I appreciate the idea, it's pretty much not feasible to water cool a dual-CPU system plus 3 hot nvidia cards. I've run the estimates, and even a 12MB LGA775 quad core is looking to be around 400 watts of heat at 4.4 GHz, not counting what the video cards dump into the cooling system (a pair of GTX 295s would be about 560 watts more heat, for example). If I did all that I'd have to get into 1500 watt power supplies and massively huge 19-inch radiators. So to be realistic, and not go -totally insane- here, I want to stick to a single-CPU system.

Then of course it boils down to performance and price. I've looked, and a 12MB Wolfdale quad plus a non-OEM 780i motherboard would be close to $350 (I can't find a 790i board anywhere), while an entry-level i7 with an X58 motherboard would accomplish the same task, probably end up a little faster, and can be acquired for about $200. So for performance vs. price, I'll probably end up building another X58 system, even though I think I'd kinda prefer a 775 system somewhere.

Also about power supplies: an LGA775 quad @ 12MB @ 4 GHz will pull around 350 watts just for the CPU; add in something like 3 x GTX 480s (around 250 watts each) and I'd be looking at 1100 watts just for the board + GPUs, before whatever losses the power supply's efficiency adds at the wall. So I'm pretty much right on target with a 1200-1250 watt unit; generally we want the power supply rated about 10% above max load. It's not healthy to run a power supply at 100% load at all times.
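That sizing arithmetic can be sketched quickly (the wattage figures are the estimates quoted in this post, not measurements):

```python
# Rough PSU sizing sketch using this post's estimated figures.
CPU_WATTS = 350   # overclocked 12MB LGA775 quad @ ~4 GHz (estimate)
GPU_WATTS = 250   # per GTX 480 (estimate)
NUM_GPUS = 3
HEADROOM = 0.10   # keep the rating ~10% above max load

dc_load = CPU_WATTS + NUM_GPUS * GPU_WATTS      # total estimated DC load
recommended = dc_load * (1 + HEADROOM)          # minimum PSU rating to aim for
print(dc_load, round(recommended))              # 1100 1210
```

Which is how the 1100 W estimate lands on a 1200-1250 W class unit.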

obobskivich wrote:

- On the graphics cards:

  • PhysX is tied to nVidia if you're doing it via GPU (to the point that you couldn't just drop a GeForce in your current setup and enable it), but what about the Aegia cards? (I don't know if this would be a workable solution; but it might be worth looking into). If the answer is "no" and you have to go nVidia, then I'd look at one of a few cards: the GTX 280 or 285, or the 8800 GTX or Ultra. In general I'd steer away from dual-GPU cards (having owned one) because their biggest "catch" is you can never disable the multi-GPU operation (and some games really just don't like multi-GPU), whereas having multiple single GPU boards gives you more flexibility (it also means you can run more monitors if you want; I'm pretty sure nVidia supports (3D) Vision Surround on anything GeForce 8 or higher (the Quadros support Mosaic, which is similar)). It also has benefits for cooling and noise (basically there's absolutely no downsides to just having multiple single cards, budget and physical space aside).

I wanted to comment right here first: with the NVIDIA Inspector program you -can- disable SLI mode on a dual-GPU card for certain games if necessary. It is possible; I have a friend with a 9800 GX2 who's done it and shown me. It's just a bit more "advanced" and most users don't know how to fiddle with it (no insult intended to you personally, I'm just generalizing).

obobskivich wrote:

GTX 285 is the "top of the mark" as a single GPU board pre-Fermi (it should be generally a few % faster than HD 4890 as well; although that PowerColor you've got in there will give it a serious run for its money), and a very powerful board in its own right. The GTX 280 is a bit older, a bit slower, and uses a bit less power. The 8800's are also very respectable, although not quite as powerful - the 8800 GTX may save you some money though. Anything lower than the GTX won't do Tri-SLI.

As far as "3 512-bit interfaces" - it won't work like that. With multi-GPU setups you only get to count the memory of a single card, because that's how it's presented to the drivers/system/application. Objects are duplicated across the cards' buffers, not spanned (so having 3x1GB is probably more powerful than one of the 1GB cards, but it isn't the same as a 3GB card). The performance returns in Tri-SLI will usually be minimal compared to normal SLI; the big "gain" is that it will limit/curb the micro-stutter phenomenon (Tom's Hardware has an article (with measurements) on this). Performance hang-ups will occur in situations where the application or card is CPU bound, where driver/application enhancements are unavailable or impossible, and where memory is a bottleneck.

If you're going to stick with AMD, the HD 4890 is what I'd suggest, hands down. They're faster than the 4870, and more power efficient to boot. You can run up to four of them if the motherboard will support it - again the primary advantage going beyond 2 is to curb the micro-stutter phenomenon. It will also make AA cheaper (with 3 GPUs I had no issues running 24x AA for most games); my understanding is nVidia has a similar feature for GeForce (I know Quadro FX does).

Actually there's been a lot of research done online, and by me and some of my friends. With Tri-SLI (and a game that supports it) and newer drivers, you can actually gain up to +90% for each additional card in the system -IF- you have enough CPU to push this sort of setup. The CPU is the real limiting factor, and it's why most people have come to think Tri-SLI doesn't scale well. In fact it scales perfectly fine with a fast CPU: at minimum a quad core @ 4 GHz is generally required to get the most out of it, and an i7 @ 4.5 would be better.
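A rough sketch of what that scaling claim works out to (the ~90% per-card figure is this post's estimate, and the 60 FPS baseline is an illustrative assumption, not a benchmark):

```python
# Hypothetical SLI scaling estimate: each extra card adds up to ~90% of one
# card's performance, assuming the CPU can keep all the GPUs fed.
def effective_gpus(num_cards, per_card_gain=0.90):
    """First card counts fully; each additional card adds per_card_gain."""
    return 1 + (num_cards - 1) * per_card_gain

base_fps = 60  # hypothetical single-card frame rate
for cards in (1, 2, 3):
    print(cards, round(base_fps * effective_gpus(cards)))  # 60, 114, 168
```

In a CPU-bound game the per-card gain collapses towards zero, which is the "doesn't scale" experience most people report.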

obobskivich wrote:

If memory serves, the GTX 200 series is not capable of running PhysX and 3D all at once (be it in SLI or as a single card), so you'd need a dedicated GPU for that. It doesn't have to be as powerful as what you're running for 3D; a mid-level GeForce 8 or 9 would be a good choice. Like the 8600GTS or 9600GT. This may have changed with newer drivers or updates though.

Technically, any GPU with CUDA cores and at minimum 256 MB of onboard memory is capable of running PhysX and 3D at the same time; this is a specification listed on the nvidia website somewhere.

obobskivich wrote:

On the games and support:

  • - Morrowind is not a DirectX 9 game. It runs terribly under Windows Vista/7 as well (I'm currently in the process of building an XP machine primarily because of Morrowind). Under Windows XP I had no issues with it on multi-GPU setups (like Oblivion, it does benefit, and doesn't seem to care if it's run on 1, 2, 20, whatever GPUs), beyond its normal propensity to crash.

    - I've had no issues with DX9 game support with 4870X2 or 4890 under Windows 7 (any issues are tied to a specific game or bad configuration settings), and under Windows XP I had minimal issues with DX8 (usually just had to set a frame limiter if anything). DX8 under Windows 7 seems to be very hit and miss (usually miss) - a lot of things won't install or won't run (or won't run stable), or if they do, they perform terribly (for example Empire Earth is a DX8 game, and installs and runs in 7 with compatibility settings enabled, but the performance is AWFUL despite the machine's capabilities).

The issue really isn't present with any family of AMD cards, newest to oldest; the main problem is with nvidia cards starting with the GTX 500 series and newer, as I described in my original post. However, I do want to stick with nvidia: I want NVIDIA Inspector, and PhysX (for some games that support it). So... thanks for the comments on AMD, but I'm still sticking with nvidia 😁 And I'll most likely be using Vista for this build, which should eliminate most of the Win7 compatibility problems.

obobskivich wrote:

- Multi-GPU systems will have compatibility squabbles with a some games. The worst offenders I've yet encountered are Skyrim (ironic, since Oblivion works beautifully with multiple GPUs, and was a poster-child for both nVidia and S3 for multi-GPU scaling and performance) and Mass Effect 2. Skyrim's menus and overlays will flicker on multi-GPU systems (some videos on YT also show graphical corruption happening - I've never experienced it that severely, but it certainly is very annoying). Mass Effect 2 became very unstable and exhibited a "hall of mirrors" effect on a number of textures/shaders on triple-GPU, was relatively unstable on dual-GPU, and will run 24x7 and look flawless on a single GPU. The most common issue I've observed with multiple GPUs is flickering or tearing (not as severely as Skyrim).

This is another subject worth touching on briefly. Actually, a tri-GPU setup is usually fine (but not always; there are some games it will still screw up on). The thing with tri-SLI is that all three video cards have to be an exact 100% match in clock speeds, at all times. This is a big cause of tri-SLI stability issues: it's common for people to "mix and match" different brands, and then the clocks don't sync and there you have problems. I'm aware of this and prepared to force the cards to the same clocks, either with software or by re-programming their BIOSes. Skyrim works great in multi-GPU; even a pair of 4890's runs it perfectly fine at lower settings with AA off. I've played it on this system with the dual core and the 4890's, and as long as I run with AA off and no mods, it runs a smooth, solid 60 FPS everywhere with no micro-stutter. I have my AMD 4890's BIOSes reflashed to match clocks, though.

obobskivich wrote:

I'm not saying that multi-GPU is "bad" - most other games will run very well with it. Oblivion, for example, is a model citizen. BioShock also seems to work quite nicely. The "catch" ime is just to have a system where you can disable the multi-GPU features down to a single GPU for games that end up having problems, and to ensure that the single GPU is still powerful enough to handle the game. That's where something like the HD 4890, GTX 280, etc comes in.

Thanks for the comments. I was kind of leaning towards a GTX-480 for this reason: if I encounter games that won't run on multi-GPU (and won't respond to it being forced on), then at least that would be a powerful single-GPU option.

obobskivich wrote:

Other stuff to consider:

- You may consider adding more RAM if this machine is going to be used as an every-day system on top of gaming, at least if you're going with a 64-bit operating system that will support it.

I have a spare set of Mushkin DDR3-2400 4GB sticks here (12 GB total). If I can find a DDR3 system later, that'll be at least 8 GB in it, maybe 12 with an i7 system, so memory shouldn't be an issue.

obobskivich wrote:

- There are various peripherals from around that time that were "hip" to have on a top-end machine, like the Sound Blaster X-Fi and Philips amBX (its very nifty for Oblivion!).

I've never heard of the Philips amBX; I'll research it later today.

obobskivich wrote:

- If you have the time to sink into it looking for parts, why not do a complete ESA SLI+PhysX build? You'd absolutely have to go with an nVidia based motherboard (and I think it has to come off of a list of ESA hardware at that), but it'd give you basically every possible option, menu, sub-menu, etc that can be enabled in the nVidia control panel, and be pretty slick to boot. More about it: http://hothardware.com/articles/NVIDIA_ESA__E … m_Architecture/

- You shouldn't need a 1200W PSU for twin 4890s (they use less than 400W all-in; most of the time closer to 300W); I have an 860W in mine, and had 3-GPU with a 4870X2 and 4890 and had no problems. PC Power & Cooling said it shouldn't be a problem with 2 4870X2s either, as long as I wasn't tasking it with dual Xeons or a dozen hard-drives as well. As far as going with PC Power - the TurboCool units are fantastic imho, but with OCZ's fate up in the air, you might want to consider another manufacturer that isn't potentially going to vanish in the next six months (on the other hand, you might be able to get a PSU on the cheap if they do go under). Seasonic or Corsair would be good choices.

I put the power supply comments earlier above in this post, but I suspect you probably weren't overclocking the CPU past 4 GHz then. Once you start doing that and getting into the range of the CPU alone pulling 300-400 watts, you get into totally different power requirements if you still want powerful GPUs mated to it. This 700 watt Corsair I have will run around 730 watts gaming and shut down after a few seconds, even with just a pair of 4890's and this dual core @ 4.5 GHz. So even this isn't enough. Ideally I'd like the power supply to be about +100 to +200 watts above what I actually use: better life for the power unit, and if you can push your load towards the middle of its rated maximum, you can hit the power supply's maximum efficiency % and do better on consumption @ the outlet.

obobskivich wrote:
kithylin wrote:

I'm currently at a draw between 2-way GTX295 (quad-SLI), 3-way GTX-285, or 3-way GTX-480. Still not sure yet which would result in the best DX9 performance, probably the 480's I'd lean towards.

Likely aiming towards 1920x1080 monitor with heavy AA. And likely using whatever older drivers are necessary, something around that period. Probably vista too. I don't know very much about nvidia around that time frame though, I was all AMD back then and sailed over like 5 nvidia families... so.. my knowledge is kinda limited in that area.

The pragmatist in me wants to tell you to just get a single GTX 285 and be done with it; at 1080p anything else won't be noticeable for the games you want to play. But I understand what you're trying to do here, the point is excess, so I'd say go with 2 or 3 and be happy. The GTX 480 is kind of like the "modern day" FX 5800 Ultra - it runs very loud, very hot, and sucks down a ton of power. Here's a guru3d review (about the 580) that provides some insight into that: http://www.guru3d.com/articles_pages/geforce_ … 0_review,1.html

Thanks for this information, I'll get into reading it later. I really doubt a single GTX-285 could do much with 1080p @ 16xAA for most games, even with a fast CPU behind it. I'll read into some things, though. Just a note: most reviews of the GTX-500 series don't include DX9 performance.

obobskivich wrote:
The "generations" for nVidia went: […]
Show full quote

The "generations" for nVidia went:

GeForce 8 -> GeForce 9. There isn't much difference there, the 9800GT/GTX are derived from the second-generation GeForce 8800GTS (the G92), they're clocked higher, but otherwise similar.

Then came GTX 200 series, which are different at the top-end. The mid-range GTS 250 is G92 based.

From there we had Fermi (400/500), and then Kepler (600/700).

For DirectX 9, you usually won't see much benefit from the DX11-era hardware, in some cases it actually loses out to the older DX10 parts. Most of the optimizations since around the GTX 285/GTX 480 time-frame have been towards improving GPU Computing performance, power efficiency (ha!), and supporting DirectX 11. You can see this with AMD too - the HD 5870 is a performance improvement over the 4870, but the 6870 is generally a step back (from the 5870) for DirectX 9 games.

The games you're talking about, however, will generally run pretty seriously maxed out on final-era DX9 hardware, like the 7900GTX. I'm assuming, however, that you'll probably move into later DX9 games like Mass Effect, Fallout 3, and Skyrim at some point, so the 8800/200 series is a good place to be. The 8800 will fulfill system requirements for any DirectX 9 game yet released, and the 200 will be more powerful (and if you go with the GTX 280, use less power too).

I've seen this "optimization push" as well, where "both sides of the fence" (AMD & Nvidia) started significantly decreasing DX9 performance and tuning their cards to be faster in DX10 & DX11. So, since I don't have a lot of experience with Nvidia in this time frame, I'm hoping some here can chime in and help me decide on the fastest DX9 card from Nvidia before these "optimizations" for DX10 started hitting card families.

Reply 8 of 61, by maximus

User metadata
Rank Member
Rank
Member

Misleading thread title. In my mind, the Ultimate DirectX 9 Setup is a machine with no DirectX 10 or 11 capabilities. So something with either 7900 GTX SLI or X1950 XTX Crossfire (I think the ATI cards would outperform Nvidia cards by a small margin).

That said, it looks like an awesome build 😀

Can't believe how affordable this last-gen hardware has gotten.

PCGames9505

Reply 9 of 61, by kithylin

User metadata
Rank l33t
Rank
l33t
maximus wrote:

Misleading thread title. In my mind, the Ultimate DirectX 9 Setup is a machine with no DirectX 10 or 11 capabilities. So something with either 7900 GTX SLI or X1950 XTX Crossfire (I think the ATI cards would outperform Nvidia cards by a small margin).

That said, it looks like an awesome build 😀

Can't believe how affordable this last-gen hardware has gotten.

I know.. it's kinda crazy how cheap some of this stuff is today. If I went minimalist and just popped a quad core chip in here and stuck with the 4890's, we can get Q6600 Kentsfield quad chips for about $40 - $50 today. My E3110 Xeon that I'm currently using in here @ 4.5 GHz was only $19 about 8 months ago, and it's probably one of (if not the) fastest dual cores for 775: a 6MB cache Wolfdale chip. There's a 12MB quad core, which is essentially two of these dual cores on a die together, for about $75 for a cheap one; the one I was looking at, with a 9x multiplier to reach 4.5 GHz, was around $110 - $125.

Heck, even board + CPU i7-920 + X58 systems can be had for $180 - $200 today for just the bare board + CPU, and they're actually pretty darn speedy little things in their own right.

And I'm sorry if it seems misleading, but some of the DX9/DX10 cards are actually some of the fastest at doing DX9 stuff, even if they're not "actually DX9 only" cards. If I got one of those, I wouldn't be using the DirectX-10 aspect, though. Probably like.. ever.

EDIT: I may end up getting a trio of GTX-285's after all; they're only $70-$105 each on eBay, and there's a bunch of EVGA ones that appear to match, so getting a matched trio may not be too hard. There are water blocks for 'em for about $50 each too. I'm still not too sure, just thinking at the moment. With my income it will probably take me about 6-7 months to save up for the power supply, then work on the case, then the video cards. I may turn this thread into a "build thread" if there's any interest in here and update it as time goes.

Reply 10 of 61, by obobskivich

User metadata
Rank l33t
Rank
l33t
kithylin wrote:

I currently already own an Intel DX48BT2 "Bonetrail" LGA775 motherboard. I had originally bought it cheap for $30 off eBay (new old stock, bulk pack) and thought I might use it for a while for this plan, as it used DDR3. But ultimately, once I got it here and had a lot of time to spend with it, I found it's -VERY- poor for overclocking: my 6MB Wolfdale chip will not get stable above 3.6 GHz in this board, no matter what I do with it, even with water cooling. On the other hand, in my Gigabyte GA-EP45-UD3P motherboard, the CPU dials right up to 4.5 GHz with no hesitation and runs stable at 1.45v CPU vcore without a problem. So in general, after this experience, I'm never touching another Intel motherboard, ever again.

Also of note: while I appreciate the idea, it's pretty much not feasible to water cool a dual-CPU system + 3 hot Nvidia cards. I've run the estimates, and even a 12MB LGA775 quad core is looking to be around 400 watts of heat @ 4.4 GHz, not counting what the video cards dump into the cooling system (a pair of GTX-295's would be about 560 watts more heat, for example). If I did all that I'd have to get into 1500 watt power supplies and massively huge 19-inch radiators. So to be realistic and not go -totally insane- here, I want to stick to a single-CPU system.

Then of course it boils down to performance and price. I've looked, and a 12MB Wolfdale quad + a non-OEM 780i motherboard would be close to $350 (I can't find a 790i board anywhere). An entry level i7 with an X58 motherboard would accomplish the same task, probably end up a little faster, and can be acquired for about $200. So for performance vs price, I'll probably end up building another X58 system, even though I think I'd kinda prefer a 775 system somewhere.

Intel boards aren't for overclocking - they're very stable though. Should've mentioned that. 😊 (I usually don't bother OC'ing systems; as you caught later on 🤣).

If you want to OC, you'll have to go elsewhere. Gigabyte's X58 boards are probably a good choice in that respect. If you want to go with 775 you'll have to find an nVidia board *or* ditch SLI (or use the hacks to enable it on X38/X48 - that's also an option too I guess).

Also - where in the world are you getting these power ratings from? Core 2 Quad with 12MB is around 100W TDP depending on model; calculations show it being maybe 200W if overclocked to 4GHZ.

As far as the not feasible with heat/cooling - you did say don't worry about that in mentioning hardware. 😊 🤣

Also, about power supplies: an LGA775 quad with 12MB cache @ 4 GHz will pull around 350 watts just for the CPU. Then add in something like 3 x GTX-480's (around 250 watts each), and I'd be looking at 1100 watts just for the board + GPUs, not factoring in the efficiency model of whatever power supply I use, which will probably knock that down a lot. So I'm pretty much right on target with a 1200-1250 watt unit; generally we want the power supply to be about +10% above max load. It's not healthy to run a power unit @ 100% load at all times.

Power draw numbers for the CPU don't look right. The graphics cards are 250W/ea max - with a trio of them you should actually probably go with something more like 1500W (although if you're keeping that 250W Tt it could probably handle one of them to let you "get away" with a 1200W unit). Efficiency doesn't "knock it down" - it means extra draw at the wall and more heat. e.g. if your PSU is 80% efficient, and you need 960W through it, it will put out 960W and draw 1200W at the wall and express the difference as heat. That's why I said the TurboCool units are getting long in the tooth (because they are around 80% efficient); newer units will be closer to 90% (like the Seasonic you mentioned earlier), which means less heat to deal with.

As far as where a PSU should be run - generally the peak of the efficiency curve is between 40-60% (based on tests from TechReport and JonnyGuru, and published figures from Corsair, PC Power, and Antec). You probably won't be able to accomplish this with a trio of GTX 480s on a single PSU though. But you are dead-on that you generally don't want to see 100% loading (PC Power claims their units can survive that, but I'd still rather not test it out). Have you considered a multi-PSU system? (again, you did say "max out" 😎)

Try this out for calculating your power needs:
http://extreme.outervision.com/psucalculatorlite.jsp (remember to add in capacitor aging at the bottom)

You might also consider gutting some various components off the system's PSU - like all of the cooling can be run separately (this won't be huge power savings, but it's at least something).
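The wall-draw arithmetic above can be sketched in a few lines of Python. This is just a back-of-envelope illustration; the 960 W load and the 80%/90% efficiency figures are the example numbers from this discussion, not measurements of any particular unit:

```python
# Back-of-envelope PSU math: a supply delivering a given DC load draws more
# at the wall, and the difference is dissipated as heat inside the unit.
def wall_draw_w(dc_load_w, efficiency):
    at_wall = dc_load_w / efficiency
    waste_heat = at_wall - dc_load_w
    return at_wall, waste_heat

# The 960 W example through an 80%-efficient unit:
print(wall_draw_w(960, 0.80))   # (1200.0, 240.0) -> 1200 W at the outlet
# The same load through a ~90%-efficient unit:
print(wall_draw_w(960, 0.90))   # roughly 1067 W at the outlet, ~107 W of heat

# Loading fraction vs. the ~40-60% efficiency sweet spot:
print(960 / 1200)               # 0.8 -> well above the sweet spot on a 1200 W unit
```

Same point as above in numbers: the efficiency rating changes what you pull from the outlet and how much heat the PSU makes, not how big a unit you need for the components.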

Actually, there's been a lot of research done online, and by me and some of my friends. With tri-SLI (and a game that supports it) and newer drivers, you can actually gain up to +90% for each card in the system -IF- you have enough CPU to push this sort of setup. That's what the limiting factor actually is, and it's why most people have come to think tri-SLI doesn't scale well. In fact it scales perfectly fine with a fast CPU setup. At minimum, a quad core @ 4 GHz is generally required to get the most out of it; an i7 @ 4.5 would be better.

It depends on the game, the application, etc.

For example here's a Tom's comparison with GTX 480s:
http://www.tomshardware.com/reviews/geforce-g … re,2622-12.html

Performance gains are between 70 and 90% depending on the title (seems to more or less live up to your projections in some cases). The bigger advantage, however, will be the improved IQ that tri-SLI gives you.

Here's another comparison from Anand with 8800 Ultras:
http://www.anandtech.com/show/2403/3

Again in some cases you will realize the 70-90% performance improvements, but in others you won't see much at all. They also go on to argue that at 1080p or lower, the third GPU doesn't do much in terms of measured performance (it should still help the micro-stutter phenomenon based on Tom's data). They also test the "CPU bound" assertion; in some cases it holds true, but not in all cases. Really depends on the application at hand. With older games you probably will have a better time of it because you aren't dealing with backports as much.

Micro-stuttering article:
http://www.tomshardware.com/reviews/radeon-ge … sfire,2995.html
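The scaling and CPU-bound points above can be put into a toy model. All numbers here are hypothetical (the 60 FPS baseline, the 90% per-card scaling, and the 150 FPS CPU ceiling are made up just to show the shape of the argument):

```python
# Rough model of multi-GPU scaling: each extra card adds some fraction of a
# single card's throughput, and the CPU puts a ceiling on the whole thing.
def scaled_fps(single_gpu_fps, n_gpus, per_card_scaling, cpu_limit_fps):
    ideal = single_gpu_fps * (1 + (n_gpus - 1) * per_card_scaling)
    return min(ideal, cpu_limit_fps)

# Hypothetical numbers: 60 FPS on one card, 90% scaling per extra card,
# a CPU that tops out around 150 FPS.
print(scaled_fps(60, 2, 0.9, 150))  # ~114 FPS from the second card
print(scaled_fps(60, 3, 0.9, 150))  # 150 -> CPU-bound before the third card fully pays off
```

This is why the "tri-SLI doesn't scale" and "tri-SLI scales fine with a fast CPU" claims can both be true: the third card's ideal contribution can land above the CPU ceiling, so a faster CPU raises the ceiling and reveals the scaling.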

Technically, any GPU produced with CUDA cores and at minimum 256 MB of onboard memory is capable of running PhysX & 3D at the same time; this is a specification listed on the Nvidia website somewhere.

I'd still probably add another card (if your motherboard will support it), since you did say "max out." 😎 I vaguely remember the 8800/9800 series not being 100% up to the task of doing it all themselves, but the Fermi chips shouldn't have any problem, as you noted. Where I'm unsure is how the single-GPU 200 series cards would handle running PhysX and 3D at once. I looked for benchmarks briefly but couldn't find anything that directly answered that question; perhaps you'll have better luck.

Did find this FAQ though:
http://www.techpowerup.com/forums/threads/the … estions.135891/

The issue really isn't present with any family of AMD cards, newest to oldest; the main issue is with Nvidia cards starting with the GTX-500 series and newer, as I described in my original post. However, I do want to stick to Nvidia. I want Nvidia Inspector and PhysX (for some games that support it). So, thanks for the comments on AMD, but I'm still sticking with Nvidia 😁 And I'll most likely be using Vista for this build, which should eliminate most of the Win7 compatibility problems.

Wasn't trying to sway you away from nVidia (sorry if that was how it seemed); just providing information in response to both your, and tetrium's, comments regarding support/compatibility. With DX9 it shouldn't be a problem was my point, but DX8 games seem more hit and miss under Windows 7 and later (and I've identified this as an OS-related problem; because the same hardware running XP had no troubles). Hopefully Vista fares better in that respect.

This is another subject worth touching on briefly. Actually, a tri-GPU setup is usually fine (but not always; there are some games it will still screw up on). The thing with tri-SLI is that all three video cards have to be an exact 100% match in clock speeds, at all times. This is a big cause of tri-SLI stability issues: it's common for people to "mix and match" different brands, and then the clocks don't sync and there you have problems. I'm aware of this and prepared to force the cards to the same clocks, either with software or by re-programming their BIOSes. Skyrim works great in multi-GPU; even a pair of 4890's runs it perfectly fine at lower settings with AA off. I've played it on this system with the dual core and the 4890's, and as long as I run with AA off and no mods, it runs a smooth, solid 60 FPS everywhere with no micro-stutter. I have my AMD 4890's BIOSes reflashed to match clocks, though.

The drivers won't generally let it "mix and match" - they should automatically set everything to a common clock and settings (has always been my experience, with Radeon or GeForce cards). GeForce cards have much stricter requirements for SLI (you can't hook up a GTX 280 and GTS 250 for example). I'm surprised Skyrim doesn't run with higher settings on your system though; my single 4890 handles it with 4x/8x AA, maximum settings, and handles vsync no problem. 😕

I *did* forget to mention - the skyrim woes with multi-GPU are cured by turning AA off. Here's a video example:
http://www.youtube.com/watch?v=9oWnO7Wy2p8

I've never experienced it that seriously - just in menus and loading screens with 3-way CrossFire (with the 4870X2 it only happened in menus). I also didn't mean to imply that tri-GPU always has issues; just that it CAN (we really seem to be on the same page on most of these points by the way); there's plenty of games that load up and run very smooth. I had no problems with, for example, Fallout 3, Halo (at nearly 400 FPS even with 32x AA enabled 🤣 ), or Mass Effect 1.

Thanks for the comments. I was kind of leaning towards a GTX-480 for this reason: if I encounter games that won't run on multi-GPU (and won't respond to it being forced on), then at least that would be a powerful single-GPU option.

Depending on what games you want to play, the 280 or 285 should probably accomplish that as well - thinking of some DX9 games like Oblivion, UT3, Fallout 3, GTA San Andreas, Half-Life 2 Episode 2, and EYE Divine Cybermancy, a single 280 should be able to run them all. Multiple cards will give you better performance, features, etc, but the single card should *run*.

Price of the 280/285s is generally cheaper IME too (because they're older and not DX11 I'd guess). I've seen some of the 8800s as low as $50-$60 on eBay.

I put the power supply comments earlier above in this post, but I suspect you probably weren't overclocking the CPU past 4 GHz then. Once you start doing that and getting into the range of the CPU alone pulling 300-400 watts, you get into totally different power requirements if you still want powerful GPUs mated to it. This 700 watt Corsair I have will run around 730 watts gaming and shut down after a few seconds, even with just a pair of 4890's and this dual core @ 4.5 GHz. So even this isn't enough. Ideally I'd like the power supply to be about +100 to +200 watts above what I actually use: better life for the power unit, and if you can push your load towards the middle of its rated maximum, you can hit the power supply's maximum efficiency % and do better on consumption @ the outlet.

You are right that I tend not to bother with OC'ing, but based on calculations for power draw, 300-400W seems very outlandish as a requirement. You've certainly piqued my "troubleshooting sense" when you're saying a system with a 700W PSU is buckling with a pair of 4890s and seemingly struggling in Skyrim... 😵

Thanks for this information, I'll get into reading it later. I really doubt a single GTX-285 could do much with 1080p @ 16xAA for most games, even with a fast CPU behind it. I'll read into some things, though. Just a note: most reviews of the GTX-500 series don't include DX9 performance.

Oh no, the point wasn't to compare the GTX 500 for DX9, just to provide information about the 480's power/heat (I think aside from the GTX Titan and K6000 it's the hottest running Nvidia GPU to date).

A single GTX 285 should have no problems at 1080p at 4-8x AA for most DX9 games from 2008 and older. Higher AA levels will be a big drain (that's one advantage of multi-GPU; AA becomes much "cheaper" relatively speaking).

GTX 285 review from Guru3D:
http://www.guru3d.com/articles_pages/geforce_ … way_sli,18.html (has various DX9 games and 3-way SLI comparisons too - would've put it up with the others, but they didn't break a page off for evaluating scaling like Anand and Tom's did; but between the three you've got GF8, GF200, and GF400 scaling data)

It looks like a single card has no trouble with handling 1080p for the DX9 titles, and scaling isn't bad either. Less heat than 480 too.

I've seen this "optimization push" as well, where "both sides of the fence" (AMD & Nvidia) started significantly decreasing DX9 performance and tuning their cards to be faster in DX10 & DX11. So, since I don't have a lot of experience with Nvidia in this time frame, I'm hoping some here can chime in and help me decide on the fastest DX9 card from Nvidia before these "optimizations" for DX10 started hitting card families.

The 200 or 400 series will be that bar; 200 series came out when DX9 was still quite popular and prevalent, the 400 series is faster and also the beginning of DX11. 500 series was touted primarily as a power/performance efficiency improvement from what I recall.

However given the games you've mentioned, the 8800 series would probably be just as suitable for you, and might mean some serious cost savings. Oblivion, for example, really doesn't "need" 3 GTX 480s (remember that we're talking about a game that'll run on a GeForce FX reasonably well).

Reply 11 of 61, by kithylin

User metadata
Rank l33t
Rank
l33t
obobskivich wrote:

Also - where in the world are you getting these power ratings from? Core 2 Quad with 12MB is around 100W TDP depending on model; calculations show it being maybe 200W if overclocked to 4GHZ.

I was a little off then; the x3700 Wolfdale 12MB Xeon for 775 is only 148 watts @ 4400 MHz & 1.47v. An i7-920, however, is 275 watts @ 4400 MHz & 1.55v. Still a bit much, though. I usually refer to this for power figures: http://www.extreme.outervision.com/tools.jsp

obobskivich wrote:

As far as the not feasible with heat/cooling - you did say don't worry about that in mentioning hardware. 😊 🤣

I did, didn't I? Well, yeah.. I meant to say that I'll put whatever power supply is needed into the system to make this work. And I'll be water cooling the GPUs, some day, so heat from them isn't much of a factor; video cards don't get too hot when you water cool 'em.

obobskivich wrote:

Power draw numbers for the CPU don't look right. The graphics cards are 250W/ea max - with a trio of them you should actually probably go with something more like 1500W (although if you're keeping that 250W Tt it could probably handle one of them to let you "get away" with a 1200W unit). Efficiency doesn't "knock it down" - it means extra draw at the wall and more heat. e.g. if your PSU is 80% efficient, and you need 960W through it, it will put out 960W and draw 1200W at the wall and express the difference as heat. That's why I said the TurboCool units are getting long in the tooth (because they are around 80% efficient); newer units will be closer to 90% (like the Seasonic you mentioned earlier), which means less heat to deal with.

I don't want to argue, but your figures are a little backwards. With my big i7 computer I already own, I ran this Corsair 700 watt power supply for the longest time (about the first 2 years I owned it), with the pair of 4890's, 8 x 15,000 rpm hard drives, and a 4.6 GHz overclock on a 6-core chip, and it used to pull around 650 - 715 watts @ the outlet while gaming. And this power supply is only a 78% - 82% efficiency unit.

I recently upgraded that machine to a modern 92% efficiency Seasonic 1000 watt unit, and now, without changing anything else in the system whatsoever, it only draws 450 - 525 watts @ the outlet, powering the exact same load. So high-efficiency power supplies reduce at-outlet load, not in-computer load. And my new Seasonic runs -a lot- cooler than this Corsair did (the poor Corsair unit was running itself at nearly 95% of maximum all the time to handle my big i7). The new Seasonic has an automatic fan that only comes on when it gets hot enough (and it rarely does, so far). It is winter here in Texas though, so that will likely change when the first summer comes with the new power supply. Eventually I'll be converting every computer in the house to 90+ efficiency units, to reduce the overall power usage of my various computers.

obobskivich wrote:

As far as where a PSU should be run - generally the peak of the efficiency curve is between 40-60% (based on tests from TechReport and JonnyGuru, and published figures from Corsair, PC Power, and Antec). You probably won't be able to accomplish this with a trio of GTX 480s on a single PSU though. But you are dead-on that you generally don't want to see 100% loading (PC Power claims their units can survive that, but I'd still rather not test it out). Have you considered a multi-PSU system? (again, you did say "max out" 😎)

Try this out for calculating your power needs:
http://extreme.outervision.com/psucalculatorlite.jsp (remember to add in capacitor aging at the bottom)

I've already paid for their pro version (lifetime; I suggest any other builders on here do too, it's worth it). With my planned quad core 775 system, including the overclock and a pair of GTX-295's and everything, it suggests I should aim for a 975 watt unit now; but putting in 30% capacitor aging (about 5 years of run time; I don't plan to replace my power supplies very often, at least 5+ years, 7 or more with the warranties on some of the new ones), it says I should start out with a 1100 watt unit with at least 97 amps on +12v. So I'd probably overshoot a little for a 1200 or 1250 watt one.
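As a rough sketch of how a calculator like that might derate for aging: one common approach is to pad the load for headroom, then scale up so the unit still covers the load after losing some capacity. This is an assumption about the method, not Outervision's actual formula, and the 700 W load is hypothetical:

```python
# Sketch of PSU sizing with headroom plus a capacitor-aging derate.
# Assumption: pad ~10% so the unit never runs flat-out, then scale up so it
# still covers the load after losing `aging` of its capacity over the years.
def required_rating_w(load_w, aging, headroom=0.10):
    padded = load_w * (1 + headroom)   # never run at 100% load
    return padded / (1 - aging)        # survive capacity loss

# Hypothetical 700 W system load with 30% aging:
print(round(required_rating_w(700, 0.30)))  # 1100
```

The design point is that aging divides the usable capacity, so it inflates the required rating multiplicatively rather than adding a fixed margin.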

obobskivich wrote:

You might also consider gutting some various components off the system's PSU - like all of the cooling can be run separately (this won't be huge power savings, but it's at least something).

I haven't thought about that! I might be able to run all the cooling off the TT-250 power supply if I crimped some PCIe power cables to convert to Molex plugs; fans and the water pump only run on +12v and don't need +5v. That's an idea! 😀

obobskivich wrote:
It depends on the game, the application, etc. […]
Show full quote

It depends on the game, the application, etc.

For example here's a Tom's comparison with GTX 480s:
http://www.tomshardware.com/reviews/geforce-g … re,2622-12.html

Performance gains are between 70 and 90% depending on the title (seems to more or less live up to your projections in some cases). The bigger advantage, however will be the improved IQ that tri-SLI gives you.

Here's another comparison from Anand with 8800 Ultras:
http://www.anandtech.com/show/2403/3

Again in some cases you will realize the 70-90% performance improvements, but in others you won't see much at all. They also go on to argue that at 1080p or lower, the third GPU doesn't do much in terms of measured performance (it should still help the micro-stutter phenomenon based on Tom's data). They also test the "CPU bound" assertion; in some cases it holds true, but not in all cases. Really depends on the application at hand. With older games you probably will have a better time of it because you aren't dealing with backports as much.

Micro-stuttering article:
http://www.tomshardware.com/reviews/radeon-ge … sfire,2995.html

Thanks for helping me waste a few hours on the internet (this is a good thing.) I'll work on reading all these 😀

obobskivich wrote:

I'd still probably add another card (if your motherboard will support it), since you did say "max out." 😎 I vaguely remember the 8800/9800 series not being 100% up to the task of doing it all themselves, but the Fermi chips shouldn't have any problem, as you noted. Where I'm unsure is how the single-GPU 200 series cards would handle running PhysX and 3D at once. I looked for benchmarks briefly but couldn't find anything that directly answered that question; perhaps you'll have better luck.

Did find this FAQ though:
http://www.techpowerup.com/forums/threads/the … uestions.135891

This is the problem I've had: trying to find review sites that publish benchmarks on things like PhysX titles with 200-series cards, or DirectX-9 performance with 400-series cards, on a significantly overclocked CPU system. Both are pretty much nonexistent online; I'll try Googling more later, I guess.

I found an article on Tom's Hardware about a three-way GTX 285 system run through some tests on a 4 GHz i7-920 system, so that's something.. I'll try and fish up a link here. Boo, I've searched around and can't find it now. 😠

obobskivich wrote:

The drivers won't generally let it "mix and match" - they should automatically set everything to a common clock and settings (that has always been my experience, with Radeon or GeForce cards). GeForce cards have much stricter requirements for SLI (you can't hook up a GTX 280 and GTS 250, for example). I'm surprised Skyrim doesn't run with higher settings on your system though; my single 4890 handles it at 4x/8x AA, maximum settings, with vsync, no problem. 😕

I *did* forget to mention - the skyrim woes with multi-GPU are cured by turning AA off. Here's a video example:
http://www.youtube.com/watch?v=9oWnO7Wy2p8

I have never ever in all my years of computing ever heard of actual -video drivers- changing the -clock speed- of a video card. Regardless of your configuration that should -NEVER!- happen whatsoever. That's a very bad thing. And the video drivers only check that the cores of the video card match (all 3 are G92, for example), and then let you enable SLI or crossfire. The video drivers do not have any coding to "check" if the clock speeds of the video cards match at all, it just matches core, and lets you switch it on and then it's up to the user. This is why almost all folks online on forums generally recommend you match exact models for video cards when overclocking to avoid it. But some people don't know this and just buy for example "any random used gtx 285" and some may be factory overclocked and some not and then that's where you get problems.

My two HD4890's... the VisionTek top one defaults to 950 MHz core from VisionTek, and the XFX one defaults to 915 MHz core. I can plug in both cards, install the drivers, check SLI, and go load a game and nothing will stop me from trying to game like that. But if I left them alone and tried that, nothing would run and everything would crash. I had to fix it manually and synchronize the clock speeds between the two cards by extracting their BIOSes and re-flashing them with the same values. Only after doing that would they all work.

In this regard I found some guy on eBay selling used GTX-285 cards with a listing showing "more than 10" available; I could just save up, go through him, and get 3 at once, and then be guaranteed they all matched. I might end up doing that for peace of mind.

obobskivich wrote:

I've never experienced it that seriously - just in menus and loading screens with 3-way CrossFire (with the 4870X2 it only happened in menus). I also didn't mean to imply that tri-GPU always has issues; just that it CAN (we really seem to be on the same page on most of these points by the way); there's plenty of games that load up and run very smooth. I had no problems with, for example, Fallout 3, Halo (at nearly 400 FPS even with 32x AA enabled 🤣 ), or Mass Effect 1.

I used to have a pair of 8800 GTX cards back in the day, so I have a little experience with older nvidia tech and SLI. I used them for a long while until the 4890's came out, then jumped on the AMD bandwagon because they were so much faster for a few years. But the whole 3-4 years I was on the 4890's, I always longed for nvidia inspector and physx again. So I switched back to nvidia when I upgraded my i7; the 4890's came back to this machine for now, and I want to replace them with nvidia ASAP too.

obobskivich wrote:

Depending on what games you want to play, the 280 or 285 should probably accomplish that as well - thinking of some DX9 games like Oblivion, UT3, Fallout 3, GTA San Andreas, Half-Life 2 Episode 2, and EYE Divine Cybermancy, a single 280 should be able to run them all. Multiple cards will give you better performance, features, etc, but the single card should *run*.

Price of the 280/285s is generally cheaper IME too (because they're older and not DX11 I'd guess). I've seen some of the 8800s as low as $50-$60 on eBay.

There's this listing I found in a search tonight that I'll save and, as mentioned above, try to get all 3 at once from him later; I think I might end up on a trio of 285's after all. http://www.ebay.com/itm/360863216388 $84.95 each BIN. There's others a little cheaper on eBay, but I might pay a little more to get all 3 from one seller and a guaranteed matched set. I think that'd be worth a few extra bucks.

obobskivich wrote:

You are right that I tend not to bother with OC'ing, but based on calculations for power draw, 300-400W seems very outlandish as a requirement. You've certainly piqued my "troubleshooting sense" when you're saying a system with a 700W PSU is buckling with a pair of 4890s and seemingly struggling in Skyrim... 😵

One factor.. this power supply is OLD! Very old.. I think I bought it in uh.. gods.. 2002, 2003? Thereabouts. I've replaced the 120mm fan in it at least 6 times now. I'm sure its capacitors are ancient by now, which probably contributes to above-average power draw through it. I'm one of those people who generally don't replace products until they completely die on me (part of why I still have a lot of my older computers from over the years). I admit I over-shot a little on the power usage, it's more in the 200-300 watt range, but still.. processors pull a lot when you get to overclocking em. And this pair of 4890's runs great in skyrim trying to run ultra @ 1080p, just not so much with AA. It's mostly the processor; even trying to do that, this 4.5 GHz dual core runs flat out at 90% - 100% usage almost the entire time while running skyrim.

obobskivich wrote:
Oh no the point wasn't to compare GTX 500 for DX9; just to provide information about the 480's power/heat draw (I think aside fr […]
Show full quote

Oh no, the point wasn't to compare GTX 500 for DX9; just to provide information about the 480's power/heat draw (I think aside from the GTX Titan and K6000 it's the hottest running nVidia GPU to date).

A single GTX 285 should have no problems at 1080p at 4-8x AA for most DX9 games from 2008 and older. Higher AA levels will be a big drain (that's one advantage of multi-GPU; AA becomes much "cheaper" relatively speaking).

GTX 285 review from Guru3D:
http://www.guru3d.com/articles_pages/geforce_ … way_sli,18.html (has various DX9 games and 3-way SLI comparisons too - would've put it up with the others, but they didn't break a page off for evaluating scaling like Anand and Tom's did; but between the three you've got GF8, GF200, and GF400 scaling data)

It looks like a single card has no trouble with handling 1080p for the DX9 titles, and scaling isn't bad either. Less heat than 480 too.

Yeah.. I'm starting to agree there. I think the more I look in to it, for DX9 stuff.. a trio of 285's would probably be my best option.

obobskivich wrote:

The 200 or 400 series will be that bar; 200 series came out when DX9 was still quite popular and prevalent, the 400 series is faster and also the beginning of DX11. 500 series was touted primarily as a power/performance efficiency improvement from what I recall.

However given the games you've mentioned, the 8800 series would probably be just as suitable for you, and might mean some serious cost savings. Oblivion, for example, really doesn't "need" 3 GTX 480s (remember that we're talking about a game that'll run on a GeForce FX reasonably well).

Thanks for explaining some. At least I understand where that "bar" is with nvidia. I know the 4890's with AMD are pretty much essentially the "bar" for the end of extreme DX9 performance; part of why I'm never letting go of my 4890's even after replacing em later. I always try and have a powerful GPU around just in case.. you never know what happens when dealing with overclocked hardware. So far I haven't had any actual "failures" from overclocking since I fried that AthlonXP chip once, way back when. Since I moved over to Intel I haven't had a dead chip in years; even the Q6600 I had at one point I ran @ 3.8 GHz, hitting 78c - 85c load temps constantly, daily, for nearly 4 years, and it never gave out. My i7-980x chip in my big machine's been running load temps in the 85c - 94c range since I built it in 2010 and has had no problems to date; I sometimes run it at 100% load for hours converting video and it doesn't stop either. That's part of why I'm sticking with Intel and never going to bother with AMD: Intel chips just take insane overclocks 24-7 and run like a Mack truck and don't care.

P.S. I always feel like I'm writing an essay when I respond to you, can't you make shorter posts? 😉

EDIT: Also the current motherboard I have runs both of my 4890's at 8x-8x just to run CrossFireX, so that too likely has an impact on potential performance, although I don't know how large of an impact currently.

Reply 12 of 61, by obobskivich

User metadata
Rank l33t
Rank
l33t

The length of these replies man... 🤣

kithylin wrote:

I was a little off then, x3700 wolfdale 12mb xeon 775 is only 148 watts @ 4400 mhz & 1.47v, however i7-920 is 275 watts @ 4400 mhz & 1.55v. Still a bit much though. I usually refer to this for power figures: http://www.extreme.outervision.com/tools.jsp

That's a lot more realistic - 400W just sounded very extreme. I don't even remember QuadFX using 400W for the CPUs.

I did, didn't I? Well yeah.. I meant to say that I'll put whatever power supply is needed into a system to make this work. And I'll be water cooling the GPU's, some day. So heat from them isn't much of a factor; video cards don't get too hot when you water cool em.

Alright I gotcha. Just make sure the WC system has the capacity for all that you're loading into it - if you're living somewhere relatively dry (or can exhaust outside), maybe a super-bong would be worth trying. 😎

So high-efficiency power supplies reduce at-outlet load, not in-computer load.

Yeah. That's exactly what I was saying/explaining. No argument there.

And my new SeaSonic one runs -a lot- cooler than this Corsair one did. (The poor Corsair unit was running itself at nearly 95% of maximum all the time to handle my big i7.) The new SeaSonic unit has an automatic fan that only comes on when it gets hot enough (and it rarely does, so far). It is winter here in Texas though, so that will likely change when the first summer comes with the new power supply. Eventually I'll be converting every computer in the house to 90+ efficiency units some day, to reduce the overall power usage of my various computers.

Exactly why I agreed with the SeaSonic (or whoever else) at 90%+ or better, vs the older PC Power units that are in the low 80s. Less heat generation and less draw at the outlet. Again, no argument there. We're on the same page. 😀

I've already paid for their pro version lifetime license (I suggest any other builders on here do too, it's worth it). For my planned quad core 775 system, including the overclock and a pair of gtx-295's and everything, it suggests I should aim for a 975 watt unit now. But factoring in 30% capacitor aging (about 5 years of run time; I don't plan to replace my power supplies very often, at least 5+ years, 7 or more with the warranties on some of the new ones), it says I should start out with a 1100 watt unit with at least 97 amps on +12v. So I'd probably over-shoot a little for a 1200 watt one or a 1250.

Yeah. When I put in your system above with 3 GTX 480s in place of the 4890s it said around the same.

Knowing that you're going 100% nVidia does change things a bit - my comment about not needing 1200W+ was based on HD 4800s (and not knowing that you have 8 15k RPM drives...). GTX 480 or GTX 295 draw much more power though: around 300W each at full load (Guru3D measured 280W for the 480; Geeks3D measured 294W for the 295).

Geeks3D article:
http://www.geeks3d.com/20100226/the-real-powe … graphics-cards/ (wish this was updated/kept recent)
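The sizing logic discussed above (add headroom, derate for capacitor aging, check the +12v rail amperage) can be sketched as a back-of-the-envelope calculation. This is NOT the actual OuterVision formula; the 800 W peak load, 25% headroom, and 30% aging derate are hypothetical example numbers:

```python
# Back-of-the-envelope PSU sizing; a rough sketch, not OuterVision's math.
# The 800 W peak load, 25% headroom, and 30% capacitor-aging derate are
# hypothetical example values.

def recommended_psu_watts(peak_load_w, headroom=0.25, cap_aging=0.30):
    """Nominal label wattage to shop for: add headroom, then assume the
    unit only delivers (1 - cap_aging) of its label after ~5 years."""
    return peak_load_w * (1 + headroom) / (1 - cap_aging)

def twelve_volt_watts(amps):
    """Power a +12v rail rated for `amps` can deliver."""
    return amps * 12.0

print(round(recommended_psu_watts(800)))  # 1429 W nominal
print(twelve_volt_watts(97))              # 97 A on +12v is 1164.0 W
```

The same arithmetic shows why a "97 A on +12v" spec roughly lines up with an 1100-1200 W unit: nearly all of a modern PSU's rated output is on the +12v rail.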

I haven't thought about that! I might be able to run all the cooling off the TT-250 power supply if I crimped some PCI-e power cables to convert to Molex plugs; the fans and the water pump only run on +12v and don't need +5v. That's an idea! 😀

To be honest, I didn't even think about using the Tt for that. I was just thinking about using some basic 7-12VDC external brick for all the fans and pumps; the Tt would be a lot neater/cleaner way to accomplish that though! 😊

Thanks for helping me waste a few hours on the internet (this is a good thing.) I'll work on reading all these 😀

😀

This is the problem I've had: trying to find review sites that publish benchmarks on things like PhysX titles with 200-series cards, or DirectX-9 performance with 400-series cards, on a significantly overclocked CPU system. Both are pretty much nonexistent online; I'll try Googling more later, I guess.

This is where I ended up while looking for reviews too...

I found an article on Tom's Hardware about a three-way GTX 285 system run through some tests on a 4 GHz i7-920 system, so that's something.. I'll try and fish up a link here. Boo, I've searched around and can't find it now. 😠

The Guru3D article I linked had 285 multi-way, but I don't remember what CPU they used. It showed pretty good scaling regardless for the DX9 games they had, so I'm guessing either none were CPU bound, or none were bound by the CPU they chose. Looked for something from Tom's and couldn't find anything either.

I have never ever in all my years of computing ever heard of actual -video drivers- changing the -clock speed- of a video card. Regardless of your configuration that should -NEVER!- happen whatsoever. That's a very bad thing.

Catalyst will set any constituent cards to equivalent clocks for mismatched CrossFire. The drivers (both sides) also adjust clockspeeds for thermal and performance management (especially the new Radeon cards). Perhaps there's a semantic misunderstanding here though.

And the video drivers only check that the cores of the video card match (all 3 are G92, for example), and then let you enable SLI or crossfire.

SLI requires the cards to be matched - for example two 9800GTX, not a 9800GTX + 8800GTS. You'd have to flash one to make it work. CrossFire doesn't have this requirement - you can usually put any cards from the same family together, like 4860 + 4830 for example.

Article from TR explaining CF-X:
http://techreport.com/review/14284/crossfire-x-explored

The video drivers do not have any coding to "check" if the clock speeds of the video cards match at all, it just matches core, and lets you switch it on and then it's up to the user. This is why almost all folks online on forums generally recommend you match exact models for video cards when overclocking to avoid it. But some people don't know this and just buy for example "any random used gtx 285" and some may be factory overclocked and some not and then that's where you get problems.

With nVidia that is 100% true. nVidia is much pickier when it comes to what they'll officially support for SLI configurations. 😒

My two HD4890's... the VisionTek top one defaults to 950 MHz core from VisionTek, and the XFX one defaults to 915 MHz core. I can plug in both cards, install the drivers, check SLI, and go load a game and nothing will stop me from trying to game like that. But if I left them alone and tried that, nothing would run and everything would crash.

SLI? 😕

Also - should not be crashing at all. My 4870X2 and 4890 worked quite happily together (until I got sick of the heat and noise and banished the 4870X2 to storage...).

I had to fix it manually and synchronize the clock speeds between the two cards by extracting their BIOSes and re-flashing them with the same values. Only after doing that would they all work.

This should not at all be required; the drivers with CF-X will just settle on the lowest common clock and memory size.

There's this listing I found in a search tonight that I'll save and, as mentioned above, try to get all 3 at once from him later; I think I might end up on a trio of 285's after all. http://www.ebay.com/itm/360863216388 $84.95 each BIN. There's others a little cheaper on eBay, but I might pay a little more to get all 3 from one seller and a guaranteed matched set. I think that'd be worth a few extra bucks.

Yeah I'd say that'd be worth it - especially with nV cards.

One factor.. this power supply is OLD! Very old.. I think I bought it in uh.. gods.. 2002, 2003? Thereabouts. I've replaced the 120mm fan in it at least 6 times now. I'm sure its capacitors are ancient by now, which probably contributes to above-average power draw through it. I'm one of those people who generally don't replace products until they completely die on me (part of why I still have a lot of my older computers from over the years). I admit I over-shot a little on the power usage, it's more in the 200-300 watt range, but still.. processors pull a lot when you get to overclocking em. And this pair of 4890's runs great in skyrim trying to run ultra @ 1080p, just not so much with AA. It's mostly the processor; even trying to do that, this 4.5 GHz dual core runs flat out at 90% - 100% usage almost the entire time while running skyrim.

Still quite surprising about the performance...I've never had an issue running AA in any of the Bethesda games (again, ignoring the menu flicker in Skyrim), with single, double, or triple 4800s. Going 16x or higher prefers triple, but honestly it isn't *that* much of a difference to IQ imho.

Like I said - you've piqued my "troubleshooting sense" with this...

P.S. I always feel like I'm writing an essay when I respond to you, can't you make shorter posts? 😉

🤣

EDIT: Also the current motherboard I have runs both of my 4890's at 8x-8x just to run CrossFireX, so that too likely has an impact on potential performance, although I don't know how large of an impact currently.

I don't think it should - I know Tom's tested this years ago with GeForce 6 or 7 cards and showed that x8 vs x16 for SLI had little to no impact. And remember if your current board is 2.0 (it should be, and the 4890s support it), x8 is equivalent to "original" x16.

Here's some reviews I found with more modern hardware:
http://www.tweaktown.com/articles/4147/nvidia … ysis/index.html
http://www.tomshardware.com/reviews/amd-cross … gpu,2678-5.html

Still looks like it isn't much of a problem. In general x16 is kind of like AGP 8x - much ado about nothing. SoftTH may prefer 2x16 to 2x8, but I don't know that for certain.
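The "x8 2.0 equals original x16" point above is just per-lane arithmetic. A minimal sketch, assuming the standard published per-lane rates (250 MB/s for PCIe 1.x, 500 MB/s for PCIe 2.0, after 8b/10b encoding overhead):

```python
# Rough PCIe link-bandwidth math, using the standard per-lane rates:
# PCIe 1.x moves 250 MB/s per lane, PCIe 2.0 moves 500 MB/s per lane
# (2.5 vs 5.0 GT/s, both with 8b/10b encoding).

PER_LANE_MB_S = {1: 250, 2: 500}  # MB/s per lane, by PCIe generation

def link_bandwidth_mb_s(gen, lanes):
    return PER_LANE_MB_S[gen] * lanes

# A 2.0 slot running x8 matches an "original" 1.x slot running x16:
print(link_bandwidth_mb_s(2, 8), link_bandwidth_mb_s(1, 16))  # 4000 4000
```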

Reply 14 of 61, by kithylin

User metadata
Rank l33t
Rank
l33t

Let's see if I can 'trim the fat' and try to cut out large sections to reply to here. Thankfully you've helped me cover the majority of my concerns already. 😀

obobskivich wrote:

Alright I gotcha. Just make sure the WC system has the capacity for all that you're loading into it - if you're living somewhere relatively dry (or can exhaust outside), maybe a super-bong would be worth trying. 😎

*nod* I'm planning to add another high-performance pump to match the one I have now (redundancy, and probably needed to maintain flow rates pushing through 3 video cards and 2 radiators), and another radiator like the one I have already. I'm using the "Black Ice GT Stealth" radiator right now, which is rated for 700 watts of heat for a single 2 x 140mm radiator; adding another one would bring it up to about 1400 watts of capacity, and a 300 watt cpu + 3 x 285's would only be about 1100 - 1200 watts. So that should be a perfect match. I just need to figure out a computer case that has mounting for two 240mm radiators and supports really long video cards. I'm still researching that; it'll come after the power supply, so it'll be a while.
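That heat budget can be sanity-checked in a few lines. The 700 W/radiator figure is the one quoted above; the ~200 W per GTX 285 is an assumption for illustration, not a measured value:

```python
# Quick heat-budget check for the planned loop. 700 W per radiator is
# the figure quoted in the post; the ~200 W per GTX 285 under load is
# an assumed round number, not a measurement.

RADIATOR_CAPACITY_W = 700  # per Black Ice GT Stealth (quoted rating)

def loop_budget(radiators, cpu_w, gpu_w, gpu_count):
    capacity = radiators * RADIATOR_CAPACITY_W
    load = cpu_w + gpu_w * gpu_count
    return capacity, load

capacity, load = loop_budget(radiators=2, cpu_w=300, gpu_w=200, gpu_count=3)
print(capacity, load, capacity > load)  # 1400 900 True
```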

obobskivich wrote:

Knowing that you're going 100% nVidia does change things a bit - my comment about not needing 1200W+ was based on HD 4800s (and not knowing that you have 8 15k RPM drives...). GTX 480 or GTX 295 draw much more power though: around 300W each at full load (Guru3D measured 280W for the 480; Geeks3D measured 294W for the 295).

Geeks3D article:
http://www.geeks3d.com/20100226/the-real-powe … graphics-cards/ (wish this was updated/kept recent)

The 15k drives are in my big i7 and won't be going into this DX9 build. If anything, I'm thinking of setting it up with 6 x SSD's in raid-0, but maybe something really cheap; I've seen some SLC enterprise-tier 32GB SSD's on ebay for about $30/each, and 6 of em together would be around $180 for 192 GB. I think 6 of em in raid-0 would outperform a single modern one, but I'm not 100% on that yet. That's all much, much later in this build, after video cards and getting all of em water-plumbed.
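The RAID-0 arithmetic is simple: striping multiplies capacity (and, best case, sequential throughput) by the drive count, at the cost of losing everything if one drive dies. A sketch, where the $30/drive price comes from the post and the 250 MB/s per-SSD read rate is an assumption:

```python
# RAID-0 arithmetic for the 6 x 32 GB SSD idea. The $30/drive price is
# from the post; the 250 MB/s per-SSD sequential read is assumed.
# Note: one failed drive loses the whole striped array.

def raid0(drives, size_gb, price_usd, seq_read_mb_s):
    return {
        "capacity_gb": drives * size_gb,
        "cost_usd": drives * price_usd,
        "best_case_read_mb_s": drives * seq_read_mb_s,
    }

array = raid0(drives=6, size_gb=32, price_usd=30, seq_read_mb_s=250)
print(array["capacity_gb"], array["cost_usd"])  # 192 180
```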

obobskivich wrote:
Catalyst will set any constituent cards to equivalent clocks for mismatched CrossFire. The drivers (both sides) also adjust clockspeeds […]
Show full quote

Catalyst will set any constituent cards to equivalent clocks for mismatched CrossFire. The drivers (both sides) also adjust clockspeeds for thermal and performance management (especially the new Radeon cards). Perhaps there's a semantic misunderstanding here though.

SLI requires the cards to be matched - for example two 9800GTX, not a 9800GTX + 8800GTS. You'd have to flash one to make it work. CrossFire doesn't have this requirement - you can usually put any cards from the same family together, like 4860 + 4830 for example.

Article from TR explaining CF-X:
http://techreport.com/review/14284/crossfire-x-explored

One thing I want to touch on this subject: the newer video cards (nvidia 600-series and 700-series, and AMD I think started it with the later 6000 series and 7000 series; I had a pair of HD 7770's for a while but ended up selling em and going back to my 4890's) started with this new "power curve" stuff, or whatever they call it. "Boost clocks" is another term they're passing around. This stuff about clocking the video cards up/down when they get hotter or cooler, etc., is generally a bad idea and a big portion of why I don't like the new cards. In general the newer driver versions that "support" this boost feature will be modifying clock speeds and such on connected cards, I think. I'm just speculating here. But in theory, using older video cards from before 'boost clocks' came out, together with slightly older (but not really ancient) drivers that also don't support boost clocks, should pretty much eliminate all of this drivers-changing-clocks crap, I think. I guess I'm old school, but I prefer my video cards to run at 100% max fixed speeds at all times, no matter how hot they are, how much power this makes them pull, or how much load they're enduring. I just want it all to be at full speed all the time, and older cards are nice about doing this.

obobskivich wrote:

SLI? 😕

Yes.. I made a typo. I clearly meant CrossFireX; I should go back and update my post up there to fix it. I thought I read it and caught everything in preview mode; I guess I slipped. I am, sadly, only human. 😒

obobskivich wrote:

Also - should not be crashing at all. My 4870X2 and 4890 worked quite happily together (until I got sick of the heat and noise and banished the 4870X2 to storage...).

This should not at all be required; the drivers with CF-X will just settle on the lowest common clock and memory size.

I covered the drivers and clock speeds thing above, but it did happen and it required fixing. I might try and PM you and chat off-site and we can discuss and troubleshoot this later as that's kind of off-topic here.

obobskivich wrote:

Still quite surprising about the performance...I've never had an issue running AA in any of the Bethesda games (again, ignoring the menu flicker in Skyrim), with single, double, or triple 4800s. Going 16x or higher prefers triple, but honestly it isn't *that* much of a difference to IQ imho.

I don't see much difference in 8xAA and above, except in some FPS games where, walking around at just the right angle, you can see aliasing on objects and it looks horribly out of place in an environment that's smooth and pretty otherwise. So sometimes I try and force 16x (or more!) to try and eliminate this, but that's the perfectionist in me.

Reply 15 of 61, by obobskivich

User metadata
Rank l33t
Rank
l33t
kithylin wrote:

Let's see if I can 'trim the fat' and try to cut out large sections to reply to here. Thankfully you've helped me cover the majority of my concerns already. 😀

Glad to help.

*nod* I'm planning to add another high-performance pump to match the one I have now (redundancy, and probably needed to maintain flow rates pushing through 3 video cards and 2 radiators), and another radiator like the one I have already. I'm using the "Black Ice GT Stealth" radiator right now, which is rated for 700 watts of heat for a single 2 x 140mm radiator; adding another one would bring it up to about 1400 watts of capacity, and a 300 watt cpu + 3 x 285's would only be about 1100 - 1200 watts. So that should be a perfect match. I just need to figure out a computer case that has mounting for two 240mm radiators and supports really long video cards. I'm still researching that; it'll come after the power supply, so it'll be a while.

A cube case might be nice for this - like the Lian-Li PC343 or the Mountain Mods cases. Thermaltake also used to make a very large (mini-fridge sized) case, I think it was called Mozart.

The 15k drives are in my big i7 and won't be going into this DX9 build. If anything, I'm thinking of setting it up with 6 x SSD's in raid-0, but maybe something really cheap; I've seen some SLC enterprise-tier 32GB SSD's on ebay for about $30/each, and 6 of em together would be around $180 for 192 GB. I think 6 of em in raid-0 would outperform a single modern one, but I'm not 100% on that yet. That's all much, much later in this build, after video cards and getting all of em water-plumbed.

Gotcha. Not sure any of that is "needed" - but again, why not? 😎

The SSDs would be quieter for sure.

One thing I want to touch on this subject: the newer video cards (nvidia 600-series and 700-series, and AMD I think started it with the later 6000 series and 7000 series; I had a pair of HD 7770's for a while but ended up selling em and going back to my 4890's) started with this new "power curve" stuff, or whatever they call it. "Boost clocks" is another term they're passing around. This stuff about clocking the video cards up/down when they get hotter or cooler, etc., is generally a bad idea and a big portion of why I don't like the new cards. In general the newer driver versions that "support" this boost feature will be modifying clock speeds and such on connected cards, I think. I'm just speculating here. But in theory, using older video cards from before 'boost clocks' came out, together with slightly older (but not really ancient) drivers that also don't support boost clocks, should pretty much eliminate all of this drivers-changing-clocks crap, I think. I guess I'm old school, but I prefer my video cards to run at 100% max fixed speeds at all times, no matter how hot they are, how much power this makes them pull, or how much load they're enduring. I just want it all to be at full speed all the time, and older cards are nice about doing this.

If I remember right the GeForce FX introduced "idle clocks" for power management, although not all members of that series support it. Cards since then have adopted it more widely, as a means of improving idle temps and reducing power consumption. Even the HD 4890 will do this (the reference design I believe idles at 240MHz).

The newer model of adaptively setting the clocks ("dynamic clocks"), like what the R9 290X does, is kind of silly imho - the card will lower its clocks to cool itself off, but usually the highest heat loads are the result of high processing demands...doesn't make a lot of sense imho. Personally I like the idea of devices reducing power consumption when it makes sense, like the newer Intel processors with EIST, but I understand that some folks don't like that for whatever reason.

I don't see much difference in 8xAA and above, except in some FPS games where, walking around at just the right angle, you can see aliasing on objects and it looks horribly out of place in an environment that's smooth and pretty otherwise. So sometimes I try and force 16x (or more!) to try and eliminate this, but that's the perfectionist in me.

Usually I find that higher-AA/AF settings help older games more. Like for example Morrowind or the original Hitman, which in turn generally can be run with higher settings due to their lower requirements. Kind of works out I guess.

EDIT
Went and did some looking about the power requirements for OC'ing the i7, and 250W is still a little high unless you need exceptionally high vcore. See more here: http://www.overclock.net/t/538439/guide-to-ov … r-930-to-4-0ghz

In general 200W is probably a realistic top-end for a heavily overclocked CPU, however the overall "need" for such a thing doesn't seem to be reflected in the scaling benchmarks I've found. It certainly won't hurt (power consumption aside), but it shouldn't be required for good performance. If your goal is a very heavily overclocked CPU, I'd suggest spending the time to find the proper stepping, tuning for efficiency, etc - it should improve cooling performance considerably. 😀

Reply 16 of 61, by kithylin

User metadata
Rank l33t
Rank
l33t
obobskivich wrote:

EDIT
Went and did some looking about the power requirements for OC'ing the i7, and 250W is still a little high unless you need exceptionally high vcore. See more here: http://www.overclock.net/t/538439/guide-to-ov … r-930-to-4-0ghz

In general 200W is probably a realistic top-end for a heavily overclocked CPU, however the overall "need" for such a thing doesn't seem to be reflected in the scaling benchmarks I've found. It certainly won't hurt (power consumption aside), but it shouldn't be required for good performance. If your goal is a very heavily overclocked CPU, I'd suggest spending the time to find the proper stepping, tuning for efficiency, etc - it should improve cooling performance considerably. 😀

I appreciate your input, but I've been overclocking processors for a long time and I'm well-versed in how to figure it out and get it there. My i7-980x chip has been 24-7 stable for 2 years now @ 4.6 GHz, which requires about 1.58v CPU vcore. It really just depends on how lucky you get in the "silicon lottery". You may have a box with 500 processors, all of them labeled i7-920, but more than likely they will all require different vcore to reach the same 4500 MHz; some more, some less. Sometimes you get lucky and manage to get your paws on one of the 1-in-5000 chips that will do past 4 GHz stable with very low vcore. In overclocking circles these are usually referred to as "Golden Sample" or just "Golden" chips. I have a Pentium-D 930 dual core P4 chip that will do 4.7 GHz with only 1.3v vcore, whereas my Core2duo here needs up to 1.47v to do 4.5 GHz. Some people have i7-980x chips like mine that can do 5 GHz and 5.4 GHz on water with only 1.4v vcore. It's all just a game of luck; you hope you got lucky with the chip you picked up. Sometimes you're not so lucky and have to force lots of vcore into it, and then try and cool it somehow.

My i7-980x chip is already doing 248 watts of power/heat, according to the extreme outer vision tools calculator thing.

It's also been published (I forget where) that, for example, a 4.5 GHz quad core will do better in games than a 16-core 2.4 GHz chip. Games respond better to CPU frequency than to core count or other tech.

Reply 17 of 61, by NJRoadfan

User metadata
Rank Oldbie
Rank
Oldbie

I have to laugh. My main machine is a Xeon E3110 on a Gigabyte GA-EX38-DS4. The Xeon E3110s were common since Intel couldn't keep supply of the real E8400 in stock for months back in 2008. Video is nothing special, just a Radeon HD 3870.

Reply 18 of 61, by kithylin

User metadata
Rank l33t
Rank
l33t
NJRoadfan wrote:

I have to laugh. My main machine is a Xeon E3110 on a Gigabyte GA-EX38-DS4. The Xeon E3110s were common since Intel couldn't keep supply of the real E8400 in stock for months back in 2008. Video is nothing special, just a Radeon HD 3870.

They're fantastic little chips for a dual core, great performance. I'm using mine daily for browsing/chatting.

Reply 19 of 61, by gandhig

User metadata
Rank Member
Rank
Member

Seeing the length of the posts on the first page itself, I think the forum's administrator should consider redesigning it with a 'recent posts first' approach. Just kidding.
The pain of scrolling down to the bottom of this post on my mobile phone every time there is an update!!!

Dosbox SVN r4019 + savestates Build (Alpha)
1st thread & the only one related to the forum(?)...warning about modern-retro combo
Dead, but, Personal Favourite
Replacement for Candy Crush...Train the Brain