VOGONS


GeForce 6800 memory cooling


Reply 20 of 45, by candle_86

User metadata
Rank l33t

I can tell you my 6800GS, unlocked and OCed to 425/1200 under my Accelero 5, idles around 43C and loads to around 61C, to give you a frame of reference. Oh, and the BIOS was flashed so it runs @ 1.4V instead of 1.3V.

Reply 21 of 45, by swaaye

User metadata
Rank l33t++
obobskivich wrote:
swaaye wrote:

Actually that idle temp is probably fine for a 6800. These GPUs have limited idle power management.

See what it does when gaming on it. The NV shutdown temp is around 120C. 85C is probably a good target.

Oh yeah, I know that NV30 and NV40 can take high temps - the driver control panel says 115°C for the 6800U. I'm not terribly worried about "high" temps on the card, I just have no frame of reference. I'll see what it does with AquaMark or 3DMark or some such.

I did some searching earlier on 6800 temps and saw various old forum posts mentioning 80-90C during gaming with the stock cooling.

Reply 22 of 45, by swaaye

User metadata
Rank l33t++
candle_86 wrote:

I can tell you my 6800GS, unlocked and OCed to 425/1200 under my Accelero 5, idles around 43C and loads to around 61C, to give you a frame of reference. Oh, and the BIOS was flashed so it runs @ 1.4V instead of 1.3V.

6800GS is NV42, right? That's a shrink of NV40.

Reply 23 of 45, by candle_86

User metadata
Rank l33t
swaaye wrote:
candle_86 wrote:

I can tell you my 6800GS, unlocked and OCed to 425/1200 under my Accelero 5, idles around 43C and loads to around 61C, to give you a frame of reference. Oh, and the BIOS was flashed so it runs @ 1.4V instead of 1.3V.

6800GS is NV42, right? That's a shrink of NV40.

The 6800GS PCIe is NV42, but mine is AGP and uses the good old-fashioned NV40, like the 6800/GT/Ultra AGP cards. That's why I can unlock it to 16p/6v; NV41 and NV42 only contain 12p/5v max.

Reply 24 of 45, by Skyscraper

User metadata
Rank l33t
candle_86 wrote:
swaaye wrote:
candle_86 wrote:

I can tell you my 6800GS, unlocked and OCed to 425/1200 under my Accelero 5, idles around 43C and loads to around 61C, to give you a frame of reference. Oh, and the BIOS was flashed so it runs @ 1.4V instead of 1.3V.

6800GS is NV42, right? That's a shrink of NV40.

The 6800GS PCIe is NV42, but mine is AGP and uses the good old-fashioned NV40, like the 6800/GT/Ultra AGP cards. That's why I can unlock it to 16p/6v; NV41 and NV42 only contain 12p/5v max.

Another issue with the NV41 and NV42 cores is that unused shaders are almost always laser-cut to hinder people from trying to reactivate them.
I got an XFX Geforce 6800 XT with a motherboard/CPU bundle I bought: 8p/4v, no unlocking possible, and as icing on the cake, 128-bit memory 😁

Because of the much lower core clock this card is slower than a 6600GT!

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 25 of 45, by swaaye

User metadata
Rank l33t++
Skyscraper wrote:

Another issue with the NV41 and NV42 cores is that unused shaders are almost always laser-cut to hinder people from trying to reactivate them.
I got an XFX Geforce 6800 XT with a motherboard/CPU bundle I bought: 8p/4v, no unlocking possible, and as icing on the cake, 128-bit memory 😁

Because of the much lower core clock this card is slower than a 6600GT!

I'm sorry to hear that you own that.

I think at that time I was running an X800 GTO2, which was basically an X850 XT PE that couldn't quite hit full clocks.

Reply 26 of 45, by Skyscraper

User metadata
Rank l33t
swaaye wrote:
Skyscraper wrote:

Another issue with the NV41 and NV42 cores is that unused shaders are almost always laser-cut to hinder people from trying to reactivate them.
I got an XFX Geforce 6800 XT with a motherboard/CPU bundle I bought: 8p/4v, no unlocking possible, and as icing on the cake, 128-bit memory 😁

Because of the much lower core clock this card is slower than a 6600GT!

I'm sorry to hear that you own that.

I think at that time I was running an X800 GTO2, which was basically an X850 XT PE that couldn't quite hit full clocks.

Well, the bundle was ~4 Euro + ~6 Euro shipping and included a working Abit AV8 Socket 939 board, an Athlon X2 3800+ and 4x1GB of PC3200 memory, so I'm not sorry at all 😁

Back in 2004 I bought a Geforce 6800 GT AGP as soon as it was released; I never had a reason to regret it and I still own it.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 27 of 45, by swaaye

User metadata
Rank l33t++
Skyscraper wrote:

Well, the bundle was ~4 Euro + ~6 Euro shipping and included a working Abit AV8 Socket 939 board, an Athlon X2 3800+ and 4x1GB of PC3200 memory, so I'm not sorry at all 😁

Hah, well yeah, that's a sweet acquisition. 😀

My guess would be somebody bought the 6800 XT based on reviews of the 6800 GT or Ultra. If he even looked at reviews. Who knows. Brand power, baby!

Reply 28 of 45, by obobskivich

User metadata
Rank l33t

While I've never actually laid hands on a 6800XT, my 5900XT is a fantastic card. It, however, is not internally crippled; it's just clocked lower than the other 5900 variants. A shame to hear that didn't continue on to later cards...

Reply 29 of 45, by swaaye

User metadata
Rank l33t++
obobskivich wrote:

While I've never actually laid hands on a 6800XT, my 5900XT is a fantastic card. It, however, is not internally crippled; it's just clocked lower than the other 5900 variants. A shame to hear that didn't continue on to later cards...

With the FX series I don't think they had the same ability to disable internal components; they just varied clock speed. Though with the FX chips, what would one disable? There's not much parallelism in even NV35/38! If you halved a 5950 it'd be something like a 5200.

The 6800XT would probably beat the entire 5xxx lineup. So there's that aspect. Crippling is relative! 😀

Reply 30 of 45, by candle_86

User metadata
Rank l33t
obobskivich wrote:

While I've never actually laid hands on a 6800XT, my 5900XT is a fantastic card. It, however, is not internally crippled; it's just clocked lower than the other 5900 variants. A shame to hear that didn't continue on to later cards...

The 6800XT was an oddball: all Nvidia specified was 8p/4v, with no clock speeds or memory width. You could find them with 128-bit and 256-bit memory, and with DDR, DDR2, and GDDR3. A few used NV40 cores, a few used NV41, but most used NV42 laser-cut by Nvidia. The XT in this case was simply a way to get rid of old stock before the 7600 series launched and made them impossible to sell.

Also, a 128-bit 6200 could beat the entire FX line in DX9 titles 🤣. And come to think of it, I benchmarked a 64-bit 6200 against an FX5700 Ultra and the 6200 was about 3% faster in DX8 games and about 20% faster in DX9 titles. It was really sad.

Reply 31 of 45, by Skyscraper

User metadata
Rank l33t

I just cleaned the NV Silencer 5 I use to cool my Geforce 6800 GT AGP. I was tempted to move the cooler to my PCI-E 6800 GT, but not much would have been gained, as I would then have had to use the stock cooling on the AGP card. Washing the heatsink with washing-up liquid decreased the idle temp by 15°C 😀. Now the card is ready for a quick test run with the Opteron system.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 32 of 45, by obobskivich

User metadata
Rank l33t
swaaye wrote:

With the FX series I don't think they had the same ability to disable internal components; they just varied clock speed. Though with the FX chips, what would one disable? There's not much parallelism in even NV35/38! If you halved a 5950 it'd be something like a 5200.

The 6800XT would probably beat the entire 5xxx lineup. So there's that aspect. Crippling is relative! 😀

Actually, AFAIK the 5700 Ultra is basically a "halved" 5950 - it has half the "arrays" available to it ("4 pipes" instead of "8 pipes") and half the memory bus (128-bit vs 256-bit), but otherwise runs at the same clocks and has the same 3 VS units, 4 PS units, etc. I've never heard of a 5700 being "unlocked" to a 5950, though. On the other hand, there are NV35 cards like the QFX 700 that are "halved" in terms of clock speed and memory speed but have fully functional cores.

As far as the 6800XT beating the FX line, I remember reading an old review recently that showed the 5950 Ultra outpacing the 6600GT and vanilla 6800 in applications that are primarily memory-bandwidth constrained (the 5950 has nearly the same memory bandwidth as the 6800GT), but it was certainly a specific case. Otherwise, yeah, I think the 6800XT should probably be faster. 🤣

Reply 33 of 45, by candle_86

User metadata
Rank l33t

Well, remember that an FX5700 was actually a 2x1 or 8x0 design, just like the FX5600; the FX5800 and FX5900 used a 16x0, 4x1 design, and the FX5200/5500 used a 2x1 or 4x0 design. What I mean is that when only processing alpha channels they performed faster, but when doing normal work they performed much worse.

Reply 34 of 45, by swaaye

User metadata
Rank l33t++
candle_86 wrote:

Well, remember that an FX5700 was actually a 2x1 or 8x0 design, just like the FX5600; the FX5800 and FX5900 used a 16x0, 4x1 design, and the FX5200/5500 used a 2x1 or 4x0 design. What I mean is that when only processing alpha channels they performed faster, but when doing normal work they performed much worse.

I tried to classify these chips a while back and came up with this:
5200/5500 - 2x2
5600/5700 - 4x1
5800/59x0 - 4x2/8x0
59x0 - 16x0 with stencil pass?

Tech Report also saw behavior from the 5200 that mirrored what a GF4 MX was doing, and so they thought maybe the 5200 had more in common with NV17 than one might guess from the codenames.

And this is pretty simplistic, because those numbers don't consider ALU processing time with shader programs. We all know that NV3x doesn't do well when shader programs are added to the mix...
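
To put rough numbers on why these layouts matter, here is a quick back-of-the-envelope sketch in Python (a minimal illustration only: the configs come from the list above, the clocks are the commonly published Ultra clocks, and the real chips switch modes per workload rather than sitting in one):

# Peak theoretical fillrates from a "pixels x TMUs" layout (illustrative).
def fillrates(pipes, tmus_per_pipe, core_mhz):
    pixel_rate = pipes * core_mhz                  # Mpixels/s
    texel_rate = pipes * tmus_per_pipe * core_mhz  # Mtexels/s
    return pixel_rate, texel_rate

for name, cfg in {
    "FX 5200 Ultra (2x2 @ 325 MHz)": (2, 2, 325),
    "FX 5700 Ultra (4x1 @ 475 MHz)": (4, 1, 475),
    "FX 5950 Ultra (4x2 @ 475 MHz)": (4, 2, 475),
}.items():
    px, tx = fillrates(*cfg)
    print(f"{name}: {px} Mpix/s, {tx} Mtex/s")

A 4x1 part lays down as many single-textured pixels per clock as a 4x2 part; it's the multitexture (and texture-less stencil/z) cases where the layouts separate, which is why the "8x0" and "16x0" modes matter for things like shadow fill.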

Reply 35 of 45, by swaaye

User metadata
Rank l33t++
obobskivich wrote:

As far as the 6800XT beating the FX line, I remember reading an old review recently that showed the 5950 Ultra outpacing the 6600GT and vanilla 6800 in applications that are primarily memory-bandwidth constrained (the 5950 has nearly the same memory bandwidth as the 6800GT), but it was certainly a specific case. Otherwise, yeah, I think the 6800XT should probably be faster. 🤣

There might be some scenarios where the bandwidth helps the 5900 a lot. Games that are relatively simple, run at high resolution, and with some anti-aliasing added in? Maybe DirectX 7-era games set up like this.

I think Putas also brought up once that we have no idea how efficient NV3x is with bandwidth compared to NV4x, so that huge bandwidth on the 5900 might be more necessary than it is for, say, the 6600.
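
For what it's worth, the raw numbers do sit close together - a minimal sketch, assuming the commonly published memory clocks and bus widths for these cards:

# Raw memory bandwidth = bus width (in bytes) x effective transfer rate.
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return bus_bits / 8 * effective_mt_s / 1000

print(bandwidth_gb_s(256, 950))   # FX 5950 Ultra: ~30.4 GB/s
print(bandwidth_gb_s(256, 1000))  # 6800 GT:       ~32.0 GB/s
print(bandwidth_gb_s(128, 1000))  # 6600 GT:       ~16.0 GB/s

So the 5950 Ultra has nearly double the raw bandwidth of a 6600 GT; how efficiently each architecture uses it is the open question.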

Reply 36 of 45, by obobskivich

User metadata
Rank l33t
swaaye wrote:

I tried to classify these chips a while back and came up with this:
5200/5500 - 2x2
5600/5700 - 4x1
5800/59x0 - 4x2/8x0
59x0 - 16x0 with stencil pass?

AFAIK (based on everything I've read on NV3x), the 5800/5900 are all "8 pipe" variations that can operate as 4x2, 8x1, or 16x0. The 5200-5700 series are "4 pipe" and can operate as 2x2, 4x1, or 8x0 (I'm not 100% certain on this for the 5200/5500; cf. what you posted below).

Tech Report also saw behavior from the 5200 that mirrored what a GF4 MX was doing, and so they thought maybe the 5200 had more in common with NV17 than one might guess from the codenames.

nVidia's "official line" seems to be that the FX 5200 is CineFX I-based (which means no NV1x), but they also do not list it as an Intellisample part. If I had to guess, they probably took the memory interface (and maybe more?) off the MX for the 5200/5500, just as they did with the GF4 Ti for the 5800 (4x32-bit controllers). Both cards have similar memory bandwidth, and it wouldn't be surprising if that was more of a limiting factor for the FX than for the GF4 MX, since the 5200 is supposed to be capable of more computational throughput than the MX, especially when you factor in the shader support.

swaaye wrote:

There might be some scenarios where the bandwidth helps the 5900 a lot. Games that are relatively simple, run at high resolution, and with some anti-aliasing added in? Maybe DirectX 7-era games set up like this.

If I remember right, it was a flight or driving simulator, so it was probably very fill/bandwidth dependent, and the shader processing power on the GF6 wouldn't count for much. If I come across it again I can link you - it was certainly surprising to see the 5950 performing so well. 🤣

I think Putas also brought up once that we have no idea how efficient NV3x is with bandwidth compared to NV4x, so that huge bandwidth on the 5900 might be more necessary than it is for, say, the 6600.

According to stuff I've read in the past (and would probably be hard-pressed to find again), NV30 was able to achieve something like 4-5:1 efficiency thanks to its compression features, and early nV marketing materials often claimed "equivalent 48GB/s of memory bandwidth" when comparing it to the Radeon 9700. What I'm not sure about is the transition to NV35; I would assume that either NV35 drops some of those bandwidth-saving technologies, or the extra memory bandwidth is largely irrelevant and was included as a marketing feature (so they could say "me too!" with 256-bit memory). I say this because many benchmarks (including some I've done myself) comparing NV30 to NV35 usually don't show dramatic performance differences until you get into shader-heavy applications (where NV35 has a computational advantage).
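
As a sanity check on that marketing figure - a small sketch, assuming NV30's published 128-bit bus at 500 MHz DDR-II (1000 MT/s effective); the 48 GB/s number is nVidia's claim, not a measurement:

# NV30 (FX 5800 Ultra): raw bandwidth vs. the claimed "equivalent" figure.
raw_gb_s = 128 / 8 * 1000 / 1000   # = 16.0 GB/s raw
claimed_gb_s = 48.0                # nVidia's marketing claim vs. Radeon 9700
print(claimed_gb_s / raw_gb_s)     # = 3.0, i.e. roughly a 3:1 effective ratio

The 48 GB/s claim works out to about 3:1 over the raw 16 GB/s, in the same ballpark as those 4-5:1 compression figures.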

Reply 37 of 45, by swaaye

User metadata
Rank l33t++

I'm pretty sure that the 5800/5900 cannot operate as 8x1. If they could, their clock speed would allow them to vastly outperform the 9800 Pro's single-texture fillrate. I think the only 8-pixel-per-clock mode is 8x0. And the 59x0 has the stencil rate boost as well, which I think may only be available in OpenGL (id Tech 4 stencil shadowing was supposed to be super popular, I guess).
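
The arithmetic behind that - a minimal sketch using the published core clocks (475 MHz for the 5950 Ultra, 380 MHz for the 9800 Pro), with the 8x1 mode as the hypothesis being ruled out:

# Single-texture fillrate = pixel pipes x core clock (Mpixels/s).
fx5950_as_4x1 = 4 * 475   # 1900 Mpix/s - matches measured behavior
fx5950_as_8x1 = 8 * 475   # 3800 Mpix/s - hypothetical 8x1 mode
r9800pro      = 8 * 380   # 3040 Mpix/s - 9800 Pro, a true 8x1 design

print(fx5950_as_8x1 > r9800pro)  # True: an 8x1 NV38 would clearly beat the
                                 # 9800 Pro in single texturing, which
                                 # reviews never observed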

Plus there's this
NV31 closer to NV35 than NV30? - More pipeline mysteries.

Dave Baumann, post: 77958, member: 2 wrote:

[NV30] can never write more than 4 pixels per clock - NVIDIA have stated this.

Reply 38 of 45, by Skyscraper

User metadata
Rank l33t

The fan on my NV Silencer 5 failed.

I'm now running my AGP Geforce 6800 GT with Ultra clocks and without any memory cooling at all. I'm using an Accelero S1 rev. 2 to cool the GPU. The memory topped out at 1150 MHz with the NV Silencer 5. It will be interesting to see if I have lost any speed; so far 1100 MHz is working fine, at least.

edit

I did some more testing. It seems I lost about 25 MHz in top memory overclock and gained 25 MHz in top core overclock, a fair trade.
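
That trade looks reasonable in relative terms too - a rough sketch, assuming the quoted memory figures are effective DDR rates on the 256-bit bus and that the top core overclock sits near the Ultra's stock 400 MHz:

# Relative cost/benefit of -25 MHz memory vs. +25 MHz core.
mem_loss_pct  = 25 / 1150 * 100   # ~2.2% less raw memory bandwidth
core_gain_pct = 25 / 400 * 100    # ~6.3% higher core clock (and fillrate)
print(round(mem_loss_pct, 1), round(core_gain_pct, 1))

Roughly a 2% bandwidth loss against a ~6% fillrate gain, so "a fair trade" if anything undersells it.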

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 39 of 45, by obobskivich

User metadata
Rank l33t
swaaye wrote:

I'm pretty sure that the 5800/5900 cannot operate as 8x1. If they could, their clock speed would allow them to vastly outperform the 9800 Pro's single-texture fillrate. I think the only 8-pixel-per-clock mode is 8x0. And the 59x0 has the stencil rate boost as well, which I think may only be available in OpenGL (id Tech 4 stencil shadowing was supposed to be super popular, I guess).

This contains nVidia's statement on NV30's internals (and some of TR's interpretation and measurement):
http://techreport.com/review/4966/nvidia-gefo … 800-ultra-gpu/4

It can do 8 texture/stencil/shader operations per clock, but only 4 color+Z operations per clock. Per other nVidia specs, it also supports 16 textures per pass (this includes NV30). "8x.5" may be more accurate, then, but overall it gets quite screwy in real life because it isn't like the 3DLabs P10, where it's "locked" in one array mode or another. Instead this all dances back and forth in real time (clock to clock); I don't know to what extent it supports "mixing and matching" across its available resources, which is where "8x.5" may or may not make better sense.

On the stencil/shadow thing, per this (http://www.nvidia.com/object/feature_ultrashadow.html) it appears all of the FX chips support two-sided stencil and should handle stencil shadowing (if I'm reading correctly), but NV35 added shadow culling, which is meant to improve performance (like HyperZ), though it appears to be tied to the renderer as opposed to the hardware, so the developer would have to implement it ("programmers can define" as opposed to "the blessed chip automatically figures out"). In a Tom's Hardware comparison, the 5800 and 5900 performed almost identically in a Doom 3 test, which could potentially be interpreted as a lack of shadow culling support (or importance).

Skyscraper wrote:

The fan on my NV Silencer 5 failed.

I'm now running my AGP Geforce 6800 GT with Ultra clocks and without any memory cooling at all. I'm using an Accelero S1 rev. 2 to cool the GPU. The memory topped out at 1150 MHz with the NV Silencer 5. It will be interesting to see if I have lost any speed; so far 1100 MHz is working fine, at least.

edit

I did some more testing. It seems I lost about 25 MHz in top memory overclock and gained 25 MHz in top core overclock, a fair trade.

So it is working without memory cooling - like X800. Interesting. Thanks for testing this out, but sorry to hear about the NV Silencer failing. 😊