VOGONS


Reply 20 of 25, by 386SX

User metadata
Rank l33t

Here are the scores, briefly (GT610 PCI on Linux x64/Wine 5.x vs the same GT610 PCI on Win 8.1 x64, proprietary drivers in both cases):

3DMark05 Score: 3535 (vs 2080 on Win)
GT1 - Return to Proxycon: 15.2 fps (vs 12.0 fps on Win 8.x)
GT2 - Firefly Forest: 7.9 fps (vs 5.6 fps)
GT3 - Canyon Flight: 23.4 fps (vs 8.5 fps !!!)

Feature Tests:
Fill Rate Single Texturing: 1885.3 MTexels/s (vs 1134)
Fill Rate Multi Texturing: 5277 MTexels/s (almost identical)
Pixel Shaders: 137 fps (vs 165 fps, slower here)
Vertex Shaders Simple: 48.6 MVertices/s (vs 6.0 MVertices/s !!!)
Vertex Shaders Complex: 51.3 MVertices/s (vs 10.9 MVertices/s !!!)

It's impressive. I think the pixel shaders are probably more optimized in the Windows environment, but the geometry results are incredible, and they're real: the tests really run MUCH smoother where the geometry is heavy. The 3DMark05 pixel shading speed under Wine can't compare, but it seems to compensate with triangle throughput, which on Linux flies even over PCI. The only theory I can come up with is that the usual advice not to run these benchmarks on such newer OSes really holds; I suppose Win XP or even Win 7 results might be similar. If I test the card on those OSes I'll write here.
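To put the gaps in perspective, here are the same numbers expressed as Linux/Wine-over-Win speedup ratios. This is just recomputing from the figures posted above, nothing new:

```python
# Speedup ratios (Linux x64/Wine 5.x vs Win 8.1 x64, same GT610 PCI),
# taken straight from the 3DMark05 results posted above.
results = {
    "Overall score":                 (3535, 2080),
    "GT3 - Canyon Flight (fps)":     (23.4, 8.5),
    "Vertex Shaders Simple (MV/s)":  (48.6, 6.0),
    "Vertex Shaders Complex (MV/s)": (51.3, 10.9),
    "Pixel Shaders (fps)":           (137, 165),  # the one test Wine loses
}

for name, (linux, win) in results.items():
    print(f"{name}: {linux / win:.2f}x")
```

The vertex shader tests come out around 5-8x faster, which matches the "geometry flies on Linux" impression, while pixel shading sits below 1x.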

Reply 21 of 25, by 386SX

User metadata
Rank l33t

In 3DMark03 the score is still higher: 6980 on Linux/Wine vs 5379 on 8.1.
The individual tests differ a bit. Mother Nature reaches 50 fps vs 24.6 fps on Win, clearly much smoother right from the start, and once the camera pans to the trees the difference is very visible. The PS 2.0 test is also much faster on Linux, 90 fps vs 75.3 fps on Win.

So basically this is a unique, complex PCI card for sure, and using it for retrogaming on a modern OS might not be a good idea, but Linux with Wine seems to get better results imho, mostly in geometry-heavy environments, for some reason I don't know. I suppose the scores would be similar to what you'd get on XP or some older, lighter, natively-DX9 OS.
Now I'll try the modern Unigine Valley test. I don't expect much, but it's a benchmark natively available for both Linux and Windows.

Reply 22 of 25, by Errius

User metadata
Rank l33t

Be aware that the PNY GeForce 8400 GS PCIe was produced in two versions, with memory clocked at 666 MHz and 800 MHz.

The equivalent PCI card was only produced in a 666 MHz version.

ETA: Also watch out for this with the Zotac GeForce GT 520/610 cards.

The PCIe cards were produced in versions with memory clocked at 1600 MHz, 1333 MHz, and 1066 MHz. But the equivalent PCI cards were (I think) only produced in a 1333 MHz version.
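For a rough idea of what those memory clocks mean in bandwidth terms, here's a quick sketch. It assumes the quoted "MHz" figure is the effective DDR transfer rate and that these cards use the usual 64-bit memory bus for this class of GPU; check your specific board, since narrower buses exist:

```python
# Peak memory bandwidth for a given effective DDR data rate.
# Assumes a 64-bit memory bus (typical for GT 520/610-class cards)
# and that the quoted "MHz" already counts the DDR doubling.
def bandwidth_gbs(effective_mts, bus_bits=64):
    """effective_mts: effective transfer rate in MT/s."""
    return effective_mts * (bus_bits // 8) / 1000.0

for rate in (1066, 1333, 1600):
    print(f"{rate} MT/s -> {bandwidth_gbs(rate):.1f} GB/s")
```

So the step from a 1066 MHz to a 1333 MHz part is roughly 8.5 vs 10.7 GB/s of peak bandwidth, a meaningful difference on cards this bandwidth-starved.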

Is this too much voodoo?

Reply 23 of 25, by 386SX

User metadata
Rank l33t

This PCI GT610 runs at 810 MHz for the GPU, 1334 MHz for the 512 MB of DDR3 memory, and 1620 MHz for the shaders. Strangely, other similar cards like the GT210 PCI I have (with a different bridge) do work with dynamic clock scaling, but this one seems factory-fixed to these values by the card BIOS; even on a fresh Linux installation the situation is the same as on Win. Not a problem anyway, except that maybe these GPUs weren't designed to always run at a fixed voltage. It's still a low-end, low-power GPU, temperatures are fine with a case fan, and the internal x1 PCI-E link (from the GPU to the bridge IC) reduces GPU load/stress too.
This makes me think that if I try the 'PCI-PCIe adapter' I might be able to test an even faster card, since it should still run at a lower power draw, and setting a lower clock should help, just like on these original PCI cards. The idea, just for testing, would be a single-slot GT1030: it exceeds the 30 W limit in the PCIe x16 version, but maybe not in an x1 bridged PCI config with a downclocked GPU. In this specific config the final wall power draw I expect not to exceed is 48 watts, the maximum total-system value I've seen with the factory GT610 PCI; more watts would probably mean the card demanding too much, crashing the system and/or risking the mainboard.
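A back-of-the-envelope version of that budget, just to see if it's plausible. The TDPs are the official board power figures (29 W for the GT 610, 30 W for the GT 1030), but the downclock scaling factor is a pure guess, and subtracting the GPU's TDP directly from the wall reading ignores PSU efficiency, so treat the result as optimistic:

```python
# Rough power-budget sketch for swapping in a downclocked GT 1030.
# TDPs are official board power figures; the 0.85 factor is an
# assumed reduction from lower clocks + the x1 bridged link (a guess).
# Ignores PSU efficiency losses, so the estimate is on the low side.
WALL_MAX_OBSERVED = 48   # watts at the wall with the factory GT610 PCI
GT610_TDP = 29
GT1030_TDP = 30
DOWNCLOCK_FACTOR = 0.85

rest_of_system = WALL_MAX_OBSERVED - GT610_TDP          # ~19 W
estimated_wall = rest_of_system + GT1030_TDP * DOWNCLOCK_FACTOR
print(f"Estimated wall draw with GT1030: {estimated_wall:.1f} W")
```

Under these assumptions it lands just under the observed 48 W ceiling, but the margin is thin enough that it's not a safe bet.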
Another option might be a PCIe x1 to PCIe x16 "internal USB 3.0-cabled riser/adapter" with a PSU PCIe power connector, so the cards I could try might be faster depending on the PSU. But I hate the idea of connecting something like a video card through a USB cable inside the case. I'd prefer a higher-end x16 PCIe riser flat cable, but I can't find any that both has an external power connector and can be fixed to the full ATX case screw points.

Reply 24 of 25, by Errius

User metadata
Rank l33t

The 1333 MHz GeForce GT 610 PCI @ 33 MHz with 512 MB VRAM actually outperforms the equivalent 1066 MHz PCIe x16 card with 1 GB in 3DMark 11.

And the 1600 MHz GeForce GT 520 PCIe x16 outperforms all the other cards in all benchmarks. (I don't know if Zotac ever released a 1600 MHz GT 610.)

Is this too much voodoo?

Reply 25 of 25, by 386SX

User metadata
Rank l33t

Interesting, even if in the end it still seems like these PCI cards can't quite fit any target. For retrogaming on this card, Linux seems to perform much better than a modern Win OS; the Win drivers work better and much more smoothly in the GUI, but the retrogames I tested run slower there. Newer games are simply too heavy on both, and the missing PCIe x16 link is not optional: it really makes a difference even on such a low-power GPU. In a very heavy benchmark I tested, the GT610 PCIe x16 I have can load the x16 v1.1 bus up to 50% of its bandwidth on a G41/ICH7 system. The PCI bridged logic is far from that, and depends on multiple factors, not least the bridge chip itself. The PCIe x1 version of these cards doesn't need a bridge chip and the connection is native, even if the length of the bus connector is awful imho for a secure installation.
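To put the bus difference in numbers, these are the theoretical peak bandwidths of the links involved, straight from the standard specs (real throughput is lower, and the PCI bus is shared with other devices):

```python
# Theoretical peak bandwidth of the buses discussed above.
# PCI: 32-bit parallel bus at 33 MHz, shared between all PCI devices.
# PCIe 1.1: 2.5 GT/s per lane with 8b/10b encoding -> 250 MB/s
# per lane, per direction.
buses = {
    "PCI 32-bit/33 MHz": 32 / 8 * 33,   # 132 MB/s, shared
    "PCIe 1.1 x1":       250 * 1,       # per direction
    "PCIe 1.1 x16":      250 * 16,      # 4000 MB/s per direction
}
for name, mbs in buses.items():
    print(f"{name}: {mbs:.0f} MB/s")
```

So even a native PCIe 1.1 x1 link has nearly double the peak of the whole shared PCI bus, and the x16 slot is about 30x faster than PCI, which is why a card that can load the x16 bus to 50% suffers so badly behind a PCI bridge.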