VOGONS


First post, by 386SX

User metadata
Rank: l33t

Hi,

over the years I have found some of the last, rather unusual PCI video cards: the Radeon 7000 PCI, the GeForce FX 5200 PCI, the GeForce 210 PCI and, most recently, the GeForce GT610 PCI, and some of them give strange results.
The early ones like the Radeon 7x00 / FX 5200 both seem to use a dual-chip arrangement to adapt their natively AGP GPUs to the PCI bus of old mainboards, while the modern GeForce 210 (GT218) uses a PLX PEX8112 bridge chip and the GT610 a Pericom PI7C9X. The 210 card, which I tested for quite some time, gives mixed results depending on the scenario: strange behaviour and latency problems at times, interesting results at others.
I think that in my tests the combination with the mainboard's own PCIe-to-PCI bridge inside the NM10/ICH7 chipset (the 82801 Mobile PCI Bridge, on an Atom mini-ITX board) creates a "castle" of translations that causes these problems. In synthetic benchmarks, increasing the PCI latency timer improves the scores but not the general experience, not even for web pages or the GUI itself; at the same time a 1080p H.264 60 fps test video is decoded by the GPU without problems and with minimal CPU impact.
I suppose the PCI bus itself isn't the real problem, just as it wasn't back in the AGP days; maybe it's these non-native hardware bridge combinations that increase the latency. Soon I'll receive the GT610 PCI card to test.
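
To put some rough numbers on that chain of translations, here is a minimal sketch comparing the theoretical peak bandwidth of each hop as I understand the topology (the hop list and the nominal per-direction figures are my assumptions; real sustained throughput across two bridge translations is certainly lower):

```python
# Nominal peak bandwidth of each hop in the bridged chain, in MB/s.
# These are theoretical figures; sustained throughput across the two
# bridge translations (chipset PCIe->PCI, then card PCI->PCIe) is lower.

hops = {
    "chipset PCIe 1.x x1 link (to internal PCIe/PCI bridge)": 250,
    "PCI 32-bit / 33 MHz slot (shared, half-duplex)":         133,
    "card bridge PCIe x1 link (to the GPU)":                  250,
}

for name, mb_s in hops.items():
    print(f"{name:55s} {mb_s:4d} MB/s")

bottleneck = min(hops, key=hops.get)
print(f"\nNarrowest hop: {bottleneck} ({hops[bottleneck]} MB/s)")
```

So on paper the classic PCI hop is the narrowest link, but since every transaction also crosses two protocol translations, added latency rather than raw bandwidth may well dominate how the system feels, which matches what I'm seeing.
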
Do you have any experience with bridged PCI video cards? I watched the PixelPipes YouTube review about these late solutions and I'm beginning to think it's not all about the PCI bus itself: imho the PCI bridges are the point to focus on, in addition to the non-native, late PCI mainboards these cards were built for and tested on. Maybe a native PCI mainboard with the same video cards would perform better.

Thanks

Last edited by 386SX on 2022-01-02, 12:29. Edited 6 times in total.

Reply 1 of 25, by bakemono

User metadata
Rank: Oldbie

I tested an 8400GS PCI card with the PEX8112 once. I only ran 3DMark01 on it, but it was around 2/3 the speed of a PCIe 8400GS. The problem is that each vertex that gets rendered usually has X/Y/Z coordinates, U/V texture coordinates, and a surface normal vector. That's 8x 32-bit floating point values = 32 bytes. So with three vertices per triangle, the PCI bus bandwidth gives an upper limit of around 1.3 million triangles per second. Putting vertex data into video memory (with VBOs) gets around that somewhat, but models still need to change sometimes, as do textures, etc., so the PCI bus can be a bottleneck.
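
As a sanity check on that figure, here is the same back-of-the-envelope calculation spelled out (a minimal sketch assuming the 8-float vertex layout above, the theoretical 133 MB/s of 32-bit/33 MHz PCI, and no vertex sharing):

```python
# Upper bound on triangles/s when streaming raw vertex data over 32-bit/33 MHz PCI.
# Assumes the vertex layout described above (8 floats per vertex) and the bus's
# theoretical 133 MB/s peak; real sustained transfer rates are lower.

FLOATS_PER_VERTEX     = 8                       # x,y,z + u,v + nx,ny,nz
BYTES_PER_VERTEX      = FLOATS_PER_VERTEX * 4   # 32-bit floats -> 32 bytes
VERTICES_PER_TRIANGLE = 3                       # worst case: no indexing / reuse
PCI_BYTES_PER_SECOND  = 133 * 10**6             # theoretical 32-bit/33 MHz PCI peak

bytes_per_triangle = BYTES_PER_VERTEX * VERTICES_PER_TRIANGLE        # 96 bytes
print(f"{PCI_BYTES_PER_SECOND / bytes_per_triangle / 1e6:.2f} Mtriangles/s")
# -> ~1.4 Mtriangles/s at the theoretical peak, in the same ballpark as above,
#    and that is before any command, texture or readback traffic on the bus.
```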

again another retro game on itch: https://90soft90.itch.io/shmup-salad

Reply 2 of 25, by 386SX

User metadata
Rank: l33t

Thanks for the interesting answer. I agree that under heavy 3D load the PCI bus can ultimately be a bottleneck, but compared to the old native PCI video cards the unexpected problem for the user experience is, imho, latency: I would indeed expect a performance decrease compared to a native PCIe x16 connection, but not such random lag spikes even in the accelerated GUI, nor the strange CPU usage peaks in, for example, web browsers, which sometimes add up to a very strange, mixed performance experience.
I read that latency is one of the parameters used to compare the available solutions (I found at least three different "modern" chips made for this task: the PLX PEX811x, the IDT TSI381 and the Pericom PI7C9X), and from what I understand latency seems to be a known factor that needs specific features in the bridge to be mitigated.

I suppose these GPUs were obviously never built for such a specific retro use, even if these bridge ICs helped the PCI bus last even longer than it already had. Probably the fact that recent mainboard chipsets keep their last PCI slots alive through their own internal PCIe-to-PCI bridge makes the problem even worse. Putting a PCI slot on a mini-ITX mainboard that already had PCIe lanes was a bit absurd, since low-profile PCIe cards have always existed, but anyway I like testing these alternative solutions. It would be interesting to try these video cards on a mainboard with a real native PCI chipset to see how much better they perform; I suppose I might try.

Last edited by 386SX on 2021-08-18, 17:02. Edited 1 time in total.

Reply 3 of 25, by 386SX

User metadata
Rank: l33t

Update: after a long time I tried installing my retail Windows 8.1 x86 on this config, to check whether the latency problems are as bad as I remember. In fact the Windows installation gives a much faster, more responsive GUI with the humble GeForce 210 PCI and its PLX 8112 bridge. I don't see specific drivers for the bridge, but I suppose the mainboard chipset bridge is better optimized under Windows.
Strangely, the Windows 8.1 driver for the Intel Mobile PCI Bridge seems to be dated 2006 and installing a newer INF package doesn't update it. Some random lags still exist, but overall it is much improved.
Some bench numbers:

original iGPU SGX545/GMA3600
3DMark2001: 3900
3DMark03: 1924
3DMark05: 750

ext GPU Geforce 210 PCI
3DMark2001: 6642
3DMark03: 5651
3DMark05: 2435

3DMark2000, being a DX7 benchmark, has speed problems on Windows 8, as if it were running in some compatibility mode: both GPUs end up limited to an average score of around 2400, and even lower without a forced DirectDraw wrapper.
Anyway, not a bad speed boost over the PCI bus for a dual-core Atom (SSSE3, 1.9 GHz) system.
I attach the 3DMark2001 SE - Geforce 210 PCI result details:

Attachments

  • 3dmark01_GT218.jpg (45.26 KiB)

Reply 4 of 25, by 386SX

User metadata
Rank: l33t

I will add the GT610 PCI test as soon as I install it. Generally it looks like 3DMark05 is already beyond what this card can handle, and even in 3DMark2001 there are moments where the frame rate randomly drops, as if it were waiting on bus transfers; the Car Chase high-detail test, for example, looks quite heavy even for a much newer low-end card like this.
I'm also thinking of buying a generic PCI-to-PCIe adapter based on the PLX 8112 chip, which might still allow me to use a more modern low-power, low-profile card. The important thing is to keep the power drawn from the bus below 30 W, or better still even lower. The GeForce 210 seems to stay around 20 watts or less most of the time, within the PCI limits.

Reply 5 of 25, by Warlord

User metadata
Rank: l33t

It could just be the version of Windows, the validity of the install, or you may have had IRQ sharing issues. You don't want your PCI video card fighting over resources. Intel chipset drivers are just INFs that let Device Manager show the proper name of a piece of hardware on the board, so I doubt it's a driver issue, at least with the Intel chipset; it could have been an issue with the card or the card's drivers.

Reply 6 of 25, by 386SX

User metadata
Rank: l33t

While not "perfect" like a native PCIe version is "ok" in the GUI, lags appear in heavy bus situations like 3D complex tests or also heavy HTML5 webpages with GPU hw. Also I've to consider that Win 8 was an unsupported o.s. for the SoC oriented specifically on Win 7 32bit probably for the iGPU itself that never saw an official stable x64 support, so newer o.s. weren't officially supported while it should work being an x64 low power SoC (it sure works in linux x64 even with the latency problems).

I suppose that while Linux supports these unusual cards, it may not be as well optimized as Windows. I suppose the double 'mainboard PCIe <> PCI <> card PCIe' bridging gives mixed performance on top of the native PCI bandwidth limits. Maybe it would need some low-level PCI tweaking, but in the BIOS there is only the PCI Latency Timer, which I increased to the maximum to give the card more time on the shared bus, plus the 'PCI 133 MB/s' speed value and the PCI memory map table, both read-only.
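
As a side note, under Linux the same latency timer can be read without the BIOS, directly from the card's PCI configuration space. A minimal sketch, assuming a Linux box with sysfs and a made-up device address (the real one comes from lspci):

```python
# Read the PCI Latency Timer of a device from its configuration space (Linux sysfs).
# In the standard config header the Latency Timer is the byte at offset 0x0D,
# expressed in PCI clock cycles. The address below is only an example; reading
# the config file usually needs root.

DEVICE_CONFIG = "/sys/bus/pci/devices/0000:01:00.0/config"  # hypothetical address

with open(DEVICE_CONFIG, "rb") as f:
    header = f.read(64)                 # standard 64-byte configuration header

print(f"PCI latency timer: {header[0x0D]} PCI clocks")
```

The same register can normally also be changed at runtime with something like `setpci -s 01:00.0 latency_timer=40`, although whether the bridges actually honour a higher value is another question.
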

It's interesting anyway to see the variable/low frame rate in the polygons/s test and the Car Chase test, while other tests like Fill Rate and Pixel Shaders are quite good. Not that this was ever a gamer card, but for a very low-power computer it is interesting to test. DXVA/VDPAU video decoding is really good, just like on the iGPU's PowerVR SGX; it has browser support, but nowadays YouTube, for example, uses newer video codecs that this card can no longer help with. With H.264 the video decoding engine is close to its limit at 1080p 60 fps.

Last edited by 386SX on 2021-08-26, 13:47. Edited 1 time in total.

Reply 7 of 25, by 386SX

User metadata
Rank: l33t

The GT610 PCI arrived; compared to the light GeForce 210 PCI it's much heavier and longer, with a quite massive passive heatsink. I installed it and have been testing it, so I can post initial results:

original iGPU SGX545/GMA3600
3DMark2001: 3900
3DMark03: 1924
3DMark05: 750

ext GPU Geforce 210 PCI 512MB DDR3 64bit (bridge PLX 8112)
3DMark2001: 6642
3DMark03: 5651
3DMark05: 2435

ext GPU Geforce GT610 PCI 512MB DDR3 64bit (bridge PI7C9X)
3DMark2001: 7001
3DMark03: 5317
3DMark05: 2083

I was expecting faster results, but it seems the different PCI bridge can't improve the final speed; what it did improve is latency and general speed stability, which is less variable and smoother, with fewer random slowdowns. The final fps are similar or lower, showing that the PCI serial-to-parallel-to-serial communication is the main limit here, but the PI7C9X bridge IC seems to do a better job, or maybe it's just more compatible with the Intel mainboard PCI bridge, who knows.

GPU-Z shows 66% ASIC quality, and changing the frequency scaling method doesn't seem to change the final clock, which stays at maximum all the time. On top I put a big m-ATX case fan at 850 rpm; temperatures reach 67°C under load and stay at 47°C idle, while the PCIe version of the GT610 reached about 15°C more with the same but faster fan and was similar at idle (though dynamic frequencies don't seem to work here). The 3DMark03 Nature test reaches the power peak of 36.8 watts at the wall (with 4GB (3GB usable) of DDR3 SODIMM, an SSD, the DVD drive in standby and a basic 500W 80 Plus PSU). The PCIe/PCI bus usage value can now be read and reaches 100% during the 3D tests, while on the PLX/GT218 card that value couldn't be read. The Atom CPU already seems able to push the GPU to 100% usage in these old 3D tests, just like the much faster 3.3 GHz Core 2 on which I tested the PCIe GT610 before.
I attach the 3DMark2001 results for the GT610 PCI: the Pixel Shader test doubled, but both the Advanced Pixel Shader test and the Polygons test decreased. The latter may be a hint of how the different bridge chip works on the bus interface: generally smoother, but probably still limited by the combination of PCI bridges.

Attachments

  • 3dmark01_GF119.jpg (44.38 KiB)

Reply 8 of 25, by 386SX

User metadata
Rank: l33t

Some more updates on this GT610 PCI. Generally, I'd confirm that the choice of bridge IC has its own advantages but can also limit raw performance. The whole experience feels 'smoother' in the GUI, browser, apps and games, even if not necessarily faster; I'd describe it as 'less variable', similar to a low(er)-end PCIe card.
In 3DMark03 the GPU easily reaches 99% usage and so does the PCIe x1 GPU bus usage reading. In my tests the original PCIe x16 GT610 never used more than half of the x16 bus anyway, which is imho why some similar cards (GT710) were released with an x8 interface.
The original PCIe GT610 with a Core 2 performs faster, with something like double the 3DMark2001 score and a third more in the other benchmarks, but I noticed strange values in older DirectX games; I think W8.1 may not really be the best OS for testing anything older than DirectX 9. I don't have Windows XP or 7 x86, but it would be interesting to test these on those OSes. For example, there are many points where both GPU and CPU usage sit around 50% and the frame rate is still quite low.

I did some research, and the TSI381 bridge (there are different models), which I can't find on any adapter, seems to have a short-term caching feature that should give fast devices more headroom, but as I said no adapter seems to use it. So I think I'll look for a PEX8112 adapter for a PCIe GeForce GT710 GDDR5, or maybe a low-profile GT1030 if they ever get cheap. Theoretically they should work the same way, with the only limits being the bus wattage and keeping to light, low-profile cards so as not to stress the slot.

Last edited by 386SX on 2021-08-25, 16:52. Edited 1 time in total.

Reply 9 of 25, by 386SX

User metadata
Rank: l33t

Some updates (even if I doubt anyone's interested in these old-but-also-modern tests 😁): I am trying to push the Atom system as far as I can just for testing, and I now use x64 Windows 8.1 instead of x86 to get the full 4GB of RAM and maybe some extra CPU registers/optimizations, and it runs quite ok! Now it's an x64 OS on an x64 CPU with 4GB of 1066 MHz CL7 RAM. Time to unleash its power! 😉 There is one newer driver for both cards for 64-bit OSes; obviously I'm not expecting differences. One difference is that the chipset INF drivers can't be installed from the Intel Win7 x64 package, even though the OS seems to ship older generic NM10 Express and 82801 chipset drivers; the original installer gives file copy errors to the System folder, so I will try again.

The GeForce 210 PCI (PEX8112) consumes fewer watts than the GT610 PCI (PI7C9X), maybe around 5 watts less, but under Windows it performs similarly; much depends on the scenario. The real differences are the geometry bottleneck in raw performance, while pixel shaders and video decoding speed improved on the GT610.
I'm starting again from the 210 and will reinstall the GT610 PCI under this x64 OS, but I think the GT610 card/drivers may have a problem with dynamic frequencies, because they stay at maximum, while the 210 clocks down to a minimum at idle (around a 135 MHz GPU clock), reducing wattage a bit. I can't say whether the GT610 just has a reading problem, because its idle wattage still drops by almost 10 watts while it reports an 810 MHz GPU clock. This happens even when the Nvidia panel is set to Adaptive or the other values, and the GPU voltage appears fixed at 1.04 volts.

Soon I'll receive the PCI-to-PCIe x16 adapter, so I might get the DDR4 version of the GT1030, the slower variant, but one that stays inside the ~30W total limit for the PCI bus. The GDDR5 version asks more of the bus, plus the watts of the adapter, and I suppose that might be a problem for the mainboard. I've read absolute stress peaks of 35-45 watts in a GDDR5 review, so the DDR4 version might be the most advanced card to test without an external power connector; even if such a connector existed, it might not make up for the wattage limit of the PCI bus. Who knows whether external connectors give the card "only" the difference above the "75W" of the PCIe slot or instead compensate for its absence; I suspect the former, and that would again be a problem for testing cards with an external 12V connector if the bus had to work outside its specs.
Here are the x64 system results with 3DMark2001, 3DMark03 and 3DMark05 and the GeForce 210 PCI:

Attachments

  • 3dmark05_GT218_x64.jpg (25.56 KiB)
  • 3dmark03_GT218_x64.jpg (23.69 KiB)
  • 3dmark2001_GT218_x64.jpg (45.05 KiB)
Last edited by 386SX on 2021-08-26, 13:56. Edited 1 time in total.

Reply 10 of 25, by 386SX

User metadata
Rank: l33t

Here now are the GT610 PCI (PI7C9X) x64 results, with the CPU tests skipped since they are the same: the different bridge ICs on the two boards probably work in different ways, and the faster GT610 (GF119 GPU) can't really find a way to be faster than the older GT218 of the GeForce 210, which, except where pixel shading is heavily needed, may ironically be just as fast. Because of that, and before blaming PCI bandwidth as the main problem, I suspect the issue is how the PCI bus behaves when translated to/from PCIe logic, which is a complex subject. I've also come to understand that these old benchmarks should not be run on a modern OS (Win 8.1), which is probably another bottleneck; I suppose they would be faster on an older, lighter OS and on chipsets better optimized for PCI.

It's interesting to confirm the lower polygons/s speed of the GT610 PCI; imho that is the main problem for its results, only mitigated by its clearly higher pixel shading performance (48 CUDA cores vs 16, even if the 3DMark03 PS2.0 test behaves differently from the others). It still limits the final rendering, and the bridge chip configuration seems the only explanation for such different results.
The wattage peaks in the multitexturing fill rate and pixel shader tests, with a maximum of 46W for the system with the GT610 PCI and 27W at idle. The GeForce 210 needs a bit less power and, with its active cooler, stays below 47°C, while the GT610 reaches 67°C even with a case fan added on top.
I'll try some games, and the next step might be to adapt a modern low-profile PCIe card to see how it scales with much faster GPUs.

Attachments

Last edited by 386SX on 2021-08-26, 16:00. Edited 2 times in total.

Reply 11 of 25, by weedeewee

User metadata
Rank: l33t

You might want to add some photos to make it a bit more visual. 😀

Right to repair is fundamental. You own it, you're allowed to fix it.
How To Ask Questions The Smart Way
Do not ask Why !
https://www.vogonswiki.com/index.php/Serial_port

Reply 12 of 25, by 386SX

User metadata
Rank: l33t

I already posted this system photo in another thread, and the other card too. 😉
This is the GT610 PCI with its heavy heatsink, which I might swap for active cooling; with a case fan, temperatures stay at 45°C idle and about 20°C more under stress. Strangely, the clocks seem fixed at maximum with no variation (according to GPU-Z), and the GPU voltage reading is fixed at 1.0400 volts, which should be the low-power value for idle mode (with lower clocks). EDIT: no, the idle vcore is normally 0.90V, so 1.04V is a default working vcore. I tried the Zotac Firestorm app and it reads the same values; I can also set lower clocks, but not have them change on demand. I think this is a specific configuration for this card, while the GeForce 210 behaves like any other card with Adaptive mode.

EDIT: I've found that only "Power State P0" exists, or at least can be read, which explains the fixed clocks, but it's still strange that the only GPU voltage is 1.0400V. I can lower the clocks manually, but I expected on-demand logic like any other card. At this link https://www.techpowerup.com/vgabios/193488/193488 someone uploaded an unofficial BIOS dump with the exact same BIOS version number as my card, yet their internal readings show several power state levels. Maybe different drivers, but I don't think it's related, given the identical BIOS (checked in hex), and even the Zotac app reads these clocks. Imho one theory is that, because the PCI bus wattage gets close to its limit under stress, they chose the low vcore with the high frequencies of the GF119 (GT610) GPU, which with such low bandwidth and an internal x1 PCIe link would be less demanding anyway, keeping the card within its 29W limit. Maybe variable voltages/clocks would risk wattage peaks, so they chose a more stable setting for PCI.
Another thing: a PCI listing from another tool says something interesting. The Pericom PI7C9X111 bridge might support 66 MHz PCI mode, but the tool says the "Bridge Intel(R) 82801 PCI - 2448" (the mainboard one, on a generic driver) does NOT support it, while the Pericom bridge on the card at least talks to the GPU at PCIe x1 speed. So I guess the PCI side is running at 33 MHz anyway (?). The Windows "Standard PCI to PCI adapter" in System Devices is indeed the Pericom bridge on the card.
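
For reference, that 66 MHz capability is just a single bit in the standard PCI Status register (offset 0x06, bit 5), so any tool that dumps configuration space can confirm it. A minimal sketch of the check under Linux, with a made-up device address (the Windows tools mentioned above read the same register):

```python
# Check the "66 MHz Capable" bit of a PCI device's Status register (Linux sysfs).
# In the standard config header the 16-bit Status register sits at offset 0x06
# and bit 5 is "66 MHz Capable". The device address is only an example.
import struct

DEVICE_CONFIG = "/sys/bus/pci/devices/0000:02:00.0/config"  # hypothetical address

with open(DEVICE_CONFIG, "rb") as f:
    header = f.read(64)

(status,) = struct.unpack_from("<H", header, 0x06)   # little-endian 16-bit Status
print("66 MHz capable:", bool(status & (1 << 5)))
```

Of course, even if the card's bridge reports the bit, the bus can only run at 66 MHz if every device on the segment (including the chipset bridge) supports it, which is why I assume it stays at 33 MHz here.
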

Attachments

  • Atom-GT610-PCI.jpg (118.95 KiB)
Last edited by 386SX on 2021-09-12, 08:56. Edited 3 times in total.

Reply 13 of 25, by borgy

User metadata
Rank: Newbie

Hello 386sx,

I found this thread while searching for 3DMark2001 SE WinXP results with a GT610 PCI, since I'm running a similar experiment right now.
I'm using Intel's 915 chipset and a Dothan Pentium M 2.0GHz / 133MHz FSB.

You can have a look at the benchmark screenshots in this thread: http://forum.pcekspert.com/showthread.php?p=3542106
(last screenshot is most informative)

tl;dr: 3dmark2001 score is 10405 on WinXP SP3, 32-bit.

Reply 14 of 25, by 386SX

User metadata
Rank: l33t

Thanks for the interesting test. I read it through a translator and it's interesting because it is also on a low-power CPU, and the GT610 card is the same model I'm testing now. That high 3DMark2001 result is incredible; it makes me think that Windows XP probably does a better, more native job with DirectX 7/8.1 apps, with less overhead, and that the GT610 drivers are not only fairly heavy, but that most modern cards/drivers were never really intended to run old DirectX apps other than in a slower way.
In my tests there is also the modern OS variable added to the equation, which results in these numbers. The DirectX 9 tests might be closer to the native systems they were designed for, but still not close enough imho.
I don't think it's a CPU performance problem either: even though these are low-power CPUs, I ran both the x86 and x64 versions of the CPU-Z benchmark and attached the results. Strangely, I wasn't expecting the x64 values to be double the x86 ones even in the single-thread test (the CPU doesn't support HT, but it's a dual-core x64). Still, the x86 version on an x64 OS/x64 CPU gives around 21.5 points in the single-thread test, which is what 3DMark2001 should see, and higher than the SSE2 Pentium M.
Anyway, those 10405 points vs the 6847 points of my Win 8.1 x64 build, on an equal card and with 4GB of DDR3 on the mainboard, imho say quite a lot. Imho DirectX 6/7/8.1 games seem to run in some kind of compatibility mode compared to DirectX 9/10/11 games or benchmarks. Do you have any 3DMark03 or 3DMark05 results? The detailed results of the individual sub-tests would also be interesting. 😀

Attachments

  • cpuz_atom_bench.jpg (49.17 KiB)

Reply 16 of 25, by 386SX

User metadata
Rank: l33t

It will be interesting to see whether those values turn out, as I expect, to be more similar (even if probably still higher on the XP OS, where DirectX 9 was a native API). The Atom is from the 32nm Cedarview SoC series: no HT, but a dual-core x64 at 1.9 GHz in this low-end version. It's not a powerful CPU, still 'netbook-oriented', and not well remembered because driver support was basically limited to Win 7 32-bit (plus XP) for the integrated GPU, which raised high expectations but delivered little: DX9-only 32-bit drivers (on paper it might have been DirectX 10.x capable), and many asked for an x64 driver that never came (apart from a single forgotten beta release). The Linux situation wasn't any better (worse, actually). On paper it had good features as one of the last of the old Atom netbook series, before the quad-core tablet/mini-PC series.

Reply 17 of 25, by 386SX

User metadata
Rank: l33t

Some updates for anyone who might be interested in these tests...
Because I was getting random short white horizontal lines in the Windows GUI/browser (though not in any games), I decided to test the GT610 PCI in a G41-based mainboard with a fast Core 2 E8600 dual-core at 3.33 GHz and 8GB of DDR3. I reinstalled everything, and I also have a GT610 PCIe (1GB vs the 512MB of the PCI version) that I will test later, not now; on both I replaced the thermal paste, lowering temperatures. So far I don't see any white line artifacts, so let's hope it's not a fault in the GPU or anything like that. Anyway, I tested the PCI version of the Zotac GT610 above with the 3DMarks and the results are interesting...

GT610 PCI (PI7C9X) and Core2@3,33Ghz/8GB-DDR3
3DMark2001: 15046 (!)
3DMark03: 6252
3DMark05: 2538

Now, what I think: all the old DirectX tests based on old features seem heavily CPU dependent, and the results increased, sometimes even more than expected considering that both systems use the same GPU hardware T&L and not a "software T&L" that would benefit from the "fast" CPU. 3DMark03 and 05, being heavily based on pixel shading, increase much less: around 900 points for the first and 500 for the second. Raw geometry performance is the same, and even the Pixel Shading and Fill Rate tests are similar, just slightly improved.
The PCI bus can certainly be a bottleneck, but the result is probably a balance between raw PCI capabilities and the bridge IC used, whose features may be oriented more towards slow or fast devices, and the configuration of those bridges can also change the final experience. In the end, though, looking at the numbers, the game tested, its API version and the OS do make a real difference. These cards seem to have no real target: they may be too slow for modern (even period-correct) games and too modern for older games that depend on older APIs, where older GPUs performed much better. They are still interesting to test, but limited by many variables indeed. I also suppose these "modern" GPUs may have drivers that need faster CPUs than the ones a PCI bus was usually paired with, so driver overhead may be a factor: the more modern the GPU, the heavier the drivers tend to become.

Anyway, here is a detailed test comparison; later I'll attach a photo of the Socket 775 system with this card, hopefully without any artifacts, which would be a bad sign...
3DMark2001 comparison results:

Attachments

Reply 18 of 25, by 386SX

User metadata
Rank: l33t

I also tried 3DMark2001 on the same Core 2 system with the full-length PCIe version of the GT610 (1GB DDR3), and it basically "destroys" the PCI bridged version, not so much in the final score as in the detailed sub-test values... just look at the polygons/s result.
Anyway, I didn't see any white line artifacts on this system, so I think it may have been some incompatibility with the Atom mini-ITX board. I noticed that the GT610 PCI seems to have quite a variable power demand on the meter, with the fan noise changing even when just typing on the keyboard (and even with its fixed power state clocks/voltage). I suppose it may suit some configurations and not others, who knows, just a theory, and it is probably related to the maximum PCI bus wattage spec, which it runs close to. I suppose a GT710 PCI, with its 28nm GPU, would have been much better. In fact the GeForce 210 PCI, with its lower power demand, works well on that Atom and is probably the right card for it. That is, until I receive the PCI-to-PCIe adapter, which might be interesting with newer low-power cards, even if I'm not really sure about the quality of these adapters.
I think I've tested enough about this for now. I hope it has been an interesting test. 😀

3DMark2001 with GT610 PCI-Express x16 1GB DDR3:

Attachments

Reply 19 of 25, by 386SX

User metadata
Rank: l33t

An update on a new test I'm doing. Considering the problems I had with the GT610 PCI and Win 8.1 (only in the accelerated GUI, not in games: some artifacts like short white post-rendering lines), which I don't think are related to a faulty GPU or memory because the card worked fine in another mainboard, I went back to the Atom/NM10 mini-ITX board where I did the tests and tried Linux x64 with the proprietary drivers and Wine for 3DMark05, and the results are incredible: 3503 points vs 2080 on Win 8.1, and even higher than the Windows PCIe x16 GT610 test with the Core 2 @ 3.33 GHz! The third test doubled its frame rate, and the geometry vertex scores are incredible: around 50 Mtriangles/s in both the Simple and Complex tests vs the 10 Mtriangles/s of the Windows installation!

I haven't taken screenshots yet, but the 3DMark2001 SE results are impressive too. Here the PCIe system's overall scores are still faster, but the Nature test reaches 70 fps (vs 50 on the PCI/Windows version), almost matching the PCIe x16 Core 2 Nature score, and the 1-light geometry test increases to 90 million triangles/s from the previous 10 million with the PCI/Windows combination. This convinces me that DirectX 9 on Win 8.x and newer OSes (I can't say about Win 7, which I don't have) may run in some compatibility mode, or at least isn't optimized to run as natively as it did in the DirectX 9 era, so that even Wine, translating through Linux OpenGL support and not really a native solution, runs faster anyway. And so far no artifacts are visible in the web browser or games. The GUI, on the other hand, lags quite a lot at 1080p and I'm using LXQt; I suppose the WDDM 1.3 driver model does a better native job for the GUI.

Another interesting finding is that the open-source nouveau driver tries to run the bridged PCI GT610 GPU at a 0.90V vcore (vs the single 1.04V state of the proprietary driver), but it mostly crashes the GUI. The proprietary driver and its control panel show, as on Windows, that there is only a single power level (the 810/1334/1620 MHz clocks); that one doesn't crash, so the open-source driver can't really be used, at least on this distribution/card.
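
For anyone who wants to poke at the same thing, nouveau exposes its performance-level table through debugfs. A minimal sketch, assuming the nouveau driver is loaded and debugfs is mounted (on some kernels the file only appears when booting with nouveau.pstate=1, and reading it needs root):

```python
# List nouveau's performance levels for the first GPU via debugfs (Linux).
# Each line of the pstate file is one performance level; writing a level id
# back to the same file asks nouveau to switch to it. Needs root, and the
# interface may be absent unless pstate support is enabled for the driver.

PSTATE_PATH = "/sys/kernel/debug/dri/0/pstate"   # card 0; adjust for other GPUs

try:
    with open(PSTATE_PATH) as f:
        for line in f:
            print(line.rstrip())
except (FileNotFoundError, PermissionError) as exc:
    print(f"pstate interface not available: {exc}")
```

That at least makes it easy to see which level nouveau is trying to use when it crashes, and to compare it with the single P0 state the proprietary driver reports.
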

Last edited by 386SX on 2021-09-12, 08:49. Edited 1 time in total.