VOGONS


Last modern PCI vga speed results


Reply 20 of 25, by 386SX

User metadata
Rank: l33t
Disruptor wrote on 2022-05-25, 15:02:

Well, I've got a Radeon 5450 PCI with 512 MB RAM here. However, it has a PCI-E to PCI bridge on its backside.

Yes, it should have the PLX 8112 bridge IC, and that's one of the last Radeons built for the PCI bus. The bridge chip is a big variable in the real final speed (which of course may differ a lot from the synthetic numbers), and different models existed in the previous decade. Usually only two of them were used on video cards, the Pericom and the PLX ones; I think Intel also made some similar bridges in the past. Reading around, it seems each had its own way of keeping speed as high as possible, because latency and bandwidth stability were problems. Another brand of bridge existed, whose name I don't remember now, that on paper was supposed to be a bit faster, but I've never seen video cards using it. Anyway, it was quite a difficult task; maybe some proprietary IC, like the proprietary bridges of the AGP times, could have improved something, but it always was a thin market sector.
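To put some rough numbers on why the bridge is the ceiling here: the figures below are just the standard spec peaks for conventional PCI and for the PCIe 1.x x1 link feeding a bridge like the PEX 8112, not measurements from this thread. A minimal sketch:

```python
# Theoretical peak bandwidth on either side of a PCIe-to-PCI bridge.
# These are textbook spec figures, not measured numbers.

def pci_bw_mb_s(bus_width_bits: int, clock_mhz: float) -> float:
    """Parallel PCI peak: bus width (bytes) * clock, in MB/s."""
    return bus_width_bits / 8 * clock_mhz

def pcie1_bw_mb_s(lanes: int) -> float:
    """PCIe 1.x: 2.5 GT/s per lane with 8b/10b encoding -> 250 MB/s per lane."""
    return lanes * 2500 / 10

print(f"PCI 32-bit/33MHz: ~{pci_bw_mb_s(32, 33.33):.0f} MB/s peak")
print(f"PCIe 1.x x1:       {pcie1_bw_mb_s(1):.0f} MB/s peak per direction")
```

So even before bridge latency and protocol overhead, the card is capped at roughly 133 MB/s on the PCI side, a fraction of what the GPU's native PCIe interface expects.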

Anyway, if you could run 3DMark2001SE and 3DMark05 to compare the AMD GPU's bridged-to-PCI numbers with the single feature tests, it would be interesting in order to understand more.

Last edited by 386SX on 2022-05-26, 20:56. Edited 2 times in total.

Reply 21 of 25, by 386SX

Rank: l33t

Another GeForce 210 PCI test with the same dual-core 1.9 GHz Atom and 3DMark2001SE under Wine's D3D-to-OpenGL translation with the proprietary NV drivers, just to make sure I'm not missing something here; the visual speed differences are clearly visible compared to any test I remember on Windows. I suppose there might be some minimal rendering differences using OpenGL instead of a native Direct3D 6/7 system, but if there are any, they would show up when comparing texture filtering or pixel quality on screenshots; in real-time rendering I don't see artifacts or any major quality decrease.

The NV Linux driver control panel offers a Quality vs. Performance OpenGL setting, but I suppose it helps only a bit here. Each test was run twice, because with the multiple PCI bottlenecks the second run always scores better.
The test was done at defaults and anyway at the monitor resolution of 1024x768, as I think were all the tests I did in the past. On Linux, the onboard PLX PCI bridge chip shows up in the device list ahead of the VGA card, while in Windows it is usually detected as a "PCI to PCI bridge" or something like that.

Edit: I ran the test again with the "Quality" OpenGL setting in the NV control panel and the score is even higher. Clocks were fixed for the test at the performance step: 589/1402 MHz for the GPU/shaders and 1333 MHz for the DDR3 memory.

Attachments

Reply 22 of 25, by 386SX

Rank: l33t

Another 3DMark05 test in Linux with the GeForce 210 PCI, again with the Quality OpenGL setting. Here it seems to lose a bit of speed, but that may also depend on frame-rate variability, which indeed isn't always stable even in Linux. The CPU score still looks different from the Windows result; as already discussed, it may not be useful here.

Attachments

Reply 23 of 25, by 386SX

Rank: l33t
agent_x007 wrote on 2022-05-24, 22:06:

So few thing to note :

2) PCI is still really limiting for GT 210.
3DMark 01 SE.PNG
Here's GT 520 result for comparison :
3DMark 01 SE.PNG

I was looking again at your benchmark numbers for the original PCI-E cards, and I almost missed that the supposedly much faster GT 520 PCI-E x16 card scores lower than the older GT 210 PCI-E x16, despite the latter's lower clocks, fewer ROPs/TMUs, etc. It's interesting because the polygons/s figure seems to have decreased quite a lot compared to the older card, and the single feature tests also look mostly different. Which makes me wonder how that happens on a light XP install, with no PCI bus and no bridge ICs involved; it feels like the GT 210 GPU itself is more "compatible" with running old apps than the newer architecture and/or the driver paths that drive the GT 520. EDIT: I see that a higher pixel fill rate is detected for the GeForce 210 even with slower clocks and fewer units. That would explain this, but how is it calculated?
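For what it's worth, the usual back-of-the-envelope pixel fill rate is just ROPs × core clock. 3DMark's "detected" figure comes from an actual fill test, though, so memory bandwidth and the driver can push it well below theory, which would explain a measured ranking that contradicts the paper specs. A minimal sketch, assuming the commonly listed unit counts for these GPUs (4 ROPs on both, clocks from this thread and public spec listings, so they may not match the exact cards tested):

```python
# Theoretical pixel fill rate: ROPs (pixels written per clock) * core clock.
# Unit counts and clocks are assumptions from public spec listings, not
# values confirmed in this thread.

def pixel_fill_rate_mpixels(rops: int, core_mhz: float) -> float:
    """Peak fill rate in Mpixels/s = ROP count * core clock in MHz."""
    return rops * core_mhz

gt210 = pixel_fill_rate_mpixels(rops=4, core_mhz=589)  # GeForce 210 (GT218)
gt520 = pixel_fill_rate_mpixels(rops=4, core_mhz=810)  # GT 520 (GF119), assumed

print(f"GeForce 210: {gt210:.0f} Mpixels/s theoretical")  # 2356
print(f"GT 520:      {gt520:.0f} Mpixels/s theoretical")  # 3240
```

On paper the GT 520 should come out ahead, so a higher *detected* value on the GeForce 210 points at the measurement being limited by something else (memory bandwidth, driver path) rather than the ROP math.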

Reply 24 of 25, by SPBHM

Rank: Oldbie

That's a result I have with an 8400GS PCI (only 8 shaders...).
This was with a SiS chipset; I think I remember seeing some decent variation from one motherboard/chipset to another in terms of bandwidth.

I actually used the AIDA64 OpenCL copy test as a reference for the bandwidth; it was normally around 100 MB/s on memory copy, but as I said, with some decent variation between platforms.
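That ~100 MB/s copy figure sits plausibly close to the theoretical peak of a 32-bit/33 MHz PCI slot. Purely illustrative arithmetic, taking the rounded numbers quoted above at face value:

```python
# Rough bus efficiency implied by the ~100 MB/s AIDA64 memory-copy figure
# against the theoretical peak of conventional 32-bit 33 MHz PCI.
# Both inputs are approximations from the discussion, not fresh measurements.

PCI_PEAK_MB_S = 32 / 8 * 33.33   # ~133 MB/s theoretical peak
measured_mb_s = 100.0            # AIDA64 OpenCL memory-copy result (approx.)

efficiency = measured_mb_s / PCI_PEAK_MB_S
print(f"~{efficiency:.0%} of the theoretical PCI peak")
```

Roughly three quarters of the bus peak is about what a sustained burst transfer can realistically achieve once arbitration and protocol overhead are paid, which fits the chipset-to-chipset variation mentioned above.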

Attachments

Reply 25 of 25, by 386SX

Rank: l33t
SPBHM wrote on 2022-05-26, 19:00:

That's a result I have with an 8400GS PCI (only 8 shaders...).
This was with a SiS chipset; I think I remember seeing some decent variation from one motherboard/chipset to another in terms of bandwidth.

I actually used the AIDA64 OpenCL copy test as a reference for the bandwidth; it was normally around 100 MB/s on memory copy, but as I said, with some decent variation between platforms.

Interesting, even if each different CPU will produce different 3DMark2001SE scores here. On the feature-test side, though, I still see the ~10 MT/s average geometry values. It might be the way unified-shader architectures, with drivers oriented to modern OSes (even if retro-compatible with XP), handle older game engines, and even using XP doesn't change that. If this were only a 3DMark2000/2001 problem it would be understandable, those being so old, but on more modern benchmarks like 3DMark05/06, numbers that low have, I suppose, a different impact.

At this point it would be interesting to test in XP a PCI-bridged card that doesn't have a unified-shader design but still fully supports the DirectX 9 API. There I'd expect to see higher geometry and feature-test values, considering that the Linux test I did with the equally slow GeForce 210 PCI still comes out high under Wine through OpenGL; in the end the frames per second are real, and the same numbers should be seen in XP, if anything even higher.
That might not mean much in a real game scenario, where the PCI link most probably gets saturated quickly, but it's still interesting. As has already been said, it would make sense if many old features in modern drivers were kept on unified-shader GPUs only for compatibility, not intended to run fast or to be retrogaming-oriented. Probably even the 3DMark06 scenario might be considered old nowadays and be running through a "compatibility" path, maybe not at the OS level, but looking at the XP numbers I wonder if at the driver level.

For example, I've tested Far Cry in D3D9 and OpenGL using Wine, and here the situation seems much closer to Windows 8.1, with a similar experience. At 1024x768, high details, no AA, frame rates are around 10 to 20 fps with the GeForce 210 PCI, more or less similar to what I remember in Windows 8.1. Faster than the onboard GMA GPU, but not by much, even in Linux. And strangely, even the game's native OpenGL path doesn't seem to improve the frame rate here, which suggests this probably is a real game scenario where the PCI link gets saturated easily.