From a purely observational perspective, R300 seems to perform a bit better than NV30 or NV40. None are "bad", but there is a noticeable, albeit subtle, difference when they're compared side by side. In terms of analog output quality, anything much older runs the risk of a bad filter implementation, which can have a substantial impact on image quality. I haven't noticed any significant improvement beyond that with R500, R700, etc., so the "around Vista" timeframe seems pretty reasonable. 😀
Scali wrote:
High-end cards will clock themselves down in 'desktop' mode to save power and keep the noise down.
So indeed, what are you measuring? Basically all you're measuring is the performance that the driver-developers considered 'good enough' for regular desktop usage, to keep the GPU power usage down as far as possible.
This is highly variable across different generations of hardware: modern cards (e.g. Kepler, Maxwell, GCN) don't really have a "desktop" vs. "gaming" mode dichotomy; instead they dynamically adjust their clocks and resources in response to load, much like modern CPUs. Earlier cards with power management features, like the GeForce 7, aren't as sophisticated, of course. IME I have not noticed any performance difference between the clock tiers on earlier GeForce cards (FX, 6, 7), even running Aero Glass in Vista, and I would agree with your assessment that the "Vista-ish era" pretty much equalized everything. That doesn't mean a benchmark couldn't "see" such differences if they exist, but I'm certainly not noticing them in daily usage.
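For what it's worth, on anything recent enough to ship with nvidia-smi you can watch this clock behaviour directly rather than infer it from benchmark scores. Here's a minimal sketch (my own, assuming an NVIDIA card with nvidia-smi on the PATH and its standard query fields; the older FX/6/7 cards predate this tool entirely) that logs core/memory clocks and GPU load once a second, so you can see the card sit at its desktop clocks and then ramp up when you start a game:

```python
# Poll current GPU clocks via nvidia-smi to see whether the driver keeps the
# card down-clocked at the desktop and raises the clocks under load.
# Assumes a modern NVIDIA card and nvidia-smi available on the PATH.
import subprocess
import time

QUERY = "clocks.current.graphics,clocks.current.memory,utilization.gpu"

def sample():
    # Ask nvidia-smi for the current core clock, memory clock, and GPU load
    # as a plain CSV line, e.g. "300, 405, 2".
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    core_mhz, mem_mhz, util = (int(x) for x in out.split(", "))
    return core_mhz, mem_mhz, util

if __name__ == "__main__":
    # Leave this running at the desktop, then launch a game or benchmark
    # and watch the clocks climb out of the idle power state.
    for _ in range(30):
        core, mem, util = sample()
        print(f"core {core:4d} MHz  mem {mem:4d} MHz  load {util:3d}%")
        time.sleep(1)
```

On anything pre-Vista-era you'd have to fall back on vendor tools (RivaTuner and the like) to read the clocks, since none of this instrumentation exists on those cards.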