My understanding is that AGP aperture size has more influence on GPU performance when the graphics card has a relatively small amount of memory (16-32MB); as VRAM increases, the aperture does not need to increase with it. For cards with only ~32MB of VRAM, in my experience the aperture should be set 1:1 or 2:1 with VRAM (assuming you have enough system memory to back that); anything beyond that is kind of pointless. For cards with significantly more memory (128MB or more) it probably isn't as much of a concern. The Polish test seems to confirm this: the GeForce 2 cards in that test (which probably have 16-32MB of memory) show little benefit beyond a 32MB aperture. Also remember, when comparing 3DMark scores, that Futuremark said a few years ago that variances of less than roughly 5-10% shouldn't be read into, because the margin of error between runs is a few percent.
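To make the margin-of-error point concrete, here's a quick sketch of how I'd compare two 3DMark scores. The ~5% threshold is my rough recollection of Futuremark's guidance, not an official number:

```python
# Rough sketch: decide whether two 3DMark scores differ meaningfully,
# given a run-to-run margin of error (assumed ~5% here - my guess,
# based on Futuremark's "few percent" statement).
def scores_differ(score_a, score_b, margin=0.05):
    """Return True only if the gap exceeds the margin of error."""
    baseline = min(score_a, score_b)
    return abs(score_a - score_b) / baseline > margin

# 10200 vs 10000 is a 2% gap - within the noise, not a real difference
print(scores_differ(10200, 10000))  # False
# 11000 vs 10000 is a 10% gap - that one probably means something
print(scores_differ(11000, 10000))  # True
```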
Here's another test showing a more modern set of graphics cards, the GeForce 4 Ti:
Notice that changing AGP aperture size seems to have no effect on the measured performance (the author comes to the same conclusion). These cards have 128MB of memory.
Here's another test with older hardware that has significantly less onboard memory:
http://www.tweak3d.net/articles/aperture-size/ (TNT2 and GeForce 3)
Again, "big" sizes tend not to do much for either card once the aperture matches the card's onboard memory, and they did note some instability when setting the AGP aperture to extreme values on the GeForce 3.
I'd also add that Microsoft has vaguely made noise about system memory needing to be greater than video memory (I'd guess at least double it, in a perfect world): msdn.microsoft.com/en-us/library/window ... cal_memory (unfortunately I've yet to see an article that tests this).
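If that guideline holds, a sanity check is trivial. To be clear, the 2x factor below is my own assumption, not something Microsoft has stated:

```python
# Hedged sketch of the guideline above: system RAM should exceed VRAM,
# ideally by at least 2x (the factor is my assumption, not Microsoft's).
def ram_guideline_ok(system_ram_mb, vram_mb, factor=2):
    return system_ram_mb >= factor * vram_mb

print(ram_guideline_ok(512, 128))  # True: 512MB RAM comfortably backs a 128MB card
print(ram_guideline_ok(256, 256))  # False: RAM merely equals VRAM
```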
Here's another article I found from Tech ARP about it, that may be helpful to some:
http://www.techarp.com/showFreeBOG.aspx?lang=0&bogno=32 The big point here is that as VRAM increases, the need for AGP aperture decreases - the two are inversely related. If the card has more memory onboard, it doesn't rely as much on system memory.
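The rule of thumb from these articles can be boiled down to a few lines. The cutoffs here are illustrative guesses on my part, not from any vendor documentation:

```python
# Illustrative only: suggest an AGP aperture from the rule of thumb
# discussed above (low-VRAM cards want ~1:1 or 2:1 aperture, high-VRAM
# cards don't need the aperture to scale). Cutoffs are my own guesses.
def suggest_aperture_mb(vram_mb, system_ram_mb):
    if vram_mb <= 32:
        # 2:1 if system RAM can comfortably back it, otherwise 1:1
        return vram_mb * 2 if system_ram_mb >= vram_mb * 4 else vram_mb
    # More VRAM onboard means less reliance on system memory;
    # 64MB is a safe middle-of-the-road setting for most cards.
    return 64

print(suggest_aperture_mb(32, 256))   # 64 (2:1 for a 32MB card with plenty of RAM)
print(suggest_aperture_mb(128, 512))  # 64 (big card; aperture needn't scale up)
```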
As for the specific issue at hand, with the FX 5900 doing loads better in 3DMark, it's tough to give a 100% explanation, but based on the above it should be safe to assume a 64-128MB aperture is correct for most graphics cards. I'm curious what the default value on this system was, though. Was it very low (like 4-16MB)? Alternately, whatever the original value was may have produced some sort of compatibility/performance issue, like what Tweak3D experienced with the GeForce 3 in their test (IOW, don't set it to 4-16MB on modern systems). And finally, some other setting(s) may have been accidentally or unknowingly changed when the AAS was adjusted, and that's what got the system working right. It isn't universally necessary, in my understanding, to set the AGP aperture 1:1 with all graphics cards (and the data seems to support this) - cards with more VRAM don't rely on it as heavily, if at all. Of course, what you're tasking the card with also matters: does it actually need more memory than it has onboard or not?
Some results I'd be curious about with this GeForce FX system:
- If AAS is set to 256MB, is there any statistically significant change?
- Same as above, but at 64MB.
- What was the original AAS?
- Can you test in another benchmark (like AquaMark 3) or a game (like Quake or Unreal)?