VOGONS


Reply 20 of 35, by BitWrangler

User metadata
Rank l33t++
Socket3 wrote on 2024-03-18, 11:44:

- that little on-board Blade 3D ran like a champ. Looking at the specs for my test card (90MHz core / 90MHz RAM), it's easy to see why: despite having shared memory, the on-board version of the Blade 3D is clocked slightly higher, with the memory running at 100MHz and the core at 110MHz according to the documentation. That might explain why my old K6 felt faster - that, or the nostalgia of FINALLY being able to play all those demo CDs I'd saved up 😁 I guess if I really want to re-create the experience, I need to find an MVP4 that runs, or a Trident Blade 3D Turbo, which is clocked quite a bit higher (135MHz(ish) for both core and memory, depending on the model)

An alternate suggestion is to use a Socket A board with an onboard Blade and a 600MHz Duron... some of those might even be unlocked, and you could maybe get them running at 500.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 21 of 35, by asdf53

User metadata
Rank Member

Here's how the Savage4 stacks up against a Voodoo 3 and a GF2 MX on a faster system (Athlon 1000, KT133A):

Savage4 125/143 (default): 4600
Savage4 154/160 (max oc): 5785

Voodoo 3 170/170: 6890

GF2 MX 200/143 (default): 6584
GF2 MX 200/205 (max oc): 7268

I did this test a while ago to see if an overclocked Savage4 would match either of these cards. It didn't quite, but it held up very well. Now that I see your results, I wish I had tested this with a slower CPU; if you still have your test setup, I'd love to see how the overclocked Savage would have done on a K6-2+.
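To put those scores in relative terms, here's a trivial sketch using the numbers above (Python just for the arithmetic; the unit is whatever the benchmark reports):

```python
# Relative standing of the overclocked Savage4, using the scores posted above.
savage4_oc = 5785

others = {
    "Voodoo 3 170/170": 6890,
    "GF2 MX 200/143 (default)": 6584,
    "GF2 MX 200/205 (max oc)": 7268,
}

for card, score in others.items():
    print(f"OC'd Savage4 reaches {savage4_oc / score:.0%} of the {card}")
# -> ~84% of the Voodoo 3, ~88% of the stock GF2 MX, ~80% of the maxed-out GF2 MX
```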

Socket3 wrote on 2024-03-18, 13:39:

Output quality on the Trident and Savage 4 is excellent. The SiS 305, on the other hand, was over-bright - kind of washed out, like you see on some cheap PCI cards from the mid 90's. It looks fine on a CRT, though. Also worth mentioning: all the NVIDIA cards were very dark under OpenGL, and no fiddling with the brightness or gamma sliders fixed that. The Savage was pretty dark under OpenGL as well. The Voodoo 2 was too bright in Quake 2.

I second that. The output quality of my Trident Blade3D and Savage4 is also amazing, among the very best of any cards from that time. Both are very bright and the colors pop. It's such a joy to play DOS games on them. I have a PCI Blade3D that I would have loved to use in a Socket 3 or Socket 4 build, but sadly it does not boot on either of those systems.

Reply 22 of 35, by Socket3

User metadata
Rank Oldbie
asdf53 wrote on 2024-03-18, 17:24:

Here's how the Savage4 stacks up against a Voodoo 3 and a GF2 MX on a faster system (Athlon 1000, KT133A):

Savage4 125/143 (default): 4600
Savage4 154/160 (max oc): 5785

Voodoo 3 170/170: 6890

GF2 MX 200/143 (default): 6584
GF2 MX 200/205 (max oc): 7268

I did this test a while ago to see if an overclocked Savage4 would match either of these cards. It didn't quite, but it held up very well. Now that I see your results, I wish I had tested this with a slower CPU; if you still have your test setup, I'd love to see how the overclocked Savage would have done on a K6-2+.

Socket3 wrote on 2024-03-18, 13:39:

Output quality on the Trident and Savage 4 is excellent. The SiS 305, on the other hand, was over-bright - kind of washed out, like you see on some cheap PCI cards from the mid 90's. It looks fine on a CRT, though. Also worth mentioning: all the NVIDIA cards were very dark under OpenGL, and no fiddling with the brightness or gamma sliders fixed that. The Savage was pretty dark under OpenGL as well. The Voodoo 2 was too bright in Quake 2.

I second that. The output quality of my Trident Blade3D and Savage4 is also amazing, among the very best of any cards from that time. Both are very bright and the colors pop. It's such a joy to play DOS games on them. I have a PCI Blade3d that I would have loved to use in a Socket 3 or Socket 4 build, but sadly it does not boot on either of these systems.

That's about what you'd expect. Well, the point wasn't to showcase the Savage 4's performance in general, but old video card performance on slow / weird systems. I bet you'll get similar results on, say, a 266MHz Pentium II. I don't think the performance impact will be as profound on the Super 7, though.

Reply 23 of 35, by kingcake

User metadata
Rank Oldbie

I wonder how a Savage 2000 would stack up. I had one of those cards in 2000. It had hardware T&L that was disabled for being "broken", but modded drivers re-enabled it, and it worked fine in most games.

Reply 24 of 35, by BitWrangler

User metadata
Rank l33t++

Texture compression is probably quite a good advantage for the Savage architecture when AGP and general I/O memory bandwidth are restricted.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 25 of 35, by douglar

User metadata
Rank Oldbie
Garrett W wrote on 2024-03-18, 00:54:

The reason Quake is so much faster on GeForce is the HW T&L, which Quake can make use of and is particularly impressive on a slow CPU such as the K6 series.

Thanks for pointing that out. Good to know. I remember the 3dfx vs. TNT days, when the 3dfx solutions were often better with slower CPUs. I also remember the disappointment any time I had to work with a GeForce 2 MX200, and while I'm sure I must have worked with a GeForce 2 MX400 at some point, it wasn't memorable like the MX200.

Last edited by douglar on 2024-03-19, 12:55. Edited 1 time in total.

Reply 26 of 35, by douglar

User metadata
Rank Oldbie
Socket3 wrote on 2024-03-17, 21:03:

Hey everyone. I've been working on yet another super socket 7 build for the last couple of weeks, with the goal of reproducing the computer I had as a teen, but since I'm not having any luck sourcing a working VIA MVP4 motherboard, I settled for an MVP3 + a dedicated Trident Blade 3D. After putting it all together, I realized how slow the whole thing is - much slower than I remember. So I started experimenting with different components. The 400MHz K6-II was replaced with a 550MHz K6-II+, and I started testing different budget video cards, looking for that sweet spot - slow, but usable.

If you add any additional cards to the tests, it would be interesting to see a GeForce 2 MX200 or a PCI GeForce MX4000. Those are both pretty cheap cards with 64-bit memory buses. The ATI Rage 128 was almost a viable competitor too.

Here are some links to contemporaneous benchmarks from back in the day:

https://www.anandtech.com/show/160/10
https://www.anandtech.com/show/205/5
https://www.anandtech.com/show/291/7
https://www.anandtech.com/show/570/11

Reply 27 of 35, by MikeSG

User metadata
Rank Member
rasz_pl wrote on 2024-03-18, 12:53:
MikeSG wrote on 2024-03-18, 09:16:

It's an AGP 2x motherboard.

With AGP 4x, the TNT2-M64 should be +40% faster, and the GeForce 2 MX-400 should be +200-400% faster.

Haha, no! Not even on faster CPUs. You are looking at a couple percent at most. Look at the platform's memory bandwidth to get a sense of scale; even x2 is too fast for a K6.

Memory bandwidth isn't what the CPU uses... it uses the AGP bus, which runs at 66MHz x transfers per clock. AGP 1x = 1. AGP 2x = 2. AGP 4x = 4. AGP 8x = 8.
So as long as the CPU can physically change its pin output at 4 x 66MHz (264MHz) and the motherboard supports AGP 4x, the data can get to the card...

https://en.wikipedia.org/wiki/Accelerated_Gra … s_Port#Versions

Both the TNT2 and GeForce 2 MX-400 you tested were waiting around half the time...
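For reference, here's what that per-mode math works out to - a quick sketch of the theoretical peaks only (32-bit bus at the ~66MHz base clock), not what any K6 platform actually sustains:

```python
# Theoretical peak AGP bandwidth per mode: a 32-bit (4-byte) data path at the
# ~66 MHz base clock, doing 1/2/4/8 transfers per clock cycle.
# These are paper specs; real-world throughput is bounded elsewhere.
AGP_CLOCK_MHZ = 66.66
BUS_WIDTH_BYTES = 4

for mode in (1, 2, 4, 8):
    peak_mb_s = AGP_CLOCK_MHZ * BUS_WIDTH_BYTES * mode
    print(f"AGP {mode}x: ~{peak_mb_s:.0f} MB/s peak")
# AGP 1x ~267 MB/s, 2x ~533 MB/s, 4x ~1067 MB/s, 8x ~2133 MB/s
# (commonly quoted as 266 / 533 / 1066 / 2133 MB/s)
```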

Reply 28 of 35, by Garrett W

User metadata
Rank Oldbie

This is wrong. Neither the GeForce 2 MX-400 nor, even less so, the TNT2 saturates AGP 2x's bandwidth. AGP 4x does nothing for these chips.
Even if it did, it is likely that the K6-2+ doesn't have enough juice to drive them. AGP bandwidth has been discussed multiple times in the past: AGP 8x wasn't saturated until the X1950 Pro and 3850 AGP, AGP 4x was enough for a GeForce 4 Ti 4600, etc.

Reply 29 of 35, by douglar

User metadata
Rank Oldbie
Garrett W wrote on 2024-03-20, 13:22:

This is wrong. Neither the GeForce 2 MX-400 nor, even less so, the TNT2 saturates AGP 2x's bandwidth. AGP 4x does nothing for these chips.
Even if it did, it is likely that the K6-2+ doesn't have enough juice to drive them. AGP bandwidth has been discussed multiple times in the past: AGP 8x wasn't saturated until the X1950 Pro and 3850 AGP, AGP 4x was enough for a GeForce 4 Ti 4600, etc.

Agreed.

Even with games, CPUs and video cards that were a generation more powerful than what we are talking about here, the difference between x2 and x4 was modest.

https://www.tomshardware.com/reviews/impact-agp,164-4.html

The best way to show AGP-performance today [this was from Feb 2000] are 3D-scenes with very complex objects in it, using the AGP to transfer huge amounts of triangle data. You will see that in the benchmark results below. However, today's 3D-games are not using by far enough polygons to saturate AGP4x. Again we'll have to wait for 'upcoming titles'. For the time being it is mainly professional OpenGL-software that uses very complex 3D-objects.

Reply 30 of 35, by rasz_pl

User metadata
Rank l33t
MikeSG wrote on 2024-03-20, 11:05:

Memory bandwidth isn't what the CPU uses...

the data going to the video card comes from... where exactly? 😀

MikeSG wrote on 2024-03-20, 11:05:

it uses the AGP bus, which runs at 66MHz x transfers per clock. AGP 1x = 1. AGP 2x = 2. AGP 4x = 4. AGP 8x = 8.

It's easier to just think in BW:
x1 266 MB/s
x2 533 MB/s
x4 1066 MB/s

The absolute best memory BW on Socket 7 platforms is what, <300MB/s on ALi Aladdin V boards? And that's despite using PC100 SDRAM with a max theoretical BW of ~800MB/s.

MikeSG wrote on 2024-03-20, 11:05:

So as long as the CPU can physically change its pin output at 4 x 66MHz (264MHz)

You are under the erroneous impression that the CPU is actually sitting there writing to some magical raw I/O port on the AGP.
1. There is no bus faster than the MEMORY bus in any personal computer older than at least ~10 years.
2. AGP transfers are to/from main system memory.
3. AGP transfers must share memory BW with the CPU and the rest of the system (the sketch below puts rough numbers on this).
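Here's a rough sketch of what that caps out to on Socket 7 (AGP paper rates from above; the ~300MB/s memory figure is the optimistic estimate quoted earlier - effective throughput can't exceed the smaller of the two):

```python
# AGP traffic comes out of main system memory, so the effective transfer rate
# is capped by min(the AGP mode's paper peak, the memory bandwidth available).
# The ~300 MB/s Socket 7 figure is the optimistic estimate quoted above.
AGP_PEAK_MB_S = {"x1": 266, "x2": 533, "x4": 1066}
SOCKET7_MEM_BW_MB_S = 300

for mode, peak in AGP_PEAK_MB_S.items():
    effective = min(peak, SOCKET7_MEM_BW_MB_S)
    print(f"AGP {mode}: {peak} MB/s on paper -> at most {effective} MB/s here")
# Past x1, the AGP mode stops being the limiting factor on this platform.
```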

Here are some tests on a ~3 times faster P3 700MHz on the i820 chipset with RDRAM (theoretical BW 1600 MB/s) and a GeForce 2 GTS: https://www.anandtech.com/show/556/4

Another one: https://www.anandtech.com/show/399/9
Both the GeForce 256 and the TNT2 Ultra run faster on the AGP x2 BX than on the AGP x4 RDRAM i820.

MikeSG wrote on 2024-03-20, 11:05:

Both the TNT2 and GeForce 2 MX-400 you tested were waiting around half the time...

Waiting for the slow CPU, with zero AGP influence.

douglar wrote on 2024-03-20, 14:41:

Even with games, CPUs and video cards that were a generation more powerful than what we are talking about here, the difference between x2 and x4 was modest.
https://www.tomshardware.com/reviews/impact-agp,164-4.html

And that's the best-case scenario, tested on an i840 platform with 3.2 GB/s of memory BW just sitting there doing nothing.

AGP speed was one of those useless paper specs everyone was boasting about, but IRL it didn't do all that much.

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 32 of 35, by douglar

User metadata
Rank Oldbie
rasz_pl wrote on 2024-03-20, 18:44:

AGP speed was one of those useless paper specs everyone was boasting about, but IRL it didn't do all that much.

It was clear by 1994 that video was going to need more bandwidth than PCI could offer at some point in the not-too-distant future, and Intel and Microsoft set about providing that bandwidth before it was needed, so that when the hardware and software were ready, the platform was in place and nobody went looking for proprietary PCI-XXX workarounds.

AGP had its issues, and it's easy to laugh at the small real-world performance gains from each incremental improvement - x2, x4, x8 (or ATA-66, 100, 133) - but it probably comes back to the hard lessons learned during the prolonged "ISA age" of glacially slow I/O. You want an open I/O system planned out in advance so that your partners can build to it; otherwise you get a muddied soup of Microchannel, EISA and VLB all making a mess of things. And when you plan it out in advance, you are not going to see a real-world improvement the moment it arrives.

Maybe part of the AGP choice was to stick a knife in the Socket 7 platform, but whatever - something needed to get done, and something was done. By the time AGP 4x came out, PCI video cards were not really an option for anything better than desktop use or chunky low-res video. The market wouldn't have been well served by unstable rigs running on 166MHz Socket 7++ boards with 5V AT power supplies.

Now, the sideband addressing stuff was a miscalculation entirely, but whatever - you can't get it right all of the time. It might have made sense if video memory prices had stayed high, but that didn't happen, and everyone was better off when the prices dropped. It would have been nice if the industry had been able to jump directly to PCIe, but nobody knew that serial I/O was the future when AGP was laid out.

Reply 33 of 35, by BitWrangler

User metadata
Rank l33t++

Graphics buses seem to be specced on the principle that, 6-8 years in the future, someone is going to have astoundingly good CPU and memory performance but will buy the graphics card with less than half the median amount of memory and need to load every texture on demand. It never seems to work out that the boards that get that interface at the time can actually be upgraded to memory or a CPU quite that fast, nor do "fast enough to take advantage of it" GPU cores release on boards with tiny amounts of memory. Meanwhile, by the time that memory, CPU and GPU performance does exist, the board they are on has the next graphics bus, designed on the principle that, 6-8 years in the future... ... ...

So basically, it never did anyone any good to be an AGP 4x, 8x, or PCIe early adopter. 4 lanes were probably good enough until 2012-ish; 8 lanes probably only started to fall noticeably behind 5 years ago, and are probably still fine on a lower-mid card... and funnily enough, they still don't sell $5-cheaper 4060s with only 1GB of RAM so you can fully exploit the PCIe 16x on the Sandy Bridge board you bought 11 years ago.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 34 of 35, by douglar

User metadata
Rank Oldbie
Rank
Oldbie
BitWrangler wrote on 2024-03-21, 13:54:

So basically, it never did anyone any good to be an AGP 4x, 8x, or PCIe early adopter. 4 lanes were probably good enough until 2012-ish; 8 lanes probably only started to fall noticeably behind 5 years ago, and are probably still fine on a lower-mid card... and funnily enough, they still don't sell $5-cheaper 4060s with only 1GB of RAM so you can fully exploit the PCIe 16x on the Sandy Bridge board you bought 11 years ago.

Point taken. Early adopters who buy the latest and greatest because of theoretical specs can often be left with nothing but a high credit card bill and buyer's remorse, and the clever shopper who thought he leveraged his money by planning a solid upgrade path can find out he's got nothing more than an ironic case sticker for his trouble.

Reply 35 of 35, by Minutemanqvs

User metadata
Rank Member
Rank
Member

That's why, after some time buying tech gear in general, you learn to buy the midrange, well-debugged stuff instead of chasing the latest technology. You often get 80% of the performance for less than 50% of the price. Look at the mess that the current graphics card power connector is... with the associated PSUs you need to buy.

Searching a Nexgen Nx586 with FPU, PM me if you have one. I have some Athlon MP systems and cookies.