VOGONS


Voodoo Rush

First post, by beepfish

User metadata
Rank Newbie

I tried a Voodoo Rush in a 486 board, to see if I could run GLQuake. It's hopeless - at 640x480 it's far too jerky, though it looks OK. At 512 conwidth I get only a few colours, like magenta [and no brown - OMG!!!], though the speed is OK from what little I can see.

I didn't try a Voodoo 1, but I doubt the performance loss from Voodoo 1 to Rush would make any difference to this. There's no room in the case for a main graphics card plus a V1, which is why I bothered with a Rush.

I've given up on this and removed the Voodoo card, but out of interest, do you think a Pentium Overdrive 83 would make the 640 res playable on the Rush? Wondering whether one day I might dig one out to have a go. The current processor is a 5x86-75.

Reply 1 of 26, by GL1zdA

User metadata
Rank Oldbie

Don't try accelerating 3D on a 486. A Voodoo, like most 3D accelerators of that time, accelerated only part of the graphics pipeline - all geometry is done on the CPU and depends heavily on FPU performance. I recommend at least a Pentium II; you can try a Pentium, but it won't allow the Voodoo to unleash its true potential.
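
To make that point concrete, here is a minimal C sketch (illustrative only, not Quake's actual source; all names are made up) of the per-vertex work the host CPU must finish before a Voodoo ever sees a triangle. Every multiply and the divide land on the FPU, which is why a 486 is the bottleneck long before the card is:

/* Illustrative sketch: the geometry work first-generation 3D cards
 * leave to the host CPU. Names and screen mapping are invented for
 * this example; lighting and clipping are omitted for brevity. */
typedef struct { float x, y, z; } vec3_t;
typedef struct { float sx, sy, oow; } screen_vert_t; /* what the rasterizer wants */

void transform_vertex(const float m[16], vec3_t in, screen_vert_t *out)
{
    /* three rows of the matrix transform: 9 FPU multiplies, 9 adds */
    float cx = m[0]*in.x + m[4]*in.y + m[8]*in.z  + m[12];
    float cy = m[1]*in.x + m[5]*in.y + m[9]*in.z  + m[13];
    float cw = m[3]*in.x + m[7]*in.y + m[11]*in.z + m[15];

    /* perspective divide: a single FDIV alone costs ~70 cycles on a 486 */
    float oow = 1.0f / cw;
    out->sx  = (cx * oow + 1.0f) * 320.0f;  /* map to 640x480 screen space */
    out->sy  = (1.0f - cy * oow) * 240.0f;
    out->oow = oow;  /* the card wants 1/w for perspective-correct texturing */
}

Repeat that for every vertex of every frame and the FPU budget of a 486 (or a 5x86) is spent well before the Rush's rasterizer becomes the limit.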

getquake.gif | InfoWorld/PC Magazine Indices

Reply 2 of 26, by beepfish

User metadata
Rank Newbie

OK thanks, shan't bother getting that Overdrive out then.

Reply 3 of 26, by swaaye

User metadata
Rank l33t++

The Verite cards do full triangle setup in hardware. 😁 It doesn't help that much though. 486 CPUs and their platforms are too slow for 3D engines like Quake's.
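
For anyone wondering what "triangle setup" covers: it is the per-triangle math between transformed vertices and pixel filling. A rough C sketch of the gradient computation involved (hypothetical code, simplified to a single interpolant; a real setup engine such as the Verite's also does this for color, texture and 1/w terms):

/* Hypothetical sketch of triangle setup: derive the per-pixel gradients
 * a rasterizer walks, from three screen-space vertices. One divide plus
 * a handful of multiplies per triangle, per interpolated term. */
typedef struct { float x, y, z; } sv_t;

void setup_triangle(sv_t a, sv_t b, sv_t c, float *dzdx, float *dzdy)
{
    /* signed double area of the triangle */
    float area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    float inv  = 1.0f / area;  /* degenerate triangles skipped in real code */

    /* plane-equation gradients: how z changes per one-pixel step in x or y */
    *dzdx = ((b.z - a.z) * (c.y - a.y) - (c.z - a.z) * (b.y - a.y)) * inv;
    *dzdy = ((c.z - a.z) * (b.x - a.x) - (b.z - a.z) * (c.x - a.x)) * inv;
}

Doing this on-chip is exactly the fixed per-triangle FPU cost the Verite takes off the CPU - which, as the rest of the thread notes, matters most when the CPU is slow.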

Reply 4 of 26, by Tetrium

User metadata
Rank l33t++
swaaye wrote:

The Verite cards do full triangle setup in hardware. 😁 It doesn't help that much though. 486 CPUs and their platforms are too slow for 3D engines like Quake's.

That's why I chose a Verite card for my first 486 build, that and its good Windows performance. It kinda sucks at all other things, however. I think Verites are best used in P1 systems.

Reply 5 of 26, by swaaye

User metadata
Rank l33t++

Oh and btw Voodoo1 does do partial triangle setup. I've read that it does most of the process, actually. Voodoo2 does all of it.

PowerVR doesn't do any.

Anybody remember Brian Hook? The guy who chatted so much about up-and-coming 3D hardware back in the Quake 2 days? Here's what he had to say about this stuff.

http://groups.google.com/group/rec.games.prog … a581eb29592a0b3

> My point exactly. Like I said about the Voodoo, it's all in the
> fillrate. It does not have good polygon performance because it
> doesn't do the transforms to be considered to have any inherent
> polygon throughput.

Mike, you're completely wrong here. NO consumer 3D graphics chip does
transformations today. NONE. You have three basic classes of consumer
3D chips when it comes to throughput -- those that require the CPU to do
edge scanning, those that require the CPU to do setup, and those that
only require the CPU to send parameters.

The first category is exceedingly rare these days, and on its way out.
The second category encompasses most current 3D chips. The last
category covers only a handful of existing 3D chips, but most new 3D
chips will have full triangle setup on board.

Now, the Verite DOES support triangle setup on-chip, but its setup
engine is SLOWER than a Pentium doing the same work. This is a fact.
The setup engine attempts to gain back this performance in parallelism,
something it typically does a pretty good job with, but for pure peak
throughput it doesn't match even a Pentium/90 for setup performance.

The Voodoo supports some triangle setup, but not full parameter
computation. This was done for a reason -- the designers of the Voodoo
wanted system performance to scale as processor speeds got faster, not
stall at some artificially low number. They did a very good job of this,
I might add.

> And in games with high polygon counts, even decent CPUs (P166) can't
> keep up with a simple, inexpensive bus mastering, programmable 3d
> card.

This is pure techno-babble nonsense.

> The voodoo chokes while the $150 verite flies along. And the
> verite is doing more work (I don't know what the nature of its poly
> transforms are, how much the CPU assists, but whatever it's doing
> works very well).

You are woefully misinformed. The Verite performs NO true geometry
calculations on board. It does triangle setup on board, but that's it
-- lighting, transformations, and clipping are done on the host CPU, not
on the Verite. Ask the folks at Rendition, they'll confirm this.

I'd also like to see comprehensive data (not outliers) that shows me a
case where the Verite flies and the Voodoo chokes. The only example I'm
aware of is the very busy scenes in Descent 2 where a poor software
implementation on Voodoo significantly slows overall performance down --
this is not the hardware's fault, it was a particular implementation
performance bug that could be fixed in less than a few days. Note that
when this particular bug isn't manifested, Voodoo shows performance
radically higher than Verite.

And also note that this has NOTHING to do with polygon throughput. The
Voodoo suffers performance degradation as a result of texture download
speed, NOT because of higher polygon counts. 3Dfx has demonstrated 500K
triangles/second on a P90 and 1M+ triangles/second on a P5/166 -- I
don't believe Rendition has ever gotten NEAR these numbers.

The Verite has SUBSTANTIALLY slower triangle throughput than the
Voodoo. I own and use both on a daily basis and have Direct3D programs
(that I've written) that use both, and the Verite isn't even CLOSE to
the Voodoo in throughput -- we're talking about half the speed, on a
good day, and it gets worse as CPUs get faster.

> thread, which started when a certain person objected to my saying that
> the Voodoo's performance was all in its fillrate, and proves nothing
> about the benefits of DMA. He had seemed to claim that the Voodoo
> being such a fast rasterizer was proof that DMA was useless, or
> something.

Reread my original post Mike, I said nothing of the sort. I said that
the Voodoo was evidence that PIO was NOT a performance killer -- I did
NOT say that DMA itself was "useless". It's a lot easier to win
arguments when you can put words in other people's mouths....

The Voodoo has good performance for several reasons. For starters, yes,
it has mondo fill rate. Second, it can sink data at a huge rate and
will NOT stall the PCI bus except in extreme circumstances. Third, it
has well written drivers and a fast proprietary API (Glide). Finally,
it's a simple chip to program -- you don't need to know about DMA
buffers, TLBs, locking down memory, polling for DMA buffer contention,
etc. This is stuff that application developers don't really want to see
or know about. Because the Voodoo is such a straightforward
accelerator, it's difficult to make it go slow. Developers appreciate
this.

There were lots of folks claiming that DMA was the no-brainer for higher
polygon throughput, and I have offered both theoretical (sample code)
and real (benchmark) data that contradicts this.

Contrary to your beliefs, the Voodoo, with its PIO and lack of DMA,
still possesses the fastest polygon throughput rates* AND fill rates of
any commercially available consumer 3D graphics accelerator. This
doesn't mean that DMA doesn't have its place (it does -- texture
downloads), but it DOES mean that PIO is not necessarily inferior to DMA
for command traffic and, more to the point, a PIO based graphics adapter
has demonstrated that it can achieve POLYGON THROUGHPUT performance well
beyond that of ANY of the DMA based solutions available today.

Brian
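
To unpack the PIO-versus-DMA part of Hook's post: under PIO the CPU writes each triangle parameter straight into the card's memory-mapped FIFO, while under DMA it builds a command buffer in system RAM that the card fetches on its own. A hedged C sketch with an entirely hypothetical register layout (the real SST-1 register map differs):

#include <stdint.h>

/* Hypothetical memory-mapped interface, for illustration only. */
typedef struct {
    volatile float    vx, vy;       /* vertex position parameters  */
    volatile float    r, g, b;      /* vertex color parameters     */
    volatile uint32_t triangle_cmd; /* writing here kicks the draw */
} card_fifo_t;

void draw_tri_pio(card_fifo_t *fifo,
                  const float xy[3][2], const float rgb[3][3])
{
    for (int v = 0; v < 3; v++) {
        fifo->vx = xy[v][0];  /* each write is one PIO bus transaction, */
        fifo->vy = xy[v][1];  /* absorbed by the card's on-chip FIFO    */
        fifo->r  = rgb[v][0];
        fifo->g  = rgb[v][1];
        fifo->b  = rgb[v][2];
    }
    fifo->triangle_cmd = 1;
}

The DMA alternative appends the same values to a buffer in system RAM and points the card at it. That frees the CPU from waiting on bus writes, but the driver then owns buffer allocation, memory locking and contention polling - the complexity Hook says developers were happy to avoid, and why a deep FIFO plus PIO was "difficult to make go slow".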

Reply 6 of 26, by leileilol

User metadata
Rank l33t++

Voodoo2 on a 486 is better than Rush, though it'll still be slow, but I know GLQuake is definitely playable with that combination. Remember that the Rush is one of 3dfx's early infamous blunders, one they're not particularly proud of.

apsosig.png
long live PCem

Reply 7 of 26, by sliderider

User metadata
Rank l33t++
swaaye wrote:
Oh and btw Voodoo1 does do partial triangle setup. I've read that it does most of the process, actually. Voodoo2 does all of it. […]

So this means if you have a slow CPU, then the Verite will do the triangle setup faster? How slow would your CPU have to be for it to be faster to offload the work to the video card rather than letting the CPU do it?

Reply 8 of 26, by swaaye

User metadata
Rank l33t++

Yes, apparently it means that on a CPU slower than a P90, Verite may be faster than Voodoo1 because it offloads the triangle setup. Unfortunately, a CPU that slow will not really allow 3D games to run very well in general.

Basically, that advantage is not a practical one. In reality the only advantage Verite has is that there are a couple of DOS games that only support it for 3D acceleration. Indycar Racing 2, for example.

V2200 on the other hand has better image quality than Voodoo1 when it works well. It's also about the same speed because it has several times the pixel fillrate of V1000. Sadly, Voodoo2 came out shortly after and that was the end of Rendition.
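
sliderider's crossover question above can be put into rough numbers. The figures below are assumptions for illustration, not measurements: if CPU setup costs roughly C cycles per triangle and the card's setup engine sustains a fixed S triangles per second, offloading wins whenever cpu_hz / C < S.

#include <stdio.h>

int main(void)
{
    /* Assumed, illustrative numbers -- not benchmark data. */
    const double cpu_cycles_per_setup = 300.0;  /* CPU cost per triangle */
    const double hw_setup_rate        = 300e3;  /* card's fixed tri/sec  */

    /* hardware setup wins below this clock speed */
    double crossover_hz = cpu_cycles_per_setup * hw_setup_rate;
    printf("Hardware setup wins below ~%.0f MHz\n", crossover_hz / 1e6);
    return 0;
}

With these made-up but plausible numbers the crossover lands at 90 MHz, consistent with Hook's remark that the Verite's setup engine loses to a Pentium/90. On a 486 the CPU side only gets worse, but as noted above, everything else is too slow there anyway.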

Reply 9 of 26, by sliderider

User metadata
Rank l33t++
swaaye wrote:

Yes, apparently it means that on a CPU slower than a P90, Verite may be faster than Voodoo1 because it offloads the triangle setup. Unfortunately, a CPU that slow will not really allow 3D games to run very well in general.

Basically, that advantage is not a practical one. In reality the only advantage Verite has is that there are a couple of DOS games that only support it for 3D acceleration. Indycar Racing 2, for example.

V2200 on the other hand has better image quality than Voodoo1 when it works well. It's also about the same speed because it has several times the pixel fillrate of V1000. Sadly, Voodoo2 came out shortly after and that was the end of Rendition.

I bought a Voodoo 5 5500 PCI around the time 3dfx went belly up, and I still have it. The history of 3dfx is really strange. The early Voodoos required a second video card for 2D. The Voodoo Rush did 2D and 3D on one board but was poorly designed and slow as molasses. The Voodoo 2 did 3D better but went back to requiring a separate card for 2D, and it introduced SLI, which was fast but expensive and brought the number of video cards required to three. The Voodoo Banshee was actually slower than the Voodoo 2 a lot of the time. The Voodoo 3 was a hint faster than the Nvidia TNT2 but lacked a few features that were better for PR than performance, and Nvidia promoted those features to 3dfx's detriment. The Voodoo 4 and 5 were designed to outperform the TNT2, but at the same time Nvidia released the GeForce and ATi released the Radeon, which were faster and had hardware T&L. Games looked better in Glide with the AA cranked up high, but it wasn't enough.

I think I would have hated to be a 3dfx fanboy back when they were still around. Every time you turned around, someone was one-upping them, until they finally got one-upped right out of business. I still remember the razzing a lot of 3dfx guys were taking when the sale to Nvidia was announced. I only used the Voodoo 5 for a few months before buying a GeForce 4 MX, because EverQuest upgraded their game engine and required a card that did hardware T&L. Even that card was short-lived, because they released Luclin a few months later, which required a card that could do DX 8.1 and dropped Windows 95 support, so I pretty much had to upgrade everything to keep playing.

Reply 10 of 26, by swaaye

User metadata
Rank l33t++

Actually the Voodoo5 was supposed to beat GeForce 256. It is that card's match. But they were late and went up against GeForce 2 instead and they had no chance there. Radeon wasn't up to the task either (and ATI's drivers were shit in those days [first hand experience]).

I was not a Voodoo guy after Voodoo1. I tried a lot of different cards in those years. In retrospect however I think that Voodoo5 was the best card of its time because of Glide support and its FSAA. When I put together a Win9x box these days, for games from 1996-2000, the Voodoo5 is my first choice basically. That FSAA makes games look amazing and it works perfectly with many of them.

The DX7 T&L PR hype of the time was really over the top. It really never mattered IMO. DX7 was quickly replaced and very few games require DX7 T&L.

Reply 11 of 26, by sliderider

User metadata
Rank l33t++
swaaye wrote:

Actually the Voodoo5 was supposed to beat GeForce 256. It is that card's match. But they were late and went up against GeForce 2 instead and they had no chance there. Radeon wasn't up to the task either (and ATI's drivers were shit in those days [first hand experience]).

I was not a Voodoo guy after Voodoo1. I tried a lot of different cards in those years. In retrospect however I think that Voodoo5 was the best card of its time because of Glide support and its FSAA. When I put together a Win9x box these days, for games from 1996-2000, the Voodoo5 is my first choice basically. That FSAA makes games look amazing and it works perfectly with many of them.

The DX7 T&L PR hype of the time was really over the top. It really never mattered IMO. DX7 was quickly replaced and very few games require DX7 T&L.

It's pretty amazing. I read that the GeForce 256 was actually still being supported in some games as late as 2006. Having hardware T&L may not have been so important when the cards were new, but it became more important as time passed, since all the other cards of the day that didn't have it were dropped from support long before.

Reply 12 of 26, by swaaye

User metadata
Rank l33t++

GeForce 256 was supported so long because of the existence of the GeForce 4 MX. They have the same featureset. GF4MX was a tragedy for the industry because it's no better than GF256 and because it was ultra popular due to OEM computers. It was so popular in 2004 that Doom3 has a special render path just for it!

In reality, though, the games that supported NV10-level hardware well after it was ancient history look terrible and are barely playable on such cards. It was only supported to maximize potential customer volume.

GF4MX cards were for sale in stores around here up until a couple of years ago. The GeForce MX 4000 comes to mind. It was insanity. Junk like the GeForce FX 5200/5500 was also out there.

Reply 13 of 26, by sliderider

User metadata
Rank l33t++
swaaye wrote:

GeForce 256 was supported so long because of the existence of the GeForce 4 MX. They have the same featureset. GF4MX was a tragedy for the industry because it's no better than GF256 and because it was ultra popular due to OEM computers. It was so popular in 2004 that Doom3 has a special render path just for it!

In reality, though, the games that supported NV10-level hardware well after it was ancient history look terrible and are barely playable on such cards. It was only supported to maximize potential customer volume.

GF4MX cards were for sale in stores around here up until a couple of years ago. The GeForce MX 4000 comes to mind. It was insanity. Junk like the GeForce FX 5200/5500 was also out there.

You can still buy new GeForce 4 and 5 cards today.

http://www.google.com/products?q=GeForce+4&oe … &cat=297&cond=1

http://www.google.com/products?q=GeForce+FX52 … t=297&scoring=p

Reply 14 of 26, by F2bnp

User metadata
Rank l33t

The whole GeForce FX series was junk. I had a 5600 XT back then and my friend's GF4 Ti4200 was faster.

Reply 15 of 26, by leileilol

User metadata
Rank l33t++

Indeed. A GeForce2 MX can even outperform the FX series. The shocking part about the FX is the fake benchmark controversy used to promote it!

apsosig.png
long live PCem

Reply 16 of 26, by sliderider

User metadata
Rank l33t++
leileilol wrote:

Indeed. A GeForce2 MX can even outperform the FX series. The shocking part about the FX is the fake benchmark controversy used to promote it!

I don't know why they even bothered with the 5200 at all. Its DX9 support was so slow as to be useless, so they could have filled that slot just as easily with an updated version of one of their older DX8 chipsets. The only thing I can think of is that they needed an outlet for defective 5800/5900 chips, like when ATi was putting defective 9700 chips on the 9500 non-Pro cards. I lucked out on that deal and ended up with one of the 9500s that was unlockable with custom drivers. Of course, they also crippled the memory bus and used chips that didn't overclock very well, so you didn't get full 9700 performance, but it was still a great value if you were lucky enough to get one with unlockable pipelines.

Reply 17 of 26, by swaaye

User metadata
Rank l33t++

5200 (NV34) is either its own chip or is a cut-down 5600 (NV31). The chips are rather similar. NV wasn't big on discussing the gritty details back then.

It was another really popular cheapo card. It was one of the first chips to bring the top end features to the bottom. Whether the features were worthless or not doesn't matter to the OEMs or even many upgrade customers apparently. You can see the same thing today with the low-end DX10 and 11 cards and IGPs.

The 5700, 5800 and 5900 are quite a bit different. The 5800 doesn't completely suck like the 5200 and 5600 do, but 5800s are rare. The 5700 and 5900 improved performance a lot over the 5600 and 5800. Of course it's all relative, and the Radeon 9700/9800 are definitely superior almost without exception.

Reply 18 of 26, by sliderider

User metadata
Rank l33t++
swaaye wrote:

5200 (NV34) is either its own chip or is a cut-down 5600 (NV31). The chips are rather similar.

It was another really popular cheapo card. It was one of the first chips to bring the top end features to the bottom. Whether the features were worthless or not doesn't matter to the OEMs or even many upgrade customers apparently. You can see the same thing today with the low-end DX10 and 11 cards and IGPs.

The 5700, 5800 and 5900 are quite a bit different. The 5800 doesn't completely suck like the 5200 and 5600 do, but 5800s are rare. The 5700 and 5900 improved performance a lot over the 5600 and 5800. Of course it's all relative, and the Radeon 9700/9800 are definitely superior almost without exception.

You're probably right about the chipsets being similar. I just found an old review of the 5600 Ultra and 5200 Ultra, and the two cards look identical. The only visible difference is that when the fans are removed, one says NV34 on the GPU and the other says NV31. It looks as though NV31 has more in common with NV30 than with NV34, though. The transistor count and fab process are identical, for example. NV34 is different. It looks like the 5600 is a scaled-down 5800. The FX 5500 is also NV34, so the 5200 is probably a cut-down version of that.

Reply 19 of 26, by beepfish

User metadata
Rank Newbie

swaaye wrote:

It was another really popular cheapo card. It was one of the first chips to bring the top end features to the bottom. Whether the features were worthless or not doesn't matter to the OEMs or even many upgrade customers apparently. You can see the same thing today with the low-end DX10 and 11 cards and IGPs.

Have to admit that back then I bought an FX 5200 AGP thinking it would be better than the MX440 PCI I had sold - all that gobbledegook on the box, the fact that 5 is a bigger number than 4, and the logic that "surely the latest chips will be better than what went before"; there may even have been a DX9 game I wanted to play. Little did I know I would spend a year scratching my head, marvelling at the crap performance. On the plus side, that experience prompted me to take more interest in understanding what I was buying in future 🤣.