VOGONS


Worst fastest early 3D cards


Reply 20 of 249, by Joseph_Joestar

Rank l33t
ubiq wrote on 2024-03-07, 22:34:

I was installing Descent 2 a month or so ago, and it specifically prompts to install the S3 ViRGE accelerated version. I happened to have a ViRGE card, and never used one back in the day, so I decided to check it out. And... holy moley it was bad. Bad to nearly unplayable frame rates, even for the time, and blurry image quality. Then I tried it in 2D mode at 640x480 on the same card and got very smooth fps and sharp image quality. This was on a 233MMX system. So, in this case the card was acting as a 3D decelerator and a huge bottleneck. 🥴

Virge performance depends on a few things. First: the card version, as DX and GX variants are faster than the original Virge and the VX. Second: the clock speed, as that tends to vary quite a bit between manufacturers. Some generic Virge cards are clocked as low as 45 MHz, while the high end models from reputable manufacturers run at 72 MHz or even more. Finally, the memory amount can affect game visuals, and possibly has a slight impact on performance as well.

To give an example, a decently clocked 4MB Virge DX/GX running Tomb Raider in S3D mode can outperform a Pentium 133/166 running that game using software rendering. At the same time, S3D offers 16-bit color depth, perspective correction and bilinear filtering, all of which improve the visuals. Faster CPUs will outclass the Virge in raw performance, but software rendering will still be less visually appealing.

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Athlon64 3400+ / Asus K8V-MX / 5900XT / Audigy2
PC#4: i5-3570K / MSI Z77A-G43 / GTX 970 / X-Fi

Reply 21 of 249, by Jo22

Rank l33t++
wrote:

My monitor throughout the 90s was a Trinitron-like model, so I didn't have any of this "blurry dot pitch is the only retro way" crap. I saw all the dithering and jaggies and the scanlines on 70Hz+ stuff etc. If there's obviously bad image quality, I'd know. I only knew about what it could look like switching to an nVidia *after* the 90s.

Hey, my blurry PC VGA/MCGA monitor was an original IBM™©®! 😂
Something like that. An IBM 8512? If so, it had a mask with a 0.41 mm dot pitch.
So it was far from historical revisionism or something like that.

Except if we take into consideration that my monitor was from the late 80s, rather than the "90s".

Then maybe it wasn't period-correct, on paper.

That said, just about everyone had used or seen such a humble VGA monitor at some point in the first half of the 90s.

But that's like with Windows 98SE, I suppose, which everyone considers a 90s OS,
even though it was released in mid-1999 and spent more time in the 2000s than it did in the 90s. 🙂

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 22 of 249, by 386SX

Rank l33t
Joseph_Joestar wrote on 2024-03-08, 06:53:
ubiq wrote on 2024-03-07, 22:34:

I was installing Descent 2 a month or so ago, and it specifically prompts to install the S3 ViRGE accelerated version. I happened to have a ViRGE card, and never used one back in the day, so I decided to check it out. And... holy moley it was bad. Bad to nearly unplayable frame rates, even for the time, and blurry image quality. Then I tried it in 2D mode at 640x480 on the same card and got very smooth fps and sharp image quality. This was on a 233MMX system. So, in this case the card was acting as a 3D decelerator and a huge bottleneck. 🥴

Virge performance depends on a few things. First: the card version, as DX and GX variants are faster than the original Virge and the VX. Second: the clock speed, as that tends to vary quite a bit between manufacturers. Some generic Virge cards are clocked as low as 45 MHz, while the high end models from reputable manufacturers run at 72 MHz or even more. Finally, the memory amount can affect game visuals, and possibly has a slight impact on performance as well.

To give an example, a decently clocked 4MB Virge DX/GX running Tomb Raider in S3D mode can outperform a Pentium 133/166 running that game using software rendering. At the same time, S3D offers 16-bit color depth, perspective correction and bilinear filtering, all of which improve the visuals. Faster CPUs will outclass the Virge in raw performance, but software rendering will still be less visually appealing.

I always thought that comparing the software rendering frame rate with the hardware one, which has to handle much more complex effects, logic, resolutions etc., is what created the whole "decelerator" name, when they were like two different games after all. I imagine it was a marketing nightmare to explain why those games ended up running slower.

Reply 23 of 249, by Jo22

Rank l33t++

^Yup. I was thinking the same most of the time.
The "problem" with the ViRGE 325 was that the memory timings were too conservative and that it had too many features.

Because S3D titles on DOS do perform pretty well per se if a) the memory timings are sped up a bit and b) the resolution is slightly below 640x480.

If the S3 ViRGE hadn't been so feature-rich, it wouldn't have performed so badly.

The problem with the games was, I assume, that the game developers didn't just want to promote the S3 card, but also their own games.

The result was that all the bells and whistles had been enabled by default, which turned the game into a slideshow.

However, "slideshow" is relative here.
In the early 90s, when games like "Alone in the Dark" or Elite were still a thing,
we video game players (gamers) were still used to ~15 FPS gameplay.

A few years later, towards the end of the 90s, this was no longer acceptable.
20, 30 and 60 FPS were more and more becoming the expectation.

I still remember being happy to play Star Wing (Star Fox) on Super NES or Descent's shareware version on a 486DX2-66 and a Pentium 75.

The movement wasn't silky smooth, but I had accepted this as part of the atmosphere.

Moving a brick of a spaceship/shuttle wasn't easy, or so I imagined.

Just like the good ol' Nostromo wasn't very aerodynamic, either.

In comparison, Descent II on a ViRGE 325 was more of an upgrade than a downgrade.
Performance was on par with my 486/P75 experience, if not better. 🙂

Edit: The fastest PC I had access to back then was in the Pentium 90 to 133 range,
I believe, with that Pentium 75 (a dismantled Compaq) being my experimental rig for the longest time.

My main PC (used for school, printing, scanning images etc) on my desk still was a 286-12 running DOS/Windows 3.1x.
It sat there up until the year 2000, if memory serves.

It wasn't much of a gaming PC, though, except for little Windows desktop games and various adventure and puzzle games.

A Pentium MMX 166 was already part of my Windows XP experience in the early 2000s.

Last edited by Jo22 on 2024-03-08, 15:03. Edited 1 time in total.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 24 of 249, by Joseph_Joestar

Rank l33t
386SX wrote on 2024-03-08, 14:22:

I always thought that comparing the software rendering frame rate with the hardware one, which has to handle much more complex effects, logic, resolutions etc., is what created the whole "decelerator" name, when they were like two different games after all. I imagine it was a marketing nightmare to explain why those games ended up running slower.

I agree. In particular, the difference in color depth between software rendering and 3D accelerated graphics is what stands out the most to me.

[attached screenshot: file.php?id=112145&mode=view]

Most software renderers use an 8-bit color palette (256 colors total) while even early 3D acceleration APIs like S3D and ATi CIF use 16-bit color (65536 colors total) or better. The color banding which occurs when using a software renderer is very distracting, at least to my eyes. For example, look at the shading on Lara's arms in the screenshot above.
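As a rough, hypothetical sketch of why the banding differs (the numbers are illustrative only, assuming a 256-color game reserves roughly 32 palette entries for the shades of one hue, while 16-bit RGB565 has 64 levels on the green channel alone and shades per pixel):

def quantize(value, levels):
    # Map an 8-bit (0-255) value onto 'levels' evenly spaced steps.
    step = 256 / levels
    return int(value // step) * step

ramp = range(256)  # a smooth light-to-dark gradient, e.g. the shading on an arm

# Assumption: ~32 of the 256 palette entries are available for this one hue.
palette_steps = {quantize(v, 32) for v in ramp}
# RGB565: the green channel alone has 6 bits = 64 levels, computed per pixel.
rgb565_steps = {quantize(v, 64) for v in ramp}

print(len(palette_steps), "visible bands with a 32-entry palette ramp")
print(len(rgb565_steps), "visible bands at 16-bit color (RGB565 green)")

Fewer steps across the same gradient means wider, more visible bands, which is exactly the stair-stepping on Lara's arms in the software-rendered shot.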

Last edited by Joseph_Joestar on 2024-03-08, 15:06. Edited 1 time in total.

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Athlon64 3400+ / Asus K8V-MX / 5900XT / Audigy2
PC#4: i5-3570K / MSI Z77A-G43 / GTX 970 / X-Fi

Reply 25 of 249, by Jo22

Rank l33t++

Thanks for the comparison images; the differences are apparent.
In software-rendering, Lara looks quite a bit like Pinocchio to me. 😥

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 26 of 249, by Joseph_Joestar

Rank l33t
Jo22 wrote on 2024-03-08, 15:06:

Thanks for the comparison images; the differences are apparent.
In software-rendering, Lara looks quite a bit like Pinocchio to me. 😥

Heh, I have a few more if you like:

[attached screenshot: file.php?id=112144&mode=view]

The snow/shadow transition here looks pretty bad in software rendering.

[attached screenshot: file.php?id=112146&mode=view]

And here, the water has a nice effect and additional shading in S3D.

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Athlon64 3400+ / Asus K8V-MX / 5900XT / Audigy2
PC#4: i5-3570K / MSI Z77A-G43 / GTX 970 / X-Fi

Reply 27 of 249, by BitWrangler

Rank l33t++
Jo22 wrote on 2024-03-08, 07:00:
wrote:

My monitor throughout the 90s was a Trinitron-like model, so I didn't have any of this "blurry dot pitch is the only retro way" crap. I saw all the dithering and jaggies and the scanlines on 70Hz+ stuff etc. If there's obviously bad image quality, I'd know. I only knew about what it could look like switching to an nVidia *after* the 90s.

Hey, my blurry PC VGA/MCGA monitor was an original IBM™©®! 😂
Something like that. An IBM 8512? If so, it had a mask with a 0.41 mm dot pitch.
So it was far from historical revisionism or something like that.

Except if we take into consideration that my monitor was from the late 80s, rather than the "90s".

Then maybe it wasn't period-correct, on paper.

That said, just about everyone had used or seen such a humble VGA monitor at some point in the first half of the 90s.

But that's like with Windows 98SE, I suppose, which everyone considers a 90s OS,
even though it was released in mid-1999 and spent more time in the 2000s than it did in the 90s. 🙂

Yes, the common experience through the 90s was that even though high-end monitors existed (imagine someone in the year 2054 saying the i9 CPU came out in 2017, so by 2024 that was the bare minimum everyone had), they were mostly confined to business-critical use cases and those with deep pockets. They were quite a significant chunk of a system's price: the lowest-end 0.39 dot pitch model, which barely did 800x600, would be over $200 around 1992, which is $500 in 2024 dollars; the 0.28dp 15" would be $500+, over $1000 in 2024 dollars; and a 17" would be $1000+ ($2500 in 2024). You wouldn't see anything much bigger in local computer stores, special order only. Used monitors also held their value well compared to other computer equipment; right through the 90s the bare minimum for a monitor was around $100, and upgraders carried their monitors through 2 or 3 generations. There was only really a drastic fall in monitor prices this century. I came out of the 90s with a 17" Trinitron I had bought refurbished, but through most of the rest of the decade I was on 14", 15", even those 12" PS/2 monitors, which were among the more plentiful used options in the last half of the 90s. The mid to late noughts were the golden age for picking up those giant 20-21" professional multi-thousand-dollar tubes for free or next to nothing, though.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 28 of 249, by analog_programmer

Rank Oldbie
Joseph_Joestar wrote on 2024-03-08, 15:09:

Heh, I have a few more if you like:

[attached screenshot: file.php?id=112144&mode=view]

The snow/shadow transition here looks pretty bad in software rendering.

[attached screenshot: file.php?id=112146&mode=view]

And here, the water has a nice effect and additional shading in S3D.

From my recent experience, TR1 looks best on Rendition Verite V1000 and S3 ViRGE 325/VX/DX/etc. cards, but these are slower than the 3dfx Voodoo1.

Best article about S3 ViRGE series videocards I've read so far: https://retro.swarm.cz/s3-virge-325-vx-dx-gx- … ators-deep-dive

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 29 of 249, by 386SX

Rank l33t
Joseph_Joestar wrote on 2024-03-08, 14:55:
386SX wrote on 2024-03-08, 14:22:

I always thought that comparing the software rendering frame rate with the hardware one, which has to handle much more complex effects, logic, resolutions etc., is what created the whole "decelerator" name, when they were like two different games after all. I imagine it was a marketing nightmare to explain why those games ended up running slower.

I agree. In particular, the difference in color depth between software rendering and 3D accelerated graphics is what stands out the most to me.

Most software renderers use an 8-bit color palette (256 colors total) while even early 3D acceleration APIs like S3D and ATi CIF use 16-bit color (65536 colors total) or better. The color banding which occurs when using a software renderer is very distracting, at least to my eyes. For example, look at the shading on Lara's arms in the screenshot above.

S3D rendering looks great for its time. I always thought that, while less compatible, proprietary APIs were a great thing, and developers should have kept pushing the multi-API approach in their games for a longer time.

Reply 30 of 249, by BitWrangler

Rank l33t++

I hear there was a lot of difficulty with Microsoft moving the goal posts for early DirectX versions: silicon would be designed to preliminary specs, and instead of the final being just a tweaked and refined version, it would have major differences. So several manufacturers aimed for as much perfect standard API performance as could be got for the dollar in DX, but by time to market DX had changed, so drivers were a bunch of ugly software patches where the hardware didn't mesh, and the proprietary APIs were kind of attempts to show what it was meant to do. I guess by DX 6 or 7 it had got to the point where designs were more guaranteed of getting the DX they were promised. Microsoft probably wised up when they saw that the Glide proprietary API could get popular enough that they could be cut out if they didn't sharpen up.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 31 of 249, by 386SX

Rank l33t
BitWrangler wrote on 2024-03-08, 18:55:

I hear there was a lot of difficulty with Microsoft moving the goal posts for early DirectX versions: silicon would be designed to preliminary specs, and instead of the final being just a tweaked and refined version, it would have major differences. So several manufacturers aimed for as much perfect standard API performance as could be got for the dollar in DX, but by time to market DX had changed, so drivers were a bunch of ugly software patches where the hardware didn't mesh, and the proprietary APIs were kind of attempts to show what it was meant to do. I guess by DX 6 or 7 it had got to the point where designs were more guaranteed of getting the DX they were promised. Microsoft probably wised up when they saw that the Glide proprietary API could get popular enough that they could be cut out if they didn't sharpen up.

That makes sense, and even if the DX API probably ended up being a good thing for compatibility, I think video cards lost some of their unique feel and some of the differences that made each card render in its own way.
If most games had supported all those APIs, we would have much different memories of which video card was better or worse, or at least of differently looking games.

Reply 32 of 249, by 386SX

Rank l33t
Jo22 wrote on 2024-03-08, 14:44:

^Yup. I was thinking the same most of the time.
The "problem" with the ViRGE 325 was that the memory timings were too conservative and that it had too many features.

Because S3D titles on DOS do perform pretty well per se if a) the memory timings are sped up a bit and b) the resolution is slightly below 640x480.

If the S3 ViRGE hadn't been so feature-rich, it wouldn't have performed so badly.

The problem with the games was, I assume, that the game developers didn't just want to promote the S3 card, but also their own games.

The result was that all the bells and whistles had been enabled by default, which turned the game into a slideshow.

However, "slideshow" is relative here.
In the early 90s, when games like "Alone in the Dark" or Elite were still a thing,
we video game players (gamers) were still used to ~15 FPS gameplay.

A few years later, towards the end of the 90s, this was no longer acceptable.
20, 30 and 60 FPS were more and more becoming the expectation.

I still remember being happy to play Star Wing (Star Fox) on Super NES or Descent's shareware version on a 486DX2-66 and a Pentium 75.

The movement wasn't silky smooth, but I had accepted this as part of the atmosphere.

Moving a brick of a spaceship/shuttle wasn't easy, or so I imagined.

Just like the good ol' Nostromo wasn't very aerodynamic, either.

In comparison, Descent II on a ViRGE 325 was more of an upgrade than a downgrade.
Performance was on par with my 486/P75 experience, if not better. 🙂

Edit: The fastest PC I had access to back then was in the Pentium 90 to 133 range,
I believe, with that Pentium 75 (a dismantled Compaq) being my experimental rig for the longest time.

My main PC (used for school, printing, scanning images etc) on my desk still was a 286-12 running DOS/Windows 3.1x.
It sat there up until the year 2000, if memory serves.

It wasn't much of a gaming PC, though, except for little Windows desktop games and various adventure and puzzle games.

A Pentium MMX 166 was already part of my Windows XP experience in the early 2000s.

The level of slow frame rates we were used to in games seems incredible nowadays. On 80s/90s game consoles even more so sometimes; just thinking of games like LHX on the Sega Mega Drive makes me smile. Or the original Doom and how much it pushed every system around it to impossible limits.

Reply 33 of 249, by leileilol

Rank l33t++

There could always be one silly big fragment shader wrapper project (targeting modern hardware, not merely postprocessing) for the purpose of recreating rendering quirks and imprecisions. Many of them are mentioned, some documented, but never fully realized and replicated in software.

There could be plenty of dither matrix variety, plenty of bilinear filtering variation, plenty of AAs, reduced caps/extensions, etc... it'd be cool for dgVoodoo2 but that might not fit within the scope of just getting games working.
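As a rough illustration of one such quirk, here is a hypothetical sketch (not taken from any actual card, driver or wrapper) of classic 4x4 ordered "Bayer" dithering applied before a color channel is truncated to 5 bits, the kind of thing 16-bit-era hardware did, with each vendor using its own matrix and offsets:

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_5bit(value, x, y):
    # Quantize an 8-bit channel value (0-255) to 5 bits (0-31) with ordered dithering.
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0  # per-pixel offset in [0, 1)
    step = 255 / 31                                     # width of one 5-bit step
    return min(31, int(value / step + threshold))

# The same input value lands on different 5-bit levels depending on screen position,
# which is what produces the checkerboard-like dither patterns specific cards were known for.
print([dither_to_5bit(36, x, 0) for x in range(4)])     # e.g. [4, 4, 4, 5]

Swapping in a different matrix, offset or rounding rule per card is exactly the sort of per-vendor variety such a wrapper would have to catalogue.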

long live PCem

Reply 34 of 249, by 386SX

Rank l33t
leileilol wrote on 2024-03-09, 08:08:

Always could be one silly big fragment shader wrappery project (targeting modern hardware / not merely postprocessing) for the purpose of recreating rendering quirks and imprecisions. Many of them are mentioned, some documented, but never fully realized and replicated in software.

There could be plenty of dither matrix variety, plenty of bilinear filtering variation, plenty of AAs, reduced caps/extensions, etc... it'd be cool for dgVoodoo2 but that might not fit within the scope of just getting games working.

What'd be lost is the low-level optimization for such configurations. Considering these cards, seeing such 3D accelerated games is always interesting also because of the limitations of the hardware. I can imagine the 80s/90s engineers behind the hardware design and software coding coming up with new ideas and spending many hours thinking about how to surpass such limitations and push everything to the limit to get those low frame rate results.

Reply 35 of 249, by The Serpent Rider

Rank l33t++

Virge VX is about 3x faster than a Pentium MMX 200 MHz, when rendering exactly the same picture.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 36 of 249, by JustJulião

Rank Member
386SX wrote on 2024-03-06, 19:52:

Hello,

I was looking for info about the fastest and best-built of the worst (but also awesome) video cards among the usual late PCI / early AGP accelerators, to test the highest factory-clocked ones. Specifically I might look for something like the Alliance aT3D, the Cirrus Logic Laguna3D, or the (not that bad) i740 and SiS cards. Besides the uninteresting lower-clocked versions, which were the absolute fastest and possibly best-built chips/cards/brands? I know most of these were built by low-end manufacturers, but I suppose a few might have had a good layout and components.
Any suggestions? Some specific chip versions to look for, or advice on finding the later, better-clocked ones? I remember the Cirrus Logic having a late C revision which I think was a lower-power version, but I don't care about power demand; I just want the best/fastest design, to test these to their limits running at their best. I don't want to buy an expensive card that runs at the lowest clocks.
Thanks

The i740 is newer and actually a good card. If you want to go for it, I'd recommend a Real3D Starfighter for its quirks, since it has its own (interesting) drivers. It can't be overclocked via software though. You'll have to change the oscillator for that.
The 6326 has very good compatibility but with some visual bugs and usually not fast enough to play in 640x480. But if you want the best version, you can't go wrong with the Diamond one, which is clocked at 100MHz. I have one, 105MHz doesn't make it a good gaming card of the era, but it's enough to have some fun.
The Laguna3D is best in Chaintech version, clocked at 83MHz. I have one too. I remember it to be decent with Forsaken but it's overall poor in 3D. It can be overclocked using a registry key, but not much.
The Alliance is pure trash.
Very interesting alternatives are Verite 1000, especially the Creative version, because it supports both Creative and Speedy 3D APIs.
Even more expensive is the very first nvidia chip, the nv1, which is pretty good for a 1995 card, but not many games support it.
A bit cheaper and good enough to have some fun are the Mpact 2, with its very unusual architecture, and the Number9 revolution3D. Both are okay-ish with a handful of games.

The best "exotic" card to play with might be the PowerVR though, very unusual architecture, can be quite fast with some games and a good CPU. I remember Unreal to be impressive on it. It's good (and fast) on Tomb Raider, even in 1024x768, and it's the best way to play some games with period correct hardware (Mechwarrior 2 and Resident Evil notably).
It has some interesting exclusive games (Virtual On).
It's rare and expensive though.

Last edited by JustJulião on 2024-03-11, 13:23. Edited 1 time in total.

Reply 37 of 249, by Trashbytes

Rank Oldbie
JustJulião wrote on 2024-03-11, 00:14:
386SX wrote on 2024-03-06, 19:52:

Hello,

I was looking for info about the fastest and best-built of the worst (but also awesome) video cards among the usual late PCI / early AGP accelerators, to test the highest factory-clocked ones. Specifically I might look for something like the Alliance aT3D, the Cirrus Logic Laguna3D, or the (not that bad) i740 and SiS cards. Besides the uninteresting lower-clocked versions, which were the absolute fastest and possibly best-built chips/cards/brands? I know most of these were built by low-end manufacturers, but I suppose a few might have had a good layout and components.
Any suggestions? Some specific chip versions to look for, or advice on finding the later, better-clocked ones? I remember the Cirrus Logic having a late C revision which I think was a lower-power version, but I don't care about power demand; I just want the best/fastest design, to test these to their limits running at their best. I don't want to buy an expensive card that runs at the lowest clocks.
Thanks

The i740 is newer and actually a good card. If you want to go for it, I'd recommend a Real3D Starfighter for its quirks, since it has its own (interesting) drivers. It can't be overclocked via software though. You'll have to change the oscillator for that.
The 6326 has very good compatibility but with some visual bugs and usually not fast enough to play in 640x480. But if you want the best version, you can't go wrong with the Diamond one, which is clocked at 100MHz. I have one, 105MHz doesn't make it a good gaming card of the era, but it's enough to have some fun.
The Laguna3D is best in Chaintech version, clocked at 83MHz. I have one too. I remember it to be decent with Forsaken but it's overall poor in 3D. It can be overclocked using a registry key, but not much.
The Alliance is pure trash.
Very interesting alternatives are Verite 1000, especially the Creative version, because it supports both Creative and Speedy 3D APIs.
Even more expensive is the very first nvidia chip, the nv1, which is pretty good for a 1995 card, but not many games support it.
A bit cheaper and good enough to have some fun are the Mpact 2, with its very unusual architecture, and the Number9 revolution3D. Both are good enough to have some fun too.

The best "exotic" card to play with might be the PowerVR though, very unusual architecture, can be quite fast with some games and a good CPU. I remember Unreal to be impressive on it. It's good (and fast) on Tomb Raider, even in 1024x768, and it's the best way to play some games with period correct hardware (Mechwarrior 2 and Resident Evil notably).
It has some interesting exclusive games (Virtual On).
It's rare and expensive though.

There are two versions of the NV1, one with fixed VRAM and the other with upgradeable VRAM; one is substantially more expensive than the other, and I have yet to see the upgradeable one even hit any markets. For games, there are about 7 IIRC that support its quad rendering engine, and most of them can be hard to find unless you go sailing the seas.

Reply 38 of 249, by Joseph_Joestar

Rank l33t
The Serpent Rider wrote on 2024-03-10, 23:46:

Virge VX is about 3x faster than a Pentium MMX 200 MHz, when rendering exactly the same picture.

I guess if you could somehow force the CPU to render a game at 640x480x16 with bilinear filtering and perspective correction, it would be. Maybe this can be done under Windows for some DirectX titles; I haven't really tried it.

But in the case of Tomb Raider, people generally compare its built-in software renderer to S3D. I pointed out that S3D is much more feature rich, so that comparison isn't accurate. Even so, on a Pentium 133 (and maybe 166) a Virge DX in S3D is faster than Tomb Raider's software renderer if both are running at 640x480.

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Athlon64 3400+ / Asus K8V-MX / 5900XT / Audigy2
PC#4: i5-3570K / MSI Z77A-G43 / GTX 970 / X-Fi

Reply 39 of 249, by JustJulião

Rank Member
Trashbytes wrote on 2024-03-11, 02:25:
JustJulião wrote on 2024-03-11, 00:14:
386SX wrote on 2024-03-06, 19:52:

Hello,

I was looking for info about the fastest and best-built of the worst (but also awesome) video cards among the usual late PCI / early AGP accelerators, to test the highest factory-clocked ones. Specifically I might look for something like the Alliance aT3D, the Cirrus Logic Laguna3D, or the (not that bad) i740 and SiS cards. Besides the uninteresting lower-clocked versions, which were the absolute fastest and possibly best-built chips/cards/brands? I know most of these were built by low-end manufacturers, but I suppose a few might have had a good layout and components.
Any suggestions? Some specific chip versions to look for, or advice on finding the later, better-clocked ones? I remember the Cirrus Logic having a late C revision which I think was a lower-power version, but I don't care about power demand; I just want the best/fastest design, to test these to their limits running at their best. I don't want to buy an expensive card that runs at the lowest clocks.
Thanks

The i740 is newer and actually a good card. If you want to go for it, I'd recommend a Real3D Starfighter for its quirks, since it has its own (interesting) drivers. It can't be overclocked via software though. You'll have to change the oscillator for that.
The 6326 has very good compatibility but with some visual bugs and usually not fast enough to play in 640x480. But if you want the best version, you can't go wrong with the Diamond one, which is clocked at 100MHz. I have one, 105MHz doesn't make it a good gaming card of the era, but it's enough to have some fun.
The Laguna3D is best in Chaintech version, clocked at 83MHz. I have one too. I remember it to be decent with Forsaken but it's overall poor in 3D. It can be overclocked using a registry key, but not much.
The Alliance is pure trash.
Very interesting alternatives are Verite 1000, especially the Creative version, because it supports both Creative and Speedy 3D APIs.
Even more expensive is the very first nvidia chip, the nv1, which is pretty good for a 1995 card, but not many games support it.
A bit cheaper and good enough to have some fun are the Mpact 2, with its very unusual architecture, and the Number9 revolution3D. Both are good enough to have some fun too.

The best "exotic" card to play with might be the PowerVR though, very unusual architecture, can be quite fast with some games and a good CPU. I remember Unreal to be impressive on it. It's good (and fast) on Tomb Raider, even in 1024x768, and it's the best way to play some games with period correct hardware (Mechwarrior 2 and Resident Evil notably).
It has some interesting exclusive games (Virtual On).
It's rare and expensive though.

There are two versions of the NV1, one with fixed VRAM and the other with upgradeable VRAM; one is substantially more expensive than the other, and I have yet to see the upgradeable one even hit any markets. For games, there are about 7 IIRC that support its quad rendering engine, and most of them can be hard to find unless you go sailing the seas.

I'm pretty sure that a lucky owner of the card on Vogons would happily share the games with you if you provide proof of ownership, since they were all bundled with cards and didn't hit the market "by themselves". The games alone are probably rarer than the cards they were bundled with.
There are more versions than that, because some have VRAM and others have DRAM. I know about the two versions with VRAM, but I don't know if there are two versions with DRAM too.
All of this is for Diamond cards, which are something like 95% of nV1 cards, but there were other makers, some obscure and others not: Aztech (card named Galaxy3D), Yuan, Core Dynamics, Focus, Genoa, Leadtek, Jazz Multimedia... Damn, I'd spend a crazy amount of money for one of those non-Diamond bundles. Maybe they have a different software bundle? Different frequencies? Tuned drivers?

Since it's the very first chip of what is now a huge company, it's probably even a good investment. A good point if you have a wife.