VOGONS


NV3x, R3x0, and pixel shader 2.0


First post, by Scali

User metadata
Rank l33t
F2bnp wrote:

I feel it's a lot like the NV30 vs R300 on DirectX 9 games argument. At the end of the day, both series were rather irrelevant at DX9.

As a Radeon 9600XT owner, can I just say that I *completely* disagree with this one?
SM2.0 was adopted very quickly by games, and NV30 was hit very hard by this. The most popular example was Half-Life 2, which defaulted to the DX8.1 path on NV30 hardware, while it had no trouble running the full SM2.0 path on R300-based cards.
The Radeon 9700 (Pro) is one of the best graphics cards ever released, and you could enjoy that card for many years if you bought it at launch. Probably the best days in ATi's history.
So no, R300 was VERY relevant at DX9. Heck, it pretty much defined that standard, and was the benchmark for quite a while.

F2bnp wrote:

Of course, I don't have nearly as much knowledge as Scali, and if he sees this as a real issue with AMD, I should probably take it at face value.

The issue isn't so much that you need the features in games now, but rather:
1) nVidia *does* offer these features, so it will be difficult for AMD to market their cards against this. People tend to buy the fastest/most feature-rich cards for their money, whether they actually need all the speed/features or not. People just want the best deal.
2) AMD still has to add these features to a future architecture at some point, which means extra investment in R&D that nVidia has already done. Since AMD is already at a disadvantage, losing marketshare fast, with their profit margins under extreme pressure because their current lineup is not that competitive, investing heavily in R&D is going to be difficult. It's the same situation as on the CPU side.
It takes a few years to come up with a new architecture, so AMD is stuck in their current situation for quite a while.

I think the launch of this 300/Fury series was the crucial 'inflection point'. Had they come out on top of nVidia, they could have started to claw back marketshare, get profit margins up, and become a healthy company again. But since they didn't, they may not get a second chance to get back in the race.

Last edited by Scali on 2015-07-13, 10:14. Edited 1 time in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 1 of 103, by Putas

User metadata
Rank Oldbie
Scali wrote:

SM2.0 was adopted very quickly by games, and NV30 was hit very hard by this. The most popular example was Half-Life 2, which defaulted to the DX8.1 path on NV30 hardware, while it had no trouble running the full SM2.0 path on R300-based cards.

HL2 is a controversial example exactly due to Valve's treatment of FX cards; forcing the 8.1 path was never fully explained. Remember, they had a working path optimized for the FX shown to the public a month or two before release. Adoption of PS 2.0 was pretty slow; the number of games that would tank on the FX during its lifetime was small.
9700 release: August 2002
FX release: January 2003
HL2 release: November 2004

Reply 2 of 103, by Scali

User metadata
Rank l33t
Putas wrote:

HL2 is a controversial example exactly due to Valve's treatment of FX cards; forcing the 8.1 path was never fully explained.

Sure it was. It defaulted to 8.1 because the DX9 path was too slow. You could select it manually, and benchmarks of the time showed how poorly NV30 performed in that path.
See here for more info: http://www.anandtech.com/show/1144/6
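If memory serves, you could also force the path yourself through Source's launch options, something along the lines of:

hl2.exe -dxlevel 81    (the DX8.1/PS1.4 path that NV30 defaults to)
hl2.exe -dxlevel 90    (the full DX9/SM2.0 path)

I'm going from memory on the exact switch, so treat it as an approximation, but that is roughly how you would force the DX9 path on NV30 for a benchmark.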

Putas wrote:

Remember, they had a working path optimized for the FX shown to the public a month or two before release.

Yes, this 'optimized path' was DX8.1. It was 'optimized' in the sense that PS2.0 was avoided and PS1.4 was used instead, because NV30 performed so much worse than R300 did.
This was shown in many PS2.0 games at the time, and also 3DMark03.

There is nothing "suspicious" about this. My own code did exactly the same, and I know my code was running 'as-is', because nVidia hadn't had their hands on it yet, and couldn't do any shader replacement in the drivers.
It is exactly as Valve says: half precision is much faster than full precision on NV30, and even then it is considerably slower than PS1.4.
On R300, however, there is no difference between half and full precision: it can only process 24-bit float. Aside from that, R300 does not have separate int and float pipelines, so PS1.4 is executed as float as well, which means there's no performance difference there either. R300 simply has very fast float pipelines.
The problem with NV30 is that it was designed with fast integer pipelines for legacy code, and a slow add-on for PS2.0. nVidia apparently didn't think PS2.0 would catch on quickly. But it did, probably mostly because R300 was fully ready for PS2.0, and performed better with its float pipelines than any integer card.
So developers started writing DX9 games with full SM2.0 shaders and full precision as soon as they got their R300 hardware.
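To make the precision point concrete, here is a minimal sketch (illustration only, not my actual code from back then; it assumes the d3dx9 SDK and an existing IDirect3DDevice9): the same trivial shader once with full-precision float and once with half. For the ps_2_0 target, the HLSL 'half' type turns into the _pp (partial precision) hint, which NV3x can execute as FP16; R300 ignores the hint and runs everything at its fixed 24-bit precision, so only NV3x shows a difference.

#include <d3d9.h>
#include <d3dx9.h>
#include <string.h>

// Full precision: FP32 on NV3x (the slow path), FP24 on R300.
static const char* g_psFull =
    "sampler s0;\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    float4 c = tex2D(s0, uv);\n"
    "    return c * c + c;\n"
    "}\n";

// Half precision: compiled with the _pp hint, FP16 on NV3x, still FP24 on R300.
static const char* g_psHalf =
    "sampler s0;\n"
    "half4 main(half2 uv : TEXCOORD0) : COLOR {\n"
    "    half4 c = tex2D(s0, uv);\n"
    "    return c * c + c;\n"
    "}\n";

static IDirect3DPixelShader9* CompilePS20(IDirect3DDevice9* dev, const char* src)
{
    ID3DXBuffer* code = NULL;
    IDirect3DPixelShader9* ps = NULL;
    if (SUCCEEDED(D3DXCompileShader(src, (UINT)strlen(src), NULL, NULL,
                                    "main", "ps_2_0", 0, &code, NULL, NULL)))
    {
        dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
        code->Release();
    }
    return ps;
}

Render a full-screen quad with each and compare fillrate: on NV30 the half version is dramatically faster, on R300 the two are identical.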

Putas wrote:

the number of games that would tank on the FX during its lifetime was small.

The reason that games didn't tank on FX was because, like Half-Life 2, they used simpler shaders on NV30, rather than full PS2.0.
Either in the game itself, or by shader replacement in the drivers by nVidia.
It just means that things aren't always apples-to-apples, and what you see in terms of framerate does not necessarily reflect the capabilities and performance of the card.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 3 of 103, by F2bnp

User metadata
Rank l33t

Half-Life 2 is just one game though. By 2005, things got a lot more serious, that's for sure, and R300 cards would pull ahead. To be honest though, I wouldn't bother with either if I had the choice. The way I see it, if you were into buying expensive/luxury cards such as the 9800 Pro/XT or FX 5900/5950, you wouldn't bother to keep them around for long and instead opt to sell them and grab a 6800GT or 7800GS/GT depending on the timeframe. If on the other hand you bought a mid-range card such as the 9600 and FX 5600-5700, or slightly higher end such as the 9600 Pro/XT and FX 5700 Ultra, you wouldn't exactly enjoy great performance with any of these cards.

To add a personal story here, I was in the second camp back then. I got my PC built by someone else who wasn't very knowledgeable about GPUs and got an FX 5600XT, a gigantic piece of crap that was quite a bit more expensive than the FX 5200 (which was the alternative on the budget I was on), not to mention rival cards from ATi. If the vanilla FX 5600 and Ultra were bad, the XT is like their retarded cousin, underclocked and underperforming to the point where it is equal to a vanilla FX 5200.

I hate NV30 and the like just as much as the next guy (although they've recently found a niche fanbase here on Vogons 😊 ), but in the end I don't think it made that huge of a difference.

Scali wrote:

I have found over the years that AMD has about as much of a 'reality distortion field' as Apple does.
What AMD says, and what AMD fans on the internet say, does not necessarily correlate with reality in any way.
If you were to read forum threads of that era, then 4x00 was the best thing since sliced bread. But it's mostly a few vocal AMD fans, and not regular customers actually going out and buying the cards.

The same can be said about Mantle...
The myth that AMD developed Mantle to 'save PC gaming' and push MS to develop DX12 can be found everywhere... except if you pay closer attention, the information always comes either directly from AMD, or from companies in the Gaming Evolved/Mantle program, such as Dice or Oxide Games.

And as you know, Microsoft always releases a major new version of DX with a new OS. In this case that is Windows 10.
So AMD's claim is actually "We pushed MS to release Windows 10 sooner".
Now, is that likely? Or is it more likely that MS was actually working on DX12 anyway, scheduled to be released with Windows 10 as usual... and did AMD just release a pre-emptive strike by doing their own 'DX12-lite', based on what was in development at the time and releasing it as Mantle, because they knew Windows 10 was still a ways off? Hoping to create a bit of vendor-lock by also implying there was a link between Mantle and consoles? Perhaps knowing that Intel and nVidia had added extra rendering features to DX12, which AMD knew they couldn't implement in their own GPUs before DX12/Windows 10 was released?

- The 4x00 series was fantastic. Their value for money was insane, hence they are heralded as an amazing series. I do not know why they didn't make a dent on Nvidia's marketshare, but I would probably attribute most of it to brand loyalty and generally misinformed customers.

- You cannot take AMD's claims about Mantle at face value. They are a company, and companies love to spin PR bullshit in everybody's face. However, I am of the opinion that Mantle did put some pressure on Microsoft to release Windows 10 or DX12 earlier. Let me remind you that we knew next to nothing about DX12 before Mantle cropped up. AFAIR, Microsoft released a slew of information on the API just a few weeks after Mantle was announced. It all worked out for the benefit of consumers in the end. AMD users got an API that helped in certain scenarios and certain games.

Reply 4 of 103, by Putas

User metadata
Rank Oldbie
Scali wrote:

Yes, this 'optimized path' was DX8.1. It was 'optimized' in the sense that PS2.0 was avoided and PS1.4 was used instead, because NV30 performed so much worse than R300 did. This was shown in many PS2.0 games at the time, and also 3DMark03. There is nothing "suspicious" about this.

No, they had PS 2.0 utilizing half precision and dropped it. I found it very suspicious; it being slower than ATI's path is not good enough justification.

Scali wrote:

The reason that games didn't tank on FX was because, like Half-Life 2, they used simpler shaders on NV30, rather than full PS2.0.

I am too lazy to actually count the games just for this argument.

Reply 5 of 103, by Scali

User metadata
Rank l33t
F2bnp wrote:

The way I see it, if you were into buying expensive/luxury cards such as the 9800 Pro/XT or FX 5900/5950, you wouldn't bother to keep them around for long and instead opt to sell them and grab a 6800GT or 7800GS/GT depending on the timeframe. If on the other hand you bought a mid-range card such as the 9600 and FX 5600-5700, or slightly higher end such as the 9600 Pro/XT and FX 5700 Ultra, you wouldn't exactly enjoy great performance with any of these cards.

I disagree.
I had a 256 MB 9600XT at the time, and I enjoyed it for many years.
The only reason I eventually replaced the card was that the rest of the system (Athlon XP 1800+) was becoming a bottleneck. And the 9600XT was still an AGP card, while PCI-e had become the standard in the meantime.
So when I upgraded to a Core2 Duo system, I had to buy a PCI-e card. Otherwise I'd probably have kept the 9600XT even longer.
I settled on a GeForce 7600 card as an in-between, because DX10 was just around the corner by that time. Eventually I got an 8800GTS 320 as my 'serious' new card for the Core2 Duo.

If I had had a 9700/9800 Pro/XT, I would have used it as long as the 9600XT, and enjoyed it even more I guess 😀

F2bnp wrote:

- The 4x00 series was fantastic. Their value for money was insane, hence they are heralded as an amazing series. I do not know why they didn't make a dent on Nvidia's marketshare, but I would probably attribute most of it to brand loyalty and generally misinformed customers.

I think another point is that if you're late to the party (as AMD was with DX10), you have the problem that a lot of people have already upgraded their cards, and aren't interested in a 4x00-card that is only slightly better than what they already own.
You either have to be the first, or you have to offer a product that is considerably better than the competition, to make people want to upgrade.
Like some of the 'classics' such as the original Voodoo, the GeForce2, the R300 and the GeForce 8800.

F2bnp wrote:

However, I am of the opinion that Mantle did put some pressure on Microsoft to release Windows 10 or DX12 earlier. Let me remind you that we knew next to nothing about DX12 before Mantle cropped up.

You mean *you* didn't know about it. That doesn't mean MS, GPU vendors and developers didn't know about it.
Look at the tweet I posted earlier: https://twitter.com/XboxP3/status/558768045246541824

We knew what DX12 was doing when we built Xbox One.

So MS says they knew what road they were taking with DX12 when they started on Xbox One. Which makes sense, since DX12 will run on Xbox One.
And Xbox One was released before Mantle.
Which builds a perfect case for MS working with ideas for DX12 for Xbox One, together with AMD... and AMD 'borrowing' these ideas for Mantle.
DX12 also supports nVidia hardware as far back as the original Fermi, whereas on AMD's side it only supports GCN, which is considerably newer, and clearly modeled more after Fermi than AMD's earlier GPU architectures. Things that make you go "Hmmm".

Last edited by Scali on 2015-07-13, 14:19. Edited 2 times in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 103, by Scali

User metadata
Rank l33t
Putas wrote:

No, they had PS 2.0 utilizing half precision and dropped it. I found it very suspicious; it being slower than ATI's path is not good enough justification.

Did you look at the charts? It's not just 'slower', it's about half the speed of ATi cards. A GeForce 5900XT gets beaten by a Radeon 9600Pro. It was a massacre. No customer would accept their expensive 5900XT being beaten by a cheap 9600Pro. But that's just how badly NV3x sucked. nVidia did a great job of covering it up in games, but as a developer I know how bad the card REALLY is.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 8 of 103, by Scali

User metadata
Rank l33t
Putas wrote:

I remember it all too well, and the 5900 XT was not expensive at all.

Wow, just wow...
http://www.anandtech.com/show/1144


Our last set of GPU reviews were focused on two cards - ATI's Radeon 9800 Pro (256MB) and NVIDIA's GeForce FX 5900 Ultra, both of which carried a hefty $499 price tag.
...
In our first test, we see that ATI holds an incredible lead over NVIDIA, with the Radeon 9800 Pro outscoring the GeForce FX 5900 Ultra by almost 70%. The Radeon 9600 Pro manages to come within 4% of NVIDIA's flagship, not bad for a ~$100 card.

At 1280x1024, we're shading more pixels and thus the performance difference increases even further, with the 5900 Ultra being outperformed by 73% this time around.
...
The Radeon 9600 Pro manages to offer extremely good bang for your buck, slightly outperforming the 5900 Ultra.

The performance gap grows to be a massive 61% advantage for the Radeon 9800 Pro over the GeForce FX 5900 Ultra at 1280x1024.

Etc...
So, a ~$100 card outperforming a $499 card (which was the top price bracket at the time, same as the 9800 Pro, which is 70+% faster, yes... I say it again: 70+% faster!) doesn't make the FX 5900 horribly expensive for the performance it delivers?

Wow, just wow.

I thought we were about vintage/retro-computing here, not about rewriting history.

Well GeorgeMan, THERE is an Nvidia-oriented person for you. Geez.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 10 of 103, by Scali

User metadata
Rank l33t
Putas wrote:
Scali wrote:

I thought we were about vintage/retro-computing here, not about rewriting history.

So stop with the cherry-picking and changing letters as you please.

What exactly are you referring to?
I'm not cherry-picking anything. I'm just giving it to you straight as to how NV30 was designed, and why it performed so poorly on SM2.0-code.
I'm also quoting actual technical info and prices from Anandtech. Not like I'm making this up myself.
You're the one making these outrageous claims that NV30 wouldn't have performance issues, and that the FX5900 wasn't expensive at all, while it was in the highest price bracket at the time.

Last edited by Scali on 2015-07-13, 15:18. Edited 1 time in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 11 of 103, by F2bnp

User metadata
Rank l33t
Scali wrote:
Putas wrote:
Scali wrote:

I thought we were about vintage/retro-computing here, not about rewriting history.

So stop with the cherry-picking and changing letters as you please.

What exactly are you referring to?
I'm not cherry-picking anything. I'm just giving it to you straight as to how NV30 was designed, and why it performed so poorly on SM2.0-code.
I'm also quoting actual technical info and prices from Anandtech. Not like I'm making this up myself.
You're the one making these outrageous claims that NV30 wouldn't have performance issues, and that the FX5900 wasn't expensive at all, while it was in the highest price bracket at the time.

Scali, you are comparing the 5900 Ultra, not the 5900XT, which I think was quite a bit less pricey. TechReport estimated the FX 5900XT at around $200 in some review, but I think it was a little pricier than that. It would still get annihilated by the 9600 Pro in HL2, no doubt about it, but you are using an article written in September 2003, while Half-Life 2 came out in November 2004. I didn't have time to read through the article, but around that time HL2's source code was leaked and compiled by different people, so it's not really indicative of final release performance.
Here's a more indicative article, backing your claim once more, which no one is doubting in the case of HL2 by the way.

http://www.anandtech.com/show/1549/4

I think what Putas is also trying to say is that you are, again, using a single game to back your argument, and that is HL2, a rather extreme case against NV30.

Reply 12 of 103, by Scali

User metadata
Rank l33t
F2bnp wrote:

Scali, you are comparing the 5900 Ultra, not the 5900XT, which I think was quite a bit less pricey.

Well yes, I pointed to the Anandtech article about HL2, and they used a 5900 Ultra.
Putas was then cherry-picking with the 5900XT. Which by the way is still ~$200, so still twice as expensive as the 9600Pro, which already beats the 5900Ultra, so it outperforms the 5900XT even more. Still horrible value for money.
So, I don't see where *I* am cherry-picking. It's quite obvious where Putas is cherry-picking though.
Which is rather pathetic. I mean, NV3x is the worst GPU in the history of nVidia by a margin. It was bad enough that all the nVidia fanboys were in denial back when it was new, but I really don't have the patience to go over this all again in 2015. That's beyond pathetic.

F2bnp wrote:

I think what Putas is also trying to say is that you are, again, using a single game to back your argument, and that is HL2, a rather extreme case against NV30.

I think he's just an annoying fanboy.
As I said: I am a developer myself, I wrote and optimized code for the NV3x and R300 back in the day. I *know* what these cards perform like, what they REALLY perform like, before nVidia does shader replacement. And it is exactly as Valve says.
Valve isn't being extreme; the others just sugar-coated it, as I said. NV3x really *is* that bad. 3DMark03 showed the same. People completely tore Futuremark apart because of it (it was the first SM2.0 software available to end-users, so the first time people were confronted with NV3x's real shortcomings).
nVidia then came up with some 'magic' driver updates that greatly boosted performance in 3DMark03, but they were caught cheating blatantly. Most notably, the whole PS2.0 Nature test was rendered with integer shaders instead (yes, PS1.4, just like what HL2 does), resulting in blocky water etc.:
http://www.extremetech.com/computing/54154-dr … idia-benchmarks
http://techreport.com/review/5131/nvidia-deto … r-fx-drivers/14
http://www.geek.com/games/futuremark-confirms … nchmark-553361/

So next time, please READ WHAT I WRITE. I gave explanations of how the NV30 and R300 differ in their pipeline implementations, and how that translates to the performance you see in 3DMark03 and HL2. I did not use 'a single game' to back my argument. My argument is about the technical details; HL2 and 3DMark03 simply demonstrate this. Don't turn it around. I'm a developer, I can give you plenty of shader code you can try yourself (without drivers doing shader replacement), to find out that these cards REALLY perform like that.
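For the sake of argument, here is a sketch of the kind of stand-alone test I mean (hypothetical, not my actual demo code): an arithmetic-heavy ps_2_0 shader drawn on a full-screen quad, compiled with the same D3DXCompileShader call as in my earlier post. There is no engine involved and nothing for the driver to recognize and replace, so what you measure is the raw shader pipeline. Compile it once as-is, and once with every 'float' replaced by 'half':

// Hypothetical ps_2_0 ALU stress shader: no texturing, just a chain of
// dependent arithmetic, so a full-screen quad measures shader throughput.
// With 'float' NV3x is stuck at FP32 and crawls; with 'half' it speeds up
// considerably; R300 runs both at FP24 and doesn't care either way.
static const char* g_psStress =
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    float4 a = float4(uv, 0.25, 0.75);\n"
    "    for (int i = 0; i < 12; i++)        // unrolled by the compiler\n"
    "        a = a * a.wzyx + a.yxwz;        // dependent mul/add chain\n"
    "    return a;\n"
    "}\n";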
Looking at games gives you a distorted view, as I already said. Let's stick to the facts, not to nVidia's interpretation of a game in their driver.

Or is it too difficult to understand the technical info I gave about the pipelines? Is it too difficult to google some reviews and whitepapers and verify that my information on these pipelines is correct? Are game benchmarks all you can look at? Is there no critical thinking? No deeper understanding? Because then we're done. I don't want to waste my time on that.

Last edited by Scali on 2015-07-13, 16:50. Edited 2 times in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 13 of 103, by alexanrs

User metadata
Rank l33t

The only thing I remember about NV30 is how disappointed I was when I got my then-shiny new Athlon 64 + 64-bit variant of the 5200. I wasn't expecting much, but in some cases it actually felt slower than my previous PC (Duron 1200 + MX440). I never whispered a word about it to my mother (she had just spent a lot on that PC by our economic standards back then), but until I replaced it with a 6800XT AGP, that PC was the most underwhelming upgrade of my life.

Reply 14 of 103, by candle_86

User metadata
Rank l33t
Scali wrote:
Putas wrote:

No, they had PS 2.0 utilizing half precision and dropped it. I found it very suspicious, it being slower then ATI's path is not good enough justification.

Did you look at the charts? It's not just 'slower', it's about half the speed of ATi cards. A GeForce 5900XT gets beaten by a Radeon 9600Pro. It was a massacre. No customer would accept their expensive 5900XT being beaten by a cheap 9600Pro. But that's just how badly NV3x sucked. nVidia did a great job of covering it up in games, but as a developer I know how bad the card REALLY is.

You're also forgetting Nvidia got cut out of the DX9 design discussion; they got into an argument with Microsoft over the Xbox's NV2A GPU, and in retaliation Microsoft wouldn't let Nvidia into any DX9 meetings. They got the DX9 specs at the last possible second and had to rush it out. It's very much Microsoft's fault that Nvidia didn't know how to build a DX9 card at DX9 launch time; they didn't have the relevant data.

And really, even a 9800XT couldn't handle Half-Life 2, FarCry or F.E.A.R. @ 1600x1200 with AA/AF and still be playable. When DX9 games really started to roll, the first-gen DX9 cards had to sit down and cry. I remember my 6600GT owning a friend's 9800 Pro in FarCry at 1280x1024 pretty badly; I got around 71 FPS at high and he got around 47. First-gen products are usually not up to par for what's coming, and the reason is, they don't know what to expect yet.

Reply 15 of 103, by Scali

User metadata
Rank l33t
candle_86 wrote:

You're also forgetting Nvidia got cut out of the DX9 design discussion; they got into an argument with Microsoft over the Xbox's NV2A GPU, and in retaliation Microsoft wouldn't let Nvidia into any DX9 meetings. They got the DX9 specs at the last possible second and had to rush it out. It's very much Microsoft's fault that Nvidia didn't know how to build a DX9 card at DX9 launch time; they didn't have the relevant data.

Do you have any proof of this? Because this is the first time I hear of this, and it sounds rather like a crackpot theory.
Also, it doesn't add up. The problem is not that nVidia's hardware wasn't up to DX9-spec, because it was up to and even *beyond* DX9 spec. The only problem was that it was too slow.
And I already explained why: nVidia banked on fast integer pipelines, with floating point as a 'second-class citizen'. They either thought they couldn't make the card perform well enough in legacy software without integer pipelines (which R300 proved to be possible), or they thought games would only use PS2.0 sparingly, so their transistors were better spent on integer performance.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 16 of 103, by candle_86

User metadata
Rank l33t
Scali wrote:
candle_86 wrote:

You're also forgetting Nvidia got cut out of the DX9 design discussion; they got into an argument with Microsoft over the Xbox's NV2A GPU, and in retaliation Microsoft wouldn't let Nvidia into any DX9 meetings. They got the DX9 specs at the last possible second and had to rush it out. It's very much Microsoft's fault that Nvidia didn't know how to build a DX9 card at DX9 launch time; they didn't have the relevant data.

Do you have any proof of this? Because this is the first time I hear of this, and it sounds rather like a crackpot theory.
Also, it doesn't add up. The problem is not that nVidia's hardware wasn't up to DX9-spec, because it was up to and even *beyond* DX9 spec. The only problem was that it was too slow.
And I already explained why: nVidia banked on fast integer pipelines, with floating point as a 'second-class citizen'. They either thought they couldn't make the card perform well enough in legacy software without integer pipelines (which R300 proved to be possible), or they thought games would only use PS2.0 sparingly, so their transistors were better spent on integer performance.

Yea, I'll try to find it after work. I remember hearing about this back in 2003.

Reply 17 of 103, by swaaye

User metadata
Rank l33t++

I too have read that Pixel Shader 2.0 was basically designed around R300. They brought out the small iterations (2.0a, 2.0b) to cover NV3x, R4x0, and others later. NV3x is apparently considerably more flexible than R300, but we all know how the performance adds up.

I've also read that a major problem with NV3x was register pressure. It was just about impossible to make FP32 perform adequately. FP16 and then FX12 were better, but the best was register combiners! Compiler development was a nightmare.

The OpenGL.org forum had some people who really liked the 5200 as a cheap D3D9 development experimentation platform. An interesting angle on that chip.

Reply 18 of 103, by Scali

User metadata
Rank l33t
swaaye wrote:

I've also read that a major problem with NV3x was register pressure. It was just about impossible to make FP32 perform adequately. FP16 and then FX12 were better, but the best was register combiners! Compiler development was a nightmare.

Yea, which supports what I said earlier.
FX12 is a 12-bit fixed-point integer format, basically nVidia's OpenGL-extension equivalent of PS1.4 in DX9.
FP16 is the 'half' datatype in DX9.
And register combiners are the legacy integer pipeline of the original GeForce 256/2/3/4.
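To tie this to swaaye's register pressure remark, here is what it looks like at the DX9 assembly level; a hand-written ps_2_0 fragment purely for illustration (assembled with D3DXAssembleShader), not output from any real game. The _pp modifier is exactly what the HLSL 'half' type turns into, and on NV3x you additionally want to keep the number of live full-precision temporaries as low as possible.

// Illustrative ps_2_0 assembly. '_pp' = partial precision: FP16 on NV3x,
// ignored (always FP24) on R300. Note that only r0/r1 are ever live --
// NV3x loses throughput quickly as more full-precision temporaries stay
// live at once, which is the register pressure swaaye mentions. FX12 is
// not exposed in DX9 asm at all; on NV3x you only get it via PS1.x
// shaders or the OpenGL extensions.
static const char* g_psAsm =
    "ps_2_0\n"
    "dcl t0.xy\n"
    "dcl_2d s0\n"
    "texld r0, t0, s0\n"
    "mul_pp r1, r0, r0\n"
    "add_pp r1, r1, r0\n"
    "mov oC0, r1\n";

// Assembling it goes roughly like this (error handling omitted, 'dev' is an IDirect3DDevice9*):
//   ID3DXBuffer* code = NULL;
//   D3DXAssembleShader(g_psAsm, (UINT)strlen(g_psAsm), NULL, NULL, 0, &code, NULL);
//   IDirect3DPixelShader9* ps = NULL;
//   dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);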

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 19 of 103, by Putas

User metadata
Rank Oldbie
Scali wrote:

Putas was then cherry-picking with the 5900XT.

Just say straight that you made a typo; nobody is gonna put you down for that.

Scali wrote:

Which is rather pathetic. I mean, NV3x is the worst GPU in the history of nVidia by a margin. It was bad enough that all the nVidia fanboys were in denial back when it was new, but I really don't have the patience to go over this all again in 2015.

Feel free to show when I was ever in denial of NV3x's drawbacks. I will wait until you come to your senses.