VOGONS



Comeback for AMD? [Polaris]


Reply 20 of 170, by Snayperskaya

Rank: Member

I see the RX4xx as the new HD4xxx series, except it's competing with an older generation. I'd buy a RX480 if I didn't have a GTX 970 already. For anyone new into 1080p gaming there's little competition, price-wise.

Reply 21 of 170, by Munx

Rank: Oldbie
PhilsComputerLab wrote:

I think I saw a photo of a partner card from MSI. What always annoys me about modern partner cards is that they like to take a mainstream card (like a 960), then slap 2 or 3 XXL fans on it, and you end up with a card that is larger than a 1080 🤣

Why wouldn't you like big coolers? I mean take a look at how glorious my 960 looks 🤣

[attachment: CAM00247[1].jpg, 2.67 MiB]

On a more serious note, it does provide really quiet and effective cooling, to the point where the fans are idle even when I'm doing light gaming.

My builds!
The FireStarter 2.0 - The wooden K5
The Underdog - The budget K6
The Voodoo powerhouse - The power-hungry K7
The troll PC - The Socket 423 Pentium 4

Reply 22 of 170, by Scali

Rank: l33t
Munx wrote:
PhilsComputerLab wrote:

I think I saw a photo of a partner card from MSI. What always annoys me about modern partner cards is that they like to take a mainstream card (like a 960), then slap 2 or 3 XXL fans on it, and you end up with a card that is larger than a 1080 🤣

Why wouldn't you like big coolers? I mean take a look at how glorious my 960 looks 🤣

[attachment: CAM00247[1].jpg]

On a more serious note, it does provide really quiet and effective cooling, to the point where the fans are idle even when I'm doing light gaming.

Heh, my GTX970OC is the exact opposite... It has a small cooler so it can fit in a mini-ITX case:
[image: 2474735-a.jpg]

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 23 of 170, by PhilsComputerLab

Rank: l33t++

Yea I have a thing for small, ideally single slot cards 😀

I think this is the worst example of a XXL 960:

[image]

Whereas this is more what I like:

[image: Gigabyte-GeForce-GTX-960-Mini-1.jpg]

YouTube, Facebook, Website

Reply 24 of 170, by Munx

Rank: Oldbie

Yeah, I will agree that triple-fan setups are overboard when it comes to mid-range cards. Card sagging is quite an issue with these, to the point where one maker shipped support braces ("canes") to go with their 3x fan cards 🤣 (was it Asus or Gigabyte? I forget). Though it still feels like you're getting your money's worth with a big, heavy piece of tech 😀


Reply 25 of 170, by F2bnp

Rank: l33t
Scali wrote:

Pascal doesn't have to be much more than a die-shrink with subtle tweaks, given that Maxwell is the benchmark as far as GPU architectures go. It was well ahead of anything AMD had on offer in the 28 nm era, and RX480 makes it painfully obvious just how difficult it is to get to Maxwell levels. AMD barely reaches that with 14 nm technology, and 2 extra years of GPU development.

I don't really agree with that sentiment. It's definitely the more power-efficient architecture compared to Hawaii and Fiji, but you are overestimating how fast the end products were. The GTX 980 sure was a little faster, but it also cost $550. The GTX 970 was on par with the R9 290 and R9 290X, albeit with lower power consumption. As far as value goes, you could regularly find the R9 290 on sale for ~$250 and I got mine for €280 (damn you VAT 😵 ). The GTX 960 was particularly expensive for what it was IMO, and the R9 280/280X and especially the R9 380 were the cards with the most performance/$.

The two standout cards for me were the GTX 950, which was just untouchable (a little disappointing they didn't aim for sub-75W from the get-go though), as the only alternative was the R7 370 (the old 7850 masquerading as a new card 😁), and the GTX 980 Ti, which totally stole the Fury X's spotlight. At stock, the two cards mostly trade blows, but the 980 Ti had 6GB of VRAM and, more importantly, could easily overclock upwards of ~20%!
Still, the 980 Ti cost quite a bit, and like I said, I'm not particularly happy with how pricing works on cards nowadays.

Scali wrote:

Rise of the Tomb Raider already uses it for voxel-based global illumination effects. You simply can't get these effects on those 'forward looking' AMD GPUs. That's the thing that surprises me most. Everyone talks about async compute, but it's not an actual rendering feature. It's more like HyperThreading or such. Yes, it can boost performance a bit, but it doesn't do anything new or revolutionary. if you can get the same or better performance without HyperThreading/Async compute, your software will run exactly the same, look exactly the same etc.

Are those effects missing on AMD hardware? Never heard of that to be honest!

Scali wrote:

This is the biased AMD-fanboy version.
The truth is that the GTX970 has 4 GB, divided into two clusters:
3.5 GB connected to a 224-bit wide interface
0.5 GB connected to a 32-bit wide interface

These two interfaces can be used at the same time.
So technically nVidia is 100% correct in stating it has 4 GB of memory. It actually does.
nVidia is not even wrong in saying that it has a 256-bit interface to that memory.
The only confusion is that it is clustered, so it does not act entirely the same a single cluster of 4 GB connected to a single 256-bit interface.
But the benchmarks speak for themselves: you can't really tell the difference in performance between this card and a 'real' 4 GB card. Whatever nVidia has done exactly with the memory management in their drivers, it seems to work quite well. So I'm not really sure why people are making such a big deal out of this.

No, this is not bias. The fact of the matter is that Nvidia gave false specs to the public and journalists and only answered truthfully when they were confronted about it. I think they even came out and said "it's a feature!". Had they explained in detail how this works from the get-go, I would let that pass. It is a 100% shady business practice though, no matter how you look at it, as it sets a precedent for other companies to follow.

It doesn't even matter whether or not this hits the card's performance (hint: it does already, but not in many cases that matter). I am of the mind that it will inevitably hit the card at some point in the near future however.
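To put the partition split quoted above into numbers, here is a back-of-the-envelope sketch in Python. The 7 Gbps effective GDDR5 data rate is an assumption (the reference spec), not a figure from this thread:

```python
# Theoretical bandwidth of the GTX 970's two memory partitions.
# bus_width_bits pins, each moving gbps_per_pin gigabits per second;
# divide by 8 to convert gigabits to gigabytes.
def bandwidth_gbs(bus_width_bits, gbps_per_pin=7.0):
    return bus_width_bits * gbps_per_pin / 8

fast = bandwidth_gbs(224)  # the 3.5 GB partition
slow = bandwidth_gbs(32)   # the 0.5 GB partition
print(fast, slow)  # 196.0 28.0 (GB/s)
```

The small partition has only a seventh of the main partition's bandwidth, which is presumably why the driver works so hard to keep hot data out of it.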

Scali wrote:

I'm quite happy with things like GameWorks actually. Thanks to this, you actually get to see effects like PhysX, tessellation and VXGI in games, and you are actually able to use all those great features of your GPU.
AMD does exactly the same by the way, with their Gaming Evolved. How did you think async compute ended up in AotS and other games? And they had TressFX... which 'somehow' ran extremely poorly on nVidia cards at launch...
AMD's pockets aren't as deep as nVidia's, so their list of supported games is a bit smaller. But still, they're both doing exactly the same, so I don't see why you would just call out GameWorks. I think GameWorks is quite fair anyway, in being mostly just DX11-code, which would work fine on hardware from other vendors, assuming they have proper support for DX11 features such as tessellation. AMD did not, well boohoo. They sucked just as hard in tessellation games and benchmarks that were not part of GameWorks, so it's not like it's some big conspiracy.

Again, I disagree. The TressFX issue you speak of was in Tomb Raider (2013) and was fixed almost immediately. By comparison, every Gaming Evolved title I can think of runs fine on Nvidia hardware. It gives a small boost to AMD hardware, but it doesn't destroy performance on Nvidia. That has been the case since the early 00's anyway with "The Way It's Meant To Be Played" and other marketing crap like that.

Not saying I like Gaming Evolved, again I'm not fond of PC Gaming getting fragmented like that, but I can't deny that it is far more acceptable than what Nvidia is doing.

And you had to bring up tessellation, which has multiple examples across quite a few games in which tessellation was hampering even Nvidia's performance for no perceptible IQ gain, just in order to show AMD in a negative light. I was shocked when I saw it in Crysis 2 and I was shocked that they did it again with The Witcher 3.

Can't say I really care about PhysX, seems like a gimmick. Now had it been open sourced and available to everyone, I think we'd have seen some really cool shit with it. That still saddens me somewhat 🙁.

Scali wrote:

Again, retarded hyperbole from the AMD camp.
You can't compare different games and try to establish a 'trend' like that.
Games and game engines evolve over time, and demands on hardware change. After all, the hardware itself changes. For example, at some point in time, 1 GB was a common VRAM size. These days it's more like 4 GB. So, games are now designed to make use of more and more detailed textures. Cards with less VRAM (or lower bandwidth) suffer harder on this. That's not planned obsolescence, that's just evolution. It might hit one vendor's products harder than another at any given time, but it's pretty random. Neither nVidia nor AMD have a crystal ball, so they can't really predict where things go during the lifetime of a GPU. Sometimes one gets more lucky, other times it's the other. But that doesn't mean in the least that your product's performance has degraded over time. The performance is still the same, it just doesn't translate to the same gaming capability as it once did.

Aside from that, there's the variable of drivers that can't be underestimated. By this metric, the vendor that gets its drivers in order first, will be the 'worst', because they are perceived as 'not gaining performance over time'. They just gained that performance earlier.
There are various sites that benchmark older games and older hardware with every new driver revision by the way, and nVidia shows consistent FPS throughout, and in some cases some boosts, sometimes even years after a card was launched, sometimes even with games that are years old. There are no signs of planned obsolescence whatsoever.

Okay, can we stop with the insults? I think I was very fair in the way I addressed you in my previous post.

I wasn't talking about VRAM sizes, although I don't see why I shouldn't. The GTX 680/770 and 780/780Ti all launching with 2GB and 3GB respectively isn't doing Nvidia any favors. They should have had more VRAM, since AMD products did.
Not to mention the GTX 960 still shipping with 2GB of VRAM in January 2015 😵 .

Keep in mind how expensive the 780Ti was and how it beat the R9 290 and the GTX 970. Nowadays, both cards have very clearly distanced themselves from the 780Ti, which is just sad.

It's not just the VRAM though. If you take a look at reviews from TechPowerUp (choosing this site since it has very nice "Relative Performance" charts for multiple resolutions and videocards) and compare 2012/2013 to nowadays, you'll see 280X catching up on GTX 780 even!

Reply 26 of 170, by F2bnp

Rank: l33t

Double post, but perhaps this is important to someone.

http://forums.anandtech.com/showpost.php?p=38 … 75&postcount=58

It seems like this is a non-issue? Not sure what to think here, we'll have to wait on other sites examining it properly. Either way, I'm not touching the reference cards 🤣 .

Reply 27 of 170, by Scali

Rank: l33t
F2bnp wrote:

I don't really agree with that sentiment. It's definitely the more power-efficient architecture compared to Hawaii and Fiji, but you are overestimating how fast the end products were. The GTX 980 sure was a little faster, but it also cost $550. The GTX 970 was on par with the R9 290 and R9 290X, albeit with lower power consumption. As far as value goes, you could regularly find the R9 290 on sale for ~$250 and I got mine for €280 (damn you VAT 😵 ). The GTX 960 was particularly expensive for what it was IMO, and the R9 280/280X and especially the R9 380 were the cards with the most performance/$.

See, that's where we won't get along. I'm an engineer, I look at the technical merits of an architecture. To me, the pricetag is nothing more than an arbitrary number that the IHV sticks on its products. It's completely irrelevant to the technology.
AMD puts lower pricetags on their technology... yea, whatever. nVidia *could* do the same, since their technology is more advanced and more efficient, and therefore cheaper to produce. They're more expensive because that's what you can do when you have the technological edge.
There's no reason for them to compete on price. GTX960 and GTX970 have been by far the best-sold GPUs of the past year, and are the most popular by a margin on eg Steam HW survey: http://store.steampowered.com/hwsurvey/videocard/
I think you can interpret that as the market saying nVidia's prices are just fine.

F2bnp wrote:

Are those effects missing on AMD hardware? Never heard of that to be honest!

Yup, you only get them on nVidia hardware at this point (and in theory on Intel, but I don't think their iGPUs would perform well enough anyway):
http://steamcommunity.com/games/391220/announ … 690772757808963

Adds NVIDIA VXAO Ambient Occlusion technology. This is the world’s most advanced real-time AO solution, specifically developed for NVIDIA Maxwell hardware. (Steam Only)

Here is more background information on VXAO: https://developer.nvidia.com/vxao-voxel-ambient-occlusion

F2bnp wrote:

No, this is not bias. The fact of the matter is that Nvidia gave false specs to the public and journalists and only answered truthfully when they were confronted about it.

No, they weren't 'false', they were 'not sufficiently detailed' at best. The bias is in people claiming it's false and deceptive and whatnot.
nVidia probably thought it wasn't an important detail, as you wouldn't notice anyway. And they were right. The GTX970 cards were reviewed, and nobody noticed anything. It wasn't until some people started poking around with debugging tools that they saw the 3.5 GB figure pop up somewhere, and didn't know how to interpret it.

F2bnp wrote:

Again, I disagree. The TressFX issue you speak of was in Tomb Raider (2013) and was fixed almost immediately.

By nVidia, not by AMD.
Perhaps AMD should also 'fix' the 'issues' with GameWorks.

F2bnp wrote:

By comparison, every Gaming Evolved title I can think of runs fine on Nvidia hardware.

That's what happens when you write decent drivers, apply per-game fixes and optimizations where necessary, and design GPUs that perform well overall, rather than just in select scenarios.

F2bnp wrote:

It gives a small boost to AMD hardware, but it doesn't destroy performance on Nvidia. That has been the case since the early 00's anyway with "The Way It's Meant To Be Played" and other marketing crap like that.

I can name plenty of "The Way It's Meant To Be Played"-titles that work fine on AMD hardware... Problem is, it took AMD 1 or 2 more generations of GPUs to fix their problems before those games started performing.

F2bnp wrote:

Not saying I like Gaming Evolved, again I'm not fond of PC Gaming getting fragmented like that, but I can't deny that it is far more acceptable than what Nvidia is doing.

It's exactly the same, it just doesn't hurt nVidia as much, because nVidia have their act together, as said above.

F2bnp wrote:

And you had to bring up tessellation, which has multiple examples across quite a few games in which tessellation was hampering even Nvidia's performance for no perceptible IQ gain, just in order to show AMD in a negative light. I was shocked when I saw it in Crysis 2 and I was shocked that they did it again with The Witcher 3.

Funny you mention Crysis 2. That's exactly one of those games where AMD's current GPU architecture was hurt, but the generation after that actually outperformed even nVidia's offerings at the time.
Why? Because AMD's early tessellators sucked a great deal. They then copy-pasted 4 of them into the next GPU... It didn't scale as well as nVidia's in extreme situations, but it was good enough for the moderate tessellation that Crysis 2 performed. Had they gone all-out on tessellation in Crysis 2, AMD's hardware would still have been in trouble.

F2bnp wrote:

Can't say I really care about PhysX, seems like a gimmick.

That's just one of those things that AMD is in the way of. PhysX is pretty awesome technology, but because it is limited to nVidia-only, it won't see widespread use in games. If either AMD drops out of the market, or a vendor-neutral version of GPU physics would arrive, then games can make full use of it, without having to worry about it not working on some machines.
Which means PhysX is limited to only bolt-on gimmick effects. Still pretty cool, but not reaching the full potential of GPU physics as a concept.

F2bnp wrote:

Now had it been open sourced and available to everyone, I think we'd have seen some really cool shit with it. That still saddens me somewhat 🙁.

Why would it have to be open source? AMD can implement CUDA at any time (see http://www.techradar.com/news/computing-compo … hnology--612041), and run PhysX on their GPUs. CUDA is open, the compiler is even open sourced. All AMD would have to do is write a back-end for their GPUs.

F2bnp wrote:

Okay, can we stop with the insults?

I'm not insulting you. You obviously just copied that rhetoric from elsewhere. I've seen it surface numerous times. It's all part of the AMD propaganda machine.

F2bnp wrote:

I wasn't talking about VRAM sizes, although I don't see why I shouldn't. The GTX 680/770 and 780/780Ti all launching with 2GB and 3GB respectively isn't doing Nvidia any favors. They should have had more VRAM, since AMD products did.
Not to mention the GTX 960 still shipping with 2GB of VRAM in January 2015 😵 .

So? You get what you pay for. If you buy a 2 GB card, you should expect it to perform as a 2 GB card. Which it does.
Trying to spin that as "planned obsolescence" is quite sad, desperate even.

F2bnp wrote:

It's not just the VRAM though. If you take a look at reviews from TechPowerUp (choosing this site since it has very nice "Relative Performance" charts for multiple resolutions and videocards) and compare 2012/2013 to nowadays, you'll see 280X catching up on GTX 780 even!

Yea, I've seen that theory. One AMD fanboy even posted it directly on my own blog. It's total conjecture. See, the problem with the TechPowerUp relative performance charts is that you cannot compare them from one review to the next.
Namely, TechPowerUp doesn't use the exact same set of games, the same drivers, OS, CPU etc when doing these charts.
Obviously, changing any of these parameters, especially the set of games, will result in different "Relative performance" figures.
"280X catching up to GTX780" could simply be a result of TechPowerUp having dropped some games from the set that were unfavourable to the 280X, and added some new games that are more favourable.
Or they could have moved to a different CPU, which suits the driver for the 280X better than the GTX780.
Just some examples of why you can't compare these numbers.
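The game-set point can be shown with a tiny made-up example (all FPS numbers and card/game names are hypothetical, purely for illustration):

```python
# Two cards, "relative performance" computed over two different game
# sets, as review sites do with a geometric mean across their suite.
from statistics import geometric_mean

fps = {
    "card_a": {"game1": 60, "game2": 80, "game3": 40},
    "card_b": {"game1": 55, "game2": 70, "game3": 55},
}

def relative_perf(games):
    # card_a's speed relative to card_b over the chosen game set
    return geometric_mean(fps["card_a"][g] / fps["card_b"][g] for g in games)

old_set = ["game1", "game2"]  # suits card_a
new_set = ["game1", "game3"]  # game3 suits card_b
print(relative_perf(old_set))  # > 1: card_a looks "faster"
print(relative_perf(new_set))  # < 1: card_a looks "slower" -- same cards
```

Nothing about the hardware changed between the two charts; only the benchmark suite did.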

Anyway, the problem you're now facing is that you've mentioned a number of common points often brought forward by AMD fanboys. So I'm not sure if that's just coincidence, or if you're part of that group of AMD people that spread this nonsense across the web, like the guy that posted pretty much the exact same things on my blog some weeks ago.

Last edited by Scali on 2016-06-30, 19:46. Edited 1 time in total.


Reply 29 of 170, by Scali

Rank: l33t
F2bnp wrote:

Double post, but perhaps this is important to someone.

http://forums.anandtech.com/showpost.php?p=38 … 75&postcount=58

It seems like this is a non-issue? Not sure what to think here, we'll have to wait on other sites examining it properly. Either way, I'm not touching the reference cards 🤣 .

The 750Ti chart is taken from this review: http://www.tomshardware.com/reviews/geforce-g … ew,3750-20.html
They zoom in on the chart to show that those peaks are very short (small bursts aren't that harmful, can be covered by capacitors, and you won't heat up the traces that quickly), and on average it is well below 75W (about 64W).
I would like to see what the RX480 chart looks like when zoomed in like that.
At any rate, you can tell from Tomshardware's charts that the *average* of the RX480 is well over 75W, and that's where the danger is. The average current will be more than what the PCI-e spec says you should design your slots for.
So it's not the spikes you should be worried about.

Note also that the 75W is the total slot power, and the slot has both 3.3v and 12v lines. These should be added together, and remain below 75W combined. Tomshardware measures 80+W on the 12v alone.
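A minimal sketch of that arithmetic (the current samples below are hypothetical, not Tomshardware's measurements):

```python
# Slot power is the sum of the 12 V and 3.3 V rails, and it's the
# sustained average -- not brief spikes -- that must stay under 75 W.
samples_12v_amps = [6.9, 7.1, 9.5, 6.8, 7.0]  # one brief 9.5 A spike
samples_3v3_amps = [1.0, 1.0, 1.0, 1.0, 1.0]

avg_12v_w = 12.0 * sum(samples_12v_amps) / len(samples_12v_amps)
avg_3v3_w = 3.3 * sum(samples_3v3_amps) / len(samples_3v3_amps)
avg_slot_w = avg_12v_w + avg_3v3_w   # ~92.8 W average: over the limit
peak_12v_w = 12.0 * max(samples_12v_amps)  # spike alone is harmless

print(round(avg_slot_w, 1), round(peak_12v_w, 1))
```

In this toy data the spike looks dramatic, but it is the average sitting above 75 W that would stress the slot's traces over time.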


Reply 30 of 170, by Aideka

Rank: Member
Scali wrote:

Funny you mention Crysis 2. That's exactly one of those games where AMD's current GPU architecture was hurt, but the generation after that actually outperformed even nVidia's offerings at the time.
Why? Because AMD's early tessellators sucked a great deal. They then copy-pasted 4 of them into the next GPU... It didn't scale as well as nVidia's in extreme situations, but it was good enough for the moderate tessellation that Crysis 2 performed. Had they gone all-out on tessellation in Crysis 2, AMD's hardware would still have been in trouble.

http://techreport.com/review/21404/crysis-2-t … of-a-good-thing. If that is not "going all out", then what in your opinion is? Why in the name of all that is holy would Crytek leave tessellated water running under the ground?

[image: 8zszli-6.png]

Reply 31 of 170, by ODwilly

Rank: l33t
F2bnp wrote:

Waiting for custom cards to jump to the RX 480. The card is great value!

This ^. Kinda reminds me of the Founders Edition Nvidia 1080 cards: just poorly designed. Wait for Gigabyte, MSI and Sapphire to kick out some good-quality cards, possibly with an extra 6-pin or 8-pin power connector.

Main pc: Asus ROG 17. R9 5900HX, RTX 3070m, 16gb ddr4 3200, 1tb NVME.
Retro PC: Soyo P4S Dragon, 3gb ddr 266, 120gb Maxtor, Geforce Fx 5950 Ultra, SB Live! 5.1

Reply 32 of 170, by FFXIhealer

Rank: Oldbie

So not that I need to get into this discussion at all... but aren't AMD cards just a different flavor of GPU? They're all mostly the same, right? My list:

1999: Diamond Stealth II G460 8MB AGP (Intel i740 chip)
2002: ATI Radeon 7500 64MB AGP
2005: nVidia GeForce Go 6800 Ultra 256MB PCI-Express
2007: nVidia GeForce Go 7800 GTX 256MB PCI-Express (upgrade to above laptop 6800 Ultra that failed)
2010: nVidia GeForce GTX 480 1.5GB PCI-Express 2.0
2015: nVidia GeForce GTX 980Ti 6GB PCI-Express 3.0

2016: Diamond Viper V770 32MB AGP (retro PC)
2016: Diamond Monster 3D II 8MB PCI (retro PC)

The ATI Radeon did its job well back in the day, but the nVidia cards on the Dell XPS laptop blew me away the first time I played games on them...even the 6800. I didn't see all that much of an improvement when it died and I replaced it with the 7800. The 480 was a huge step up, but by this point it was just more of the same, you know? Like, yeah, much better frame rates, but I didn't see anything new. It even did the TressFX hair on Tomb Raider, but it was a pretty good sized hit unless I turned other stuff off. Now, the 980Ti runs it all without missing a beat. I only got my 144Hz 1440p monitor a month ago. I like it, but it's no 4K display, so I think the card doesn't really have to work so hard.

What I DO like about the competition between nVidia and AMD is that it forces both companies to suck in their guts and try to innovate...and also try to keep some prices competitive. AMD always had better prices for processors, but I don't like how hot they run to get there. And they're always two steps behind Intel on lithography, even though AMD had the 64-bit desktop processor AND the dual-core desktop processors first.

EDIT: Don't know if this matters, but I want to add that I had an AMD Athlon XP processor for the ATI card PC and Intel with all the other cards. Pentium II 350MHz, AMD Athlon XP 1800+ 1.5GHz, Pentium M 2.1GHz, Core i7-860 2.6GHz, Core i7-6700K 4.0GHz, to list them all.

Last edited by FFXIhealer on 2016-07-02, 20:40. Edited 1 time in total.


Reply 33 of 170, by PhilsComputerLab

Rank: l33t++

We all have our preferences and past experiences.

For me, Intel and Nvidia are my go-to products. They "just work" and that matters a lot to me. However at some points in time I went with AMD, for example with the Radeon 9700 and 4850. And Athlon 64 of course. With AMD it usually takes 2 or 3 generations for everything to work well 😊 The original Phenom is a good example. Now AM3+ is my favorite platform for building a Windows XP retro gamer.

The 480 is the first revision, I hope Polaris will improve over time.


Reply 34 of 170, by Scali

Rank: l33t
Aideka wrote:
Scali wrote:

Funny you mention Crysis 2. That's exactly one of those games where AMD's current GPU architecture was hurt, but the generation after that actually outperformed even nVidia's offerings at the time.
Why? Because AMD's early tessellators sucked a great deal. They then copy-pasted 4 of them into the next GPU... It didn't scale as well as nVidia's in extreme situations, but it was good enough for the moderate tessellation that Crysis 2 performed. Had they gone all-out on tessellation in Crysis 2, AMD's hardware would still have been in trouble.

http://techreport.com/review/21404/crysis-2-t … of-a-good-thing. If that is not "going all out", then what in your opinion is? Why in the name of all that is holy would Crytek leave tessellated water running under the ground?

Going all-out is hard-coding all tessellation to 64x amplification. There's no way AMD's GPUs would have survived that. Crysis 2 doesn't do that at all. It does adaptive tessellation, trying to adjust the amplification factor to some heuristics such as distance and viewing angle. The average amplification factor isn't all that high. Probably in the range of 8-16x.
In fact, they don't even use tessellation on all objects. They rely on POM for some objects, tessellation for others.

It just seems that people are scared of 'many triangles'. They don't seem to understand that:
1) Tessellation is an automated process. It's not easy to tune your geometry in such a way that you get the absolute minimum required triangles in every area under every possible angle. You have *some* control over how many triangles are generated, but it isn't all that accurate. Especially when you add it to a game after-the-fact, such as with Crysis 2 and most other games from that era. The geometry was not originally designed for tessellation, so it will obviously be suboptimal.
2) The obvious thing here is that *some* GPUs (in the same price range) can handle this geometry load just fine. So if other GPUs struggle, the geometry load is not the real problem.

Why would we even need to discuss this? AMD's tessellation was so terribad back then that literally Intel's iGPUs outperformed AMD's high-end discrete GPUs in synthetic tessellation benchmarks. If that doesn't show beyond a shadow of a doubt that AMD is doing something wrong, I don't know what does.

I discussed it many times over on my blog anyway, the charts in this article speak volumes: https://scalibq.wordpress.com/2011/12/24/amd- … t-relationship/
Tessellation performance on the 5000/6000-series of AMD tanks horribly from an amplification factor of 1 to about 11. There is hardly any scaling. There's a huge bottleneck in the pipeline because they try to stuff all triangles through a single rasterizer (nVidia has up to 16 parallel rasterizers).
The 7970 still has poor scaling, but because they doubled up some hardware, it tanks just as hard, but doesn't reach ground zero until about 16x.
This relatively small difference was enough to get AMD from lousy performance in Crysis 2 to the 7970 being the best-performing card in Crysis 2. Ergo, Crysis doesn't go beyond the 16x range much, else the 7970 still wouldn't stand a chance against the nVidia cards that obviously scale way better in the higher amplification ranges.
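As a toy model of the rasterizer bottleneck being described (the throughput numbers and mesh size are illustrative, not real hardware specs):

```python
# Setup-limited model: the rasterizer front-end consumes a fixed
# number of triangles per clock, so clocks needed grow linearly with
# the amplified triangle count and shrink with rasterizer count.
def clocks_needed(triangles, rasterizers, tris_per_clock=1):
    return triangles / (rasterizers * tris_per_clock)

base = 100_000  # input triangles (made-up mesh size)
for amp in (1, 8, 16, 64):
    single = clocks_needed(base * amp, rasterizers=1)
    parallel = clocks_needed(base * amp, rasterizers=16)
    print(amp, single, parallel)
```

In this crude model, 16 parallel rasterizers at 16x amplification take the same number of clocks as a single rasterizer at 1x, which is the difference between tanking and scaling as the factor rises.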

Why are we still arguing about this in 2016? Even in 2011 it was painfully obvious that AMD simply didn't have a clue how to build a GPU for tessellation. Given that exponential dropoff it's no wonder they struggled in any game that even tried to use moderate tessellation. And yes, this is a sample that AMD themselves contributed to the DirectX 11 SDK. So don't even try to start with "unfair nVidia-paid software".


Reply 35 of 170, by Scali

Rank: l33t
FFXIhealer wrote:

So not that I need to get into this discussion at all... but aren't AMD cards just a different flavor of GPU? They're all mostly the same, right?

Depends on who you ask. You'd say both Intel and AMD make x86 CPUs. In that sense they're similar. But they use completely different architectures, and the performance characteristics are way different.
GPUs don't even share a common instructionset or anything, so they differ even more than AMD vs Intel CPUs.

FFXIhealer wrote:

What I DO like about the competition between nVidia and AMD is that it forces both companies to suck in their guts and try to innovate...and also try to keep some prices competitive. AMD always had better prices for processors, but I don't like how hot they run to get there. And they're always two steps behind Intel on lithography, even though AMD had the 64-bit desktop processor AND the dual-core desktop processors first.

I'm afraid competition between nVidia and AMD ceased about 2 years ago.
The RX480 makes that painfully obvious: they are still struggling to compete with the GTX970 and GTX980 2 years down the line, using more modern manufacturing technology. Now, that in itself is bad enough... But remember, we were stuck on 28 nm for years on the GPU-side, and AMD made the leap all the way to 14 nm now. Normally you'd have at least one in-between step, such as 20 nm. So they are basically doing two die-shrinks in one, which should yield a considerable improvement in terms of power efficiency.
And even then they struggle to match the power efficiency of nVidia's 2-year old 28 nm technology (not to mention they also don't even implement some of the DX12-features that these 2-year old cards already offer).
That is a huge gap. And in the 2 years since nVidia introduced these cards (and the more high-end ones), AMD basically hasn't had anything that competes with them. So the situation on the GPU side is very similar to the one on the CPU side for AMD: they seem to be two steps behind, delivering GPUs with better prices, but running much hotter.


Reply 36 of 170, by eL_PuSHeR

Rank: l33t++

I am having a lot of trouble on newly formatted W10 PCs/laptops. The culprit: AMD cards. Their drivers are also quite terrible (a friend of mine went through hell under Linux, now solved by swapping to nVidia). So AMD is a big NO for me.

Intel i7 5960X
Gigabyte GA-X99-Gaming 5
8 GB DDR4 (2100)
8 GB GeForce GTX 1070 G1 Gaming (Gigabyte)

Reply 37 of 170, by Deep Thought

User metadata
Rank Newbie
Rank
Newbie
F2bnp wrote:

It seems to be slightly more efficient than Maxwell

A GTX 980 is 12% more efficient.

F2bnp wrote:

I don't really think Pascal is all that different from Maxwell either, seems to be a mere die-shrink with subtle tweaks, but I haven't really looked into it too much.

It's 81% more efficient than Polaris and should be significantly faster than Maxwell or Polaris/Vega in VR. Simultaneous multi-projection is a huge deal for that.
And the 1080 isn't even the "big chip" - GP102 with HBM2 memory is still on the way.

F2bnp wrote:

ATi has always had a bad rep on drivers, rightfully so in the early years (especially the Rage cards, ugh...), but I find that this has been blown out of proportion many times.

Just look at their performance in recent games like DOOM and Rise of the Tomb Raider, or at how slow they still are to support Crossfire in new games or to have games working correctly on day one.
Their drivers are still as problematic as they were 15 years ago.

F2bnp wrote:

And you had to bring up tessellation, for which there are multiple examples across quite a few games where it hampered even Nvidia's performance for no perceptible IQ gain, just to show AMD in a negative light. I was shocked when I saw it in Crysis 2, and shocked that they did it again with The Witcher 3.

NVIDIA doesn't just use excessive tessellation for the sake of it.
Crysis 2 looks terrible with AMD's "optimized" tessellation (which just forces it to run at a lower setting).

Just because some of the effects that NVIDIA are pushing make use of tessellation does not mean that they are intentionally crippling AMD performance.
If AMD's tessellation performance was better, it wouldn't be an issue.
Let's not forget that it was AMD who were originally pushing for tessellation in games, until they couldn't compete any more.

Aideka wrote:

http://techreport.com/review/21404/crysis-2-t … of-a-good-thing. If that is not "going all out", then what in your opinion is? Why in the name of all that is holy would Crytek leave tessellated water running under the ground?

3D engines use a technique called occlusion culling, which skips geometry that isn't visible before it costs any rendering work. Total non-issue.
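The idea can be sketched in a few lines. This is a toy illustration, not any real engine's API: real occlusion culling uses GPU occlusion queries or a software depth buffer, and the 1-D "bounds" test here is a stand-in for a proper 3-D bounds/depth test.

```python
# Toy occlusion-culling sketch: skip draw calls for objects whose bounds
# are fully covered by an occluder, so they cost no vertex work,
# no tessellation, and no pixel shading.

def is_occluded(obj, occluders):
    # 1-D stand-in for a real depth/bounds test: occluded if any
    # occluder's interval fully covers the object's interval.
    lo, hi = obj["bounds"]
    return any(o["bounds"][0] <= lo and hi <= o["bounds"][1] for o in occluders)

def draw_scene(objects, occluders):
    drawn = []
    for obj in objects:
        if is_occluded(obj, occluders):
            continue  # culled: never submitted to the GPU
        drawn.append(obj["name"])  # stand-in for the actual draw call
    return drawn

terrain = {"name": "terrain", "bounds": (0.0, 10.0)}
water = {"name": "water", "bounds": (2.0, 8.0)}  # entirely under the terrain
rock = {"name": "rock", "bounds": (9.0, 12.0)}   # sticks out past the terrain
print(draw_scene([water, rock], occluders=[terrain]))  # → ['rock']
```

With culling like this in place, tessellated water sitting entirely under the terrain would never reach the tessellator in the first place, which is the point being made above.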

Scali wrote:

At least some of the games you mention actually run *slower* on DX12 than they do in DX11 mode, on both AMD and nVidia hardware. I think we can safely conclude that the implementation of DX12 in these games leaves a lot to be desired, and drawing conclusions about DX12 based on these poor implementations is rather meaningless.

Well there are a few issues with that.
The DX12 renderer is probably doing more, which would explain why the performance demands are higher. The same thing happened with games that offered DX10/11 rendering modes in addition to DX9. They looked better, but it wasn't free.
And it also depends on your definition of "slower". Maximum/Average framerates sometimes drop in games, but minimum framerates can be significantly higher.
I don't care if the maximum framerate drops by 30 FPS and the average by 5 if I gain 20 on the minimum.
Minimum framerates are what affect gameplay.
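A quick way to see why minimums matter more than averages is to compute average FPS versus the "1% low" from per-frame times. A minimal Python sketch, with made-up frame times rather than real benchmark data:

```python
# Compare average FPS with the "1% low" from per-frame times in ms.
# The frame times below are illustrative, not measured.

def fps_stats(frame_times_ms):
    fps = [1000.0 / t for t in frame_times_ms]
    avg = sum(fps) / len(fps)
    worst = sorted(fps)  # ascending: worst frames first
    one_percent = worst[: max(1, len(worst) // 100)]
    return avg, sum(one_percent) / len(one_percent)

# Mostly smooth 16.7 ms frames with occasional 50 ms stutter spikes.
times = [16.7] * 95 + [50.0] * 5
avg, low = fps_stats(times)
print(round(avg, 1), round(low, 1))  # → 57.9 20.0
```

The average looks close to 60 FPS, but the stutters drag the 1% low down to 20 FPS, and that is what you actually feel while playing.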

PhilsComputerLab wrote:

Yea I have a thing for small, ideally single slot cards 😀
I think this is the worst example of a XXL 960: http://www.guru3d.com/index.php?ct=articles&a … 2e171002a287bc1
Whereas this is more what I like: http://cdn.videocardz.com/1/2015/01/Gigabyte- … -960-Mini-1.jpg

I get the aesthetic appeal of a small GPU, especially single-slot, but I'd rather have a card that stays cool and quiet. You can't tell what size the card is once it's in a case.
However, you can tell when a system has a GPU with a small heatsink and fan inside, because you'll hear it.
The larger a fan is, the quieter it will be, and the lower-pitched any noise it makes, when moving the same amount of air.
The more fans you have, the quieter the GPU will be when shifting the same amount of air.
The more surface area the heatsink has, the more efficiently it can be cooled.
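For a rough sense of the numbers: a common fan rule of thumb (the fan affinity laws) says airflow scales roughly linearly with RPM while noise falls by about 50·log10 of the RPM ratio. This is an idealized engineering approximation, not a measurement of any particular cooler:

```python
# Rough fan-law arithmetic (idealized rule of thumb, not measured data):
# airflow ~ RPM, noise change ~ 50 * log10(rpm_new / rpm_old) dB.
import math

def noise_delta_db(rpm_new, rpm_old):
    return 50.0 * math.log10(rpm_new / rpm_old)

# Same total airflow split across two fans: each can run at half the RPM.
delta = noise_delta_db(rpm_new=750, rpm_old=1500)
print(round(delta, 1))  # → -15.1
```

So halving each fan's speed cuts its noise by roughly 15 dB; even after adding ~3 dB back for having two noise sources instead of one, the dual-fan cooler moving the same air comes out far quieter, which is why the big coolers win on acoustics.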

Last edited by Deep Thought on 2016-07-01, 23:16. Edited 2 times in total.

Reply 38 of 170, by Scali

User metadata
Rank l33t
Rank
l33t
Deep Thought wrote:

Well there are a few issues with that.
The DX12 renderer is probably doing more, which would explain why the performance demands are higher. The same thing happened with games that offered DX10/11 rendering modes in addition to DX9. They looked better, but it wasn't free.

That could be another explanation. But at least in the case of Tomb Raider, it looks the same, and they mainly wanted to reduce CPU overhead: http://tombraider.tumblr.com/post/14085922283 … ise-of-the-tomb
But it was clearly a DX11/console game with DX12 added after the fact, so the engine is probably not well suited to the new API.
The situation is also different because DX10/11 introduced very different hardware with far more capabilities than DX9. DX12, especially in the case of AMD, basically can't do much that you can't already do in DX11. At least with feature level 12_1 you do get the new conservative rasterization (CR) and rasterizer-ordered views (ROV) features.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 39 of 170, by Scali

User metadata
Rank l33t
Rank
l33t

This site just posted a bunch of rumours about nVidia's 1060, including an alleged launch date of July 7th: http://videocardz.com/61753/nvidia-geforce-gt … ter-than-rx-480
If this is true, then the RX480 is pretty much stillborn.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/