VOGONS



Comeback for AMD? [Polaris]


First post, by snorg


So the Polaris-based RX 480 looks like it might be the next mainstream graphics card to beat. Do you guys think AMD will bounce back, or are they on the ropes for good?

Reply 2 of 170, by Scali


It's going to be incredibly easy to beat, since AMD can barely match the performance-per-watt and absolute performance levels of the 2-year-old GTX970. And the GTX970 actually has DX12_1 feature support, which AMD still does not have. So it's a more feature-complete card as well.
With the recent price drops on the GTX970, and the inflated launch prices of the RX480, the 970 is actually remarkably competitive for a 2-year-old card.
So basically, we can say that AMD is 2 years behind, technologically, both in terms of features and in terms of performance/efficiency.

All nVidia would have to do is do an as-is die shrink of the GTX970, and they'd have a card that easily beats AMD in performance-per-watt, and is also cheaper to make (smaller die, simpler cooler, simpler PCB).
But instead, they'll come up with a Pascal-based card soon, which will improve over a GTX970.
So AMD will be on the ropes for a while yet.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 3 of 170, by PhilsComputerLab


I read a range of reviews and my opinions are mixed. I was very much hyped for this card.

Australian pricing will be a challenge. For similar money you can get the 970, which has been around for quite some time.

The power draw issues are a real concern. A few reviewers measured current draw past PCIe specifications and the card pulling over 150W, which is over spec for a 6 pin power plug card. I do not see this 2.8x performance per watt improvement. The 1070 / 1080 cards are still way ahead in performance per watt, which is impressive for Nvidia given that they are even on a larger manufacturing node.

I will wait for the 460 and 470. I get the impression that they wanted to be competitive at all costs, so they clocked it quite high, which results in the card running so hot and drawing over 150W.

I think in the end I will just go for the little 460 and wait it out 😐

Also keen on what the 1060 will be like.

YouTube, Facebook, Website

Reply 4 of 170, by Scali

PhilsComputerLab wrote:

The power draw issues are a real concern. A few reviewers measured current draw past PCIe specifications and the card pulling over 150W, which is over spec for a 6 pin power plug card.

More specifically, at Tomshardware they measured that it pulls up to 90W through the PCI-e slot, while the spec is 75W max. So it draws more than it should from your motherboard, greatly over-stressing your components:
http://www.tomshardware.com/reviews/amd-radeo … -10,4616-9.html

The load distribution works out in a way that has the card draw 86W through the motherboard’s PCIe slot. Not only does this exceed the 75W ceiling we typically associate with a 16-lane slot, but that 75W limit covers several rails combined and not just this one interface.
...
Believe it or not, the situation gets even worse. AMD's Radeon RX 480 draws 90W through the motherboard’s PCIe slot during our stress test. This is a full 20 percent above the limit.

When overclocking, things went into the danger zone quickly:

We skipped long-term overclocking and overvolting tests, since the Radeon RX 480’s power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. We just didn’t want to do that to our test platform.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 170, by PhilsComputerLab


Yea Scali, it's a real concern. This shouldn't be happening and there is no way they didn't know this or it "slipped by" somehow.

The partner cards will likely all have dual 6 pin or 8 pin connectors out of the gate.

YouTube, Facebook, Website

Reply 7 of 170, by Scali

PhilsComputerLab wrote:

The partner cards will likely all have dual 6 pin or 8 pin connectors out of the gate.

They will also need to redesign the power management to solve the PCI-e slot issue.
It looks like the reference card just does a roughly 50:50 split between the PCI-e slot and the 6-pin connector. So it draws 160-180W at times, and half of that (or slightly more) comes from the PCI-e slot, resulting in the 80-90W draw that they have measured from the PCI-e slot.
If you don't change that, and just put on more power connectors, it will still draw way too much from the PCI-e slot. They need to redesign it so that it limits the draw from the PCI-e slot, and draws most from the power connectors (that would have been safer to begin with, as PSUs tend to be more resilient against this than motherboards are).
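To make the arithmetic explicit, here's a quick back-of-the-envelope sketch (the 160-180W board power and the 50:50 split are the figures discussed above, taken as assumptions, not measurements of mine):

    // Rough sketch of why a ~50:50 split of total board power between the
    // PCI-e slot and the 6-pin connector blows past the 75W slot ceiling.
    // Board power figures and the split ratio are assumptions from the
    // discussion above, not measurements.
    #include <cstdio>

    int main() {
        const double slot_limit_w = 75.0;    // PCI-e slot ceiling per spec
        const double split_to_slot = 0.5;    // assumed 50:50 load distribution
        const double board_power_w[] = {160.0, 170.0, 180.0};
        for (double total : board_power_w) {
            double slot_draw = total * split_to_slot;
            std::printf("board %.0fW -> slot %.0fW (%.0f%% of the %.0fW limit)\n",
                        total, slot_draw, 100.0 * slot_draw / slot_limit_w,
                        slot_limit_w);
        }
        return 0;
    }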

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 8 of 170, by oerk


It's almost like they wanted to say: Look! It's efficient! It only needs a single 6-pin connector! - when it really would've needed two 6-pins or one 8-pin.

Yeah, there's no way this could've slipped by. They counted on reviewers not to notice.

Drawing more than 75W from the PCIe slot is inexcusable.

Heise reports the same issues, by the way.

Reply 9 of 170, by Deep Thought


I don't see it happening with this card, that's for sure.
Even if they sort out the power draw issues, it's another overhyped and under-delivering card from AMD.
That amazing efficiency they were touting doesn't even match Maxwell's best, and NVIDIA are 60% more efficient now with Pascal.

And there's still no talk of driver command list support for Direct3D 11. I don't think it's ever going to happen.
AMD seem to have been focusing all their efforts on Mantle/Vulkan/D3D12, while claiming that DCL support is "useless" and only affects benchmarks.
Sorry, but there are a ton of games which are still being released using D3D11 today that show big performance improvements from DCL support, and a huge library of existing games running on D3D11 that I still want to play - I think people here will understand that more than others that only play the latest releases.

Until they support multi-threaded rendering on D3D11, I won't even consider buying another AMD GPU. It's really hurting performance in later D3D11 titles.
You have games where a faster AMD card loses out to a slower NVIDIA card as a result of this - yet AMD owners blame NVIDIA's GameWorks, or come up with other conspiracy theories when the game doesn't even use GameWorks, instead of admitting that AMD are still building fast cards that are hobbled by their drivers after all these years.

Reply 10 of 170, by Scali

Deep Thought wrote:

And there's still no talk of driver command list support for Direct3D 11. I don't think it's ever going to happen.
AMD seem to have been focusing all their efforts on Mantle/Vulkan/D3D12, while claiming that DCL support is "useless" and only affects benchmarks.
Sorry, but there are a ton of games which are still being released using D3D11 today that show big performance improvements from DCL support, and a huge library of existing games running on D3D11 that I still want to play - I think people here will understand that more than others that only play the latest releases.

The same also goes for the other side of Vulkan: OpenGL.
There are plenty of games that still run on OpenGL, and DOOM 2016 demonstrated once again that AMD doesn't have its drivers in order there either.

And on the hardware side it's no different. AMD has been trailing behind in tessellation performance for years. Instead of getting their hardware up to par with nVidia and Intel(!!), they kept blaming nVidia for unfair optimizations etc.
And now the story repeats itself, with nVidia and Intel supporting DX11.3/DX12_1, while AMD is stuck at DX11.2/DX12_0, and all they can talk about is async compute, the one DX12 feature that they do support (but they need a synthetic benchmark, which they paid for themselves, to show how that is even useful).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 11 of 170, by Oldskoolmaniac


I'm really considering jumping on the Intel bandwagon. It's hard to find a good board for my FX-8350; Socket AM3+ only supports PCIe 2.1, not to mention these boards only support up to 1600 MHz RAM.
The only thing I need to save for is a good Intel board and processor.
To me AMD has always been way behind Intel.

Motherboard Reviews The Motherboard Thread
Plastic parts looking nasty and yellow try this Deyellowing Plastic

Reply 13 of 170, by F2bnp


I think what is going on here is that AMD clocked the cards higher than they should have in order to be more competitive. That would be okay, but it seems some chips couldn't cope with that unless they were given more voltage, so they probably ended up overvolting a little, which would explain the higher-than-it-should-be power consumption. This has been typical for AMD in the last couple of years: as soon as you hit a certain point, efficiency goes out the window 🤣

As it stands, I am disappointed by the perf/watt, but the perf/$ is fantastic. It seems to be slightly more efficient than Maxwell, and a few people have managed to get the card running stable at 1450MHz and 1490MHz. That's actually really nice; at these speeds the card is right around Fury Nano performance.
I honestly can't wait to see custom cards with great cooling solutions, 8pin connector and OC headroom.

Scali, you are honestly the only person I've seen online that bashes AMD for their DX12 support. I don't want to turn this into a flame war, but in most of the DX12 releases (granted most of them are somewhat broken, Gears of War, Rise of the Tomb Raider, Quantum Break, Hitman etc) it seems AMD is enjoying a healthy boost in performance and is usually hitting Nvidia pretty hard. Most people on tech-sites are echoing what I'm saying and what those DX12 results are showing, that AMD has the more forward thinking design with GCN. I'm not going to bring up Async Compute, I really have no idea whether or not this will matter in the long run.
I don't really think Pascal is all that different from Maxwell either, seems to be a mere die-shrink with subtle tweaks, but I haven't really looked into it too much.
I have 0 clue whether or not Feature Level 12.1 will help Nvidia, I guess we'll find out in the future DX12 titles (the new Deus Ex is coming out this August, should be interesting).

ATi has always had a bad rep on drivers, rightfully so in the early years (especially the Rage cards, ugh...), but I find that this has been blown out of proportion many times. I was skeptical of buying my first ATi card in 2008, a Radeon 4850, but I bit the bullet and grabbed it in the first couple of weeks after the release. It was an amazing value for money card and I never had many issues with the drivers, not any more than say Nvidia. AMD nowadays is in a rougher spot financially than it was back in 2008, and I'll admit that they had been lackluster with driver releases during 2012-2014 (not that I will complain about this, I'm not very fond of very frequent "Game-Ready" driver releases either), but from the Omega driver release onward (December 2014 I believe), they have been upping their game all the time. The new Crimson tool was nice (can't say I really care about it but whatever) and the new Wattman (lol) utility seems really great actually. I could see MSI Afterburner getting uninstalled from my drive if this trend continues 😁.
What did annoy me was that they put some of their products on legacy. 4xxx and older was especially bad; I had a lot of issues trying to find the correct driver for these cards to perform as they should. Thankfully Windows 10 grabs a decent driver by default for these cards, so I don't have to do this tedious work on older systems. I think they also put anything pre-GCN on legacy as of last December? That kinda sucks too, but hopefully it is not nearly as bad as 4xxx and older.

I used to like Nvidia a lot, they had some great products, but I feel like they treat me like shit as a customer. Unless they really wow me with a mid-range product that impresses on the most important metrics (perf/$ for example), I really don't give a shit about them anymore. For example, the 6600GT, 8800GT and GTX 460 were some of those really amazing products at just the right price. I don't see them releasing cards like that anytime soon to be honest. A few of the things they have done in the past few years, off the top of my head:

-GTX 970 is not a 4GB 256bit card, but a 3.5GB 224bit card
-Nvidia Gameworks bloat-ware 😵
-Planned obsolescence (just look at Kepler vs GCN nowadays)
-Founder's Edition "premium" components for "premium" pricing and jacking up the prices on MSRP
-980Ti pricing (big die chip on much higher price than they used to be, you can thank AMD with Fury X for that too)

Among other things, I also feel like the GTX 970 should have been their 960Ti or whatever mid-range card. The GTX 960 offered like 12-15% more performance than the GTX 760 before it, which I found absolutely pathetic.

Again, not wanting to start a flame war, just genuinely interested in your opinion and I also felt like I had to provide counter-arguments.

Last edited by F2bnp on 2016-06-30, 16:34. Edited 1 time in total.

Reply 14 of 170, by clueless1


I was disappointed in the power draw as well. I'm a big fan of power efficiency and bang for your buck with graphics cards. Currently own a GTX 750Ti and happy with it (in my son's gaming PC). I was hoping the 480 would be the card to replace the 750Ti...
My graphics card price ceiling is $200 (personally can't justify spending more than that for a graphics card), and my hope was the 480 would come down in price to $200. But with these power results, the partner cards will have to do a lot to change my mind and the price will have to come down a little.

The more I learn, the more I realize how much I don't know.
OPL3 FM vs. Roland MT-32 vs. General MIDI DOS Game Comparison
Let's benchmark our systems with cache disabled
DOS PCI Graphics Card Benchmarks

Reply 15 of 170, by swaaye


Seems to me the fault is probably at least partially in the manufacturing process not working out as well as projected. The chip just doesn't clock high enough and remain efficient.

But it's priced ok for what it is. Hopefully Vega isn't a disaster against its NV counterpart.

The perpetual driver dilemma is of course a good point. I don't really even consider AMD anymore, frankly. Their D3D11 support is weak and they drop the ball every time an OpenGL game is released. I also don't like how quickly they abandon driver support for their products in general.

Reply 16 of 170, by F2bnp


Perhaps the RX 460 or RX 470 will be your card then. The RX 460 will be a <75W card that should offer performance around the GTX 960 and R9 380 (no benchmarks out yet), and the RX 470 will probably be a 100-110W card, a little slower than the 480 depending on specs.

It seems the RX 460 will retail for $100 and the RX 470 for $150. I will recommend the RX 470 to many of my friends who are looking for an upgrade from their 5770, GTX 560 and 6850 cards 😀.

There's also the GTX 1060, these will probably be released in two weeks, 14th July, but they will probably be more expensive than the RX 480, so they probably won't fit your budget. 1050 would be nice with sub 75W consumption, but there are 0 rumours about these, so they're probably far away.

Reply 17 of 170, by Scali

F2bnp wrote:

Scali, you are honestly the only person I've seen online that bashes AMD for their DX12 support. I don't want to turn this into a flame war, but in most of the DX12 releases (granted most of them are somewhat broken, Gears of War, Rise of the Tomb Raider, Quantum Break, Hitman etc) it seems AMD is enjoying a healthy boost in performance and is usually hitting Nvidia pretty hard.

Is that so?
That seems to be a matter of interpretation.
At least some of the games you mention actually run *slower* on DX12 than they do in DX11 mode, on both AMD and nVidia hardware. I think we can safely conclude that the implementation of DX12 in these games leaves a lot to be desired, and drawing conclusions about DX12 based on these poor implementations is rather meaningless.

Trying to measure the delta from DX11 to DX12 is very difficult between different vendors anyway. For example, take AotS. On AMD this performs extremely poorly in DX11-mode. When AotS originally came out as a Mantle-showpiece, nVidia invested heavily in driver optimizations to counter the broken DX11 code in the benchmark. As a result, nVidia's fastest cards could actually outperform AMD's cards running on Mantle. AMD never bothered to do anything about DX11 performance, since the bigger the gap with Mantle, the better it was for their marketing.
So if you try to measure DX11-to-DX12 delta now, then yes, AMD looks to get a 'healthy boost', while the gains on nVidia are much smaller. But that says more about AMD's state of DX11 than it does about nVidia's state of DX12.
Similar stories go for various other games... once games started to implement DX11 multithreading, AMD suffered, because they do not implement DCLs at all, and therefore serialize all DX11 calls to a single thread. nVidia however does perform proper DX11 multithreading, so their base level of performance is higher, and they have less to gain from DX12.
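For reference, 'DCL support' is something you can query directly from the D3D11 driver: it reports whether command lists recorded on deferred contexts are handled natively, or emulated (and effectively serialized) by the runtime. A minimal sketch, assuming the standard Windows SDK headers:

    // Minimal sketch: query D3D11 multithreading support. If DriverCommandLists
    // comes back FALSE, command lists from deferred contexts are emulated by the
    // runtime and effectively serialized onto one thread - the situation
    // described above.
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* immediate = nullptr;
        D3D_FEATURE_LEVEL level;
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, &level, &immediate)))
            return 1;

        D3D11_FEATURE_DATA_THREADING threading = {};
        device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                    &threading, sizeof(threading));
        std::printf("DriverConcurrentCreates: %d\n", threading.DriverConcurrentCreates);
        std::printf("DriverCommandLists:      %d\n", threading.DriverCommandLists);

        immediate->Release();
        device->Release();
        return 0;
    }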

F2bnp wrote:

Most people on tech-sites are echoing what I'm saying and what those DX12 results are showing, that AMD has the more forward thinking design with GCN.

Most people are wrong then.
The only DX12 feature AMD even supports is Async Compute.
They do not support Conservative Rasterization and Rasterizer Ordered Views.
I don't see how you can call GCN more forward thinking when it doesn't implement various DX12 features.
nVidia supports them all. Just because an AMD-paid async compute benchmark performs better on AMD doesn't mean anything. In fact, even Oxide themselves have gone on record to state that they didn't know much about async compute when they wrote it and only developed it for AMD hardware, and they don't recommend using it as a benchmarking tool.
Async compute is very sensitive to the underlying implementation. You can't write a single codepath that performs optimally on AMD, nVidia and Intel hardware. You need to perform hardware-specific optimization for each specific architecture.
If the shoe was on the other foot, AMD fanboys would be crying out 'no fair' everywhere. Just look at AMD CPUs, where some popular benchmarks are optimized for Intel CPUs, some even compiled with Intel's own compiler.
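For context: at the API level, 'async compute' is nothing more exotic than submitting work on a second, compute-only queue next to the graphics queue; whether the hardware actually overlaps the two is entirely up to its scheduler. A minimal D3D12 sketch, assuming you already have a valid ID3D12Device (error handling trimmed):

    // Minimal sketch: create a graphics (DIRECT) queue and a separate
    // compute-only queue. Work submitted to the compute queue *may* overlap
    // with graphics work - how much depends on the hardware scheduler,
    // which is why the gains differ so much per architecture.
    #include <d3d12.h>
    #pragma comment(lib, "d3d12.lib")

    bool CreateQueues(ID3D12Device* device,
                      ID3D12CommandQueue** graphicsQueue,
                      ID3D12CommandQueue** computeQueue) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        if (FAILED(device->CreateCommandQueue(&desc, __uuidof(ID3D12CommandQueue),
                                              reinterpret_cast<void**>(graphicsQueue))))
            return false;
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only "async" queue
        return SUCCEEDED(device->CreateCommandQueue(&desc, __uuidof(ID3D12CommandQueue),
                                                    reinterpret_cast<void**>(computeQueue)));
    }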

F2bnp wrote:

I don't really think Pascal is all that different from Maxwell either, seems to be a mere die-shrink with subtle tweaks, but I haven't really looked into it too much.

I'm not sure why you're even bringing it up. Pascal doesn't have to be much more than a die-shrink with subtle tweaks, given that Maxwell is the benchmark as far as GPU architectures go. It was well ahead of anything AMD had on offer in the 28 nm era, and RX480 makes it painfully obvious just how difficult it is to get to Maxwell levels. AMD barely reaches that with 14 nm technology, and 2 extra years of GPU development.

F2bnp wrote:

I have 0 clue whether or not Feature Level 12.1 will help Nvidia, I guess we'll find out in the future DX12 titles (the new Deus Ex is coming out this August, should be interesting).

Rise of the Tomb Raider already uses it for voxel-based global illumination effects. You simply can't get these effects on those 'forward looking' AMD GPUs. That's the thing that surprises me most. Everyone talks about async compute, but it's not an actual rendering feature. It's more like HyperThreading or such. Yes, it can boost performance a bit, but it doesn't do anything new or revolutionary. If you can get the same or better performance without HyperThreading/async compute, your software will run exactly the same, look exactly the same etc.
DX12_1 on the other hand makes new volumetric effects possible, as well as other trickery, such as efficient 3D collision detection on the GPU. These things can't be emulated in any other way, because they'd be horribly inefficient if you implemented them with CPU or GPGPU routines instead. So where async compute may get you gains in the order of 10-15% performance, if used properly, DX12_1 gives you new effects that are out of reach on other hardware.
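These features are reported per device, so an engine can simply query for them and enable or skip the corresponding effects. A minimal sketch, assuming you already have a valid ID3D12Device:

    // Minimal sketch: check for the two DX12_1 features discussed above,
    // conservative rasterization and rasterizer ordered views (ROVs).
    #include <d3d12.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    void Report12_1Features(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &opts, sizeof(opts)))) {
            std::printf("Conservative rasterization tier: %d\n",
                        static_cast<int>(opts.ConservativeRasterizationTier));
            std::printf("Rasterizer ordered views:        %s\n",
                        opts.ROVsSupported ? "yes" : "no");
        }
    }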

F2bnp wrote:

-GTX 970 is not a 4GB 256bit card, but a 3.5GB 224bit card

This is the biased AMD-fanboy version.
The truth is that the GTX970 has 4 GB, divided into two clusters:
3.5 GB connected to a 224-bit wide interface
0.5 GB connected to a 32-bit wide interface

These two interfaces can be used at the same time.
So technically nVidia is 100% correct in stating it has 4 GB of memory. It actually does.
nVidia is not even wrong in saying that it has a 256-bit interface to that memory.
The only confusion is that it is clustered, so it does not act entirely the same as a single cluster of 4 GB connected to a single 256-bit interface.
But the benchmarks speak for themselves: you can't really tell the difference in performance between this card and a 'real' 4 GB card. Whatever nVidia has done exactly with the memory management in their drivers, it seems to work quite well. So I'm not really sure why people are making such a big deal out of this.
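To put some rough numbers on that split (a back-of-the-envelope sketch; the 7 Gbps effective GDDR5 rate is the commonly quoted figure for this card, used here as an assumption):

    // Rough bandwidth arithmetic for the split memory configuration described
    // above. The 7 Gbps effective GDDR5 data rate is an assumed figure.
    #include <cstdio>

    int main() {
        const double gbps_per_pin = 7.0;  // assumed effective GDDR5 data rate
        const double full_256bit = 256 / 8.0 * gbps_per_pin;  // hypothetical unified interface
        const double fast_224bit = 224 / 8.0 * gbps_per_pin;  // 3.5 GB cluster
        const double slow_32bit  =  32 / 8.0 * gbps_per_pin;  // 0.5 GB cluster
        std::printf("unified 256-bit: %.0f GB/s\n", full_256bit);
        std::printf("3.5 GB cluster:  %.0f GB/s\n", fast_224bit);
        std::printf("0.5 GB cluster:  %.0f GB/s\n", slow_32bit);
        return 0;
    }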

F2bnp wrote:

-Nvidia Gameworks bloat-ware 😵

I'm quite happy with things like GameWorks actually. Thanks to this, you actually get to see effects like PhysX, tessellation and VXGI in games, and you are actually able to use all those great features of your GPU.
AMD does exactly the same by the way, with their Gaming Evolved. How did you think async compute ended up in AotS and other games? And they had TressFX... which 'somehow' ran extremely poorly on nVidia cards at launch...
AMD's pockets aren't as deep as nVidia's, so their list of supported games is a bit smaller. But still, they're both doing exactly the same, so I don't see why you would just call out GameWorks. I think GameWorks is quite fair anyway, in being mostly just DX11-code, which would work fine on hardware from other vendors, assuming they have proper support for DX11 features such as tessellation. AMD did not, well boohoo. They sucked just as hard in tessellation games and benchmarks that were not part of GameWorks, so it's not like it's some big conspiracy.

F2bnp wrote:

-Planned obsolescence (just look at Kepler vs GCN nowadays)

Again, retarded hyperbole from the AMD camp.
You can't compare different games and try to establish a 'trend' like that.
Games and game engines evolve over time, and demands on hardware change. After all, the hardware itself changes. For example, at some point in time, 1 GB was a common VRAM size. These days it's more like 4 GB. So, games are now designed to make use of more and more detailed textures. Cards with less VRAM (or lower bandwidth) suffer harder on this. That's not planned obsolescence, that's just evolution. It might hit one vendor's products harder than another at any given time, but it's pretty random. Neither nVidia nor AMD have a crystal ball, so they can't really predict where things go during the lifetime of a GPU. Sometimes one gets more lucky, other times it's the other. But that doesn't mean in the least that your product's performance has degraded over time. The performance is still the same, it just doesn't translate to the same gaming capability as it once did.

Aside from that, there's the variable of drivers that can't be underestimated. By this metric, the vendor that gets its drivers in order first, will be the 'worst', because they are perceived as 'not gaining performance over time'. They just gained that performance earlier.
There are various sites that benchmark older games and older hardware with every new driver revision by the way, and nVidia shows consistent FPS throughout, and in some cases some boosts, sometimes even years after a card was launched, sometimes even with games that are years old. There are no signs of planned obsolescence whatsoever.

Last edited by Scali on 2016-06-30, 17:33. Edited 1 time in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 18 of 170, by Scali

F2bnp wrote:

There's also the GTX 1060, these will probably be released in two weeks, 14th July, but they will probably be more expensive than the RX 480, so they probably won't fit your budget. 1050 would be nice with sub 75W consumption, but there are 0 rumours about these, so they're probably far away.

I would expect the 1060 to be introduced together with a 'sister card', just like the 1080 and 1070. That will probably be the 1050. One being the full GPU, the other a harvested/cut-down version. It's what AMD and nVidia usually do.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 19 of 170, by PhilsComputerLab


Did any reviewer look into under-clocking / under-volting of this card? Plot a chart to see if they did indeed push it a bit too far.

I was hoping that the 460 and 470 would also get reviewed, but it turns out there doesn't even seem to be a launch date for these cards yet?

So more waiting is in order. I'm very keen for the 1060. I think Nvidia can go several ways here: charge a premium, or go right up against AMD.

They could make a killer product like the 8800 GT, that would be awesome and great for competition. Performance per watt of the 1060 should be outstanding when you look at perf / watt of the new 10 series cards.

I think I saw a photo of a partner card from MSI. What always annoys me about modern partner cards is that they like to take a mainstream card (like a 960), then slap 2 or 3 XXL fans on it, and you end up with a card that is larger than a 1080 🤣

The 460 is a card I'm eager to find out more about. How will it compare against the 750 Ti? That card has amazing perf / watt, is super tiny and has escaped the triple fan upgrade...

EDIT: Also I noticed a real lack of comparisons against older $200 cards. Adore TV is the only one who included a 750 Ti 😀 I asked a few reviewers, and he was the only one to respond positively 😊 Would have loved to see cards like the 560 Ti, 660, 760 and Radeon equivalents included.

YouTube, Facebook, Website