Comeback for AMD? [Polaris]


Reply 60 of 170, by Scali

PhilsComputerLab wrote:

Was the R9 270X an especially "bad" perf / watt card?

Not really, at least, not back then: https://www.techpowerup.com/reviews/AMD/R9_270X/27.html
But it is a strange pick. Quite an old card (this is pre-Maxwell), and also a relatively low-end card.

Note also that they compare the RX470 there, not the RX480. The 470 hasn't been released yet... The RX480 seems to be quite a different story, at least.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 61 of 170, by PhilsComputerLab


I think it's clear what happened with the 480. They wanted, at all costs, to end up competitive with the 970, and may have used voltages and clock speeds that are not ideal for perf / watt.

The 470 is likely dialed down enough to have better perf / watt. They do talk about 110 W.

I haven't seen any reviews of the 4 GB cards. I take it the 8 GB version needs more power than a 4 GB version?

New info about "PowerGate"

AMDJoe - Today at 10:41 AM As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).

YouTube, Facebook, Website

Reply 62 of 170, by Scali

PhilsComputerLab wrote:

I haven't seen any reviews of the 4 GB cards. I take it the 8 GB version needs more power than a 4 GB version?

Yup... aside from the obvious fact that there are twice as many memory chips to power, AMD also runs the memory on the 4 GB card at a lower clock, so that will drop its power even further.
I think that's another strange choice. Historically you'd find that the card with more memory runs at a lower clock, to compensate a bit for the extra power draw (and to improve signal quality). The same happens on many motherboards/integrated memory controllers: they tend to be able to run 2 sticks of memory at higher speeds than 4 sticks.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 63 of 170, by Tiger433


It's bad that AMD has problems with its new GPU. That i740 was good years ago though; I even tried GTA: Vice City on that card, and even that ran well 🤣

W7 "retro" PC: ASUS P8H77-V, Intel i3 3240, 8 GB DDR3 1333, HD6850, 2 x 500 GB HDD
Retro 98SE PC: MSI MS-6511, AMD Athlon XP 2000+, 512 MB RAM, ATI Rage 128, 80GB HDD
My Youtube channel

Reply 64 of 170, by archsan


Follow-up to the aforementioned article:
http://wccftech.com/amd-rx-480-pcie-power-iss … g-investigated/

I wish to see all this come to pass ASAP... then RX 490/495, Vega, GP102 variants... I don't believe this is the prime time to be getting a DX12 card anyway.

Tiger433 wrote:

It's bad that AMD has problems with its new GPU. That i740 was good years ago though; I even tried GTA: Vice City on that card, and even that ran well 🤣

We need more active players in the enthusiast GPU market, that's for sure!

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 65 of 170, by Tiger433


For DX12 games and GPUs it's better to wait some time, maybe a year or a year and a half. Maybe AMD will come back in that time with a good CPU and GPU; I hope they do. It would also be good timing for the big return of the Intel i740 🤣

W7 "retro" PC: ASUS P8H77-V, Intel i3 3240, 8 GB DDR3 1333, HD6850, 2 x 500 GB HDD
Retro 98SE PC: MSI MS-6511, AMD Athlon XP 2000+, 512 MB RAM, ATI Rage 128, 80GB HDD
My Youtube channel

Reply 66 of 170, by Scali

archsan wrote:

I don't believe this is the prime time to be getting a DX12 card anyway.

Depends... If you're waiting for something faster than 1080, then perhaps not yet.
Otherwise, both AMD and nVidia have released their latest architectures, so this is exactly what you'll be getting for the next 12-24 months, until they do another refresh, which may or may not significantly shake up things like performance and price.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 67 of 170, by archsan


If NVIDIA were to follow their previous schedule, a "Titan" model would be released in about 6 months, and then a "Ti" a couple of months later. AMD's Vega should also keep that schedule in check, so that the Green team doesn't delay things for too long like with the 680 (almost a full year from GK104 to GK110).

In the meantime, I'm torn between going 'low' for a 780/Ti (in terms of perf/power it's already obsolete -- though interestingly it can be used for a tricked-out XP rig), or reaching higher with either a 980 Ti (more CUDA cores, 250W TDP) or a 1070 (fewer cores, much faster clock, 150W TDP). Pascal CUDA probably won't be ready for rendering apps until August or later, making 980Ti still relevant in that regard. My bets are 980 Ti will still beat at least the 1070 in Octane Bench, but damn that lower power is handy.

In terms of power efficiency, Pascal is quite astonishing already; it makes you wonder what it can do with a 250W limit. I don't know how much Vega will be different (or the same) to Polaris, but for the sake of good competition, I really hope it's something REALLY different.

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 68 of 170, by Scali

archsan wrote:

Pascal CUDA probably won't be ready for rendering apps until August or later, making 980Ti still relevant in that regard. My bets are 980 Ti will still beat at least the 1070 in Octane Bench

Why would you think that?
I see no reason to assume Pascal would perform significantly worse than Maxwell in CUDA. Pascal is just a newer iteration of the same architecture.
You wouldn't expect Intel's next Core i7 to be slower than the current one either, would you?

Besides, don't forget that Pascal actually debuted in the Tesla P100 series, which are mainly for CUDA/supercomputing.

archsan wrote:

I don't know how much Vega will be different (or the same) to Polaris, but for the sake of good competition, I really hope it's something REALLY different.

It's not. VEGA is to Polaris as GP100/GP102/whatever they will call it is to GP104: the same architecture implemented on a larger scale.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 69 of 170, by archsan

Scali wrote:
archsan wrote:

Pascal CUDA probably won't be ready for rendering apps until August or later, making 980Ti still relevant in that regard. My bets are 980 Ti will still beat at least the 1070 in Octane Bench

Why would you think that?
I see no reason to assume Pascal would perform significantly worse than Maxwell in CUDA. Pascal is just a newer iteration of the same architecture.
You wouldn't expect Intel's next Core i7 to be slower than the current one either, would you?

I have no doubt a 1080 "Ti" will obliterate the 980 Ti fair and square. Or that the 1080 and 1070 will obliterate the 980. It is a little difficult to compare between different gens and different configurations, but you can see how they historically compare in Octane; look at 580 vs 680, 780 Ti vs 980 (also 780 vs 970):

https://render.otoy.com/octanebench/results.p … er=&singleGPU=1

Also, with OC cards like the Palit 980 Ti Super JetStream (~1300 MHz core), raw performance in DOOM is roughly between the 1080 and 1070, so that would make an interesting comparison too.

Scali wrote:
archsan wrote:

I don't know how much Vega will be different (or the same) to Polaris, but for the sake of good competition, I really hope it's something REALLY different.

It's not. VEGA is to Polaris as GP100/GP102/whatever they will call it is to GP104: the same architecture implemented on a larger scale.

Then that's not good. I'm afraid they're so far behind in terms of perf/watt that even a dual-GPU flagship card won't save them this time around.

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 70 of 170, by archsan


Pascal CUDA probably won't be ready for rendering apps until August or later, making 980Ti still relevant in that regard

And in case you were wondering about that part, it's simply something I take from Octane Render's forum re: CUDA 8 support. GTX 1070 and 1080 are still not usable in OR right now.

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 71 of 170, by Scali

archsan wrote:

Pascal CUDA probably won't be ready for rendering apps until August or later, making 980Ti still relevant in that regard

And in case you were wondering about that part, it's simply something I take from Octane Render's forum re: CUDA 8 support. GTX 1070 and 1080 are still not usable in OR right now.

I think you misinterpret that. They probably mean that they don't use the *new* CUDA stuff in Pascal yet (known as 'CUDA 8'): https://devblogs.nvidia.com/parallelforall/cu … tures-revealed/
But Pascal is fully backward-compatible, so it will work fine with older versions of CUDA, and all CUDA software works fine on 1070/1080.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 72 of 170, by archsan


Well, I'm not the dev so...
https://render.otoy.com/forum/viewtopic.php?f … tart=20#p280269

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 73 of 170, by Scali

I have no idea what Octane even is, or what it's supposed to do... but if it doesn't work, they're doing something wrong. Pascal does not require specific modifications before a CUDA application can run.
Here are some CUDA benchmarks, and the 1080 does great in them: http://www.phoronix.com/scan.php?page=article … 1080-cuda&num=2

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 74 of 170, by spiroyster


What the dev is saying there is that they tried to implement a CUDA 8 driver, which hasn't gone too well. They seem to think the problem is with the cards' implementation (nvidia's problem), hence bouncing the problem back to them ("conversations with nvidia").

I suspect this means you can get a new 1070/1080 and still use Octane (it may be faster than the previous iteration anyway, being the next generation of card); it just won't be using CUDA 8. Whether or not this API/SDK is better than the previous one is anyone's guess (abstrax seems to think it's not).

Reply 75 of 170, by archsan

Scali wrote:

I have no idea what Octane even is, or what it's supposed to do... but if it doesn't work, they're doing something wrong. Pascal does not require specific modifications before a CUDA application can run.

The mumbo jumbo is that it's an "unbiased, physically-based renderer"... it takes a sec to google/bing image search it to see what it's like, I guess. It's similar to iRay for 3ds Max.

And is it so strange that adding full support for new gen hardware to a piece of production-grade software (I don't know how large/small their team is) would take some time? They've just released a major new version, and it seems that the CUDA 8 toolkit is not even a final release yet.

Here are some CUDA benchmarks, and the 1080 does great in them: http://www.phoronix.com/scan.php?page=article … 1080-cuda&num=2

Thanks for that link, the clock boost does help greatly with the 1080 I see. Now, 1070 vs 980 Ti on CUDA: http://www.phoronix.com/scan.php?page=article … -gtx-1070&num=4

Between them, it's not clear-cut yet which card does better on which application, and it's not like I wanted the Pascal specimen to perform worse, even if it's a scaled-down version. If the 1070 beats the 980 Ti in OR3 straight away, all the better... since 8 GB is always nicer than 6 GB, 150W is always better than 250W, etc.

spiroyster wrote:

What the dev is saying there is that they tried to implement a CUDA 8 driver, which hasn't gone too well. They seem to think the problem is with the cards' implementation (nvidia's problem), hence bouncing the problem back to them ("conversations with nvidia").

I suspect this means you can get a new 1070/1080 and still use Octane (it may be faster than the previous iteration anyway, being the next generation of card); it just won't be using CUDA 8. Whether or not this API/SDK is better than the previous one is anyone's guess (abstrax seems to think it's not).

A couple of members there reported that they can't use their 1080s yet. It will just take a little bit more time until it's officially supported, I'm sure, and then it's happily ever after. It's happened before with new gen cards, no biggie.

Alright, it's getting into minute off-topic details already. 😜 Let's get back to talking about AMD's demis--um, comeback.

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 76 of 170, by Scali

archsan wrote:

And is it so strange that adding full support for new gen hardware to a piece of production-grade software (I don't know how large/small their team is) would take some time? They've just released a major new version, and it seems that the CUDA 8 toolkit is not even a final release yet.

Yes, that is strange. CUDA is a framework. It's comparable to OpenGL, DirectX, or the Windows API for example.
Whenever a new CPU or GPU is released, software 'just works', because these frameworks are abstractions of the hardware, allowing you to plug any kind of hardware underneath, regardless of the exact architecture/design/implementation details.
No need to recompile or modify any code. All your existing software works as-is, out-of-the-box. This is in fact one of the biggest selling points of CUDA: the widespread application support.
All you need to run CUDA apps is a CUDA driver (which comes as a standard part of the GeForce driver set). The toolkit is for *development*, end-users don't need it.

The new toolkit and development is only required when you want to actually *optimize* for this new architecture, and make the most of the architecture's strengths and new features.
But even without doing that, CUDA software should just work, and perform well.

archsan wrote:

Between them, it's not clear-cut yet, which card does better on which application

Note also that the CUDA 8 toolkit is still very new, so I doubt that these applications make much use of the new Pascal features yet, if at all.
So some applications may perform better on 1070/1080 once a newer version of the software starts using these features.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 77 of 170, by archsan

Scali wrote:

All you need to run CUDA apps is a CUDA driver (which comes as a standard part of the GeForce driver set). The toolkit is for *development*, end-users don't need it.

OK, the OP of that linked thread: a user buys a 1080, installs CUDA 8.0, runs OR3, and then finds out that the card is not detected/supported by the program. That simple really. For _whatever_ reason.

See, it's not just limited to that one example:
https://forum.nvidia-arc.com/showthread.php?1 … 80-be-supported (re: iRay)
http://www.daz3d.com/forums/discussion/89996/ … 80-iray-support (same as above)
https://forums.geforce.com/default/topic/9402 … -am-i-missing-/ (re: Furryball GPU renderer, Octane, and Skanect)

CUDA is fully supported, but those software needs to be updated with CUDA Toolkit 8.0 to support Pascal GPUs. [snip]

CUDA Toolkit 8.0 is only in RC atm and can only be downloaded by registered developers, unknown when the final public release will be out.

If you're a dev for a similar application, then please go ahead and tell them (=all those lazy developers) they're not doing it right. No point discussing it any further with this layperson here. 😀

[Admins, sorry for derailing this thread, feel free to split it out anyway you like.]

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 78 of 170, by Scali

archsan wrote:

OK, the OP of that linked thread: a user buys a 1080, installs CUDA 8.0, runs OR3, and then finds out that the card is not detected/supported by the program. That simple really. For _whatever_ reason.

Like I said, that program apparently does something weird somewhere.
Just download GPU Caps Viewer, for example; it can also list the installed CUDA devices: http://www.ozone3d.net/gpu_caps_viewer/
Even older versions (versions released long before Pascal) should detect the 1080 as a CUDA device without a problem.
Or GPU-Z...
Here's a screenshot of GPU-Z on 1080. Shows CUDA support:
[screenshot: nvidia-gtx1080-gpuz.jpg]
Detecting and using CUDA is not rocket science. But some devs just don't RTFM, or try to be overly creative with their routines, creating some dependencies on specific hardware or driver versions or whatever...

archsan wrote:

If you're a dev for a similar application, then please go ahead and tell them (=all those lazy developers) they're not doing it right.

nVidia is already helping them to solve their problems apparently. They say that "those software" needs changes, which implies that it's specific to that software, not a general CUDA problem on Pascal cards (in which case you wouldn't need to update the software anyway, you'd need to update the CUDA driver).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 79 of 170, by mr_bigmouth_502


I was excited for it at first, but I've heard it's been kind of a disappointment. My main concern with it is actually the amount of power it draws from a PCI-E slot, since it reportedly runs out of spec. You'd think having an external power connector would rectify that, but it doesn't. If I ever get one, I'm gonna hold out for a redesign that doesn't draw excess power from the PCI-E slot. Hopefully it'll be cheaper by then too, because right now its price-performance ratio doesn't seem that good.

I'm a fan of AMD and I like rooting for them since they're the only real competition Nvidia and Intel have, but I've seriously considered jumping ship on a number of occasions. I mean, technically I'm using Intel right now since I'm typing this on my Thinkpad, and if I ever invest in an external GPU, that pretty much means I'd have to go Nvidia since that's the only way I'd be able to have an external GPU output on the main screen. If AMD ever implements a similar feature in its drivers however, then I might not jump to Nvidia. 😉