VOGONS

The GTX 1060 Thread


Reply 21 of 61, by nforce4max

Rank: l33t
Scali wrote:
nforce4max wrote:

Less critical lelelelel experience says otherwise

Really? How long have you had your Pascal-based cards now?

Sure, 97°C in a cool room; factor in climate and how rarely most people maintain their systems, and those VRM temps are going to be even higher.

On a far away planet reading your posts in the year 10,191.

Reply 22 of 61, by Scali

Rank: l33t
nforce4max wrote:

Sure, 97°C in a cool room; factor in climate and how rarely most people maintain their systems, and those VRM temps are going to be even higher.

Seems fine then, I guess. VRMs tend to be specced up to 125°C; high-quality ones can be specced up to 160°C or more.
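As a rough illustration of why that spec headroom matters: a common electronics rule of thumb (Arrhenius-style derating, roughly a doubling of expected component lifetime for every 10°C of operation below the rated temperature) can be sketched like this. The 125°C rating and the sample temperatures are illustrative numbers from this discussion, not measurements:

```python
def relative_lifetime(temp_c, rated_c=125.0):
    """Rule-of-thumb estimate: lifetime roughly doubles for every
    10 degrees C of operating temperature below the rated maximum."""
    return 2.0 ** ((rated_c - temp_c) / 10.0)

# A VRM running at 97 C has noticeably more headroom than one at 107 C,
# even though both are 'within spec' for a 125 C-rated part.
for t in (97, 107, 117):
    print(t, round(relative_lifetime(t), 2))
```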

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 23 of 61, by archsan

Rank: Oldbie
nforce4max wrote:

Sure, 97°C in a cool room; factor in climate and how rarely most people maintain their systems, and those VRM temps are going to be even higher.

Is that the 1080 or the 1070, FE or custom? Any recommendations for custom card makers known for the best quality/longevity?

"Any sufficiently advanced technology is indistinguishable from magic."—Arthur C. Clarke
"No way. Installing the drivers on these things always gives me a headache."—Guybrush Threepwood (on cutting-edge voodoo technology)

Reply 24 of 61, by clueless1

Rank: l33t

I'm a little sad that new video cards only offer one DVI connection these days. DisplayPort is the future, but what about those of us with multiple DVI monitors? We have to invest in adapters...😒

The more I learn, the more I realize how much I don't know.
OPL3 FM vs. Roland MT-32 vs. General MIDI DOS Game Comparison
Let's benchmark our systems with cache disabled
DOS PCI Graphics Card Benchmarks

Reply 25 of 61, by dexter311

Rank: Member
clueless1 wrote:

I'm a little sad that new video cards only offer one DVI connection these days. DisplayPort is the future, but what about those of us with multiple DVI monitors? We have to invest in adapters...😒

They also have HDMI, which only requires a passive adapter to DVI. They've been like this for a long time now; the only difference is that most have dropped the single-link DVI port.

AFAIK modern cards only need passive DP-to-DVI adapters anyway, so it's not like you need to buy expensive active ones.

Reply 26 of 61, by SPBHM

Rank: Oldbie

I like the card. It's a little bit faster (and a little more expensive) than the RX 480, but the 480 4GB still looks like the better buy (if you can find these cards at their correct prices). I'm also excited about the cheaper stuff: the RX 470 is looking good (in terms of specs vs. price), and there is supposedly a 3GB "1060" coming.

The 1060 is certainly a lot more attractive than the 960 was, but the RX 480 is a good competitor. The good thing for the 480 is performance in DX12 and Vulkan (AMD has been investing in that since Mantle years ago, and all the consoles use AMD GCN hardware and are the base version for most games), but the 1060 is overall a little faster right now.

It's also good that the 1060 is actually good, because the 1070 is looking too expensive compared to what the 970 was.

And yes, if you don't want to go higher than 1920x1200, the HDMI-to-DVI adapters are ultra cheap; if you need higher resolution or refresh rate you might be in trouble (more expensive adapters are required).
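A quick back-of-the-envelope sketch of why the cheap (passive, single-link) HDMI-to-DVI adapters top out around 1920x1200@60Hz: single-link TMDS is limited to a 165 MHz pixel clock, and the required clock grows with resolution, refresh rate, and blanking overhead. The ~12% blanking figure below is an assumption (roughly CVT reduced blanking), not an exact timing calculation:

```python
SINGLE_LINK_MAX_MHZ = 165.0  # TMDS pixel-clock limit for single-link DVI/HDMI

def approx_pixel_clock_mhz(width, height, refresh_hz, blanking=1.12):
    # active pixels per second, padded by ~12% for blanking intervals
    return width * height * refresh_hz * blanking / 1e6

for mode in [(1920, 1080, 60), (1920, 1200, 60), (2560, 1440, 60)]:
    clk = approx_pixel_clock_mhz(*mode)
    verdict = "passive adapter OK" if clk <= SINGLE_LINK_MAX_MHZ else "needs dual-link/active"
    print(mode, f"{clk:.1f} MHz", verdict)
```

1920x1200@60 squeezes in just under the limit (with reduced blanking); 2560x1440@60 does not, which is why higher resolutions need dual-link or active adapters.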

Reply 27 of 61, by Scali

Rank: l33t
SPBHM wrote:

It's also good that the 1060 is actually good, because the 1070 is looking too expensive compared to what the 970 was.

Yeah, that's because the 1070 is also good, perhaps too good 😀 It pretty much matches the 980 Ti in performance.
I expect that nVidia has started the 10x0 series quite high up in price, and will gradually drop prices over time (they will probably have to, if they still want a 1080 Ti and a Titan above the GTX 1080 without going completely nuts on the price tag).


Reply 28 of 61, by SPBHM

Rank: Oldbie
Scali wrote:
SPBHM wrote:

It's also good that the 1060 is actually good, because the 1070 is looking too expensive compared to what the 970 was.

Yeah, that's because the 1070 is also good, perhaps too good 😀 It pretty much matches the 980 Ti in performance.
I expect that nVidia has started the 10x0 series quite high up in price, and will gradually drop prices over time (they will probably have to, if they still want a 1080 Ti and a Titan above the GTX 1080 without going completely nuts on the price tag).

The 970 also beats the 780 Ti in newer games (it was slightly behind at launch, but more of a match), and the 970 was priced a lot more aggressively, so that's not all that impressive (but they had competition from the 290s).

I think the RX 480 forces Nvidia to stay reasonable with the 1060, while there is no real competitor for the 1070/1080.

Reply 29 of 61, by MusicallyInspired

Rank: Oldbie

Yikes, maybe I should have waited: I just bought a 970 on the 1st, and the 1060 is $40 cheaper. Of course, they're out of stock... ah well.

Yamaha FB-01/IMFC SCI tools thread
My Github
Roland SC-55 Music Packs - Duke Nukem 3D, Doom, and more.

Reply 30 of 61, by luckybob

Rank: l33t

I got the ATI 480.

The future is DX12/Vulkan, and the 480 will be king of those games. It took me almost three weeks to get one. Plus, the future option of CrossFire and the lower price made it an easy decision.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 31 of 61, by Scali

Rank: l33t
luckybob wrote:

The future is DX12/Vulkan, and the 480 will be king of those games.

Not really. The gap between AMD and nVidia is (currently) a bit smaller in DX12/Vulkan. But AMD still loses.
DOOM Vulkan is also artificially inflating AMD's numbers, by enabling various features and extensions on AMD-hardware, where nVidia runs a 'vanilla' Vulkan path with no special optimizations (in fact, nVidia has not even made a 'Game Ready' driver for DOOM Vulkan yet, so there's no game-specific optimizations on the driver side even).
So the performance is not representative of Vulkan, but rather of vendor-specific optimizations (their use of the shader intrinsics extension is basically a form of shader replacement; the exact same thing can be, and has been, done for any other API out there). And we all know who the king of vendor-specific optimizations is.

Also, AMD misses some crucial DX12 rendering features (feature level 12_1), which I expect to become commonplace in the near future; mainly conservative rasterization, which allows an efficient voxel/raytracing solution for global illumination (Rise of the Tomb Raider already makes use of this).

So no, I don't think the 480 will be the king of DX12/Vulkan games.
FutureMark's Time Spy is probably the most 'neutral' DX12-benchmark out there currently (since both AMD and NVidia were actively involved in developing it, and the benchmark uses a single code-path for all hardware, no special tricks to favour any specific vendor), and the GTX1060 and RX480 are pretty much tied there (that is, with async on. Turn it off, and the RX480 drops further back).
I expect to see the same in neutral DX12/Vulkan games (if there even is such a thing), and any vendor-sponsored games will favour whichever vendor happens to be sponsoring that title.


Reply 32 of 61, by clueless1

Rank: l33t
luckybob wrote:

I got the ATI 480.

The future is DX12/Vulkan, and the 480 will be king of those games. It took me almost three weeks to get one. Plus, the future option of CrossFire and the lower price made it an easy decision.

Let us know how you like it. Always curious to hear a Vogoner perspective on such things.

Cheers.


Reply 33 of 61, by laxdragon

Rank: Member
luckybob wrote:

The future is DX12/Vulkan, and the 480 will be king of those games.

First off, the 480 is a perfectly fine mainstream card. I'm sure you will be very happy with it. But calling it king is a bit of a stretch: in the latest DX12 3DMark benchmark, the 1060 narrowly beats the 480 with async compute both on and off. See: https://www.youtube.com/watch?v=dTJPp7WdEBs

The video shows that with async on, the 480 gets a nice bump. Nvidia does not see quite so much of a bump. 1060 still comes out on top in any case.

edit: It has come to my attention that the 3DMark bench did not really support full async, more of a preemptive mode. Also, AMD currently wins in DOOM because of some API squabble between id and nvidia.

ANYWAY... either card will run everything out now at awesome fps and visuals. All other arguments are just fanboy-ism.

laxDRAGON.com | My Game Collection | My Computers | YouTube

Reply 34 of 61, by Scali

Rank: l33t
laxdragon wrote:

The video shows that with async on, the 480 gets a nice bump. Nvidia does not see quite so much of a bump. 1060 still comes out on top in any case.

Async is not enabled yet for DOOM on NVidia hardware, as you can read in the DOOM Vulkan FAQ.

laxdragon wrote:

edit: It has come to my attention that the 3Dmark bench did not really support full async, more of a preemptive mode.

These are lies spread by clueless AMD fanboys, who read something about pre-emption in NVidia's whitepaper, and don't quite understand the whole story.
Firstly, the DX12 API cannot control how async workloads are executed, so you can't specify whether you want 'full async', 'pre-emptive mode', 'sequential', or whatever.
Only the driver can decide this, the application can't even tell what the driver does. FutureMark explains it very well in their press release: http://www.futuremark.com/pressreleases/a-clo … 3dmark-time-spy
Secondly, fine-grained pre-emption is ONE feature that Pascal has introduced to improve latencies on async workloads: http://www.anandtech.com/show/10325/the-nvidi … ition-review/10
This is more relevant to combinations of heavy/long and high-priority/time-critical workloads, and doesn't really apply to Time Spy.
There is however ANOTHER feature for async compute, and that is a hardware-based dynamic load balancing: http://www.anandtech.com/show/10325/the-nvidi … dition-review/9
This is very similar to AMD's implementation with ACE's, which is why Time Spy gets decent gains from the same code running on either AMD or NVidia Pascal cards.

Maxwell v2 does not have this feature, and its async compute is more difficult to optimize for. That is why the driver makes all the code run sequentially by default, which is usually faster than running async code that is not properly tuned for the architecture.

Effectively, that means turning async on or off in Time Spy does nothing on Maxwell v2: even though Time Spy uses multiple command queues when async is on, the driver puts them into a single queue internally, which comes down to the same thing as running Time Spy with async off, where the application puts everything into a single queue.
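The point about the driver folding multiple queues into one can be sketched with a toy timing model (my own illustration with made-up workload times, not FutureMark's code): the application only chooses how many queues it submits to; whether the work actually overlaps is up to the driver and hardware:

```python
graphics_ms = [4.0, 3.0, 5.0]  # hypothetical graphics command lists (ms each)
compute_ms = [2.0, 2.5]        # hypothetical async compute command lists (ms each)

def serialized(gfx, cmp):
    # Maxwell v2 default: driver merges both queues and runs everything
    # back to back, so 'async on' costs the same as 'async off'
    return sum(gfx) + sum(cmp)

def overlapped(gfx, cmp):
    # idealized hardware load balancing (Pascal/GCN style): the compute queue
    # executes concurrently with graphics, hiding the shorter workload
    return max(sum(gfx), sum(cmp))

print("serialized:", serialized(graphics_ms, compute_ms), "ms")
print("overlapped:", overlapped(graphics_ms, compute_ms), "ms")
```

Under this model the overlapped schedule is never slower, which is why hardware with real concurrent execution shows a gain from async while serializing hardware shows none.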

Another article dealing with Time Spy: http://www.extremetech.com/gaming/232122-lies … 2-time-spy-test


Reply 35 of 61, by nforce4max

Rank: l33t
archsan wrote:
nforce4max wrote:

Sure, 97°C in a cool room; factor in climate and how rarely most people maintain their systems, and those VRM temps are going to be even higher.

Is that the 1080 or the 1070, FE or custom? Any recommendations for custom card makers known for the best quality/longevity?

Those VRM temps were on a stock 1060 at a room temperature of 22°C or so; not good in the long run. I still remember the Fermi era, when VRMs were specced for 115°C+ but were dropping left, right, and center.

Just look for custom models that use a different PCB design with a good phase count; then everything will be fine, and the overclocking will be a bit better too.


Reply 36 of 61, by archsan

Rank: Oldbie
nforce4max wrote:

Just look for custom models that use a different PCB design with a good phase count; then everything will be fine, and the overclocking will be a bit better too.

Unfortunately, not all manufacturers list that in their specs. Is there a jonnyguru equivalent for graphics cards?

I bought reference GTX 470s days after launch and don't want to repeat that... I sold one early and still have one that survives to this day, though that's mostly because it's been very rarely used.

Anyway, this pretty much explains the "FE" cards being largely ignored on the market. I almost shed a tear for those who preordered and bought them enthusiastically earlier. Reminds me of my 2010 self.

edit: OK, I found this and this and some more like this. I hope that for non-OC purposes, 8-phase (e.g. Palit Super Jetstream) is good enough? I could really use a primer on this.


Reply 37 of 61, by Scali

Rank: l33t

Some interesting figures on GTX1060 and RX480 in DOOM (OpenGL and Vulkan):
[benchmark chart: GTX 1060 vs RX 480 framerates in DOOM, OpenGL and Vulkan, across several CPUs]

As you can see, the RX480 is only faster with a high-end CPU; with lower-end CPUs, the framerate still drops quite hard. So it looks like AMD's drivers still have considerably more CPU overhead, even in Vulkan.
Which means that the gains AMD is demonstrating in DOOM aren't coming from being more efficient in the API, but rather from the AMD-specific shader path simply making the rendering itself more efficient. If they used the same shaders as in OpenGL (or the ones nVidia uses in Vulkan), they probably wouldn't beat nVidia even on fast CPUs; apparently on a fast CPU there's barely any performance difference anyway, because you become GPU-limited even in OpenGL.
We are talking a 4 fps drop on nVidia vs a 50 fps drop on AMD, from the fastest CPU to the slowest. So by that logic, nVidia is actually the one doing best in Vulkan: removing CPU limits is what Vulkan should be all about. All the more painful that on these slow CPUs, even nVidia's OpenGL performance is higher than AMD's Vulkan performance.
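The CPU-overhead argument can be sketched with a simple bottleneck model (illustrative numbers only, not the measured figures from the chart): a frame can't finish faster than its slowest stage, so a driver with light CPU-side work holds its GPU-limited framerate across CPUs, while a heavy driver collapses on slow ones:

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # simple bottleneck model: frame time is set by the slower of the two stages
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

GPU_MS = 8.0  # hypothetical GPU-limited frame time (a 125 fps ceiling)

# lean driver: a few ms of CPU work per frame; heavy driver: double that
for cpu_ms, label in [(3.0, "fast CPU"), (6.0, "mid CPU"), (12.0, "slow CPU")]:
    print(label,
          "lean driver:", round(fps(cpu_ms, GPU_MS), 1),
          "heavy driver:", round(fps(cpu_ms * 2, GPU_MS), 1))
```

In this sketch the lean driver stays pinned at the GPU ceiling until the CPU is very slow, while the heavy driver falls off much earlier, which matches the shape of the nVidia vs AMD curves described above.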


Reply 38 of 61, by luckybob

Rank: l33t

All the major game engines coming out in the future will be Vulkan-enabled. https://en.wikipedia.org/wiki/Vulkan_(API)#Game_engines

Green team can suck it.


Reply 39 of 61, by Scali

Rank: l33t
luckybob wrote:

Green team can suck it.

You mean the red team: apparently their Vulkan drivers are severely CPU-limited, and they can only reach nVidia's performance by having specific shader optimizations AND a fast CPU.
That means AMD will lose in any game where they can't get their own shader optimizations in, where their optimizations don't gain enough, or where nVidia puts in equally good or better shader optimizations.

Last edited by Scali on 2016-07-21, 19:16. Edited 1 time in total.
