Reply 260 of 279, by TELVM
Anandtech review of the 6c/12t Ryzen 5 1600X: http://www.anandtech.com/show/11244/the-amd-r … eads-vs-four/17
Let the air flow!
There's been an interesting development: Nvidia's driver apparently doesn't scale beyond 4 cores. Using an AMD GPU alleviates that and makes the Ryzen CPUs run a lot better, substantially closing the gaming gap with the 7700K. You can see it in the Anandtech review as well; check out GTX 1060 vs RX 480 in Rocket League, a DirectX 9 title(!).
AdoredTV apparently "discovered" it, and since then other publications and benchmarkers have picked up on it. https://www.youtube.com/watch?v=0tfTZjugDeg
wrote:You can see it in the Anandtech review as well, check out GTX 1060 vs RX 480 in Rocket League, a DirectX 9 title(!).
I think it's jumping to conclusions based on a single outlier.
There's no way of knowing why that is. It doesn't necessarily have to do with the driver either. It could be a specific sequence of draw calls/states that leads to serialization on one architecture, but not on the other (so you can't 'fix' that in the drivers, it is what it is. AMD has that issue in DX11 with GCN, which is why they can't scale with deferred contexts on separate threads).
I would say it's quite peculiar that we see a DX9 title here. I wouldn't expect NV or AMD to still bother optimizing their drivers for the latest DX12+ GPUs for DX9 games. The performance and scaling in DX9 is probably all over the place. It doesn't seem to happen in any DX11/DX12 titles.
Another thing I find somewhat peculiar is that no Core i7s were included in the results. At the very least it would be interesting to see if the addition of HT would change anything in terms of multithreaded scaling. After all, they also included the Ryzen 7 for reference.
Really, those YT videos are usually a bunch of nonsense. It's a waste of time, these people don't really know what they're talking about.
The real reason why AMD can't support it is because of the way they handle rendering states:
They use actual commands that they insert into the command stream during execution.
Now the problem for them with DX11 is: you don't know when these deferred command lists are actually executed, ergo you don't know what state the GPU is going to be in.
This means that they have to postpone the generation of the final GPU-native rendering buffer until it is actually executed, basically serializing the workload.
Of course, with clever driver work, you may still be able to get around that somewhat, by using some placeholders in the stream and some self-modifying code or such. But AMD never bothered with that, and just told developers not to use the feature.
NV's hardware is fundamentally different here, and therefore doesn't suffer from that issue. It's far simpler for them to compile deferred commandlists directly to GPU-native code.
And they even went beyond that: they found ways to speed up even 'single-threaded' DX11 code with multiple threads inside the driver.
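The serialization point described above can be illustrated with a toy model (a sketch only; none of these names are real D3D11 or driver APIs): recording a command list is cheap and can happen on worker threads, but if state changes travel as commands inside the stream, any command whose native encoding depends on the current state can only be finalized when the list is actually executed, in order, on one thread.

```cpp
#include <string>
#include <vector>

// Hypothetical model of the issue; not a real D3D11 or driver API.
struct Cmd {
    std::string op;   // e.g. "SetState:alpha" or "Draw"
    bool needsState;  // final encoding depends on current GPU state
};

struct CmdList {
    std::vector<Cmd> recorded; // built in parallel, state unknown here
};

// Execution walks the stream in order, tracking state set by commands
// in the stream itself, and patches state-dependent commands against
// that state. This pass is inherently serial, which is the modeled
// reason spreading recording across threads buys little here.
std::vector<std::string> Execute(const CmdList& cl, std::string& state) {
    std::vector<std::string> native;
    for (const Cmd& c : cl.recorded) {
        if (c.op.rfind("SetState:", 0) == 0)
            state = c.op.substr(9);  // state command inside the stream
        if (c.needsState)
            native.push_back(c.op + "[" + state + "]"); // patched only now
        else
            native.push_back(c.op);
    }
    return native;
}
```

In this model, a draw recorded in one list can depend on state left behind by an earlier list, which is only knowable once that earlier list has executed; that ordering dependency is what prevents finalizing the GPU-native buffer at record time.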
I posted the video at 22:59. The video is 19 minutes long (and quite interesting). Yet by 23:09 Scali already "knows" that it's "a bunch of nonsense". 🤣
Let the air flow!
I don't think he bothered with that video I posted either, since as he said " It doesn't seem to happen in any DX11/DX12 titles."
I've watched the relevant parts of the video, and it went on about differences in hardware scheduling. Which is NOT the problem, as I explained above (if anything, hardware scheduling should make it easier to execute deferred command lists, not harder).
Not sure what your problem is anyway, I clearly explained it, and clearly the video says other things (unless I happened to miss the part where they explain how states are managed inside native command buffers?). So why attack me personally about whether or not I watched the video, instead of commenting on the actual information I posted?
Nevermind, I already know the answer.
Seems like a new bug was discovered on Ryzen related to VME and legacy OSes (it affects both physical machines and VMs).
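For anyone curious where VME shows up: it's reported in CPUID leaf 1, EDX bit 1. A minimal sketch (assuming GCC/Clang on x86, using `<cpuid.h>`); note the Ryzen bug isn't a missing flag, VME is advertised but misbehaves, so this only shows where the feature is reported, not whether it actually works:

```cpp
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif

// VME (Virtual-8086 Mode Enhancements) is CPUID.01H:EDX bit 1.
bool vme_bit_set(unsigned int edx) {
    return (edx & (1u << 1)) != 0;
}

// Queries the running CPU for the VME feature flag.
bool cpu_reports_vme() {
#if defined(__x86_64__) || defined(__i386__)
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;       // CPUID leaf 1 unavailable
    return vme_bit_set(edx);
#else
    return false;           // non-x86 build: no VME
#endif
}
```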
Also, AMD teased Threadripper, their new HEDT platform, with up to 16C/32T Ryzen CPUs and quad-channel RAM.
wrote:Also, AMD teased Threadripper, their new HEDT platform, with up to 16C/32T Ryzen CPUs and quad-channel RAM
Come on AMD, make a server version. It will kill Xeons.
wrote:wrote:Also, AMD teased Threadripper, their new HEDT platform, with up to 16C/32T Ryzen CPUs and quad-channel RAM
Come on AMD, make a server version. It will kill Xeons.
Ask, and ye shall receive:
http://hothardware.com/news/amd-naples-zen-ar … tacenter-market
Looks like this is the official name for the Naples 32 core part. The Vega specs are giving me a massive chub too:
http://hothardware.com/news/amd-radeon-rx-veg … -16gb-hbm2-4k60
wrote:Ask, and ye shall receive: […]
AMD has remained silent about the VME issue in Ryzen, so I wonder whether the upcoming Zen-based Epyc will exhibit similar flaws.
It would be hilarious if that actually happened.
-fffuuu
Servers are where the broken VME hits even harder.
Backwards compatibility with 32-bit OSes [when there seems to be an easy workaround anyway] doesn't sound like a deal-breaker for power users, who are presumably where the money mostly lies.
I don't know where you get your info, but servers are where the money mostly lies.
Sure, and I question whether all that many new servers will be required for VMs running 32-bit, let alone 16-bit, operating systems.
Btw I don't have an info source, I make it up as I go along.
The last 32-bit OS (server or workstation) I worked with was around 2011, and Windows Server 2008 R2 had already been out for quite some time. I do understand the need to use a 32-bit OS in some scenarios, but those don't require any of the new CPUs to run properly.
Regardless, it looks like the VME issue can be fixed in microcode anyway, so it will probably be a non-issue soon.
wrote:I do understand the need to use a 32-bit OS in some scenarios, but those don't require any of the new CPUs to run properly.
Sometimes you need *new* machines, and still need to be able to run a 32-bit OS on them.
It's one thing if AMD or Intel announced that their new CPU wouldn't run 16- and 32-bit OSes; it's another when it's supposed to work but doesn't.
And it's not really possible to remove legacy CPU modes without breaking all existing 64-bit OSes and bootloaders.