
First post, by Scali

Rank: l33t
F2bnp wrote:

These were pretty dark times for ATi.

Yes, shortly before these cards came on the market, ATi was bought out by AMD.
I think ATi sold out to AMD because they knew the 2000-series was going to be a failure. ATi might not have survived long enough to recover if it wasn't for AMD.

It seems that the 300/Fury series are a similar situation... nVidia has had DX12 hardware on the market for many months, and has already taken most of the market. AMD's offerings are finally out now, but they aren't too competitive. They lack a vital feature like HDMI 2.0, they don't support DX12 feature level 12_1, and their move to HBM limits them to 4 GB.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 1 of 17, by obobskivich

Rank: l33t
Scali wrote:

Yes, shortly before these cards came on the market, ATi was bought out by AMD.
I think ATi sold out to AMD because they knew the 2000-series was going to be a failure. ATi might not have survived long enough to recover if it wasn't for AMD.

I've never heard this interpretation of the ATi/AMD merger - usually the primary reason I've seen given is that AMD wanted to expand into chipset/GPU (especially toward the APU concept), and had apparently considered buying nVidia at some point (remember, SNAP was still largely a "big deal" right up until the ATi merger). The primary impetus for all of this was OEM sales.

It seems that the 300/Fury series are a similar situation... nVidia has had DX12 hardware on the market for many months, and has already taken most of the market. AMD's offerings are finally out now, but they aren't too competitive. They lack a vital feature like HDMI 2.0, they don't support DX12 feature level 12_1, and their move to HBM limits them to 4 GB.

AMD has had DX12 hardware on the market for many years. They've also had longer experience with CTM APIs (via Mantle). In the few pre-release benchmarks I've seen their drivers are also much more efficient at handling DX12 calls. Gotta love nVidia's marketing team for talking "12.1" before 12 itself is even launched too.

As far as the "300 series" - they're largely re-packagings of existing GCN parts. No features have been added or removed. Fury is new silicon, and does implement HBM (which, as you've noted, is limited to 4GB currently) - again, gotta love nVidia's marketing team for pushing 12GB of VRAM as a necessity (and just ignore the effectively 4GB card that's still faster - that's obviously an illusion and anybody who is anybody will have 96GB of VRAM and DX17 support by Christmas).

swaaye wrote:

Yup it has Theater 200. It has a mini DIN breakout port between the DVI ports that apparently supports S-Video, component and composite output.

So it uses the same DIN breakout as the GeForce 7/8 cards? I've actually not seen that on an ATi card with Rage Theater 200 (afaik it doesn't support HD); instead, component (YPbPr) comes out via an adapter that plugs into one of the VGA/DVI-I ports. I've always thought that adapter was stupid, because it effectively eliminates a VGA/DVI output just for TV support, so it's neat that they moved to a DIN breakout.

Scali wrote:

Quite a few ATi cards of that era had that chip, and had VIVO through their s-video port.
I have an X1800XT and X1900XTX which both have it.
The All-in-wonder cards also had a TV tuner, these cards do not.

Rage Theater 200 is much older than that - my X850XTP and 9800XT both have it as well. A bit of searching indicates the chip came out in 2002, originally with the Radeon 9700. 😲

Last edited by obobskivich on 2015-07-12, 20:59. Edited 1 time in total.

Reply 3 of 17, by obobskivich

Rank: l33t
swaaye wrote:

We'll see how the D3D 12 thing goes down when the first game comes along with support for it.

What I'm more interested in seeing is DX11.3 vs DX12 real world in-game - given that Mantle tends not to be an overwhelming performance improvement despite all of its purported advantages, call me skeptical about DX12's merits real-world. I think the primary platform to benefit from DX12, at least in the next year, will be Xbox One.

Reply 4 of 17, by swaaye

Rank: l33t++
obobskivich wrote:

What I'm more interested in seeing is DX11.3 vs DX12 real world in-game - given that Mantle tends not to be an overwhelming performance improvement despite all of its purported advantages, call me skeptical about DX12's merits real-world. I think the primary platform to benefit from DX12, at least in the next year, will be Xbox One.

Yeah Mantle's implementations only helped boost slow CPUs. But with games designed from the ground up for a better API, it should be more interesting. When this will happen is anyone's guess though, and how it will work out with everything being multiplatform is another question. Developers don't leverage PC specific advantages much anymore. Maybe ports will be better and that's about all that will happen.

I think the Xbox One angle is misunderstood. The API they've had there isn't like what is used on PC. They can already go low level with the hardware. They need to be able to do that in order to get decent performance out of the CPU there. Seems to me D3D 12 on Xbox would be more of a business strategy to get more/better ports to PC.

Reply 5 of 17, by Scali

Rank: l33t
obobskivich wrote:

I've never heard this interpretation of the ATi/AMD merger - usually the primary reason I've seen given is that AMD wanted to expand into chipset/GPU (especially toward the APU concept), and had apparently considered buying nVidia at some point (remember, SNAP was still largely a "big deal" right up until the ATi merger).

Then you're only looking at one side of the merger. Why did nVidia turn AMD down?
And why didn't ATi turn them down?
A healthy business has no reason to merge with a struggling CPU manufacturer and give up control, which is why nVidia just laughed AMD off when they suggested the merger.
ATi probably took it because they knew they were in for a lot of trouble. They knew the 2000-series was going to be expensive to produce and wasn't going to be competitive.

AMD has had DX12 hardware on the market for many years.

Only the R9 285, which is basically the same architecture as Fury (GCN 1.2), except for HBM.
The rest doesn't support either 12_0 or 12_1.

They've also had longer experience with CTM APIs (via Mantle). In the few pre-release benchmarks I've seen their drivers are also much more efficient at handling DX12 calls.

Sounds like you've bought the Mantle marketing 😀
I'm in the DX12 early access program, and have helped with developing the DX12 API.

Gotta love nVidia's marketing team for talking "12.1" before 12 itself is even launched too.

What they mean is 12_1, but yes, they do support that.

Rage Theater 200 is much older than that - my X850XTP and 9800XT both have it as well. A bit of searching indicates the chip came out in 2002, originally with the Radeon 9700. 😲

I didn't say the Rage Theater 200 wasn't older than that. Just that cards from that era often had VIVO as standard. Afaik the 9500/9600/9700/9800 did not have this. Pretty sure my 9600XT didn't, anyway; it did not come with the required cable, and I've never seen the chip in Device Manager.
As I recall, you had to have the AIW version in that era, in which case it had the RT200 chip indeed.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 17, by Scali

Rank: l33t
swaaye wrote:

I think the Xbox One angle is misunderstood. The API they've had there isn't like what is used on PC. They can already go low level with the hardware. They need to be able to do that in order to get decent performance out of the CPU there. Seems to me D3D 12 on Xbox would be more of a business strategy to get more/better ports to PC.

Yes, that is what MS also said about it publicly: people shouldn't expect performance improvements from DX12 on Xbox.
This is also an interesting tweet: https://twitter.com/XboxP3/status/558768045246541824

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 7 of 17, by Scali

Rank: l33t
obobskivich wrote:

What I'm more interested in seeing is DX11.3 vs DX12 real world in-game - given that Mantle tends not to be an overwhelming performance improvement despite all of its purported advantages, call me skeptical about DX12's merits real-world. I think the primary platform to benefit from DX12, at least in the next year, will be Xbox One.

I'm not interested in the performance increases on the CPU side at all really. May be interesting for consoles and AMD CPUs, but Core i5/i7 never really had any problems in DX11 anyway.
I'm interested in the 12_1 features, mostly conservative rasterization, which allows for more efficient realtime global illumination, for example.
So I'm quite disappointed that AMD just rehashed GCN1.2, instead of implementing the new features in DX12. Especially after all AMD's talk about Mantle and pushing PC gaming forward. Yes, very convincing when your competitor has had this new technology on the market for more than half a year already.
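
For the curious, these are ordinary caps you can query yourself - a minimal, untested D3D12 sketch (default adapter, bare-bones error handling, all just placeholder choices):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter; 11_0 is the minimum for D3D12.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Query the optional caps behind feature level 12_1.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        printf("Conservative rasterization tier: %d\n",
               (int)options.ConservativeRasterizationTier);
        printf("Rasterizer ordered views (ROVs): %s\n",
               options.ROVsSupported ? "yes" : "no");
        printf("Tiled resources tier: %d\n",
               (int)options.TiledResourcesTier);
    }
    return 0;
}
```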

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 8 of 17, by F2bnp

Rank: l33t

I'm okay with AMD/ATi drivers. I don't own a CrossFire setup, so I don't really care if they release drivers after months of waiting. Their beta drivers are also pretty good. Unless you own an SLI/CrossFire setup, I really don't see why you would need drivers every month or so; I would actually consider it a nuisance, what with the update notification bugging me constantly.

The issue of DX12 has crept up and I can't help but share the same sentiments with swaaye. It's just way too early and I think by the time we see it used regularly, most cards in the current line-up will be somewhat irrelevant. I feel it's a lot like the NV30 vs R300 on DirectX 9 games argument. At the end of the day, both series were rather irrelevant at DX9.
Of course, I don't have nearly the amount of knowledge as Scali and if he sees this as being a real issue with AMD, I should probably take it at face value.

AMD is in a pretty bad position right now, it's only thanks to the consoles and Apple contracts that they are kept afloat, which is a very sad state of affairs for the consumers and the industry in general. I really hope things take a turn for the best in 2016. Zen is finally happening, as are 14nm GPUs. Kick-ass!

Reply 9 of 17, by alexanrs

Rank: l33t
F2bnp wrote:

The issue of DX12 has crept up and I can't help but share the same sentiments with swaaye. It's just way too early and I think by the time we see it used regularly, most cards in the current line-up will be somewhat irrelevant. I feel it's a lot like the NV30 vs R300 on DirectX 9 games argument. At the end of the day, both series were rather irrelevant at DX9.
Of course, I don't have nearly the amount of knowledge as Scali and if he sees this as being a real issue with AMD, I should probably take it at face value.

I don't know.... Processing power was escalating much faster back then. Just notice that high-end Kepler cards (from 2012) are still good performers and can play any modern game as long as you're not aiming for Ultra settings or higher than 1080p. I have a 980 now, and looking back on how long my mid-low end Fermi lasted me, I can only imagine I'll get 3-4 years out of this card. Also, since DX12 is supposed to be faster than DX11, it is bound to be more relevant for people with slightly older systems than for someone with the latest and greatest i7 and the beefiest SLI setup money can buy.

F2bnp wrote:

AMD is in a pretty bad position right now, it's only thanks to the consoles and Apple contracts that they are kept afloat, which is a very sad state of affairs for the consumers and the industry in general. I really hope things take a turn for the best in 2016. Zen is finally happening, as are 14nm GPUs. Kick-ass!

I dearly hope so. Even though I'm an Intel guy now, AMD CPUs have served me well in the past (RIP my old Duron, may you rest in peace with whoever my dad sold you to. I kept my Athlon 64, thankfully), and it is sad to see them as cheap-but-worse alternatives to Intel. And even sadder to see that they really have nothing to compete against the i7, and that they need freaking liquid cooling to trade blows with an i5. AMD CPUs just seem so... obsolete.

Reply 11 of 17, by GeorgeMan

Rank: Oldbie

Scali, I don't disagree with you, but your writing style is clearly Nvidia-oriented.

Core i7-13700 | 32G DDR4 | Biostar B760M | Nvidia RTX 3060 | 32" AOC 75Hz IPS + 17" DEC CRT 1024x768 @ 85Hz
Win11 + Virtualization => Emudeck @consoles | pcem @DOS~Win95 | Virtualbox @Win98SE & softGPU | VMware @2K&XP | ΕΧΟDΟS

Reply 12 of 17, by Scali

Rank: l33t
GeorgeMan wrote:

Scali, I don't disagree with you, but your writing style is clearly Nvidia-oriented.

I take offense at that.
I am DX12-oriented.
I also don't see why you feel the need to post this here, even if you do feel that way.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 13 of 17, by obobskivich

Rank: l33t

I agree with Putas and GeorgeMan - Scali's posts in this thread are cherry-picked and very biased towards nVidia. I would go as far as saying he's shilling for the GeForce 900 series. He has also cherry-picked numbers and data to create various strawman and other fallacious arguments (e.g. "people would not..." "people will..." etc -> which people, where, when, how, based on what, etc). All of the "think about it HMMM" stuff that seems to imply AMD "ripped off" Fermi, "ripped off" Mantle, etc is also entirely unsourced and unsubstantiated as well. It's a lot of empty conjecture and postulation, some of which borders on conspiracy theory. Coupled with that, he has gotten incredibly defensive when questioned about it, often just crawling back to "I am Scali, therefore I am right" ("I am a developer READ WHAT I WRITE") - that's not valid evidence or sourcing for many of the arguments he's attempted to put forth in this thread. Even if there weren't large blocks of [Citation Needed] for his claims and assertions in this thread, simply insisting he's right because he's right is not good enough.

Scali, I would ask you to please use the multi-quote and edit features in future replies, as your multi-posting is very hard to read and follow, but it largely doesn't matter as you're going on my ignore list after this post, because I'm tired of the consistently abusive and elitist attitude you've displayed on these forums in the last few months.

A few things to untangle:

- 5900XT was released as a competitor for 9600Pro and XT, and they were under $200. I bought mine for $174 (I don't know why I remember that number, I just always have). It is based on NV35, like the Ultra, but it is not the same as the Ultra (it has 128MB RAM, is clocked slower, etc). If memory serves, much like the 6800XT and 7900GS that came after, they were readily on sale for around $150 at many times. It's cherry-picked to provide on-sale prices for the 9600s (at $100; release SRP for the 9600Pro was $169-$199 (source: TechReport), and 9600XT at around the same, or somewhat higher (source: VR-Zone, listing it at 150 GBP, and TweakTown, listing it at "in the same range as 9600Pro")). Performance wise it isn't fair to say that "5900XT was slaughtered" imho - specifically focusing on Half-Life 2 is not the entire story though, especially when Half-Life 2 came out a year later (at which time GeForce 6 and Radeon X were available). Here are some reviews of 5900XT/SE ("SE" is the same card - some vendors, like eVGA, just went with SE as opposed to XT; I think it's a regionalized thing), linked to the first page of benchmarks (for convenience):
http://techreport.com/review/5990/nvidia-gefo … x-5900-xt-gpu/5
http://hothardware.com/reviews/aopen-geforce- … t-review?page=3

Interpret the data as you will. From owning a 5900XT "back in the day" (I had a 9600Pro and 9700np too - does that mean I get into the special decoder ring club?), it was perfectly fine for the games I was actually playing in 2002-2004. By the time Half-Life 2 rolled around (in late '04), it was increasingly less competent, and was replaced with a 6800GT. The Radeon cards were also fine, but their drivers at the time were a weak point (especially for multi-monitor systems). That was my reasoning for going with 6800 instead of X800. From more recent experiences with my 9550, 9800, and X850s (both the 9600 and 9700 died early, as many R3xx cards seem to do; the 5900XT still survives to this day) the drivers have improved quite a bit from Catalyst 3.x, which is welcome.

- GCN supports DX12. That isn't just Fury. That isn't just 285. All GCN parts support DX12. The few benchmarks I've seen from Futuremark indicate very good things even for the older GCN parts in terms of efficiency, and that nVidia likely has some catching up to do on their drivers (in some benchmarks the R9 270 ends up faster than the GTX 980 - that's likely driver-related, as the GTX 980 should be faster). This will all probably be sorted by next month after Windows 10 has launched, and if it isn't, it will likely be sorted by the time we actually see DX12 games.
Sources:
http://www.legitreviews.com/amd-says-gcn-prod … l-coming_137794
http://hexus.net/gaming/news/pc/67721-microso … tx-12-gdc-2014/
This may be why the claim is made that GCN "doesn't support DX12":
http://www.computerbase.de/2015-06/directx-12 … level-12-0-gcn/
That article has been turned into clickbait (e.g. on Guru3D, WCCFTech, etc (example: http://www.overclock.net/t/1558938/wccftech-a … -on-gcn-1-1-1-2)) with titles like "GCN does not fully support DX12", when in reality it's that GCN does not support DX12.1. Also worth noting is that the evidence for this is from an nVidia PR presentation - this takes me back to "Scali is shilling for the GeForce 900 series", as the "GCN isn't DX12, nVidia has market dominance" line is almost verbatim from nVidia's PR presentation.

For further clarity, nVidia have also listed Fermi and up as being DX12 compatible. If I'm not mistaken, this leaves the TeraScale parts (Radeon 5000/6000) as the only DX11 cards that won't be supported in DX12.

For DX12 benchmarks, here's an example of what I'm referencing:
http://www.pcper.com/reviews/Graphics-Cards/3 … X12-Performance

It is important to note this is NOT real-world application testing; it is an API overhead performance test. I looked for the one with the lower-spec parts but could not find it (it's based on the same Futuremark benchmark).

- I've yet to find anything about DX12.1 that isn't from nVidia, so it's either an nVidia-specific extension to DX12 (like DX9a) or a minor addendum (like DX10.1). Either way it appears the GeForce 900 series is the only thing that supports it, and if that's the case, it's unlikely to be very important in the long run, as obtuse/narrowly supported features tend to be passed over (like DX9a or 10.1, or other things like TeraScale or Ultra Shadow). Of course history may prove this assumption wrong, but that's my guess. The Overclock.net link above includes slides from an nVidia PR presentation that show a few features for DX12 and 12.1; perhaps others can find more about this.

- As far as DX9 on NV3x/R300/whatever - Half-Life 2 is a bad example; it was optimized heavily for the R3xx architecture. Other early DX9 games generally run comparably between the FX 5800/5900 and Radeon 9700/9800, like Halo, Tomb Raider: AoD, The Sims 2, and Gun Metal (some of this is visible in the 5900XT links I provided above). That said, neither series of cards is what I'd consider competent for DX9 era games (late 2004 into 2005 and beyond) - I'd really rather see an SM3.0 part with higher performance, like Radeon X1800/1900 or GeForce 7800/7900. Radeon 9 and X aren't bad cards by any means, but they're much better suited to things from the early 2000s like games based on Quake 3 and Unreal Engine 2.x. That said, from the perspective of building a retro machine, the GeForce FX (and GeForce 4) have some additional advantages, like being universal AGP cards (there are universal Radeon 9 cards, but not all Radeon 9 cards are universal), supporting palettized textures, and having a working fog table in Windows 9x. This doesn't mean that in 2003 the Radeon 9700/9800 wasn't the latest-and-greatest, but with the benefit of hindsight and time we don't have to settle for a 9800 for Half-Life 2 or Doom 3 or whatever - we can get something much faster (like the HD 2900XT that started this thread). This leaves the Radeon 9/X in kind of a weird position from the perspective of building a retro machine imho. As far as the VIVO thing - afaik it was up to the IHV to decide whether or not to implement it, and I can tell you my 9800 and X850XTP both feature it, but my 9550 does not (I don't remember my 9700 offering it, but I know my 9600Pro had the cables for it - never tested it though). Here's the link to my 9800's product page from Asus:
http://www.asus.com/Graphics-Cards/A9800PROTVD256M/ (and as far as "is it a Pro or an XT?" -> it will identify itself as R360 clocked at something like 400MHz, the board says 9800Pro but many software applications will say XT, and beyond that I don't know)

Reply 14 of 17, by Scali

Rank: l33t
obobskivich wrote:

I agree with Putas and GeorgeMan - Scali's posts in this thread are cherry-picked and very biased towards nVidia. I would go as far as saying he's shilling for the GeForce 900 series.

Wow, strong accusations... followed by a huge wall of text...
Reported.

obobskivich wrote:

- 5900XT was released as a competitor for 9600Pro and XT, and they were under $200.

The 9600Pro cost $100, as already shown. You are distorting the truth severely, to suit your agenda.

obobskivich wrote:

- GCN supports DX12. That isn't just Fury. That isn't just 285. All GCN parts support DX12.

Please not this again.
If you want to go down that route, yes, all Fermis and up, and Haswell and higher 'support' DX12, as in they can use the API in one of the DX11 downlevel modes.
My point was about *new* DX12 functionality, which is level 12_0 and 12_1, which is clear from the context.
Only GCN1.2 (and 1.1?) supports 12_0, and AMD does not support 12_1 at all.
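
To make that concrete, this is roughly how an application asks the runtime what it is dealing with - a minimal, untested sketch (default adapter and stripped-down error handling are just placeholder choices):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Any D3D12 device can be created at 11_0; the question is how far beyond that it goes.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
    {
        // 0xC100 = 12_1, 0xC000 = 12_0, 0xB100 = 11_1, 0xB000 = 11_0.
        printf("Max supported feature level: 0x%X\n",
               (unsigned)levels.MaxSupportedFeatureLevel);
    }
    return 0;
}
```

On a GeForce 900 card that should come back as 0xC100; on GCN it should top out at 12_0 at best.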

obobskivich wrote:

The few benchmarks I've seen from Futuremark indicate very good things even for the older GCN parts in terms of efficiency, and that nVidia likely has some catching up to do on their drivers (in some benchmarks the R9 270 ends up faster than the GTX 980 - that's likely driver-related, as the GTX 980 should be faster). This will all probably be sorted by next month after Windows 10 has launched, and if it isn't, it will likely be sorted by the time we actually see DX12 games.

As I say, I am in the DX12 early access program myself, and until earlier this year AMD did not even have DX12 drivers available to developers at all. Both Intel and nVidia had them for months. It is AMD playing catch-up here. I guess AMD was too busy pushing Mantle.
Thing is, with low-level APIs, you can't expect all vendors to have the same CPU overhead. Differences in GPU-architecture mean that some operations are more CPU-heavy on one architecture than on the other.
For example, because of how GCN works, AMD could not really implement DX11 multithreading properly. They had to defer the processing of the command list for the driver until the actual draw call, pretty much negating the multithreaded gains.
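
To illustrate what that DX11 multithreading path looks like from the application side, here's a rough, untested sketch - `device` and `immediateCtx` are assumed to exist already, and the actual draw calls are omitted. The D3D11_FEATURE_THREADING query is what tells you whether the driver runs command lists natively or the runtime has to emulate them (which is where that deferral happens):

```cpp
// Minimal sketch, assuming an existing ID3D11Device* device and
// ID3D11DeviceContext* immediateCtx (e.g. from D3D11CreateDevice).
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordAndExecute(ID3D11Device* device, ID3D11DeviceContext* immediateCtx)
{
    // Does the *driver* support command lists, or will the runtime emulate them?
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &threading, sizeof(threading));
    // threading.DriverCommandLists == FALSE means the runtime records the calls
    // itself and replays them when ExecuteCommandList is called.

    // Record work on a deferred context (this part can run on a worker thread).
    ComPtr<ID3D11DeviceContext> deferredCtx;
    if (FAILED(device->CreateDeferredContext(0, &deferredCtx)))
        return;

    // ... set state and issue draw calls on deferredCtx here ...

    ComPtr<ID3D11CommandList> commandList;
    if (SUCCEEDED(deferredCtx->FinishCommandList(FALSE, &commandList)))
    {
        // Submit the recorded commands on the immediate context (main thread).
        immediateCtx->ExecuteCommandList(commandList.Get(), FALSE);
    }
}
```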

obobskivich wrote:

- I've yet to find anything about DX12.1 that isn't from nVidia, so it's either an nVidia-specific extension to DX12 (e.g. like DX9a) or it's a minor addendum (e.g. like DX10.1).

It's called Feature level 12_1, and you can just get the info from the web:
https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D
http://www.extremetech.com/extreme/207598-dem … nd-dont-deliver
The same features (Conservative Raster, ROV and Volume Tiled Resources) are also available as DX11.3:
https://msdn.microsoft.com/en-us/librar ... s.85).aspx
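
And to show it really is the same standard caps, here's a rough, untested sketch of the DX11.3 query, assuming an existing ID3D11Device* created against the Windows 10 (11.3) runtime:

```cpp
// Sketch: query the DX11.3 caps that correspond to the 12_1 features,
// assuming an existing ID3D11Device* device on the Windows 10 (11.3) runtime.
#include <d3d11_3.h>
#include <cstdio>

void PrintDx113Caps(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                              &options2, sizeof(options2))))
    {
        printf("Conservative rasterization tier: %d\n",
               (int)options2.ConservativeRasterizationTier);
        printf("ROVs supported: %s\n", options2.ROVsSupported ? "yes" : "no");
        printf("Tiled resources tier: %d\n", (int)options2.TiledResourcesTier);
    }
}
```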
For someone who accuses others of shilling, you are awfully misinformed, and this just so happens to support your agenda of discrediting nVidia and their 12_1 support, which, as you can see, is just standard Microsoft API stuff.

As I said before, 12_1 is actually the most interesting stuff, because that's what allows new rendering algorithms, rather than just the same old same old with more efficient CPU-code.

Get a clue, and stop accusing others of being shills because they actually know what they're talking about.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 16 of 17, by Lo Wang

Rank: Member
swaaye wrote:

We'll see how the D3D 12 thing goes down when the first game comes along with support for it.

From what I've heard, it's going to be sufficiently faster to justify the switch, but the kind of detail they're shooting for truly calls for full-on voxel-based video hardware (and of course a new API), and I don't see that happening any time soon.

"That if thou shalt confess with thy mouth the Lord Jesus, and shalt believe in thine heart that God hath raised him from the dead, thou shalt be saved" - Romans 10:9

Reply 17 of 17, by Scali

Rank: l33t
Lo Wang wrote:
swaaye wrote:

We'll see how the D3D 12 thing goes down when the first game comes along with support for it.

From what I've heard, it's going to be sufficiently faster to justify the switch, but the kind of detail they're shooting for truly calls for full-on voxel-based video hardware (and of course a new API), and I don't see that happening any time soon.

Well, conservative raster in 12_1 at least allows you to do more efficient voxel-based rendering: https://developer.nvidia.com/content/dont-be- … e-rasterization
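
And on the API side it costs almost nothing once the hardware is there - a rough, untested fragment of what flipping it on in a voxelization pipeline state would look like (shaders, root signature and the rest of the PSO description are assumed to be set up elsewhere):

```cpp
// Sketch: enabling conservative rasterization on a D3D12 pipeline state,
// e.g. for a voxelization pass. The rest of the PSO desc is assumed to be
// filled in elsewhere.
#include <d3d12.h>

void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    D3D12_RASTERIZER_DESC& rast = psoDesc.RasterizerState;
    rast.FillMode = D3D12_FILL_MODE_SOLID;
    rast.CullMode = D3D12_CULL_MODE_NONE;  // voxelization passes typically disable culling
    // The one line that actually needs feature level 12_1 (or conservative raster tier >= 1):
    rast.ConservativeRaster = D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
}
```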

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/