VOGONS


First post, by W.x.

Rank: Member

What was the logic behind the Radeon VE SDR 64-bit? The reference Radeon VE should have DDR memory.
But there are lots of SDR versions. Probably for OEM machines?

But why bother in 2001 with a Radeon VE SDR when the Rage 128 Pro 64-bit was still around (for example the XPert 2000 Pro 32MB)?
https://cdn.aukro.cz/images/sk1743175869760/p … -224185911.jpeg
https://cdn.aukro.cz/images/sk1743175869918/p … -224185913.jpeg

It had 32MB of memory too and must have been cheaper.

What was the logic behind the Radeon VE (or 7000) SDR? What advantage did it have when the computer was not used for 3D (it's usually found in OEM machines)? Why was the Rage 128 Pro not suitable, when it was cheaper, and why pay extra for a Radeon VE/7000 that was crippled with SDR memory anyway?

Last edited by DosFreak on 2025-03-31, 10:49. Edited 1 time in total.

Reply 1 of 17, by The Serpent Rider

Rank: l33t++

Cheap more modern video acceleration. Some had Hydravision too.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 2 of 17, by W.x.

Rank: Member
The Serpent Rider wrote on 2025-03-30, 08:45:

Cheap more modern video acceleration. Some had Hydravision too.

All right, I definitely didn't mean these Radeons
http://old.vgamuseum.info/images/stories/zaat … ati7000_flq.jpg

Of course, it would make sense to pay extra for this one if you're going to use the extra VGA connector.

But I see SDR memory on these types of PCB.
http://old.vgamuseum.info/images/stories/zaat … 0-64_v2_fhq.jpg

More modern video acceleration means what, exactly? I was hoping for a more specific and in-depth answer. Or did they just jump on the marketing gimmick that it's more modern, so it's probably better, so we pay more?

Reply 3 of 17, by The Serpent Rider

Rank: l33t++

TV outs are part of Hydravision too. "Aside from the new 3D hardware, Radeon also introduced per-pixel video-deinterlacing to ATI's HDTV-capable MPEG-2 engine" - which, I guess, is part of the early MPEG-4 standard.

Another not so obvious part would be low-profile availability (with TV out options too), which was limited during Rage 128 Pro production.

Last edited by The Serpent Rider on 2025-03-31, 08:17. Edited 1 time in total.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 4 of 17, by W.x.

Rank: Member

TV out in offices, with employees watching TV during work? And actually using the TV out? Not sure about this. Also, low-profile computers were not that common among OEMs.

And there is also this type of Radeon VE, without TV out:
https://ibb.co/cSMMqqzz

So if that were the reason, they would use SDR memory only on the TV-out versions or the dual-VGA-out units. But it is also used on Radeon VE/7000 cards with VGA out only, so it's obviously meant as low-end office graphics for 2D or weak 3D.
Also, there was this non-low-profile version without TV out: https://www.alza.sk/EN/ati-radeon-7000-64mb-d39684.htm

Reply 5 of 17, by The Serpent Rider

Rank: l33t++

Also, low-profile computers were not that common among OEMs.

If Dell was not common in the early 2000s, then I don't know what is.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 6 of 17, by momaka

Rank: Oldbie
W.x. wrote on 2025-03-31, 00:56:

TV out in offices, with employees watching TV during work? And actually using the TV out?

Well, not sure of your age, but during my time in high school (early-mid 2000's), CRT TVs were sometimes used in classrooms where the teacher had a computer but didn't have a color projector. In fact, most of the classrooms in my high school did not have color projectors well until the latter half of the mid-2000's. Most classrooms just had oldschool portable transparency sheet projectors... that, and large CRT TVs (27-32"), typically connected with VCRs or VCR+DVD combos (and local analog broadcast from the main office).

So for the few classrooms that did have a CRT TV connected to a computer, you can bet it was through a TV-out connector or an S-video cable at best.

That aside and back to the main topic...
I think SDR memory was also starting to get really cheap compared to DDR. Given that the Radeon VE/7000 isn't really that good of a video card in terms of 3D capabilities, some manufacturers probably figured they could make it cheaper by using SDR memory instead of DDR.

FWIW, I also have a Radeon 7000 with 32 MB of SDR RAM and it's pretty slow. I use it as a test AGP video card. It does have one thing that I have not seen on many Rage cards: native DVI connection. And from what I've read, R100 and RV100 cards were the first from ATI to have dual RAMDACs - something the Rage does not have, AFAIK. So I can use my Rad.7000 to drive 2 separate monitors.

Reply 7 of 17, by W.x.

Rank: Member
momaka wrote on 2025-04-01, 14:02:

That aside and back to the main topic...
I think SDR memory was also starting to get really cheap compared to DDR. Given that the Radeon VE/7000 isn't really that good of a video card in terms of 3D capabilities, some manufacturers probably figured they could make it cheaper by using SDR memory instead of DDR.

Well, only this part is important for this topic, and it's what I'm really interested in.

The question is, why not use the Rage 128 Pro, which is even cheaper. Also, it was used until a very late stage (as late as 2003, Xpert 2000 Pro cards were still being sold). Both cards can also have 32MB of memory.
How much slower is the Rage 128 Pro 64-bit than the Radeon 7000 in 3D? I guess the RV100 will be slightly faster.
But more interesting... the ATI Rage 128 Pro with 128-bit memory - would it be faster in 3D? Would it be more expensive to make? Or about the same? That's the most important question, because if it is, then I don't know what sense the Radeon 7000 SDR with only a single VGA connector made. I sense a cheat on the customer... since it's newer, let's sell it for more, even though the buyer doesn't get anything extra over the Rage 128 Pro.

Reply 8 of 17, by bertrammatrix

Rank: Member

Because it was a budget card targeted at a market where it didn't matter - OEMs or cheap home users. Most people would pay much more attention to the fact that it's a shiny new model "7000" when viewing a spec sheet than they would to SDR vs DDR.

Not much different than what Nvidia did with the GeForce 2 MX line, a crippled GF2 with SDR. Even though the MX could theoretically come with DDR as well, it almost never did, even though it would obviously have helped performance somewhat for a fairly minimal price difference. The MXs are ubiquitous, so obviously that worked out well.

Reply 9 of 17, by shamino

Rank: l33t

If ATI was already mass producing Radeon chips, then it makes sense they'd want to find a lower end product they could use the same chips with.

W.x. wrote on 2025-04-01, 15:15:

But more interesting... the ATI Rage 128 Pro with 128-bit memory - would it be faster in 3D? Would it be more expensive to make? Or about the same? That's the most important question, because if it is, then I don't know what sense the Radeon 7000 SDR with only a single VGA connector made. I sense a cheat on the customer... since it's newer, let's sell it for more, even though the buyer doesn't get anything extra over the Rage 128 Pro.

Fast memory for video cards is expensive. This is always the first thing they try to economize on budget cards. So I seriously doubt that a 128-bit Rage 128 would be cheaper to make than a 64-bit Radeon, provided they've reached the point that Radeon chips have good yields and volume.

The Geforce2 MX was an excellent budget/mainstream card. It was faster than a TNT2 and drew much less power, meaning it not only ran cool but you also didn't have to worry about using it on a motherboard with subpar power delivery to the AGP slot.
The typical standard GF2MX had 128-bit SDRAM. That's the good version. Creative offered 64-bit DDR. 128-bit SDRAM is faster than 64-bit DDR.
People quickly found that memory speed was holding back the MX. That's pretty typical with budget cards - using a cheaper memory configuration is a big part of what makes them cheaper.

Reply 10 of 17, by W.x.

Rank: Member
bertrammatrix wrote on 2025-04-01, 16:00:

Not much different than what Nvidia did with the GeForce 2 MX line, a crippled GF2 with SDR. Even though the MX could theoretically come with DDR as well, it almost never did, even though it would obviously have helped performance somewhat for a fairly minimal price difference. The MXs are ubiquitous, so obviously that worked out well.

In this case, the GeForce 2 MX DDR cards were 64-bit, while the GeForce 2 MX SDR cards were 128-bit (memory bus), so the bandwidth after the transition to SDR remained the same.
In the case of the Radeon 7000, all cards were 64-bit, so the bandwidth was cut in half by using SDR memory.
So it's not quite the same scenario.
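
For illustration, here is a quick back-of-the-envelope peak-bandwidth sketch of the configurations being compared (the 166 MHz clock is just an assumption to keep the comparison apples-to-apples; actual memory clocks varied by board):

# Peak memory bandwidth = (bus width in bytes) x (clock in MHz) x (2 for DDR, 1 for SDR), in MB/s.
# The 166 MHz figure below is illustrative only; real boards used various clocks.

def peak_bandwidth_mb_s(bus_bits, clock_mhz, ddr):
    """Theoretical peak transfer rate in MB/s."""
    return bus_bits / 8 * clock_mhz * (2 if ddr else 1)

configs = [
    ("Radeon VE/7000, 64-bit DDR",  64, 166, True),
    ("Radeon VE/7000, 64-bit SDR",  64, 166, False),  # half the DDR version's peak
    ("GeForce2 MX, 128-bit SDR",   128, 166, False),
    ("GeForce2 MX, 64-bit DDR",     64, 166, True),   # same peak as 128-bit SDR at equal clock
]

for name, bits, mhz, ddr in configs:
    print(f"{name:30s} {peak_bandwidth_mb_s(bits, mhz, ddr):6.0f} MB/s")

At equal clocks the GeForce 2 MX swap keeps the peak figure the same, while the Radeon VE's swap halves it - exactly the asymmetry described above.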

Reply 11 of 17, by momaka

Rank: Oldbie
shamino wrote on 2025-04-01, 22:53:

If ATI was already mass producing Radeon chips, then it makes sense they'd want to find a lower end product they could use the same chips with.

Yup, basically, this ^^ is probably the biggest reason.

With any chip manufacturing, you always have to take into account the yields you will get.
For starters, anything made on a larger manufacturing node (e.g. 250 nm vs 180 nm vs 130 nm) will yield fewer chips from the same amount of silicon. So moving to a smaller manufacturing node is cheaper in the long run for IC manufacturers... and hence why ATI ditched the older Rage manufacturing process (and architecture)... or rather improved on it with the R100/RV100 / Radeon 7000 series. So at some point, perhaps the supply of new Rage chips just dwindled/ended when ATI shifted to mass-producing the newer (and more feature-rich) Radeon 7000 chips, leaving video card OEMs with little choice but to churn out new video cards with these newer chips. On the plus side, it did give video card manufacturers more options for what to put on their cards (if they wanted to) while simultaneously being cheaper to produce and sell.

So yeah, the SDR Radeon VE probably was not much better than most of the Rage cards, but it was a replacement for them... and perhaps the cheapest one.

W.x. wrote on 2025-04-02, 02:44:

In this case, the GeForce 2 MX DDR cards were 64-bit, while the GeForce 2 MX SDR cards were 128-bit (memory bus), so the bandwidth after the transition to SDR remained the same.

On paper, yes.
But in reality, the SDR implementation is still slightly faster from what I've seen/read. Of course, a lot depends on the specific memory chips that were used too... so I guess it's a bit hard to argue exactly what is faster and slower.

Reply 12 of 17, by shamino

Rank: l33t
momaka wrote on 2025-04-03, 08:54:
W.x. wrote on 2025-04-02, 02:44:

In this case, the GeForce 2 MX DDR cards were 64-bit, while the GeForce 2 MX SDR cards were 128-bit (memory bus), so the bandwidth after the transition to SDR remained the same.

On paper, yes.
But in reality, the SDR implementation is still slightly faster from what I've seen/read. Of course, a lot depends on the specific memory chips that were used too... so I guess it's a bit hard to argue exactly what is faster and slower.

DDR has greater setup/latency penalties than regular SDRAM. DDR doesn't fully double the performance but doubling the bus width does.

The original Geforce2 MX cards were normally 128-bit SDRAM, but later on I think DDR started to be more economical for a given performance target so manufacturers started to use it more. By the time of Geforce4 MX cards, I'm not sure if any of those had SDR at all.

Reply 13 of 17, by The Serpent Rider

Rank: l33t++

GeForce4 MX 420 cards were all 128-bit SDR, but SDR was dropped pretty quickly, probably because it was cheaper to make 64-bit DDR MX 440 cards.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 14 of 17, by W.x.

Rank: Member
momaka wrote on 2025-04-03, 08:54:

With any chip manufacturing, you always have to take into account the yields you will get.
For starters, anything made on a larger manufacturing node (e.g. 250 nm vs 180 nm vs 130 nm) will yield fewer chips from the same amount of silicon. So moving to a smaller manufacturing node is cheaper in the long run for IC manufacturers... and hence why ATI ditched the older Rage manufacturing process (and architecture)... or rather improved on it with the R100/RV100 / Radeon 7000 series. So at some point, perhaps the supply of new Rage chips just dwindled/ended when ATI shifted to mass-producing the newer (and more feature-rich) Radeon 7000 chips, leaving video card OEMs with little choice but to churn out new video cards with these newer chips. On the plus side, it did give video card manufacturers more options for what to put on their cards (if they wanted to) while simultaneously being cheaper to produce and sell.
On paper, yes.
But in reality, the SDR implementation is still slightly faster from what I've seen/read. Of course, a lot depends on the specific memory chips that were used too... so I guess it's a bit hard to argue exactly what is faster and slower.

But the Radeon VE SDR appeared at a time when the Rage 128 Pro was still selling. Not sure if you know, but the Rage 128 Pro (particularly in the Ultra version) was produced and sold in 2003 and 2004 (not only sold, but the cards have date codes from 2003 and 2004 on them, so they were manufactured then).
Not sure if new chips were still being made, though; maybe they stopped producing new chips, because obviously it's 250 nm vs 180 nm (Radeon VE/7000). But in that case, there must have been huge quantities of chips already manufactured, and they wanted to get rid of them... they would have cut the price. So maybe it was cheaper to produce RV100 cores, but the question remains whether they actually were cheaper. I would say the Rage 128 Pro chips were still cheaper.

So this reason you've mentioned doesn't make much sense to me, as RV100 SDR cards appeared at the end of 2001. And as they were produced by Sapphire/PC Partner, or directly under ATI, both of these companies must have had easy access to these chips.

So it's still a mystery to me why they were making the non-dual-VGA/TV-out Radeon VE/7000 SDR version during 2002-2003. I still have a feeling the main reason is that they could sell it for more (even though it didn't give any advantage over the Rage 128 Pro), but in that case it was a deception of the customer.

(Edit: they could advertise DirectX 7.0 support for the RV100 while the Rage 128 Pro was DirectX 6.0... so this probably added to the price too; it's questionable, though, whether someone buying it for a non-gaming system could use DirectX 7.0 for anything.)

Reply 15 of 17, by momaka

Rank: Oldbie
shamino wrote on 2025-04-03, 10:22:

By the time of Geforce4 MX cards, I'm not sure if any of those had SDR at all.

They did.
Here is what some of the ones in my collection have:
- 128-bit with SDR (an MSI GF4 MX 400 and MX-440)
- 128-bit with DDR (Manli GF4 MX-440 8x AGP)
- 64-bit with (very slow) DDR (WinFast GF4-MX420 with 6-ns RAM)
I think the only one I don't have, and am not sure even exists, is 64-bit with SDR... but I wouldn't be surprised if it does. The GF4 MX cards were considered budget cards, and some manufacturers really cheaped out on these (e.g. the WinFast above with the slow 64-bit DDR).

W.x. wrote on 2025-04-07, 15:48:

But the Radeon VE SDR appeared at a time when the Rage 128 Pro was still selling. Not sure if you know, but the Rage 128 Pro (particularly in the Ultra version) was produced and sold in 2003 and 2004 (not only sold, but the cards have date codes from 2003 and 2004 on them, so they were manufactured then).
Not sure if new chips were still being made, though; maybe they stopped producing new chips, because obviously it's 250 nm vs 180 nm (Radeon VE/7000). But in that case, there must have been huge quantities of chips already manufactured, and they wanted to get rid of them... they would have cut the price. So maybe it was cheaper to produce RV100 cores, but the question remains whether they actually were cheaper. I would say the Rage 128 Pro chips were still cheaper.

Well, I believe you answered your own question here. 😉
Sure, the Rage 128 Pro might have been made and sold quite late, past when the Radeon VE appeared... but that doesn't really mean anything.
Like you said, ATI probably had already produced quite a few of the Rage chips, so they just didn't want them to go to waste and kept making cards with them, despite now also having chips for the Radeon VE.
As for why the Radeon VE SDR appeared... it's quite possible the cores in these VE SDR cards had a defective memory bus. So like anything else in manufacturing, ATI probably wanted to cut their losses and still find a market for those defective chips. So that's one possibility for why the VE SDR exists. After all, they did the same thing with the R300 cores - any that had a defective memory bus and/or shaders went to the Radeon 9500/Pro line. Then there's the Radeon 9700 - same thing as the 9700 Pro, but clocked slower (probably due to lower-binned cores) and thus matched with slower VRAM too.
And we see this all the time with CPUs too - the ones that had defective cache had it disabled and were sold as a "budget" chip (e.g. Duron & Celeron).

So the Radeon VE SDR was likely a way to clear the production floor of "less-than-perfect" Radeon VE cores.

No, I am pretty sure ATI didn't do this to "deceive" their customers. Those were turbulent times for GPU manufacturers, and to stay on top or close to it (or at least not fall behind), you had to move up with the technology of the time. The Radeon VE was just that - a move to a newer manufacturing process with a few improvements to the core over the previous Rage line. And with the move to a new process also come new quirks... which likely contributed to some batches of GPU chips being not-so-great... and one way to get rid of those and still make something is to put them in a low-end product with any problematic features disabled.

Reply 16 of 17, by mkarcher

Rank: l33t
shamino wrote on 2025-04-03, 10:22:

DDR has greater setup/latency penalties than regular SDRAM. DDR doesn't fully double the performance but doubling the bus width does.

"citation required". Comparing DDR and SDR at the same clock frequency, the setup/latency penalties are expected to be identical. In the case of the GeForce 2MX, we are looking at a memory clock of 166MHz, i.e. PC166 / DDR333. Looking at a random 166MHz SDR 8Mx32 chip, in this case the IS42S32800J-6BL , I find them to be specified for CL3 at 166MHz. Looking at a similar DDR chip, I found the IS43R32800D-6BL, which is specified for CL2.5 at 166MHz. This is actually the same latency. While the DDR RAM is half a clock cycle faster to output the first 32 bits, the second 32 bits appear half a clock later, which is exactly the same point in time the whole 64 bits by two SDR chips appear. If you need the whole 64 bits at once, DDR and SDR in this example (I just picked 8Mx32 chips I can buy right now at mouser, without any cherry picking for specs) are equally fast. If you can profit from the first 32 bits being early, 32-bit DDR is actually faster than 64-bit SDR.

Reply 17 of 17, by shamino

Rank: l33t
mkarcher wrote on 2025-04-23, 15:59:
shamino wrote on 2025-04-03, 10:22:

DDR has greater setup/latency penalties than regular SDRAM. DDR doesn't fully double the performance but doubling the bus width does.

"citation required". Comparing DDR and SDR at the same clock frequency, the setup/latency penalties are expected to be identical. In the case of the GeForce 2MX, we are looking at a memory clock of 166MHz, i.e. PC166 / DDR333. Looking at a random 166MHz SDR 8Mx32 chip, in this case the IS42S32800J-6BL , I find them to be specified for CL3 at 166MHz. Looking at a similar DDR chip, I found the IS43R32800D-6BL, which is specified for CL2.5 at 166MHz. This is actually the same latency. While the DDR RAM is half a clock cycle faster to output the first 32 bits, the second 32 bits appear half a clock later, which is exactly the same point in time the whole 64 bits by two SDR chips appear. If you need the whole 64 bits at once, DDR and SDR in this example (I just picked 8Mx32 chips I can buy right now at mouser, without any cherry picking for specs) are equally fast. If you can profit from the first 32 bits being early, 32-bit DDR is actually faster than 64-bit SDR.

You might be right about this. I went over to the "VGA Legacy" site and looked up some RAM chips that were actually used on Geforce2 MX cards.
An Asus V7100 "Deluxe Combo", which I think has very ordinary SDRAM, uses 128 bits' worth of Samsung K4S643232E-TC60, and an MSI MS-8817 is photographed with Samsung K4S643232C-TC60, which I'm guessing is just an older version of the same RAM. These are presumably clocked at 166MHz CL3 on these cards.

The oddball Creative 64-bit DDR card uses Hyundai HY5DV651622 TC-G7, which is only rated for 143MHz, so there's the obvious explanation right there.

But let's suppose Creative had actually used the 6ns rated version of those DDR chips, matching what 128-bit SDR cards were using.

I looked up datasheets for the above RAM chips and when compared at 6ns, I don't see any glaring difference in their latencies. Whatever small differences there are, I don't know if they have any significance.
The figures for tRRD, tRCD, tRP all match.

The HY5DV651622 at 6ns (if Creative had used that grade, as they should have) can be used at 166MHz at either CL2 or CL3. I'm not clear on the consequences of this choice, but the text of the datasheet implies that using CL3 gives it a longer "pipeline", so I guess it can do more consecutive reads at that latency. It seems like video cards always prefer higher latencies to get this advantage, or whatever the advantage is, so I'm going to guess that if using this RAM, the card would probably run the RAM at 166MHz (DDR333) CL3.

Comparing the datasheets for the above memory ICs (which appeared on real cards at the time), but upgrading Creative's 7ns DDR to the 6ns-rated chips, it looks like the latencies are very nearly equal. The only disadvantage the DDR might have is that, because of the narrower 64-bit bus width, the DDR setup might spend a greater percentage of its time dealing with latencies in between those 64-bit reads, including maybe needing to switch rows more often. But I don't know if that's actually the case (how big is a row? This stuff is too esoteric for me). The duration of those latencies when they do occur appears to be the same as with the SDR chips; it's just a question of whether they'd be invoked equally often.

It would be interesting to overclock a Creative 64-bit DDR Geforce2 MX to run the RAM at 166MHz and see how closely it matches the performance of a conventional 128-bit SDR card at the same clock. The downgraded clock speed is probably most, if not all of the difference that makes that card slower.
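
As a rough sanity check of that last point: the 143 MHz chip rating alone already costs the Creative card roughly 14% of peak bandwidth compared to a 166 MHz 128-bit SDR card (a sketch; these are the chip ratings discussed above, not necessarily the clocks each card actually ran at):

# Peak bandwidth of the Creative 64-bit DDR card at its 143 MHz chip rating
# versus a typical 128-bit SDR GeForce2 MX at 166 MHz (MB/s).
creative_ddr_64 = 64 / 8 * 143 * 2     # 2288 MB/s
standard_sdr_128 = 128 / 8 * 166       # 2656 MB/s

deficit = 1 - creative_ddr_64 / standard_sdr_128
print(f"64-bit DDR @ 143 MHz:  {creative_ddr_64:.0f} MB/s")
print(f"128-bit SDR @ 166 MHz: {standard_sdr_128:.0f} MB/s")
print(f"peak-bandwidth deficit: {deficit:.0%}")    # about 14%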