gdjacobs wrote on 2019-12-30, 17:49:
Your statement implied that 800mhz RIMMs were the slowest option. I was just clarifying that.
Well, I would say that in the context of my post it was obvious that I meant that even the oldest/slowest *chipset* with RAMBUS support allowed for 3.2 GB/s with PC800 RIMMs. I was talking about the technology, not about which configurations OEMs such as Dell could or would actually sell. That is not relevant.
After all, the context was that Intel chose a partnership with RAMBUS because of the technological advantage that RAMBUS would offer, in terms of bandwidth. In that context obviously you would be looking at the maximum bandwidth possible, not the minimum.
This is what I said:
When RDRAM was introduced on the P4, it had considerably higher bandwidth than DDR. RDRAM ran at 800 MHz, 16-bit, effectively delivering 3.2 GB/s in the dual channel setup of a P4.
DDR single channel (64-bit) was originally 266 MHz, which delivered only 2.1 GB/s (dual channel didn't arrive until years later).
The update to 333 MHz still only came up to 2.7 GB/s.
Eventually DDR became faster, but that was mainly because RDRAM was abandoned anyway, and no further development happened on its chipsets and RAM modules. There was only one P4 chipset for RAMBUS, the i850, the chipset that the P4 launched with, and it received only a small update (from PC800 to PC1066 memory support) in the i850E.
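For what it's worth, the bandwidth figures above follow directly from transfer rate and bus width. A quick sketch of the arithmetic (the peak-bandwidth formula is standard; the rounding matches the figures quoted):

```python
# Peak bandwidth (GB/s) = transfers/sec * bus width in bytes * channels
def peak_bandwidth_gb_s(mt_per_s, bus_bits, channels=1):
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

# PC800 RDRAM: 800 MT/s, 16-bit channel, dual channel on the i850
rdram_pc800 = peak_bandwidth_gb_s(800, 16, channels=2)  # 3.2 GB/s

# DDR266, single 64-bit channel
ddr266 = peak_bandwidth_gb_s(266, 64)  # ~2.1 GB/s

# DDR333, single 64-bit channel
ddr333 = peak_bandwidth_gb_s(333, 64)  # ~2.7 GB/s
```

So the 3.2 vs 2.1 GB/s comparison, and the 1.1 GB/s gap, fall straight out of the numbers.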
Clearly it was about what the maximum performance of RAMBUS vs DDR was at the time, and why RAMBUS would be advantageous to Intel.
A 1.1 GB/s advantage in bandwidth is pretty obvious.
So I'm not sure why you felt like you had to throw in Dells with PC600. They aren't relevant to the context of my statement or the discussion at hand.
And I don't see how what I said would imply that I was talking about the slowest possible option. On the contrary.
Also, as I said, if you wanted to be fair/objective, you should also have mentioned DDR200, which you didn't. Interesting...
Then again, you constantly do that. You twist and turn words, move goalposts, put words in my mouth etc.
I mean, earlier you made the claim that I would have said that EPIC was 'somehow better'.
I never said any of that. If you read back I merely said that it was 'more innovative', and it is pretty obvious why: It does something entirely different from x86 or even RISC CPUs. Something that was new at the time. Which I would say is the definition of innovative.
Not all innovations turn out to be 'good'/'better' for whatever definitions of 'good' or 'better', so trying to rephrase what I said as 'somehow better' is a strawman.
And now this PC600 strawman... why?
Actually, I don't care why, I would just like you to stop trolling like this.
The problem is that you are trying to frame me as some kind of Intel fanboy, which I'm not.
In fact, trying to use EPIC for that is pretty stupid, since EPIC was mostly designed by HP, not by Intel.
So since you can't really give Intel credit for the ideas behind the EPIC architecture, 'innovative' would also not apply to Intel, but rather to HP.
Since I wasn't even trying to give credit to Intel in the first place, I didn't think it was even relevant to go there. But well, when you make remarks like 'somehow better', now suddenly I have to add yet more explanation for things I never wanted to discuss in the first place.
That's just very annoying to say the least.