swaaye wrote: It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex it is to design these things.
With how CPUs and supporting hardware (and drivers) are being torn apart, and how OSes and applications need updates like every three days, it seems like computing security needs an entirely new approach.
Yes, I think that Meltdown and Spectre ushered in a paradigm shift in that respect.
Intel obviously does a lot of validation for their products, but the big problem in validation is always: what do you validate?
I suppose Intel was mainly concerned with software compatibility in SMT. As in: an SMT-enabled CPU must function identically to a conventional multi-core CPU in all cases, with no new race conditions, deadlocks or anything else.
The issue being exploited recently is that modern CPUs allow you to make extremely accurate timing measurements, even from a language as crude as JavaScript. So accurate that you can detect slight differences in timing at the microarchitectural level, which can reveal cache hits and misses, exceptions thrown by accesses to inaccessible memory, and whatnot. These are the so-called side channels in security.
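To make that concrete, here's a minimal sketch in C of the kind of measurement involved: a Flush+Reload-style timing probe on x86-64. The function names and setup are my own, purely for illustration; the JavaScript attacks do essentially the same thing with high-resolution timers instead of the timestamp counter.

```c
/* Minimal sketch of a cache-timing measurement (Flush+Reload style).
 * Illustration only, not an exploit: it just shows that a cached load
 * is measurably faster than an uncached one.
 * Build on x86-64 with e.g.: gcc -O0 timing.c -o timing
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

static uint8_t probe[4096];

/* Time a single read of *addr in CPU cycles using the timestamp counter. */
static uint64_t time_read(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start, end;
    _mm_mfence();                 /* serialize earlier memory operations */
    start = __rdtscp(&aux);
    (void)*addr;                  /* the load we are timing */
    end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    volatile uint8_t *p = &probe[0];

    /* Warm the cache line: the next timed load should be a hit. */
    (void)*p;
    uint64_t hit = time_read(p);

    /* Evict the line, then time the same load again: a cache miss. */
    _mm_clflush((const void *)p);
    _mm_mfence();
    uint64_t miss = time_read(p);

    printf("cached read : %llu cycles\n", (unsigned long long)hit);
    printf("flushed read: %llu cycles\n", (unsigned long long)miss);
    return 0;
}
```

On typical hardware the flushed read takes on the order of a couple hundred cycles versus a few dozen for the cached one, and that gap is exactly the signal a side-channel attack reads out.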
But I don't think that's really a case of 'complexity'... it just means that any special case you optimize for can be measured. The only way to really defeat this is to remove the special cases altogether: remove caches, remove instruction prefetching, remove exceptions (because any exception handler that gets triggered can be timed), and so on.
Basically we'd be back in the stone age of computing. So where do you make the tradeoff between performance, features and security?
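For a sense of where software already makes that trade-off, here's a classic example: a byte comparison with an early-exit optimization leaks, through its running time, how many leading bytes matched, while the constant-time version deliberately gives that optimization up. (A sketch in C; the function names are mine, not from any particular library.)

```c
#include <stddef.h>
#include <stdint.h>

/* Fast, but leaky: returns as soon as a byte differs, so the running
 * time depends on where the mismatch occurs in the (secret) data. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;   /* early exit: timing reveals the mismatch position */
    return 1;
}

/* Constant-time: always touches every byte, so the running time does
 * not depend on where (or whether) the inputs differ. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Cryptographic libraries routinely pay this kind of cost for key and MAC comparisons; the question these attacks raise is how far up the software stack, and how deep into the hardware, that mindset has to go.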