VOGONS


MELTDOWN and SPECTRE vulnerabilities and older hardware?


Reply 140 of 151, by swaaye

User metadata
Rank l33t++

It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex these things are to design.

With how CPUs and supporting hardware (and drivers) are being torn apart, and OSes and applications needing updates every few days, it seems like computing security needs an entirely new approach.

Reply 141 of 151, by Scali

User metadata
Rank l33t
swaaye wrote:

It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex these things are to design.

With how CPUs and supporting hardware (and drivers) are being torn apart, and OSes and applications needing updates every few days, it seems like computing security needs an entirely new approach.

Yes, I think that MELTDOWN and SPECTRE ushered in a paradigm shift in that respect.
Intel obviously does a lot of validation for their products, but the big problem in validation is always: what do you validate?
I suppose Intel was mainly concerned with software compatibility in SMT. As in: an SMT-enabled CPU must function identically to a conventional multi-core CPU in all cases. No race conditions, deadlocks or anything else.
The issue being exploited recently is that modern CPUs allow you to make extremely accurate measurements, even from a language as crude as JavaScript. So accurate that you can detect slight differences in timing at the microarchitectural level, which can imply cache hits/misses, exceptions thrown by inaccessible memory, and whatnot.
These are the so-called side channels in security.
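
To illustrate the kind of measurement involved, here is a minimal timing probe of the sort these attacks build on. This is a sketch of my own, not code from the exploit papers; it assumes an x86-64 CPU with the RDTSCP instruction and a GCC/Clang compiler. Exact cycle counts vary per CPU, but the cached/uncached gap is exactly what a side channel observes:

```c
/* Minimal cache-timing probe (sketch, assuming x86-64 with RDTSCP,
 * built with GCC or Clang). A cached load takes tens of cycles, an
 * uncached one hundreds; that gap is the side channel. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    static uint8_t probe[4096];

    probe[0] = 1;                     /* bring the line into the cache */
    printf("cached:   %llu cycles\n",
           (unsigned long long)time_access(&probe[0]));

    _mm_clflush((void *)&probe[0]);   /* evict the line from the cache */
    _mm_mfence();                     /* wait until the flush completes */
    printf("uncached: %llu cycles\n",
           (unsigned long long)time_access(&probe[0]));
    return 0;
}
```

Attacks like Meltdown and Spectre do essentially this in a loop over a probe array, turning "which line is cached?" into "which value did the CPU touch speculatively?".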

But I don't think that's really a case of 'complexity'... it just means that any special case you optimize for can be measured. The only way to really defeat this is to remove all special cases (remove caches and instruction prefetches, remove exceptions, because any exception handler that is triggered can be measured, etc.).
Basically we'd be back in the stone age of computing. So where do you make the tradeoff between performance, features and security?

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 142 of 151, by 386SX

User metadata
Rank l33t
Scali wrote:
swaaye wrote:

It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex these things are to design.

With how CPUs and supporting hardware (and drivers) are being torn apart, and OSes and applications needing updates every few days, it seems like computing security needs an entirely new approach.

Yes, I think that MELTDOWN and SPECTRE ushered in a paradigm shift in that respect.
Intel obviously does a lot of validation for their products, but the big problem in validation is always: what do you validate?
I suppose Intel was mainly concerned with software compatibility in SMT. As in: an SMT-enabled CPU must function identically to a conventional multi-core CPU in all cases. No race conditions, deadlocks or anything else.
The issue being exploited recently is that modern CPUs allow you to make extremely accurate measurements, even from a language as crude as JavaScript. So accurate that you can detect slight differences in timing at the microarchitectural level, which can imply cache hits/misses, exceptions thrown by inaccessible memory, and whatnot.
These are the so-called side channels in security.

But I don't think that's really a case of 'complexity'... it just means that any special case you optimize for can be measured. The only way to really defeat this is to remove all special cases (remove caches and instruction prefetches, remove exceptions, because any exception handler that is triggered can be measured, etc.).
Basically we'd be back in the stone age of computing. So where do you make the tradeoff between performance, features and security?

Maybe developers should go back to focusing on good old code optimization, or better yet, lower-level languages, instead of waiting for newer speed-oriented features/tricks to compensate for the code?

When I play with the older Z80/6502/68000-based consoles, I am impressed every time by how much they did with such low-level code, low-performance hardware and little memory. Nowadays everything in the hardware<>software balance of resources is just sad.

Reply 143 of 151, by retardware

User metadata
Rank Oldbie
swaaye wrote:

It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex these things are to design.

How complex these things are, and how unattractive to look into, is also shown by Intel's skill at cheating and their success in making people believe their processors got updates, when the blobs Intel actually distributed as "Meltdown/Spectre microcode updates" did not contain updated microcode, but old microcode instead.

So, in other words, the "microcode updates" they officially released to the public are fake, at least regarding the "updates" for Apollo Lake D0 (CPUID 506C9), Arrandale (20652, 20655), Clarkdale (20652, 20655), Lynnfield (106E5), Nehalem (106A5) and Westmere (206F2). (Source)
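
For anyone who wants to check what their machine is actually running: on Linux the kernel reports the microcode revision it loaded for each logical CPU in /proc/cpuinfo. Here is a small sketch of my own that prints it; if the revision doesn't change after applying a vendor blob, the "update" did nothing:

```c
/* Print the microcode revision the Linux kernel reports per logical
 * CPU, by scanning /proc/cpuinfo (sketch, Linux-only). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    char line[256];
    int cpu = -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "processor : %d", &cpu) == 1)
            continue;                      /* remember which CPU we're on */
        if (strncmp(line, "microcode", 9) == 0) {
            char *colon = strchr(line, ':');
            if (colon)
                printf("cpu %d microcode revision:%s", cpu, colon + 1);
        }
    }
    fclose(f);
    return 0;
}
```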

Reply 144 of 151, by Nprod

User metadata
Rank Newbie

It's funny, but the fastest/most recent x86 CPU not vulnerable to Spectre/Meltdown is probably the first IDT WinChip 😁

Reply 145 of 151, by 386SX

User metadata
Rank l33t
retardware wrote:
swaaye wrote:

It's interesting that, considering the skill level of the people who design CPUs and the level of validation Intel does for its products, problems like this weren't apparent, even across many years of iteration. It tells you how complex these things are to design.

How complex these things are, and how unattractive to look into, is also shown by Intel's skill at cheating and their success in making people believe their processors got updates, when the blobs Intel actually distributed as "Meltdown/Spectre microcode updates" did not contain updated microcode, but old microcode instead.

So, in other words, the "microcode updates" they officially released to the public are fake, at least regarding the "updates" for Apollo Lake D0 (CPUID 506C9), Arrandale (20652, 20655), Clarkdale (20652, 20655), Lynnfield (106E5), Nehalem (106A5) and Westmere (206F2). (Source)

Lately I've heard about newer CPUs having "hardware mitigations" for these problems... every time I think about it, I find it incredible... hardware mitigations...

Reply 146 of 151, by 386SX

User metadata
Rank l33t
Nprod wrote:

It's funny, but the fastest/most recent x86 CPU not vulnerable to Spectre/Meltdown is probably the first IDT WinChip 😁

Maybe not. I think the Atom D2x00 for x86-64 and the Cortex-A53 for ARMv8 cores are not vulnerable either. 😉

Reply 147 of 151, by Nprod

User metadata
Rank Newbie
386SX wrote:
Nprod wrote:

It's funny, but the fastest/most recent x86 CPU not vulnerable to Spectre/Meltdown is probably the first IDT WinChip 😁

Maybe not. I think the Atom D2500 for x86-64 and the Cortex-A53 for ARMv8 cores are not vulnerable either. 😉

Cortex is ARM (I mentioned x86), and the Atoms aren't affected by Meltdown, but they are still affected by Spectre. WinChips, on the other hand, are practically 486s, but made in the late '90s...
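
To make the branch-prediction point concrete, this is the shape of the classic Spectre v1 bounds-check-bypass gadget, simplified from Kocher et al.'s public proof of concept (the array sizes here are illustrative, and the main() is just a placeholder for the attacker's training/probing loop):

```c
/* Spectre v1 bounds-check-bypass gadget, simplified from Kocher et
 * al.'s proof of concept. A trained branch predictor lets the two
 * dependent loads run speculatively even when x is out of bounds,
 * leaving a cache footprint in array2 that encodes array1[x]. A CPU
 * with no branch prediction never executes the loads ahead of the
 * bounds check, which is why this variant can't touch a WinChip. */
#include <stdint.h>
#include <stddef.h>

size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];  /* probe array: one cache line per byte value */
uint8_t temp;               /* keeps the compiler from removing the loads */

void victim_function(size_t x)
{
    if (x < array1_size)                  /* the branch that gets mispredicted */
        temp &= array2[array1[x] * 512];  /* speculative loads leave the trace */
}

int main(void)
{
    victim_function(0);  /* an attacker first trains the branch in-bounds */
    return 0;
}
```

The attacker then times accesses to array2 (with a probe like the one earlier in the thread) to recover which line was cached, and thus the secret byte.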

Reply 148 of 151, by 386SX

User metadata
Rank l33t
Nprod wrote:
386SX wrote:
Nprod wrote:

It's funny, but the fastest/most recent x86 CPU not vulnerable to Spectre/Meltdown is probably the first IDT WinChip 😁

Maybe not. I think the Atom D2500 for x86-64 and the Cortex-A53 for ARMv8 cores are not vulnerable either. 😉

Cortex is ARM (I mentioned x86), and the Atoms aren't affected by Meltdown, but they are still affected by Spectre. WinChips, on the other hand, are practically 486s, but made in the late '90s...

Are you sure about that? In Linux, at boot, both tests are skipped and there is no sign of them in the logs. I think the newer Atoms are affected and not the older ones, since those are in-order execution cores.
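
For what it's worth, on kernels from 4.15 onwards there is a firmer check than the boot logs: the kernel publishes its verdict per issue under /sys/devices/system/cpu/vulnerabilities/ ("Not affected", "Vulnerable", or the active mitigation). A small sketch of my own that dumps those files:

```c
/* Dump the kernel's per-issue vulnerability verdicts (sketch,
 * Linux 4.15+ only; on older kernels the directory doesn't exist). */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }

    struct dirent *e;
    char path[512], buf[256];
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                          /* skip "." and ".." */
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(buf, sizeof buf, f))
            printf("%-20s %s", e->d_name, buf); /* e.g. "meltdown: Not affected" */
        fclose(f);
    }
    closedir(d);
    return 0;
}
```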

Last edited by 386SX on 2019-05-18, 17:22. Edited 1 time in total.

Reply 149 of 151, by 386SX

User metadata
Rank l33t

From Wikipedia:

"On May 3, 2018, eight additional Spectre-class flaws provisionally named Spectre-NG were reported affecting Intel and possibly AMD and ARM processors. Intel reported that they were preparing new patches to mitigate these flaws.[29][30][31][32] Affected are all Core-i processors and Xeon derivates since Nehalem (2010) and Atom-based processors since 2013.[33] Intel postponed their release of microcode updates to July 10, 2018.[34][33]"

But maybe I'm wrong; I'll search for more confirmation. 😀

Reply 151 of 151, by Nprod

User metadata
Rank Newbie

I wasn't able to find any concrete evidence to either confirm or deny it; I guess almost nobody has bothered to investigate the early Atom CPUs. Swaaye is correct that Intel gave up on doing anything about pre-Sandy Bridge chips, which made plenty of Core 2 users angry. I mentioned the IDT WinChip (C6) as that's the one I'm most certain of, since it doesn't even have simple branch prediction.