So, what exactly is your claim? That Windows handles USB devices worse than contemporary OS X/Linux? But didn't you just say that it was fixed in XP (2001), and probably also in 2K (1999)? And you're comparing it to "early OS X" (2000-2001), so where's this technology gap?
If, for instance, you could show that macOS/Linux had flawless USB support since 1995, and that Microsoft only caught up in 2006, then your claim would be valid; but that is not the case.
I think we can agree that Microsoft and Intel were pretty much leading the industry toward USB -- and the whole plug-and-play paradigm, really. You definitely have to give them some leeway on this front. They were blazing a trail, and of course it wasn't perfect right out of the gate. No problem.
But the handling of drivers was steeped in a "well, it's better than DOS" kind of mentality. I think of it as design tunnel-vision, where the technical teams, the UI designers, and the vision of the UX as a whole were limited to juuuuust beyond the current reality. You're right, I'm comparing Linux and OS X at around the time when MS started to figure it out, so of course they look relatively good. Partly, this is my failing for not having much experience at all with Mac System 9 and earlier. I really don't know how well it handled USB, and from what I understand, Apple was in a dark period then: they had been trying, unsuccessfully, to escape from under the old OS paradigm for a while, until OS X finally came out and caught fire. Similarly, I started using Linux around this time as well, and Linux was still relatively new, so it lacked the momentum up to that point to be considered feature-complete enough for an honest comparison.
To sum up, Microsoft was an authority figure. They had the clout with the industry to make emerging hardware protocols work the way they (MS) thought they should, given the vision for a "just plug it in and it works" future. The hardware was there. We're STILL using it, with only minor enhancements and the inevitable leaps and bounds forward in bandwidth. The device class framework, where you (in theory) don't need special drivers at all, was VERY forward-thinking. So why was the software so inept in comparison? Was the solution they came up with really the best they could do? Others sure managed a much more user-friendly implementation. Why couldn't the world's dominant software house resolve the same technical issues?
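To make the class framework concrete, here's a minimal sketch of what it buys you -- this uses pyusb, with a few class codes from the USB spec, purely as an illustration (it assumes a libusb backend and enough permissions to enumerate devices):

    # Walk every attached USB device and print the class code each
    # interface advertises. A generic class driver needs nothing more
    # than this number to bind -- no vendor-specific driver required.
    import usb.core

    # A few well-known class codes from the USB spec:
    CLASS_NAMES = {0x03: "HID (keyboards, mice)", 0x08: "Mass storage", 0x09: "Hub"}

    for dev in usb.core.find(find_all=True):
        for cfg in dev:
            for intf in cfg:
                cls = intf.bInterfaceClass
                print("%04x:%04x interface %d: %s" % (
                    dev.idVendor, dev.idProduct, intf.bInterfaceNumber,
                    CLASS_NAMES.get(cls, "class 0x%02x" % cls)))

Any keyboard, mouse, or thumb drive that follows its class spec shows up here with the right number, which is exactly why no driver disc should ever have been needed.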
So that's just one example I find somewhat flawed (not to mention the rarity of the use case itself; it's not like anyone regularly boots the PC without a keyboard/mouse).
In my exact case, I powered on a Win ME PC that I had previously used with a PS/2 keyboard and mouse, and that now had a Logitech USB wireless receiver attached (which appears to the OS as standard HID keyboard and mouse devices). So that is quite a reasonable example. However, it has also happened that something between the PC, KVM, and peripherals just didn't jibe and the keyboard wasn't detected. That is probably not so common. At any rate, these are examples only to illustrate the inherent flaw in driver handling - particularly for critical components. The vision for years had been to move people away from discrete serial and parallel interfaces to one simple plug-and-play interface. And they botched the OS's handling of the out-of-the-box experience with USB peripherals.
I'm not going to respond to most of your post, because it's things I've seen dozens of times before - cherry-picking specific points to demonstrate advantages of one OS over another, based on the image one is trying to create.
"Give me examples." "You're cherry-picking." OK.
If you take away practical considerations like hardware support, or the ability to run current software, would a user actually be better off using Windows 10 than, say, Windows 95? I don't really think so. If anything, Windows 95 is probably a little more straightforward.
These 'practical considerations' you are willing to take away for the sake of the discussion are not really easy to separate, you know; software and hardware capabilities are tied into the OS in many ways, and some of the UI changes are there to support workflows that simply did not exist in the past, because there was no software/hardware to support them.
Take, for instance, the way Windows 10 has a panel to conveniently manage all your communication devices - WiFi, BT, cellular, NFC, Mobile Hotspot, etc. This does not exist in the early Windows UI, because the technologies did not exist; and even if you magically added all the hardware support for this to Win95, you'd still find its native UI much more restrictive than what Win10 currently offers for this task. Just one example off the top of my head.
I don't think it's that difficult to separate at all. USB mass storage support was grafted onto Win 98 by a third party. Bluetooth was grafted onto XP. Linux has been a rolling update since its inception. It doesn't look a whole lot different than it ever did, but it has seen sea changes in hardware and been ported to all kinds of platforms. As I said before, OS X hasn't aged a day in 20 years, despite seeing all the same changes Windows has. There's nothing preventing (e.g.) Windows 95 from being adapted to modern use cases at all -- and in fact, that's more or less what happened until they cut over to the NT codebase.
So, yeah, you can absolutely separate the kernel and driver layer from the UI -- and that's been my point throughout this entire thread, really. We keep buying new OSes that are rarely more than a driver update pack (which is valuable) and a re-skin of arguable benefit. Why can't THOSE things be decoupled? Major under-the-hood updates; incremental look-and-feel updates -- and only where it makes sense, makes things genuinely better, or at least offers some subjective aesthetic benefit. Rather than paying somebody to randomly rearrange the furniture and tell us "this is how it is now -- like it or not."
As I mentioned earlier, I find the ribbon 'good design'. Other software vendors would not have adopted it if it was bad. Can you explain why you think it's bad, other than it was different from what you were used to?
This is a very well-articulated list of problems with the ribbon design. I used it for a few years and never got used to it. Things I rarely used (say, Outlook signatures) took forever to find, whereas previously they were more or less placed in a logical menu somewhere. Ultimately, the ribbon tried to marry the forward presence of oft-used shortcuts with having the entire kitchen sink available to you. That's fundamentally flawed design. You can't remove clutter and have all options available simultaneously - they're antithetical goals.
It was obviously a move in preparation for touch UI, which is the other fundamental failure -- thinking you can morph existing applications into touch-friendly ones. Countless flops should have made it abundantly clear that a touch evolution was not in the cards. A touch revolution worked fine, though. (Again, see: iPad.) One size does not fit all. If you haven't heard the 99% Invisible podcast before, there's an episode on the fallacy of "Average" that dovetails nicely with this argument. (Link: https://99percentinvisible.org/episode/on-average) Be a desktop app, or be a touch app -- being both results in a sub-par experience for everyone.
About the screen space on low resolution screens, this is true to an extent, but resolutions already started to go up at that point [...], and in any case, the ribbon can be hidden, so that it appears exactly like the old menu bar.
Resolutions have gone up, yes, but mostly in DPI. The actual perceivable size of a UI element has to stay somewhat constant, though, otherwise your interaction with the computer turns into an exercise in pixel-hunting. The ribbon consumes a lot of screen real estate, and the only fix for that is to have more screen (not necessarily more resolution). That's all well and good, except where size is a constraint -- like a laptop, or a tablet, or a phone, or even just an existing monitor, or the ergonomics of a workstation.
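The arithmetic is simple enough to sketch (the heights here are my own rough assumptions, not measurements): scale the pixels up with the DPI, as you must to keep targets clickable, and the ribbon's physical footprint -- and therefore the share of the panel it eats -- never changes:

    # Assumed: the ribbon is ~145 px tall at 96 DPI, and a 13" laptop
    # panel is ~178 mm tall. Scaling pixel counts with DPI leaves the
    # physical footprint untouched.
    PANEL_MM = 178
    for dpi in (96, 144, 192):
        ribbon_px = 145 * (dpi / 96)          # DPI-scaled ribbon height
        ribbon_mm = ribbon_px / dpi * 25.4    # ...converted to millimetres
        print("%d DPI: ribbon is %.1f mm -> %.0f%% of the panel"
              % (dpi, ribbon_mm, 100 * ribbon_mm / PANEL_MM))

The scale factor cancels out: every line prints the same ~38 mm, about 22% of the panel. More pixels only help if they come with more glass.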
The old toolbars we used to use presented the features we needed most often. Everything else was in the menus, in logical groups. Available if you needed it, hidden if you didn't. What sense does it make to reclaim screen real estate by hiding ALL features -- those used often, and those used rarely -- just to fix the problem created by trying in vain to move every option to the forefront? It's backwards logic.
Some others DID follow in the ribbon's footsteps, but not many. Browser UIs have actually gone more minimal. We discovered swipes and long-presses and 3D Touch on mobile devices. Somebody invented the hamburger menu. About the only place I see ribbon-inspired UIs is in Microsoft applications -- and really only legacy ones at that -- and those born and baptized in the Microsoft ecosystem.
It could well be my own bias, but the honest reality I perceive is that the industry kind of decided en masse it wasn't the next big thing in UI design. Users vocally hated it, then learned to tolerate it for lack of options, and then either moved on with their lives or used something else. I can't agree that acceptance is the same thing as truly embracing it.
While I agree Win10 is a step (or two) backward for the power user, being a snob about it makes you as bad as the Linux circlejerk.
Who benefits from having two distinct sets of control panels? Who benefits from a radical disorganization and cluttering of the Start menu? This isn't snobbery, it's common sense.
Without users, there's no point in designing software. Without bearing in mind the needs of the user, software is designed solely for the whims of the developer. It's actually the height of arrogance to say that you, as a developer, know more about how users should use their computers than the users themselves. I mean, I do understand that sometimes people don't know what they really want, or want something that can't be done in reality, but that is not the case here. It's not snobbery to say "I don't like having control wrested away from me, increasingly with every release." There are ways to reduce supportability concerns without locking users out of their own computers. If you have to convince your users that something is better... perhaps it really isn't?