dr_st wrote:Still, even cases that are 'driver fault' may not always be entirely IHV fault. It's one thing when sloppy coding introduced a sleeping function at high IRQ level causing a bluescreen. It's another thing when you have a subtle performance loss due to sub-optimal use of a Windows API, due to incorrect assumptions, due to inadequate documentation (which still sometimes happens with Microsoft, especially with cutting edge stuff), etc.
I think everything is still the IHV's fault for one simple reason:
They actually brought the product to market, which means it went through QA. Either their QA failed to identify the issues, or they launched with 'known issues' anyway.
An IHV making sub-optimal use of a Windows API is clearly not MS' fault. Neither are incorrect assumptions.
Inadequate documentation is somewhat debatable, but again, only the IHV is responsible for actually putting the product out there, with issues.
If I find inadequate documentation, and I can't get my stuff to work, I first contact Microsoft, and then they help me to fix the code and/or update the documentation.
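As an aside, the 'sleeping function at high IRQ level' bug class from the quote is easy to model. Here is a toy user-mode sketch of the rule (my own simplified stand-ins, not real WDM code or real DDK values); the real kernel doesn't return an error in this situation, it bugchecks with IRQL_NOT_LESS_OR_EQUAL:

```c
/* Toy model of Windows IRQLs -- simplified stand-ins, not the real DDK values. */
typedef enum { PASSIVE_LEVEL = 0, APC_LEVEL = 1, DISPATCH_LEVEL = 2 } irql_t;

static irql_t current_irql = PASSIVE_LEVEL;

void raise_irql(irql_t level) { current_irql = level; }

/* A wait that may sleep is only legal below DISPATCH_LEVEL. In kernel mode
 * this mistake doesn't return an error -- it bluescreens the machine. */
int wait_for_event(void) {
    if (current_irql >= DISPATCH_LEVEL)
        return -1;   /* models the IRQL_NOT_LESS_OR_EQUAL bugcheck */
    return 0;        /* wait satisfied */
}
```

The point is that the rule is simple, documented, and entirely on the driver writer to respect.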
dr_st wrote:Since a lot of the driver frameworks are actually defined and developed as joint efforts between MS and IHVs, a lot of the time the "blame" is also mutual.
I disagree with that. DirectX seems to be the main case where that is true, mainly because graphics is such a complicated field, and the hardware development is still in full motion.
Most other driver frameworks and APIs are the work of MS alone.
dr_st wrote:but I think if you go to check who the maintainers actually are, they will be employees of the respective IHV (e.g., Intel).
Shows how little you know about the linux world...
Intel is the exception to the rule. They are the one IHV that has nothing to gain from keeping their graphics drivers closed-source (they are the lowest common denominator, there's nothing in their drivers and hardware that NV and AMD don't already know).
Which is why Intel is the one IHV that doesn't have any kind of closed-source drivers. Instead they employ a team of developers to develop their graphics drivers as open source. That is not exactly a secret either.
AMD merely has a 'token' open source driver, but it is limited in supported hardware, features and performance compared to their closed-source drivers. So in practice you'll want to use the closed-source drivers.
Intel is also interested in linux support for other reasons: the server/HPC market. The better their chips are supported on linux, the more value their products have over the competition. So Intel also develops other things for linux, such as their compiler. Not many other IHVs are in that position (AMD is, technically, but they don't have the resources to really support linux, let alone develop their own compiler).
Looking outside the graphics world, there's not a lot of IHVs that offer linux drivers at all. Especially with sound cards and such.
A lot is supported only with community drivers. In many cases that means that your fancy hardware is merely running in 'legacy mode'. There's a difference between 'supported' and 'supported' in that sense.
dr_st wrote:Again, my experience is limited and does not cover all the cases. But all situations I encountered was as I described, so I think it's more likely to be the norm, rather than the exception.
On my blog there's plenty of examples of linux-users complaining about the IHV's unwillingness to even support independent developers that are trying to write drivers for their hardware. So that seems to be the norm.
dr_st wrote:If you think that all the people who worked on Microsoft's core technologies are some sort of geniuses, or even that all of them are very experienced people, you would be wrong.
Now *you* are overglorifying; I never used the word 'genius' or anything remotely like it.
dr_st wrote:I don't know if you have first-hand experience with big corporations, but I do, and I can tell you - that in most cases, most of the actual coding is done by very junior staff.
And why would that matter?
The software architecture, the design of interfaces and so on, is not done by that junior staff. The whole point is to set up a framework in which it is easy to develop and difficult to make mistakes.
That's what the skilled and experienced people at Microsoft do (and what I do at my company). We prepare the work so that less experienced people can execute it.
To use a car-analogy... we design the car and production process, so that relatively unskilled workers can put the cars together with a very low failure rate, on a conveyor belt.
Even so, the 'junior' coders that Microsoft hires are still the 'pick of the litter' fresh out of university.
dr_st wrote:but there is a lot of plainly average talent working there, even in big names like Microsoft and RedHat and Intel and AMD.
As I said, that doesn't matter. They do have great people in all the right places, and they have a good process for development and QA, so that code quality remains well above average.
dr_st wrote:But even that won't catch everything.
Nobody claimed otherwise, and clearly the list of Windows updates released every month shows that plenty still slips through the cracks.
dr_st wrote:The few hardcore high-caliber experts cannot read and understand every line of code and every function in a codebase with millions of lines.
Wow, you think at that level? Just, wow.
dr_st wrote:And just because at one point 20-something years ago someone hired 'the best of the best', do you think all of them still work there? None left? No knowledge was ever lost? It's naive to think so.
Is that even relevant? I am talking about the people who set up Windows NT, the Windows API, the development process etc, and many related things which still live on in Windows today.
Again, wow... you think at that level?
dr_st wrote:because historically the OSes were developed with a different set of goals in mind.
I think it's not so much about the goals, but rather about the fact that linux was created by a student with no prior experience in developing kernels, no prior work experience, and at best only theoretical knowledge of how to engineer software and set up a development and QA process.
Basically, linux was and is a hackjob, as was UNIX before it. It's a miracle that it works as well as it does. But since there is no method to the madness, you always have to patch source code and hack around, every time you want to change or add things.
You want examples of just how dysfunctional it can get? Well, I had a home server: most software built from source, a custom-compiled kernel with only the drivers I needed compiled in, etc...
Motherboard broke down... I replaced it with a similar motherboard, same CPU and everything... However, it had a slightly different variation of Realtek NIC.
Guess what? It was not compatible with the old one. So my custom-compiled kernel didn't work.
Fine... I also couldn't load other Realtek drivers, because I had one compiled into my kernel, which could not be unloaded.
So I first had to switch back to a generic kernel to even be able to try other NIC drivers.
Then I found out that there was no Realtek driver for my specific variation.
So I had to grab the sources and build it myself... But I couldn't. Between the time that I installed my OS, and the driver for this NIC was written, they had completely restructured some of the driver 'model' (and I use the term extremely loosely). As a result, there was no way to get this source to compile against my kernel. Doing a diff against the Realtek driver that I did have, wasn't very useful either, because there were so many differences, that I couldn't exactly do a quick patch to make it work.
So, I would have to upgrade my entire kernel and rebuild any related code, basically an entire OS upgrade really, just to get the new NIC working.
Can you imagine something like that happening in Windows? I can't.
This is just a huge failure in software engineering 101: stable interfaces.
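For contrast, the standard trick for keeping an interface stable while still extending it is the size-versioned struct, which Windows uses all over the place (the cbSize field in many Win32 structs). A minimal sketch of the pattern, with invented names:

```c
#include <stddef.h>

/* Sketch of the size-versioned struct pattern (cf. cbSize in many Win32
 * structs): the interface can grow new fields without breaking callers
 * built against the old layout. All names here are invented. */
typedef struct {
    unsigned cb_size;   /* caller sets this to sizeof(the struct it knows) */
    int      feature_a; /* v1 field */
    int      feature_b; /* v2 field -- old callers don't have it */
} driver_caps;

/* The implementation only touches fields the caller's struct actually has. */
int query_caps(driver_caps *caps) {
    if (caps == NULL || caps->cb_size < offsetof(driver_caps, feature_b))
        return -1;                       /* not even a valid v1 struct */
    caps->feature_a = 1;
    if (caps->cb_size >= sizeof(driver_caps))
        caps->feature_b = 2;             /* only for v2-aware callers */
    return 0;
}
```

With this, a driver built against the v1 layout keeps working against a v2 implementation, which is exactly what my NIC situation lacked.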
dr_st wrote:They have their reasons to do so, and the instability is the price they pay. It's not because Linux is inherently bad, it is because every time you bend the system to make it work in ways it was not designed to, you are likely to encounter certain issues.
1) They have to 'bend the system to make it work in ways it was not designed to' because the design is inept. nVidia's Optimus is a fine example of that (and a fine example of how Linus Torvalds is the most ignorant kernel 'developer' (and I use the term loosely) in the world). Windows 7 got support for Optimus: a way to switch between two GPUs on-the-fly, more specifically a low-power IGP and a discrete high-end GPU.
No support for linux (or Windows XP and Vista, but somehow nobody cared or noticed). Torvalds calls out NV and gives them the finger for not giving linux this functionality.
Reality: Microsoft updated the driver model in Windows 7 so that there is a universal API to share buffers between multiple display devices, even when they run on different drivers from different vendors (this is what interfaces/abstraction layers do). Because of these new features, NV was able to implement Optimus: they could communicate with another GPU, and move workloads from one GPU to the other.
Linux however has no such interface. So there is no way to implement something like Optimus. NV should be giving Linus the finger for not developing a proper driver model with features that modern users expect. Linus is an incompetent fool, who doesn't even have a clue about what Optimus is, or what the Windows driver model does to enable this.
NV in fact offered some code for some interfaces to enable this, but the code was rejected for petty political reasons.
NV never needs to 'bend the system' on the Windows side.
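The 'universal API' point is just classic abstraction. A toy sketch of the idea (all names invented; this resembles nothing in the real WDDM code): the OS layer talks to a vendor-neutral function table, so a frame can be presented on either adapter without either driver knowing the other's internals:

```c
/* Toy vendor-neutral "display adapter" interface: the OS layer only sees
 * the function pointer, so workloads can move between an IGP driver and
 * a discrete-GPU driver. All names invented; nothing like real WDDM code. */
typedef struct adapter {
    int vendor_id;                                        /* who handled it */
    int (*present)(struct adapter *self, const int *frame, int len);
} adapter;

static int igp_present(adapter *self, const int *frame, int len) {
    (void)frame; (void)len;
    return self->vendor_id;        /* pretend the IGP scanned the frame out */
}
static int dgpu_present(adapter *self, const int *frame, int len) {
    (void)frame; (void)len;
    return self->vendor_id;        /* pretend the dGPU rendered and copied */
}

/* OS-level switch: same buffer, either adapter, on the fly. */
int present_on(adapter *a, const int *frame, int len) {
    return a->present(a, frame, len);
}
```

An Optimus-style setup is then just the OS picking which adapter gets the workload; neither driver has to be 'bent' to know about the other.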
2) Open source is no excuse for not having stable interfaces. Not having stable interfaces is inherently bad.
dr_st wrote:Yes, you can make. In practice, how many GPUs have the same driver package supporting everything from Vista to W10?
All, until recently.
That is, Vista support was dropped some time ago, so drivers were mostly Windows 7-10 from then on. And after recent updates to Windows 10, some vendors now ship separate Windows 7 and Windows 10 packages.
But yes, look here for example: http://www.nvidia.com/content/DriverDownload- … us&type=GeForce
The filename says it all: Win8, Win7, Vista in a single driver here.
I believe the Win8 driver would work in Win10, but their official driver is separate here. I think at one point they may all have been a single package.
dr_st wrote:Similarly, the problems you described with Linux GPU drivers are also due to the decisions made by the IHVs, not because of inherent Linux flaws.
See above, clearly inherent linux flaws.
dr_st wrote:Core stability does not mean much, in itself.
It did, at one point. Windows 9x wasn't exactly stable, and you would often have BSODs and lose work.
Windows NT, however, has always been rock-solid, but it never quite got credit for that, because of the 9x legacy.
The linux community loved to point out the 'instability' of Windows, mainly projecting 9x issues on NT-based versions.
dr_st wrote:An OS is as only as good as its interfaces.
To an end-user.
For me as a developer, I care more about the programming interfaces, kernel features and performance and such.
I mostly develop software where the user doesn't interact with anything other than my application. And my application interacts with various hardware and other systems and such.
So the 'usability' of the OS is irrelevant to me and my users. The usability is only relevant for my applications. Aside from that, I need a solid, stable and reliable basis to build my functionality. And that's what the kernel and its drivers should provide. I need low latency, high scalability, that sort of thing.
I have sometimes created software where people would usually pick a *NIX flavour as their go-to solution... But that's cargo cult. It's "what we do" in the industry.
However, unlike most people in that industry, I am not limited to *NIX; I also understand Windows in great detail. So I can sometimes build solutions based on Windows that they didn't even know were possible, and the Windows version performs better than a *NIX solution would, because I know about features that Windows offers that *NIX doesn't, and I know how to use them. (Of course, if the shoe is on the other foot, I will also pick *NIX over Windows. However, the arguments for *NIX are usually more along the lines of 'the licensing costs are cheaper, so let's do that', and rarely about technical advantages.)