VOGONS


Is Vista now Retro


Reply 140 of 249, by Scali

Rank: l33t
dr_st wrote:

Just to point out that you cannot always absolve Microsoft of all blame if it's "driver-related".

Yes, but that's purely theoretical, 99% of the time it's the driver's fault.
And in this case it obviously is, since most systems work fine.

dr_st wrote:

"Linux" writes all the drivers? Who is "Linux"? Either I didn't get your statement, or you don't understand how things work "in the Linux world".

Ah, we have a linux nerd! This is going to be fun! Dunning-Kruger everywhere.
I never said "Linux" writes all the drivers. What I meant should be obvious: the IHVs do not write most drivers; the 'linux community' (or whatever you want to call it: a combination of hobbyists and linux-related companies such as Red Hat, Canonical, IBM, etc.) writes most of the drivers, often with little or no support from the IHVs at all (of course there are exceptions to every rule).

dr_st wrote:

I find it curious how you live in a world where IHVs hire shitty engineers that make broken hardware and crappy coders that write bad drivers, but obviously on the OS side all the APIs are "nice" and "relatively simple", and obviously Microsoft's own coders never have any bugs in their code. Or if they do - well, it's because the stupid IHV engineer didn't use it correctly, right? Never mind that documentation is often obscure/unavailable until it's too late.

That's not at all what I said.
Firstly, not all IHVs hire shitty engineers, and they do not all make broken hardware and write crappy drivers.
But we are specifically talking about AMD here, and they have quite the track-record. Heck, only recently they once again failed with Ryzen. And let's not forget, we are talking about early Vista here, so around 2006. They may have cleaned up their chipset and video drivers over the past 10 years, but back then they weren't quite up to today's standards yet.

Secondly, I never said that Microsoft's coders never have bugs in their code.
However, it is a fact that the core of Microsoft's OS (as in the kernel and driver model etc., as per the context of this discussion) is developed by a team of very skilled and experienced developers, probably the best in the industry (look at Windows NT's history: Microsoft quite literally hired some of the best people from the world of UNIX/VMS).
Likewise, the basis of the graphical subsystem, as in both the driver layer and the DirectX API, is developed by some of the best people in the industry.
They're still human, nobody is perfect, but the process of development and testing they use rules out a lot of problems in the first place, and the level of skill and experience in their team does the rest.
The standard of their work is indeed among the highest in the industry.
(Heck, if you look at linux, they don't even *have* interfaces for a lot of the things Microsoft has. You just 'hack it' because you have the source of the kernel and drivers, and you can just poke around everywhere... of course in the process you routinely break things, which leads to eg every binary release of graphics drivers being broken after every kernel update. That is not good software engineering. This does not happen in the Windows-world. In the Windows world, you can make a single driver package that can work in Vista, Windows 7, 8, 8.1 and 10. Because the driver interface was first defined by Vista, and was designed to be extensible. Newer OSes simply use an updated version of the interface, allowing you to make backward and forward compatible drivers. That is solid software engineering. Put that in your linux pipe and smoke it).

Most of the gripes people have with Windows are not about the core of the OS, but rather about user interfaces, shells, file managers, web browsers and that sort of thing, which are developed by entirely different teams. Stability is rarely an issue with Windows, because the basis is solid.

dr_st wrote:

I'd say you over-glorify Microsoft just a tad.

I don't, but you as a linux-nerd would be offended by anything that isn't super-negative about Microsoft anyway, so you are currently experiencing a severe case of cognitive dissonance, and cannot suppress the urge to act on it.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 141 of 249, by appiah4

Rank: l33t++

Oh ffs, this has devolved into name calling? I mean, the condescension, hubris and anger management issues were one thing, but "linux-nerd"? Shall we bring back M$ too? You would like that, wouldn't you; it would help your arguments so far, which can summarily be described as pointing fingers everywhere but where you don't want to, and diluting the matter with all kinds of straw men and semantic arguments. What year is this again, by the way, 1998? This whole thread has been shameful imo.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 142 of 249, by spiroyster

Rank: Oldbie

Throttling due to power management is an implementation issue; ergo, it's the hardware vendor's fault, not the software's. If Aero were as unresponsive and laggy in all other cases, Aero might be to blame, but in this case it is not.

The software developer will no doubt do some digging, but once they come across the issue and have verified it with the hardware vendor, what else can be done? If there is an easy fix, another code path could be taken, provided the program can identify that it's running on a problematic platform. If this is too much effort, I guarantee the next action will be issuing a memorandum to customers in a compatibility list or something.

I've had this problem with Aero drag-and-drop issues on ATi/AMD chipsets (see my first post in this thread), so I speak from experience. The amount of work on our end just to isolate the issue, then add checks to verify the execution platform (if it is a problematic one), plus a special code path to deal with it and stop it happening, was considerable. o.0 More code paths, more complexity, more room for problems, which will only come back to bite the software developer on the arse. Why make life difficult for yourself over something that is not your fault, when at the end of the day it only manifests on ATi chipsets? The pisser was the fact that a lot of our customers (we didn't realise it at the time) had ATi chipsets in the laptops they used. The solution: disable Aero if you want correct redraws/animations for drag-drop functionality on ATi chipsets. Not really very elegant, and embarrassing for us, because it looked like this was our fault and we couldn't be arsed to fix it. Needless to say, from that point on we suggested getting Intel or Nvidia chipsets to get the full experience we intended our customers to have (and designed in accordance with the standards). From my point of view, ATi failed to deliver an implementation adhering to the standards it said it would. It was fixed in the end, but the reputation damage was done.

At least Aero still works with the throttling. o.0

At the end of the day, there is a standard and an implementation. If the implementation doesn't work in some cases, but does in most other cases, how is that the fault of the standard?

MS do have nice APIs to work with (especially in comparison to other platforms... OSX is also easy), which (while they sometimes do have problems) are soooo mature you have to dig deep to come across standards issues. You usually come across them (if ever) when new stuff comes out; however, they tend to be ironed out with the next update. You could have the best OS, but with no software it becomes effectively useless. Likewise, you could have the shittest OS in the world, but if it has lots of software, because it provides an easy platform (APIs) for third parties to write entire suites of software for, it becomes popular. There is a reason Windoze has the market share it does o.0.

Reply 143 of 249, by Falcosoft

Rank: Oldbie

Hi,
First a correction: the tested HW uses an ATI RS482M/Xpress 1150, not an RS690/Xpress 1200.
Instead of taking part in the who-is-to-blame debate, I would like to present some data that may help us draw conclusions beyond this specific HW.
I have made 2 pairs of pictures. The left part is always the maximum performance state and the right the maximum power saving state.
Max. performance means: 2000 MHz CPU, DDR2-800 memory, 400 MHz GPU.
Max. power saving means: 800 MHz CPU, DDR2-320 memory, 100 MHz GPU.

Attachments: wei.jpg, dwm.jpg

As can be seen, Aero performance is one of the bigger losers; normal 3D performance is not much affected. After running a full assessment you can find a file with interesting/useful metrics in:
C:\Windows\Performance\WinSAT\DataStore\’date+time’ DWM.Assessment (Recent).WinSAT.xml.
The interesting metric is DWMFps. It seems Aero performance is highly dependent on video memory bandwidth; the FPS value scales linearly with it.
Let’s look at this FPS value: it’s nearly 38 FPS in max. performance mode (score: 3.3) and nearly 12(!) FPS in max. power saving mode (score: 2.0). Although what feels fluent is relative, I think we can agree that 12 FPS is definitely not fluent. According to these results we can safely conclude that below an Aero WEI score of 3.0 you are below 30 FPS, and in the low 2.x range you are below 20 FPS. I also have a hypothesis that, since Aero uses vsync-ed Blts, you cannot really experience e.g. 28 FPS: it will rather be 20. (Please correct me if I’m wrong in this.)
If that is true, then if you have a 60 Hz display and you cannot reach at least 30 (DWM)FPS, you will have a problem with the responsiveness of Aero.

If my vsync theory is not true, the situation can still be problematic in the low 20 FPS range.
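The quantization my vsync hypothesis implies can be sketched in a few lines. This is a minimal model (function name is mine), assuming a fixed 60 Hz refresh and a compositor that blocks on vblank for every flip:

```python
import math

def vsynced_fps(raw_fps, refresh_hz=60):
    """Effective presented frame rate when every flip waits for vblank.

    A frame that takes longer than one refresh interval is held until
    the next vblank, so the presented rate snaps down to refresh_hz / n
    for some integer n >= 1.
    """
    if raw_fps >= refresh_hz:
        return float(refresh_hz)
    intervals = math.ceil(refresh_hz / raw_fps)  # vblanks spanned per frame
    return refresh_hz / intervals

print(vsynced_fps(28))  # 20.0 -- rendering at 28 FPS presents at only 20
print(vsynced_fps(38))  # 30.0
print(vsynced_fps(12))  # 12.0
```

So under this model even the max. performance mode's 38 FPS would present at 30, and anything below 30 raw FPS drops to 20 or worse.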
Turning to the point: it is not only AMD hardware (in max. power saving mode) that produced an Aero WEI score in the low 2.x range; Intel hardware was affected too. When I first posted in this topic I mentioned the Intel equivalent of this AMD notebook, which I also used and had Aero performance problems with (GMA 950). I can still directly demonstrate the problem with AMD, but unfortunately proving the problem on this Intel HW can only be done indirectly (I no longer have one). Namely, you can find many tests on the net where the GMA 950 scored in the low 2.x range (so it could only have been in the 10-20 FPS range).
E.g.: http://www.tomshardware.co.uk/notebook-laptop … w-31408-14.html

Also, in the test above Scali’s 965/GMA X3100 is faster (at least under Aero) than the 950.
And also mentioned here:
https://www.notebookcheck.net/Intel-Graphics- … 100.2176.0.html
Bye.

Ps: So far I have not experienced Intel’s video drivers as ‘premium’ quality (or factually any better than AMD's), be it by today's or yesterday's standard. 😀

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 144 of 249, by Scali

Rank: l33t
Falcosoft wrote:

As can be seen Aero performance is one of the bigger losers. Normal 3D performance is not much affected.

Which can probably be explained by the driver treating 3D workloads as a special case, and not applying as much throttling as in 2D/Aero-mode.

Falcosoft wrote:

The interesting metric is DWMFps. It seems Aero performance is highly video memory bandwidth dependent. The FPS value is linear with it.

Which is no surprise; as I said before, the shaders are basically 'free' given the pipelined design of a GPU. So Aero basically does 3 things:
1) Z-buffering
2) Texture-mapping
3) Applying a shader for e.g. the 'glass' effect

1) and 2) are dependent on memory bandwidth, whereas 3) is a very simple shader that should be faster to execute than the texture mapping, so it will be 'free' on most hardware.

Falcosoft wrote:

Ps: So far I have not experienced Intel’s video drivers as ‘premium’ quality (or factually any better than AMD's), be it by today's or yesterday's standard. 😀

In my experience, the GM965 was a bit of a 'turning point': The drivers before that time were quite buggy. The GM965 wasn't that great initially either... what's worse, even though the hardware was DX10-capable, when I bought the laptop, the drivers only supported DX9. It wasn't until much later that Intel finally released their first DX10-capable driver.
However, Intel's driver quality gradually picked up from there.

Another factor, which you may have noticed in the blog I posted earlier... Intel's hardware is often the least feature-rich on the market (or at least it was; ironically enough, their current DX12 GPUs actually have both NV and AMD beat on the feature front). So my 3D engine would make assumptions about things like pixel formats which normally every GPU would support. But then one Intel GPU only supported format A, while another Intel GPU only supported format B. So technically my 3D engine was the problem, since it couldn't handle different pixel formats. Yet it was only a problem on certain Intel GPUs, because every other vendor just supported these formats. Normally Intel would get the blame.
I have found other bugs in my 3D engine that were only ever triggered by Intel chips. In all cases, Intel was actually 'correct' and doing things by the DX specifications. In fact, because of some Intel drivers I found out that some code that actually worked on SM2.0 hardware such as the Radeon 9600 should never have worked in the first place! I was using certain features that were only supported on SM3.0. It just 'happened' to work on the Radeon 9600, because it was not strictly SM2.0 but more like 'SM2.5', and the driver did not enforce SM2.0 restrictions.

Since then, I decided to regularly test my code on Intel GPUs, because it is valuable QA for my code.
I wouldn't be surprised if most games that have problems on Intel GPUs actually have bugs of a similar nature to what my engine had: making assumptions about DX because things just work on NV and AMD hardware, and then not bothering to check on Intel hardware or to read the small print.
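The defensive pattern is simple: query the capability instead of assuming it. A minimal sketch of that idea (the format names are real D3D9 format names, but the capability sets here are hypothetical stand-ins for per-format IDirect3D9::CheckDeviceFormat queries):

```python
def pick_format(preferred, supported):
    """Return the first pixel format the device actually supports,
    instead of assuming your favourite one exists (the bug described
    above). In D3D9 the capability check would be a per-format
    IDirect3D9::CheckDeviceFormat call; 'supported' stands in for it.
    """
    for fmt in preferred:
        if fmt in supported:
            return fmt
    raise RuntimeError("no supported render format found")

# Hypothetical capability sets: two Intel parts supporting different subsets
intel_a = {"X8R8G8B8"}
intel_b = {"A8R8G8B8"}
wishlist = ["A8R8G8B8", "X8R8G8B8"]  # engine's preference, best first

print(pick_format(wishlist, intel_a))  # X8R8G8B8
print(pick_format(wishlist, intel_b))  # A8R8G8B8
```

On NV and AMD hardware both formats happen to be supported, so the assumption never gets tested; on the hypothetical Intel parts only the query saves you.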

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 145 of 249, by Falcosoft

Rank: Oldbie

Hi,

Which can probably be explained by the driver treating 3D workloads as a special case, and not applying as much throttling as in 2D/Aero-mode.

I forgot to mention that the hardware was forced into this lowest performance mode during the whole assessment test. So the throttling was constant.
I think the main reason is (as I have mentioned before) that the real bottleneck here is the Turion 64's memory speed being linked to the CPU core speed (and thus the drastic video memory bandwidth decrease that Aero seems to be very sensitive to).
I hope you can agree with me that, if we focus on notebooks of that time, integrated video solutions without dedicated VRAM and/or non-Vista-friendly power saving methods could cause unexpected(?) performance problems with Aero.

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 146 of 249, by Scali

Rank: l33t
Falcosoft wrote:

I forgot to mention that the hardware was forced into this lowest performance mode during the whole assessment test. So the throttling was constant.
I think the main reason is (as I have mentioned before) that the real bottleneck here is the Turion 64's memory speed being linked to the CPU core speed (and thus the drastic video memory bandwidth decrease that Aero seems to be very sensitive to).

Well if we can assume that the clockspeeds were the same in both cases, then the explanation would be that the Aero test has such a light shader load, that it is *only* affected by the memory speed, where the 3D test probably is bottlenecked somewhat by other components in the non-throttled situation, so it isn't using all available memory speed, and as such the drop is less significant when throttling.

Falcosoft wrote:

I hope you can agree with me that, if we focus on notebooks of that time, integrated video solutions without dedicated VRAM and/or non-Vista-friendly power saving methods could cause unexpected(?) performance problems with Aero.

Sure, that was never the point of debate.
My points are rather:
1) There are various integrated solutions without dedicated VRAM without performance problems with Aero
2) There are various systems with power saving methods that do not cause performance problems with Aero
3) The issue is not an intrinsic problem of Vista or Aero, but rather of the implementation in the hardware/drivers of the affected systems

Initially I was just somewhat surprised that it was even possible to have performance problems with Aero, given that I had a low-end laptop with Intel GPU at the time, and its performance was just as good as my desktop with GeForce 8800. Even when I ran it in power saving mode on the battery, I never encountered issues.
Since you don't need a very fancy GPU to run Aero, I was surprised that there were machines that, when throttled, would not be fast enough.

Because, let's do the math...
Say you have a 1280x720 screen (that's what my GM965 laptop has, common resolution for low-end laptops in the day), that is 1280*720 = 921600 pixels.
Each pixel takes 4 bytes in the z-buffer, and 4 bytes for 32-bit XRGB.
Worst case, 100% overdraw of the desktop, would mean you read each z-buffer value and overwrite each z-buffer value, and also read and write each pixel (for blending effects).
So you'd have 921600 *(4+4)*2 = 14.7 MB of bandwidth for updating a screen.
Say you want to update at 60 Hz, that is 60*14.7 = ~885 MB/s of bandwidth required.

That is not a whole lot, on 2006-era hardware.
DDR2-320 should give us 320*8 = 2.56 GB/s.
DDR2-800 would give 800*8 = 6.4 GB/s.

So even at full throttle, Aero should only require a fraction of the bandwidth to run at 60 Hz in 1280x720, about 35%. At full speed, the impact of Aero is just 14% of the total bandwidth available.

So on paper the throttle settings should have worked, at least for Aero. In practice, there's probably some huge inefficiency somewhere.

Edit: This is the laptop I have: https://uk.hardware.info/product/16804/fujits … /specifications
Except mine has a 1.5 GHz CPU, not 1.7 GHz.
But yea, DDR2-667, so slower than the AMD machine. Not sure how low it throttles, but wouldn't be surprised if it also goes to 320 or lower.
I see the resolution is actually 1280x800, not 1280x720, but can't be bothered to redo the math above, won't change much 😀
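The arithmetic above can be checked in a few lines. This is just my own estimate restated as code; the (4+4)*2 bytes-per-pixel worst case and the 8-byte DDR2 bus width are the assumptions stated above:

```python
def aero_frame_mb(width, height):
    # Worst case per pixel: 4-byte z value + 4-byte 32-bit XRGB value,
    # each read once and written once -> (4+4)*2 = 16 bytes
    return width * height * (4 + 4) * 2 / 1e6

def required_mb_s(width, height, refresh_hz=60):
    # Bandwidth needed to fully recompose the screen every refresh
    return refresh_hz * aero_frame_mb(width, height)

def ddr2_mb_s(rate_mt_s):
    # 64-bit (8-byte) bus, one 8-byte transfer per megatransfer
    return rate_mt_s * 8

req = required_mb_s(1280, 720)
print(round(req))                          # 885 MB/s needed at 60 Hz
print(round(100 * req / ddr2_mb_s(320)))   # ~35% of throttled bandwidth
print(round(100 * req / ddr2_mb_s(800)))   # ~14% of full-speed bandwidth
print(round(required_mb_s(1280, 800)))     # 983 -- the actual 1280x800 panel
```

Even at 1280x800 the requirement only grows by about 11%, so the conclusion stands: on paper the throttled memory should still be plenty.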

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 147 of 249, by dr_st

Rank: l33t
Scali wrote:

Yes, but that's purely theoretical, 99% of the time it's the driver's fault.

Actually, it's ~95% of the time from the recent statistics I received, but that does not invalidate your point, of course.

Still, even cases that are 'driver fault' may not always be entirely the IHV's fault. It's one thing when sloppy coding introduces a sleeping function at high IRQ level, causing a bluescreen. It's another thing when you have a subtle performance loss due to sub-optimal use of a Windows API, due to incorrect assumptions, due to inadequate documentation (which still sometimes happens with Microsoft, especially with cutting-edge stuff), etc. Since a lot of the driver frameworks are actually defined and developed as joint efforts between MS and IHVs, a lot of the time the "blame" is also mutual.

Scali wrote:

I never said "Linux" writes all the drivers. What I meant should be obvious: the IHVs do not write most drivers; the 'linux community' (or whatever you want to call it: a combination of hobbyists and linux-related companies such as Red Hat, Canonical, IBM, etc.) writes most of the drivers, often with little or no support from the IHVs at all (of course there are exceptions to every rule).

Based on my limited experience, I feel you are wrong. IHVs still write the drivers, at least their core. It is done "through the community", of course, since this is the nature of open source, but I think if you check who the maintainers actually are, they will be employees of the respective IHV (e.g., Intel). Hobbyists will submit a patch here and there, but overall that's far less code than the bulk that gets pushed by the IHV in the initial drop. Large Linux-related corporations also typically don't write hardware drivers, though they do write their VM drivers and the like. FreeBSD often takes full ownership of its own drivers, but the original code is often a clone of 80% or more of the Linux driver as supplied by the IHV.

Again, my experience is limited and does not cover all the cases. But all situations I encountered were as I described, so I think it's more likely to be the norm rather than the exception.

Scali wrote:

Secondly, I never said that Microsoft's coders never have bugs in their code.
However, it is a fact that the core of Microsoft's OS (as in the kernel and driver model etc., as per the context of this discussion) is developed by a team of very skilled and experienced developers, probably the best in the industry (look at Windows NT's history: Microsoft quite literally hired some of the best people from the world of UNIX/VMS).
Likewise, the basis of the graphical subsystem, as in both the driver layer and the DirectX API, is developed by some of the best people in the industry.
They're still human, nobody is perfect, but the process of development and testing they use rules out a lot of problems in the first place, and the level of skill and experience in their team does the rest.
The standard of their work is indeed among the highest in the industry.

Exactly as I said: you are over-glorifying.

If you think that all the people who worked on Microsoft's core technologies are some sort of geniuses, or even that all of them are very experienced people, you would be wrong. I don't know if you have first-hand experience with big corporations, but I do, and I can tell you that in most cases most of the actual coding is done by very junior staff. And as with everything, the law of averages applies: some of them are great, some are mediocre. The really bad ones typically don't stay, but there is a lot of plainly average talent working there, even at big names like Microsoft and Red Hat and Intel and AMD.

The development process and the knowledge of the experienced folks is supposed to catch the bugs and improve the code. I readily accept that Microsoft's process is probably among the best in the business. After all they have been almost a pure software company for decades - it is their bread and butter. But even that won't catch everything. The few hardcore high-caliber experts cannot read and understand every line of code and every function in a codebase with millions of lines.

And just because at one point 20-something years ago someone hired 'the best of the best', do you think all of them still work there? None left? No knowledge was ever lost? It's naive to think so.

Scali wrote:

(Heck, if you look at linux, they don't even *have* interfaces for a lot of the things Microsoft has.

Yes, and they have other interfaces for things that Windows does not have, because historically the OSes were developed with a different set of goals in mind. But that's a different discussion.

Scali wrote:

You just 'hack it' because you have the source of the kernel and drivers, and you can just poke around everywhere... of course in the process you routinely break things, which leads to eg every binary release of graphics drivers being broken after every kernel update. That is not good software engineering.

You may be surprised how certain frameworks of Linux are very well-defined and very tightly maintained by a limited set of maintainers. "Hacking it in" is something you can do on your local system, and even inside your organization, but you are unlikely to ever get it upstream; it simply will not pass any of the community reviews. Been there, done that (on both sides, actually).

You focus on the graphics drivers (which is what we are mostly discussing here, so it's OK, I guess), and I must say that this is one area I don't really know much about. However, I do know that the concept of binary driver releases (as opposed to releasing the source) is generally frowned upon in the community. The GPU IHVs want to release their drivers as binary to keep their source closed. They have their reasons to do so, and the instability is the price they pay. It's not because Linux is inherently bad, it is because every time you bend the system to make it work in ways it was not designed to, you are likely to encounter certain issues.

Scali wrote:

This does not happen in the Windows-world. In the Windows world, you can make a single driver package that can work in Vista, Windows 7, 8, 8.1 and 10. Because the driver interface was first defined by Vista, and was designed to be extensible. Newer OSes simply use an updated version of the interface, allowing you to make backward and forward compatible drivers. That is solid software engineering.

Yes, you can, in theory. In practice, how many GPUs have the same driver package supporting everything from Vista to W10? Heck, there were even things that worked in Vista and already in 7 did not work right, or at all. And good luck finding Vista drivers for newer GPUs.

No doubt you will blame the IHVs, and you will be right (for the most part). Similarly, the problems you described with Linux GPU drivers are also due to the decisions made by the IHVs, not because of inherent Linux flaws. Let's not turn this into a pointless and never-ending Linux-v-Windows argument.

Scali wrote:

Most of the gripes people have with Windows are not about the core of the OS, but rather about user interfaces, shells, file managers, web browsers and that sort of thing, which are developed by entirely different teams. Stability is rarely an issue with Windows, because the basis is solid.

Core stability does not mean much in itself. An OS is only as good as its interfaces. If my shell (explorer) crashes or starts misbehaving, I don't care that the kernel is solid - I still have to restart it, and sometimes reboot the damn thing. If I install a patch that wipes out my UI customizations, I don't need someone educating me that "well, that's not the core of the OS - it's a bug in the shell code / registry / whatever". To the user it's as much a part of the OS as the kernel itself.

Scali wrote:

I don't, but you as a linux-nerd would be offended by anything that isn't super-negative about Microsoft anyway, so you are currently experiencing a severe case of cognitive dissonance, and cannot suppress the urge to act on it.

Since we do not know each other personally, I kindly ask that you respond to things I say, not whatever image of me you created in your mind. You writing me off as a "linux nerd" who is only looking for "super-negative" things to say about Microsoft clearly shows that you have no understanding of my overall point of view on these things, even though it should be obvious from the history of my postings on the forum. Perhaps in the heat of the argument you got confused between who you are replying to?

It must be hard arguing solo against a whole bunch of people. No sarcasm, I've been in such situations, and they can feel very frustrating and exhausting. Still, no need to get personal. If we cannot stay technical, better just drop it. BTW, if you feel that I failed to stay technical and impartial in some cases, please do point it out; I will try to fix it. Seriously.

My general suggestion about how to manage these arguments? Pick your battles. Don't bother answering obvious trolls and ignorant folks. They will understand zilch, and you would just be wasting your energy.

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 148 of 249, by Falcosoft

Rank: Oldbie

But yea, DDR2-667, so slower than the AMD machine. Not sure how low it throttles, but wouldn't be surprised if it also goes to 320 or lower.
I see the resolution is actually 1280x800, not 1280x720, but can't be bothered to redo the math above, won't change much 😀

Would you do me a favor and post the DWM assessment results (DWMFps, VideoMemBandwidth) of your laptop?
It's in C:\Windows\Performance\WinSAT\DataStore\’date+time’ DWM.Assessment (Recent).WinSAT.xml.
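If it helps, here is a small sketch for pulling those two values out of the XML. The element names are my assumption based on the metric names; adjust them to the actual tags in the file:

```python
import xml.etree.ElementTree as ET

def dwm_metrics(path):
    """Pull DWMFps and VideoMemBandwidth out of a WinSAT XML file.

    Note: the element names here are assumptions based on the metric
    names mentioned above; adjust them to whatever tags your WinSAT
    DataStore XML actually uses.
    """
    root = ET.parse(path).getroot()
    metrics = {}
    for name in ("DWMFps", "VideoMemBandwidth"):
        # Search the whole tree; the exact nesting varies per WinSAT version
        node = root.find(".//" + name)
        if node is not None and node.text:
            metrics[name] = float(node.text)
    return metrics
```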

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 149 of 249, by Scali

Rank: l33t
dr_st wrote:

Still, even cases that are 'driver fault' may not always be entirely the IHV's fault. It's one thing when sloppy coding introduces a sleeping function at high IRQ level, causing a bluescreen. It's another thing when you have a subtle performance loss due to sub-optimal use of a Windows API, due to incorrect assumptions, due to inadequate documentation (which still sometimes happens with Microsoft, especially with cutting-edge stuff), etc.

I think everything is still the IHV's fault for one simple reason:
They actually put this to market. Which means the product went through QA. Either their QA failed to identify the issues, or they launched the product with 'known issues' anyway.
An IHV making sub-optimal use of a Windows API is clearly not MS's fault. Neither are incorrect assumptions.
Inadequate documentation is somewhat debatable, but again, only the IHV is responsible for actually putting the product out there, with issues.
If I find inadequate documentation, and I can't get my stuff to work, I first contact Microsoft, and then they help me to fix the code and/or update the documentation.

dr_st wrote:

Since a lot of the driver frameworks are actually defined and developed as joint efforts between MS and IHVs, a lot of the time the "blame" is also mutual.

I disagree with that. DirectX seems to be the main case where that is true, mainly because graphics is such a complicated field, and the hardware development is still in full motion.
Most other driver frameworks and APIs are mostly the work of MS alone.

dr_st wrote:

but I think if you go to check who the maintainers actually are, they will be employees of the respective IHV (e.g., Intel).

Shows how little you know about the linux world...
Intel is the exception to the rule. They are the one IHV that has nothing to gain from keeping their graphics drivers closed-source (they are the lowest common denominator, there's nothing in their drivers and hardware that NV and AMD don't already know).
Which is why Intel is the one IHV that doesn't have any kind of closed-source drivers. Instead they employ a team of developers to develop their graphics drivers as open source. That is not exactly a secret, either.
AMD merely has a 'token' open source driver, but it is limited in supported hardware, features and performance compared to their closed-source drivers. So in practice you'll want to use the closed-source drivers.
Intel is also interested in linux support for other reasons: the server/HPC market. The better their chips are supported on linux, the more value their products have over the competition. So Intel also develops other things for linux, such as their compiler. Not many other IHVs are in that position (AMD is, technically, but they don't have the resources to really support linux, let alone develop their own compiler).

Looking outside the graphics world, there's not a lot of IHVs that offer linux drivers at all. Especially with sound cards and such.
A lot is supported only with community drivers. In many cases that means that your fancy hardware is merely running in 'legacy mode'. There's a difference between 'supported' and 'supported' in that sense.

dr_st wrote:

Again, my experience is limited and does not cover all the cases. But all situations I encountered were as I described, so I think it's more likely to be the norm rather than the exception.

On my blog there are plenty of examples of linux users complaining about IHVs' unwillingness to even support independent developers who are trying to write drivers for their hardware. So that seems to be the norm.

dr_st wrote:

If you think that all the people who worked on Microsoft's core technologies are some sort of geniuses, or even that all of them are very experienced people, you would be wrong.

Now *you* are overglorifying; I never used the word 'genius' or anything remotely to that effect.

dr_st wrote:

I don't know if you have first-hand experience with big corporations, but I do, and I can tell you - that in most cases, most of the actual coding is done by very junior staff.

And why would that matter?
The software architecture and the design of interfaces etc. are not done by this junior staff. The whole point is to set up a framework in which it is easy to develop and difficult to make mistakes.
That's what the skilled and experienced people at Microsoft do (and what I do at my company). We prepare the work so that less experienced people can execute it.
To use a car-analogy... we design the car and production process, so that relatively unskilled workers can put the cars together with a very low failure rate, on a conveyor belt.

Even so, the 'junior' coders that Microsoft hires are still the 'pick of the litter' fresh out of university.

dr_st wrote:

but there is a lot of plainly average talent working there, even in big names like Microsoft and RedHat and Intel and AMD.

As I said, that doesn't matter. They do have great people in all the right places, and they have a good process for development and QA, so that code quality remains well above average.

dr_st wrote:

But even that won't catch everything.

Nobody claimed otherwise, and clearly the list of Windows updates released every month shows that plenty still slips through the cracks.

dr_st wrote:

The few hardcore high-caliber experts cannot read and understand every line of code and every function in a codebase with millions of lines.

Wow, you think at that level? Just, wow.

dr_st wrote:

And just because at one point 20-something years ago someone hired 'the best of the best', do you think all of them still work there? None left? No knowledge was ever lost? It's naive to think so.

Is that even relevant? I am talking about the people who set up Windows NT, the Windows API, the development process etc, and many related things which still live on in Windows today.
Again, wow... you think at that level?

dr_st wrote:

because historically the OSes were developed with a different set of goals in mind.

I think it's not so much about the goals, but rather about the fact that linux was created by a student with no prior experience in developing kernels, no prior work experience, and at best only theoretical knowledge of how to engineer software and set up a development and QA process.
Basically, linux was and is a hackjob, as was UNIX before it. It's a miracle that it works as well as it does. But since there is no method to the madness, you always have to patch sourcecode and hack around, every time you want to change or add things.

You want examples of just how retarded and dysfunctional it can get? Well, I had a home server, most software built from source, custom-compiled kernel, only compiled in the drivers I needed etc...
Motherboard broke down... I replaced it with a similar motherboard, same CPU and everything... However, it had a slightly different variation of Realtek NIC.
Guess what? It was not compatible with the old one. So my custom-compiled kernel didn't work.
Fine... I also couldn't load other Realtek drivers, because I had one compiled into my kernel, which could not be unloaded.
So I first had to switch back to a generic kernel to even be able to try other NIC drivers.
Then I found out that there was no Realtek driver for my specific variation.

So I had to grab the sources and build it myself... But I couldn't. Between the time that I installed my OS, and the driver for this NIC was written, they had completely restructured some of the driver 'model' (and I use the term extremely loosely). As a result, there was no way to get this source to compile against my kernel. Doing a diff against the Realtek driver that I did have, wasn't very useful either, because there were so many differences, that I couldn't exactly do a quick patch to make it work.

So, I would have to upgrade my entire kernel and rebuild any related code, basically an entire OS upgrade really, just to get the new NIC working.

Can you imagine something like that happening in Windows? I can't.
This is just a huge failure in software engineering 101: stable interfaces.

dr_st wrote:

They have their reasons to do so, and the instability is the price they pay. It's not because Linux is inherently bad, it is because every time you bend the system to make it work in ways it was not designed to, you are likely to encounter certain issues.

1) They have to 'bend the system to make it work in ways it was not designed to' because the design is inept. nVidia's Optimus is a fine example of that (and a fine example of how Linus Torvalds is the most ignorant kernel 'developer' (and I use the term loosely) in the world). Windows 7 got support for Optimus: a way to switch between two GPUs on-the-fly, more specifically a low-power IGP and a discrete high-end GPU.
No support for linux (or Windows XP and Vista, but somehow nobody cared or noticed). Torvalds calls out NV and gives them the finger for not giving linux this functionality.
Reality: Microsoft updated the driver model in Windows 7 so that there is a universal API to share buffers between multiple display devices (even when running on different drivers from different vendors. This is what interfaces/abstraction layers do). Because of these new features, NV was able to implement Optimus: they could communicate with another GPU, and move workloads from one GPU to the next.
Linux however has no such interface. So there is no way to implement something like Optimus. NV should be giving Linus the finger for not developing a proper driver model with features that modern users expect. Linus is an incompetent fool, who doesn't even have a clue about what Optimus is, or what the Windows driver model does to enable this.
NV in fact offered some code for some interfaces to enable this, but the code was rejected for some retarded political reasons.
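To make the abstraction argument concrete, here is a toy model of the idea being described: an OS-owned surface type that both vendors' drivers code against, so a frame rendered on one adapter can be presented by another. None of the names here are actual WDDM, DXGI or Optimus APIs; this is purely a conceptual sketch of why a shared interface matters.

```python
# Conceptual sketch only: a toy model of an OS-defined contract for sharing
# surfaces between display adapters from different vendors. All names here
# are hypothetical illustrations, not real WDDM/DXGI APIs.

class SharedSurface:
    """A buffer the OS owns, so any vendor's driver can import it."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = bytearray(width * height * 4)  # 32-bit pixels, for illustration

class Adapter:
    """A display adapter; vendor code only talks to SharedSurface."""
    def __init__(self, vendor):
        self.vendor = vendor
    def render(self, surface):
        # Stand-in for "the discrete GPU renders the frame into the shared buffer".
        surface.pixels[:4] = bytes([0, 0, 255, 255])
        return surface
    def present(self, surface):
        # Stand-in for "the IGP scans out the shared buffer".
        return (self.vendor, bytes(surface.pixels[:4]))

# Because both drivers code against the same OS-owned surface type, a frame
# rendered by one vendor's GPU can be presented by another vendor's GPU:
igp, dgpu = Adapter("intel"), Adapter("nvidia")
frame = dgpu.render(SharedSurface(2, 2))
vendor, first_pixel = igp.present(frame)
```

Without the shared type defined by the OS, each vendor would need bilateral knowledge of the other's internals, which is exactly the situation being described.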

NV never needs to 'bend the system' on the Windows side.

2) Open source is no excuse for not having stable interfaces. Not having stable interfaces is inherently bad.

dr_st wrote:

Yes, you can make one. In practice, how many GPUs have the same driver package supporting everything from Vista to W10?

All, until recently.
That is, Vista support was dropped some time ago, so drivers were mostly Windows 7-10 from then on. And after recent updates to Windows 10, some vendors also split off Windows 7 and Windows 10 now.
But yes, look here for example: http://www.nvidia.com/content/DriverDownload- … us&type=GeForce
The filename says it all: Win8, Win7, Vista in a single driver here.
I believe the Win8 driver would work in Win10, but their official driver is separate here. I think at one point they may all have been a single package.

dr_st wrote:

Similarly, the problems you described with Linux GPU drivers are also due to the decisions made by the IHVs, not because of inherent Linux flaws.

See above, clearly inherent linux flaws.

dr_st wrote:

Core stability does not mean much, in itself.

It did, at one point. Windows 9x wasn't exactly stable, and you would often have BSODs and lose work.
Windows NT has always been rock-solid however, but never quite got credit for that, because of the 9x legacy.
The linux community loved to point out the 'instability' of Windows, mainly projecting 9x issues on NT-based versions.

dr_st wrote:

An OS is as only as good as its interfaces.

To an end-user.
For me as a developer, I care more about the programming interfaces, kernel features and performance and such.
I mostly develop software where the user doesn't interact with anything other than my application. And my application interacts with various hardware and other systems and such.
So the 'usability' of the OS is irrelevant to me and my users. The usability is only relevant for my applications. Aside from that, I need a solid, stable and reliable basis to build my functionality. And that's what the kernel and its drivers should provide. I need low latency, high scalability, that sort of thing.
I have sometimes created software where people would usually pick a *NIX flavour as their go-to solution... But that's cargo cult. It's "what we do" in the industry.
However, unlike most people in that industry, I am not limited to *NIX; I also understand Windows in great detail. So I can sometimes build solutions based on Windows that they didn't even know were possible, and the Windows version performs better than a *NIX solution would, because I know about features that Windows offers that *NIX doesn't, and I know how to use them. (Of course, if the shoe is on the other foot, I will also pick *NIX over Windows; however, the arguments for *NIX are usually more along the lines of 'the licensing costs are cheaper, so let's do that', and rarely about technical advantages.)

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 150 of 249, by dr_st

User metadata
Rank l33t
Scali wrote:

If I find inadequate documentation, and I can't get my stuff to work, I first contact Microsoft, and then they help me to fix the code and/or update the documentation.

What's the turnaround time on that, eh?

My experience has been that it can sometimes be notoriously difficult to get Microsoft to respond/change something in their documentation/API. And it can take months, and these are months you don't have as an IHV when you have your own product delivery schedule. So you end up hacking something to the best of your abilities. Sometimes it works well, sometimes not so well.

Of course you can adopt the "holier than thou" approach - and say "it's the fault of the IHV for bad planning, impossible schedules etc.", but that's life, that's how things work, and especially as a junior engineer in a big company, there is absolutely nothing you can do to affect it. Either the SW will be completed with a hack or two, or it won't be complete, and there'll be no product.

Not to mention the cases where Microsoft changes "stable interfaces" and basically forces the IHVs to jump through a lot of hoops to get their stuff working in the new driver model. Vista was probably the best example of this. Even if you suppose that all the changes are for the better, to an outsider it still looks like they suddenly have to do a lot of work, which gives them zero benefit. And I don't think all changes are for the better. I do think a lot of it is "Fire and motion" as Joel Spolsky put it nicely in his blog post (read from the paragraph that starts with "When I was an Israeli paratrooper").

Scali wrote:
Shows how little you know about the linux world... Intel is the exception to the rule. They are the one IHV that has nothing to […]
Show full quote

Shows how little you know about the linux world...
Intel is the exception to the rule. They are the one IHV that has nothing to gain from keeping their graphics drivers closed-source (they are the lowest common denominator, there's nothing in their drivers and hardware that NV and AMD don't already know).
Which is why Intel is the one IHV that doesn't have any kind of closed-source drivers. Instead they employ a team of developers to develop their graphics drivers as open source. That is not exactly a secret either.
AMD merely has a 'token' open source driver, but it is limited in supported hardware, features and performance compared to their closed-source drivers. So in practice you'll want to use the closed-source drivers.

Again, you seem to talk exclusively about graphics drivers. I am not.

Who writes the network adapter drivers? Bus drivers? Disk controllers? Is Intel the only IHV that provides drivers for their chipset and integrated components?

Scali wrote:

On my blog there's plenty of examples of linux-users complaining about the IHV's unwillingness to even support independent developers that are trying to write drivers for their hardware. So that seems to be the norm.

Can you link me to a few examples? I'd like to see and understand them better.

Scali wrote:

You want examples of just how retarded and dysfunctional it can get? Well, I had a home server, most software built from source, custom-compiled kernel, only compiled in the drivers I needed etc...So, I would have to upgrade my entire kernel and rebuild any related code, basically an entire OS upgrade really, just to get the new NIC working.
....
Can you imagine something like that happening in Windows? I can't.

Heck, at least you could get it done, if you were determined enough.

Now imagine that you want a Vista driver for the Intel xHCI, which was already brought up here. Or for the Skylake SATA/LAN controllers. Intel does not provide them, neither does Microsoft. What do you do? Compile your own? No. You're screwed. Now you really have to upgrade your OS. And at least by expectation, you should also pay for it.

And I'm the one in a cognitive dissonance? 😐

I am not going to respond to everything you said about Linux, and Linus, and the rest. It is clear enough that you believe Linux is an awful, awful mess, while Windows is an example of solid engineering. That's a legitimate opinion. You will find plenty of developers that hold a similar point of view, and plenty of those who hold a diametrically opposite one. This is exactly the endless Linux/Windows fanboy argument that I want to avoid.

I have seen examples of things where Windows shines and Linux sucks, and vice-versa. Both as a developer and a user. I can use both, I can develop for both, and I stick to my view - that they each have strengths and weaknesses, and a lot depends on your personal mindset as well as the things you are trying to accomplish.

It's hilarious that I get called a "linux nerd" about 1 week after I was called a "Windows fanboy" in another thread on another forum. It really just depends on which fanboys I'm debating with. 😉

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 151 of 249, by Scali

User metadata
Rank l33t
dr_st wrote:

What's the turnaround time on that, eh?

Few hours to a few days at most.

dr_st wrote:

And it can take months

Not for me.

dr_st wrote:

So you end up hacking something to the best of your abilities. Sometimes it works well, sometimes not so well.

None of that would explain why AMD's power saving leads to poor Aero performance.
If they 'hacked it to the best of their abilities', then they would simply disable the lowest power saving settings.
Either that, or even AMD considers it a non-issue.
But let's not get distracted here. See, you are now going off on a tangent about how MS this, MS that, but there is nothing here that MS could have or should have done. It really *is* AMD's fault here.
It has nothing to do with MS, Vista, some allegedly poor documentation or anything. The only problem is that their lowest power management settings yield a very inefficient GPU apparently (since the above calculations show that it probably wasn't a lack of raw bandwidth, but rather the lack of effective bandwidth).

dr_st wrote:

Not to mention the cases where Microsoft changes "stable interfaces"

What cases?
I mean, seriously. I have been developing Windows-software for over 20 years, and I cannot think of a single instance where MS changed any interface.
In fact, I would go even further and say that most of the Windows-software I wrote in those 20 years still works today, on the latest version of Windows.
They either expand interfaces in a way that is perfectly backward compatible, or they stop supporting the interface. But they do not change it.
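That kind of backward-compatible expansion is often done through the struct-size convention many Win32 APIs use: the caller fills in a size member first (as in OSVERSIONINFOEX's dwOSVersionInfoSize), and the OS fills in only the fields that size covers. A rough Python sketch of the pattern follows; the field names and sizes here are made up for illustration, not any real Windows API.

```python
# A sketch of the struct-size versioning pattern many Win32 APIs use to
# extend an interface without breaking old callers (illustrative only).
import struct

V1_SIZE = 8    # cb_size (4 bytes) + color_depth (4 bytes)
V2_SIZE = 12   # v1 fields + refresh_rate (4 bytes), added in a later release

def get_display_mode(buf):
    """The 'OS side': fills only the fields the caller declared via cb_size."""
    cb_size, = struct.unpack_from("<I", buf, 0)
    struct.pack_into("<I", buf, 4, 32)            # color_depth: every version has it
    if cb_size >= V2_SIZE:                        # newer callers opt in to more
        struct.pack_into("<I", buf, 8, 60)        # refresh_rate: v2-only field
    return buf

# An old binary passes a v1-sized struct and keeps working unchanged:
old = get_display_mode(bytearray(struct.pack("<II", V1_SIZE, 0)))
# A new binary declares the bigger size and receives the extra field:
new = get_display_mode(bytearray(struct.pack("<III", V2_SIZE, 0, 0)))
```

The old caller never sees the new field, so the interface grows without ever invalidating compiled code, which is the contrast being drawn with recompile-the-world source-level interfaces.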

dr_st wrote:

Who writes the network adapter drivers? Bus drivers? Disk controllers? Is Intel the only IHV that provides drivers for their chipset and integrated components?

Some IHVs do, certainly not all.
A lot of times, you run in 'legacy' mode, as I said (new chipsets tend to have at least limited backward compatibility with older chipsets).
I'm pretty sure Realtek didn't write their own network drivers for linux, you should see the ranting in the comments in the source code 😀

dr_st wrote:

Heck, at least you could get it done, if you were determined enough.

No I couldn't, I had to replace/upgrade my OS.

dr_st wrote:

Now imagine that you want a Vista driver for the Intel xHCI, which was already brought up here. Or for the Skylake SATA/LAN controllers. Intel does not provide them, neither does Microsoft. What do you do? Compile your own? No. You're screwed. Now you really have to upgrade your OS. And at least by expectation, you should also pay for it.

Newsflash: you *can* in fact develop drivers for Windows, even open source ones. So there's nothing stopping you from taking open source drivers from linux or some other OS, and adapting them to Vista, if you REALLY want. So you could get it done, if you were determined enough.
Otherwise, it's the same story in both cases: upgrade your OS. Which is a lot less painful for Windows than it is for linux, I might add.

dr_st wrote:

and plenty of those who hold a diametrically opposite one.

That is mostly because your average linux advocate has never worked a regular job before, is probably still in college, and isn't all that skilled or experienced. They're all about the linux because to them that's the cool thing to do. Sorta like the software-equivalent of being a vegan.
Even the rest of the *NIX world isn't too impressed with linux, the way things are developed, and where things are going.

But this whole 1970s technology in 2017... really? I mean, just recently a new sudo exploit was found. Even in 2017, they are still gathering information by listing the filesystem under /proc/*, and then parsing the resulting strings to extract what they need (in this case a handle to its tty).
And then there were bugs in the parsing.
Are you really going to tell me that is perfectly acceptable, sound software engineering?
No, of course not, that is a retarded hackjob of a kludge if I ever saw one. It's bad enough that someone built this as a quick fix back in the 1970s... but all these people who still think it's a good idea to do this in 2017 should be... well, nevermind, you get my drift.
In Windows you don't have that sort of crap. There's APIs to request such data from the OS, via a well-defined contract, so no string parsing and no chance of getting bugs and exploits of this nature. That's how a software engineer would do it.
The linux solution is that of a script kiddie.
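For what it's worth, the failure mode can be shown in a few lines. In /proc/&lt;pid&gt;/stat the second field is the command name in parentheses, and a process can name itself almost anything, including spaces, so a naive whitespace split shifts every later field. This is essentially what bit sudo's tty lookup in the 2017 exploit (CVE-2017-1000367). A sketch, with shortened stat lines for illustration:

```python
# Why string-parsing /proc is fragile: the comm field (2nd, in parentheses)
# is attacker-influenced, so naive whitespace splitting misreads later
# fields such as tty_nr (7th field in /proc/<pid>/stat).

def tty_field_naive(stat_line):
    # Assumes all fields are whitespace-separated; tty_nr is field 7.
    return stat_line.split()[6]

def tty_field_careful(stat_line):
    # Skip past the *last* ')' so the command name cannot shift fields.
    rest = stat_line.rsplit(")", 1)[1].split()
    return rest[4]  # state, ppid, pgrp, session, tty_nr

honest = "1234 (vim) R 1 1234 1234 1025 ..."
# A process renamed to contain spaces and a fake ')' inside its comm field:
evil   = "1234 (vim 1 9999 9999 8888) R 1 1234 1234 1025 ..."
```

On the honest line both parsers agree; on the hostile one the naive parser returns attacker-controlled data, while a typed API call simply has no equivalent failure mode.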

dr_st wrote:

It's hilarious that I get called a "linux nerd" about 1 week after I was called a "Windows fanboy" in another thread on another forum. It really just depends on which fanboys I'm debating with. 😉

I'm an Amiga fanboy (or well, I was, back in the day). Now THERE's a system if there ever was one.


Reply 152 of 249, by Falcosoft

User metadata
Rank Oldbie
None of that would explain why AMD's power saving leads to poor Aero performance. If they 'hacked it to the best of their abilit […]
Show full quote

None of that would explain why AMD's power saving leads to poor Aero performance.
If they 'hacked it to the best of their abilities', then they would simply disable the lowest power saving settings.
Either that, or even AMD considers it a non-issue.
But let's not get distracted here. See, you are now going off on a tangent about how MS this, MS that, but there is nothing here that MS could have or should have done. It really *is* AMD's fault here.
It has nothing to do with MS, Vista, some allegedly poor documentation or anything. The only problem is that their lowest power management settings yield a very inefficient GPU apparently (since the above calculations show that it probably wasn't a lack of raw bandwidth, but rather the lack of effective bandwidth).

I feel this section was not perfectly balanced/objective. Intel GMA 950's WEI score of 2.x cannot be AMD's fault, can it? If Intel's hardware had been inherently much more efficient (and closer to the theoretical bandwidth limits you have pointed out), we should not have seen 2.x or even 3.x Aero scores on Intel chips. That is at most 1-2 GB/sec VideoMemBandwidth in Aero terms, and at most 30-something FPS. Far from the ideal picture of 'efficient' hardware that can easily do 60 Hz without a problem (as you have also pointed out).
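The back-of-envelope arithmetic behind this argument can be sketched as follows. The number of times the compositor touches each pixel per frame ("passes") is a rough assumption on my part; the point is only the shape of the relationship, not exact numbers.

```python
# Rough model: achievable compositing FPS given effective memory bandwidth.
# "passes" (reads/writes per pixel per composited frame) is an assumption
# made for illustration; real DWM workloads vary.

def aero_fps_estimate(bandwidth_mb_s, width, height, passes):
    frame_mb = width * height * 4 / 1e6   # 32-bit pixels
    return bandwidth_mb_s / (frame_mb * passes)

# With ~1-2 GB/s of effective bandwidth and several passes per frame,
# a 1280x800 desktop indeed lands in the tens of FPS, not a solid 60:
low  = aero_fps_estimate(1000, 1280, 800, 10)
high = aero_fps_estimate(2000, 1280, 800, 10)
```

Under these assumptions the estimate spans roughly 24 to 49 FPS, which is consistent with the 30-something FPS figures being discussed.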
Please, send me the WEI results of your laptop if it's not a problem for you. Thanks.

Last edited by Falcosoft on 2017-06-01, 16:02. Edited 1 time in total.

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 153 of 249, by dr_st

User metadata
Rank l33t
Scali wrote:

See, you are now going off on a tangent about how MS this, MS that, but there is nothing here that MS could have or should have done. It really *is* AMD's fault here.

In case it wasn't clear, I don't care about the particular AMD discussion you are having here, and I didn't even read it. I was making general remarks, going off on a tangent, as you say. I can't see why you and some of the others can "hijack" this thread to talk about AMD GPUs and Aero, but I can't do the same. 😀

Scali wrote:

They either expand interfaces in a way that is perfectly backward compatible, or they stop supporting the interface. But they do not change it.

VXD to WDM? XPDM to WDDM? EAX to OpenAL?

What's the difference between "stopping supporting" the interface and "changing it", other than the silly semantics you put on it? In both cases it means that it's impossible to just use the old driver, and a new one has to be written.

Scali wrote:

Some IHVs do, certainly not all.
A lot of times, you run in 'legacy' mode, as I said (new chipsets tend to have at least limited backward compatibility with older chipsets).
I'm pretty sure Realtek didn't write their own network drivers for linux, you should see the ranting in the comments in the source code 😀

I don't know about Realtek (their driver directory seems to be a bit messy and limited-content), but I sampled a few of the major vendors (Atheros, Aquantia, Broadcom, Intel, Marvell), and if you look at the commit history and the submitters' emails, you see that in all cases the vendor is rather dominant...

Scali wrote:

Newsflash: you *can* in fact develop drivers for Windows, even open source ones. So there's nothing stopping you from taking open source drivers from linux or some other OS, and adapting them to Vista, if you REALLY want. So you could get it done, if you were determined enough.

Is that why you see plenty of Windows drivers for OEM hardware written by third-party hobbyists, WHQLed, offered through Windows Update?

Scali wrote:

Otherwise, it's the same story in both cases: upgrade your OS. Which is a lot less painful for Windows than it is for linux, I might add.

There are a lot of people who beg to differ. Funnily I was called a "Windows fanboy" the other day exactly because I suggested that one Linux fanboy who was ranting exactly about how much easier it is to install/upgrade his Linux distro compared to Windows was "not completely objective".

Scali wrote:

That is mostly because your average linux advocate has never worked a regular job before, is probably still in college, and isn't all that skilled or experienced. They're all about the linux because to them that's the cool thing to do. Sorta like the software-equivalent of being a vegan.

Now you are just showing your ignorance and some bigotry. Either you have no idea how many of the most senior Linux maintainers have been holding regular jobs at some of the biggest technology companies, or you are deliberately ignoring this. You can check linux/MAINTAINERS if you are curious, then look up their résumés. I'm personally acquainted with a few of them, and I can assure you that they don't go to their job every day saying "Damn, how I wish I was a Windows developer".

You really should just accept that there can be more than one opinion, and people may prefer different ways to do things.

Scali wrote:

But this whole 1970s technology in 2017... really? I mean, just recently a new sudo-exploit was found. Even in 2017 they are still just parsing some info by listing info via the filesystem such as /proc/*, and then parsing the strings to grab the relevant info (in this case a handle to its tty).
.....
In Windows you don't have that sort of crap. There's APIs to request such data from the OS, via a well-defined contract, so no string parsing and no chance of getting bugs and exploits of this nature. That's how a software engineer would do it.

Riiight, and these well-defined contracts are why there was never any exploit found in Windows. And certainly there was never any vulnerability found that turned out to date all the way back to Win9x days. And there certainly was not a vulnerability deemed critical enough just last month to get Microsoft to release a special patch for a 3-years-out-of-support Windows XP. And that's for a freaking closed-source OS where you cannot even look at the code to see the vulnerabilities; you have to guess what they are.

But, no, let's focus on the sudo-exploit that was found, and dismiss the entire Linux community as a bunch of "script kiddies". And this I hear from a guy who, by his own admission, loves the demo scene, and retro programming (is anything more "hackish" than that)? 😉


Reply 154 of 249, by spiroyster

User metadata
Rank Oldbie
dr_st wrote:

...the demo scene, and retro programming (is anything more "hackish" than that)? 😉

Na man, this is good!

Writing code for older, and in some cases, completely different systems is nothing like coding within modern environments on modern platforms with modern tools.

I've had a little flirt with retro programming and I certainly wouldn't write code like I do at work. In many cases, retro code wouldn't even make it past the first code review: sometimes because coding standards aren't enforced and there is little to no testing, but other times because the tools are just not compatible; standards change, compilers change, syntax changes, so many things are different.

Yes demo scene is hackish, but it's not like you just move these demos to deployable software. Every PoC/tech-demo is just that, a demo... there is a big step to getting a viable versatile engine from a demo. Demos are cool for seeing what can be done when hardware/software is pushed. IRL, software needs to be a tool which is stable, compatible, and in many cases perform multiple functions. 😀

Reply 155 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

Would you do a favor and post the DWM assessment results (DWMFps, VideoMemBandwidth) of your laptop?
It's in C:\Windows\Performance\WinSAT\DataStore\’date+time’ DWM.Assessment (Recent).WinSAT.xml.

I've uploaded it here: https://www.dropbox.com/s/oc8by7bu4p2nh7e/200 … WinSAT.xml?dl=0
Short version:
DWMFps: 40.02060
VideoMemBandwidth: 2117.76000
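Since the file is plain XML, the two figures can be pulled out with a standard XML parser. The element names DWMFps and VideoMemBandwidth are taken from the values quoted above; the surrounding structure in this sketch is a simplified stand-in, not the exact layout of a real WinSAT file.

```python
# Sketch: extracting the DWM assessment figures from a WinSAT-style XML
# file using only the standard library. The nesting here is illustrative;
# a real file's structure may differ, hence the ".//" descendant search.
import xml.etree.ElementTree as ET

sample = """<WinSAT>
  <Metrics>
    <DWMMetrics>
      <DWMFps>40.02060</DWMFps>
      <VideoMemBandwidth>2117.76000</VideoMemBandwidth>
    </DWMMetrics>
  </Metrics>
</WinSAT>"""

root = ET.fromstring(sample)
fps = float(root.findtext(".//DWMFps"))
bandwidth_mb_s = float(root.findtext(".//VideoMemBandwidth"))
```

For an actual file from C:\Windows\Performance\WinSAT\DataStore, `ET.parse(path).getroot()` would replace the `fromstring` call.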


Reply 156 of 249, by Scali

User metadata
Rank l33t
Falcosoft wrote:

I feel this section was not perfectly balanced/objective. Intel GMA 950's WEI score of 2.x cannot be AMD's fault, can it?

My point was that it was not Microsoft's fault. So for Intel it would be Intel's fault.
However, I don't think the WEI score will necessarily translate directly to how good a user experience you may or may not get.
The Windows test seems to test the CPU and GPU separately. However, in practice you usually use the GUI while the CPU is also running tasks. So the performance of a system with shared memory will also depend on how efficiently the CPU and GPU can do this sharing.

Anyway, the results seem to speak in Intel's favour here: my score seems to be slightly higher than the AMD system, but my system only runs on a 1.5 GHz CPU and 667 MHz memory.


Reply 157 of 249, by Scali

User metadata
Rank l33t
dr_st wrote:

Can't see why you and some of the others can "hijack" this thread to talk about AMD GPUs and Aero, and I can't do the same. 😀

We're not hijacking the thread, Vista is still the topic. You however started talking about linux, and the only remotely Vista-related stuff is that you try to blame Microsoft for all the world's problems, apparently.

dr_st wrote:

EAX to OpenAL?

Oh really now? That's just sad....
EAX was a proprietary Creative solution. OpenAL was originally developed by Loki, but was later acquired by Creative as well.
Microsoft has nothing to do with either.

dr_st wrote:

What's the differences between "stopping supporting" the interface and "changing it" other than the silly semantics you put on it? In both cases it means that it's impossible to just use the old driver and a new one has to be written.

No it is not.
You see... stopping support does not change the interface. Everyone expects the interface not to be supported in new products.
Changing the interface can be valid, but only if you extend it in a way that is backward-compatible.
The problem in the linux world is that:
1) Interfaces regularly change, without any prior warning or notification.
2) These changes generally break existing code, requiring patches and recompilation.

Basically the OS keeps falling apart, and people keep putting it back together.
In Windows the only thing that may change from time to time is the driver model, but that's only once every so many years, not a few times a month, like with linux.
And when this is done in Windows, there are good reasons for it, and the new interface will be well-documented.
In fact, even though the *interface* for the drivers changes, the actual development environment, not so much.
Microsoft has some 'boilerplate' in the DDK, which allows you to very easily compile basically the same driver for various versions of Windows, so abstracting between XP and Vista+ driver models isn't all that difficult really.
Some years ago I've developed a virtual serial port driver that way, which had to run both on XP and Windows 7. This was a remarkably transparent process for development.
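The pattern being described - one OS-agnostic driver core with thin per-model glue - can be sketched abstractly. Every name below is a hypothetical stand-in for illustration, not actual WDM or KMDF code.

```python
# Sketch of keeping device logic in one shared core and isolating the
# per-driver-model differences behind thin adapters (illustrative only).

class SerialCore:
    """OS-agnostic logic of a virtual serial port."""
    def __init__(self):
        self.buffer = []
    def write(self, data):
        self.buffer.append(data)
        return len(data)

class XpModelAdapter:
    """Thin glue for the older driver model's entry points."""
    def __init__(self, core): self.core = core
    def dispatch_write(self, irp_data): return self.core.write(irp_data)

class VistaModelAdapter:
    """Thin glue for the newer model; same core, different surface."""
    def __init__(self, core): self.core = core
    def evt_io_write(self, request): return self.core.write(request)

# The same core serves both driver models; only the glue differs:
core = SerialCore()
XpModelAdapter(core).dispatch_write(b"AT")
VistaModelAdapter(core).evt_io_write(b"OK")
```

When the driver model changes, only the small adapter layer needs rewriting, which is why the process can feel "remarkably transparent" to the core logic.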

dr_st wrote:

Is that why you see plenty of Windows drivers for OEM hardware written by third-party hobbyists, WHQLed, offered through Windows Update?

Ah, goalposts moved again? Now it is not enough that you can write a driver; it has to be WHQLed and offered through Windows Update as well?

dr_st wrote:

Now you are just showing your ignorance and some bigotry. Either you have no idea how many of the most senior Linux maintainers have been holding regular jobs at some of the biggest technology companies, or you are deliberately ignoring this.

I'm not just talking about the linux maintainers obviously. They are only a very small subset of the total linux community.
But even then, Linus himself is a fine example of someone who was merely a student at the time, and still I don't think he ever had a regular job at a regular company. Some companies just hired him for his name. God knows what it is he has actually done at Transmeta.

dr_st wrote:

and I can assure you that they don't go to their job every day saying "Damn, how I wish I was a Windows developer".

Should they then?

dr_st wrote:

You really should just accept that there can be more than one opinion, and people may prefer different ways to do things.

Not really. Some things are just wrong, period.
If you don't see what is fundamentally wrong about my example of grabbing some text-data from a filesystem and then putting it through a string-parser in order to get a handle to a kernel-object, then I can only pity your lack of understanding of software engineering in general, and even more so the character flaws that make you brush this off as a 'difference in opinion'.

dr_st wrote:

Riiight, and these well-defined contracts is why there was never any exploit found in Windows.

That's not what I said, is it?
Really, you're just ranting like a moron here. You are clearly out of your league.
Anyone with a bit of experience in the world of software engineering knows that there is no way to guarantee that there will not be any exploits.
They will also agree that there are certain patterns you should follow to minimize the possibility of exploits and bugs.
Parsing strings is universally accepted as being rather error-prone, while having well-defined API functions and parameters (which is what is meant by a 'code contract' in case you aren't familiar with that term) is universally accepted as good practice, also known as 'defensive programming'. Functions and parameters will at least be (strongly) typed, so that there is less chance of getting wrong information, or in this case, the right information being in the wrong place.
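To make the contrast concrete, here is a hypothetical sketch (not actual sudo or kernel code; the names `pid_from_text`, `Process` and `pid_from_record` are made up for illustration) of the same task done by parsing text versus through a typed interface:

```python
from dataclasses import dataclass

def pid_from_text(line: str) -> int:
    # Fragile: assumes the second whitespace-separated field is the PID.
    # A process name containing a space silently shifts every field.
    return int(line.split()[1])

@dataclass(frozen=True)
class Process:
    name: str
    pid: int

def pid_from_record(proc: Process) -> int:
    # The 'code contract': pid is an int by construction; nothing to parse.
    return proc.pid

print(pid_from_text("bash 4242 S"))              # works, by luck
print(pid_from_record(Process("my app", 4242)))  # works, by design
try:
    pid_from_text("my app 4242 S")               # the space in the name breaks the parser
except ValueError as e:
    print("parser broke:", e)
```

The typed version cannot even express the failure mode: the PID arrives as an integer in a named field, so there is no point at which the wrong bytes can end up in the wrong place.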

dr_st wrote:

And this I hear from a guy who, by his own admission, loves the demo scene, and retro programming (is anything more "hackish" than that)? 😉

Wait, are you now comparing my hobby coding projects to an OS that is deemed 'enterprise-ready' and is being run on millions of (corporate) servers worldwide?

But I guess that brings us back to what a 'hack' is. There are two commonly accepted definitions:
1) A quick-and-dirty solution
2) Something so clever that it does things you didn't think were possible in ways you would never have conceived

Democoding, especially on very old machines, with cycle-exact code and whatnot, may possibly be the most bug-free code out there: For cycle-exact code to work, you have to know exactly what the entire system is doing at every cycle. You could argue that there literally is no room for error (or bugs).

The sudo-exploit clearly falls into the former category. It's just sloppy coding: it's not a smart solution, it's not efficient, it's not elegant, and it doesn't even work very well.
For the second category, I would refer you to The story of Mel.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 158 of 249, by Falcosoft

User metadata
Rank Oldbie
Rank
Oldbie

I've uploaded it here: https://www.dropbox.com/s/oc8by7bu4p2nh ... T.xml?dl=0

Thanks!

Anyway, the results seem to speak in Intel's favour here: my score seems to be slightly higher than the AMD system, but my system only runs on a 1.5 GHz CPU and 667 MHz memory.

Strictly speaking the comparison is not 'period correct' if we define the two periods as pre-Vista and post-Vista. Your 965M/GMA X3100 is a post-Vista chipset/GPU; the comparable pre-Vista chipset on the Intel side is the 945M/GMA 950, which never reached a 3.x Aero WEI score.
But to tell you the truth, I doubt later mobile AMD chipsets for K8-based CPUs could do much better in this respect.
I have a theory about the reasons:
The K8 IMC, while a revolutionary solution in the x86 world at the time, has a definite drawback: it is inherently less friendly to NorthBridge-integrated video than Intel's classic NorthBridge-style memory controller of that era.
Let's look at the numbers:
Scali's Intel PC has a considerably worse memory score/bandwidth than my AMD one:
(Notice that AMD is actually capped at 5.5 because of the amount of memory)
Intel:

<MemoryScore>4.5</MemoryScore>
...
<MemoryMetrics>
<Bandwidth units="MB/s">3046.67041</Bandwidth>
</MemoryMetrics>

AMD:

<MemoryScore>
<LimitApplied Friendly="Physical memory available to the OS is less than 3.0GB-64MB : limit mem score to 5.5" Relation="LT">3154116608</LimitApplied>
</MemoryScore>
</LimitsApplied>
</WinSPR>
<Metrics>
<MemoryMetrics>
<Bandwidth units="MB/s">5657.01438</Bandwidth>
</MemoryMetrics>
</Metrics>
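As an aside, these numbers can be pulled out of a WinSAT 'Formal' XML file with a few lines of Python's standard library. The element names below are taken from the snippets above; the `sample` string is a heavily trimmed stand-in for a real file, which contains far more elements.

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for a WinSAT 'Formal' assessment file; only the
# elements quoted above are reproduced here.
sample = """<WinSAT>
  <WinSPR><MemoryScore>4.5</MemoryScore></WinSPR>
  <Metrics>
    <MemoryMetrics><Bandwidth units="MB/s">3046.67041</Bandwidth></MemoryMetrics>
  </Metrics>
</WinSAT>"""

root = ET.fromstring(sample)
score = float(root.findtext(".//MemoryScore"))
bandwidth = float(root.findtext(".//MemoryMetrics/Bandwidth"))
print(f"memory score {score}, bandwidth {bandwidth:.0f} MB/s")
```

For a real system you would read the newest `*Formal*.xml` from `C:\Windows\Performance\WinSAT\DataStore` instead of the inline sample.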

Contrary to the obvious bandwidth advantage on the CPU side, on the GPU side the result is the opposite. I think the reasons are:
1. While the K8 IMC brought the memory closer to the CPU core (and thus provides good CPU-memory bandwidth and exceptional latency), in the same step it made the memory harder to reach from the perspective of the NB-integrated GPU: the GPU can only communicate with the memory (controller) through the 800 MHz HyperTransport link.
2. Because of point 1, the 'PowerNow' power-saving features of the CPU affect the video memory bandwidth much more than in the classic design, since the memory speed is very closely linked to the CPU speed (when you decrease the CPU core speed by reducing the multiplier, the memory speed also decreases proportionally). Intel's power-saving feature 'SpeedStep', which changes the core clock by reducing the multiplier on the FSB clock, does not affect the integrated video's memory bandwidth much, since the memory speed is linked to the FSB speed, which does not change when the CPU core clock is reduced.
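The coupling described in point 2 can be sketched with a toy model. All numbers, and the fixed-divisor behaviour, are illustrative simplifications, not measured K8 behaviour:

```python
def k8_mem_clock(core_mhz: float, divisor: int) -> float:
    # Toy model of an on-die memory controller: the DRAM clock is divided
    # down from the core clock, so throttling the core drags the memory
    # clock (and thus bandwidth) down with it.
    return core_mhz / divisor

def fsb_mem_clock(fsb_mhz: float, mem_ratio: float) -> float:
    # Toy model of the classic NorthBridge design: the DRAM clock is
    # derived from the FSB and is unaffected by the CPU multiplier.
    return fsb_mhz * mem_ratio

# Full speed vs. a power-saving state (illustrative numbers only):
print(k8_mem_clock(1800, 9), "->", k8_mem_clock(800, 9))   # memory slows with the core
print(fsb_mem_clock(166, 1.0), "->", fsb_mem_clock(166, 1.0))  # unchanged either way
```

In the first model the memory clock falls from 200 MHz to under 100 MHz when the core is throttled; in the second it stays at the FSB-derived rate regardless of what the core does.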

It cannot be a coincidence that later IMC implementations (K10, Nehalem) chose a different path. The CPU-NB in K10 ('Uncore' in Intel's terms) provided a way to decouple the memory controller more from the CPU core itself. This way the memory speed/bandwidth became independent of the CPU core clock and could provide the necessary bandwidth to integrated video solutions even when power-saving features were active.
Overall, while the K8 IMC was a good solution for CPU bandwidth/latency requirements (in AIDA64's memory latency benchmark chart, an Athlon64 is still the leader ten years later), it proved to be a subpar solution when it had to share memory with integrated GPUs.

Website, Facebook, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper

Reply 159 of 249, by Scali

User metadata
Rank l33t
Rank
l33t
Falcosoft wrote:

1. While the K8 IMC brought the memory closer to the CPU core (and thus provides good CPU-memory bandwidth and exceptional latency) in the same step it placed the memory harder to reach from the perspective of the NB-integrated GPU. The GPU can only communicate with the memory(controller) only through the 800Mhz HyperTransport.

That should not necessarily be a problem for a GPU. After all, a GPU is deeply pipelined. Discrete GPUs usually have memory with very high latencies, but high bandwidth. The latency can easily be hidden, because texture fetches can be predicted very easily. Especially for the use-case of Aero, where you just render screen-facing polygons with no perspective, so no fancy texture filtering required. GPUs are great at performing prefetching and huge burst transfers and getting close to the theoretical maximum bandwidth in practical situations.
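A back-of-the-envelope model shows why a deeply pipelined client can hide latency almost entirely. The numbers and the model itself are purely illustrative:

```python
def effective_bw(peak_mb_s: float, latency_us: float, burst_kb: float,
                 outstanding: int = 1) -> float:
    # Simple model: each burst costs its transfer time plus the access
    # latency, but a pipelined client keeps 'outstanding' requests in
    # flight, so the latency cost is amortised over all of them.
    transfer_us = (burst_kb / 1024.0) / peak_mb_s * 1e6
    total_us = latency_us / outstanding + transfer_us
    return (burst_kb / 1024.0) / (total_us / 1e6)

# Illustrative numbers: 6000 MB/s peak, 0.2 us latency, 4 KB bursts.
print(round(effective_bw(6000, 0.2, 4, outstanding=1)))   # latency eats a chunk of the peak
print(round(effective_bw(6000, 0.2, 4, outstanding=16)))  # close to the theoretical peak
```

With a single request in flight the latency takes a visible bite out of the achievable bandwidth; with sixteen in flight the result lands within a few percent of peak, which is why prefetch-friendly, burst-heavy GPU workloads tolerate slow-but-wide memory so well.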

The additional latency wouldn't explain why you don't get anywhere near the actual theoretical bandwidth when it is throttled to 320 MHz.

Falcosoft wrote:

2. Because of point 1 the 'Powernow' power saving features of the CPU affects the video memory bandwidth much more than the classic design since the memory speed are very closely linked to the CPU speed( when you decrease the CPU core speed by reducing the multipliers the memory speed is also decreases proportionally). Intel's CPU power saving feature 'SpeedStep' when changes the core clock with the help of reduced FSB multipliers do not affect the integrated video memory bandwidth too much since the memory speed is linked to the FSB speed that does not change when the CPU core clock reduced.

I'm not sure if this matters in practice? I mean, if you throttle memory bandwidth, you throttle memory bandwidth. I don't think it matters whether it's done in the CPU itself, or in the chipset.

Falcosoft wrote:

That cannot be a coincidence that later IMC implementations (K10, Nahelm) chose a different path. CPU-NB in K10 (Uncore in Intel's term) provided a way to decouple the memory controller more from the very CPU core. This way the memory speed/bandwidth became independent from the CPU core clock and could provide the necessary video bandwidth to integrated video solutions even when power saving features was active.

Not sure if they did that because of the GPU specifically, or whether it was mostly aimed at better NUMA performance in multi-socket systems.
Also, ultimately they just integrated the GPU on the CPU. Problem solved.

Falcosoft wrote:

Overall while the K8 IMC was a good solution regarding the CPU bandwidth/latency requirements (in AIDA64's memory latency benchmark chart, ten years later an Athlon64 is still the leader) it proved to be a subpar solution when it had to share the memory with integrated GPU's

I think it's more a shortcoming of the memory controller and HT interface itself than of the GPU, seeing as high latency should not be an issue for a GPU.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/